content: stringlengths 85-101k | title: stringlengths 0-150 | question: stringlengths 15-48k | answers: list | answers_scores: list | non_answers: list | non_answers_scores: list | tags: list | name: stringlengths 35-137
|
Q:
Problem when using MemoryDC
Why does my code print the lines gray instead of black?
import wx

class MyFrame(wx.Frame):
    def __init__(self,*args,**kwargs):
        wx.Frame.__init__(self,*args,**kwargs)
        self.panel=wx.Panel(self,-1,size=(1000,1000))
        self.Bind(wx.EVT_PAINT, self.on_paint)
        self.Bind(wx.EVT_SIZE, self.on_size)
        self.bitmap=wx.EmptyBitmapRGBA(1000,1000,255,255,255,255)
        dc=wx.MemoryDC()
        dc.SelectObject(self.bitmap)
        dc.SetPen(wx.Pen(wx.NamedColor("black"),10,wx.SOLID))
        dc.DrawCircle(0,0,30)
        dc.DrawLine(40,40,70,70)
        dc.Destroy()
        self.Show()

    def on_size(self,e=None):
        self.Refresh()

    def on_paint(self,e=None):
        dc=wx.PaintDC(self.panel)
        dc.DrawBitmap(self.bitmap,0,0)
        dc.Destroy()

if __name__=="__main__":
    app=wx.PySimpleApp()
    my_frame=MyFrame(parent=None,id=-1)
    app.MainLoop()
A:
Besides the frame/panel paint problem already pointed out, the color problem is due to the alpha channel of the 32-bit bitmap.
I remember having read to use wx.GCDC instead of wx.DC.
A:
OK, I tested with a newer version of wx (2.8.9.2), and now I wonder why it is even working on your side.
You are trying to paint the Panel but overriding the paint event of the Frame. Instead, do this:
self.panel.Bind(wx.EVT_PAINT, self.on_paint)
and all will be fine.
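For concreteness, a minimal sketch of the corrected program, assuming the classic wxPython 2.8 API used in the question (wx.EmptyBitmapRGBA, wx.PySimpleApp); the substantive change is binding EVT_PAINT on the panel, and the bitmap is also deselected from the MemoryDC before being drawn:
import wx

class MyFrame(wx.Frame):
    def __init__(self, *args, **kwargs):
        wx.Frame.__init__(self, *args, **kwargs)
        self.panel = wx.Panel(self, -1, size=(1000, 1000))
        # Bind the paint handler on the panel, since on_paint creates
        # a wx.PaintDC for self.panel, not for the frame.
        self.panel.Bind(wx.EVT_PAINT, self.on_paint)
        self.bitmap = wx.EmptyBitmapRGBA(1000, 1000, 255, 255, 255, 255)
        dc = wx.MemoryDC()
        dc.SelectObject(self.bitmap)
        dc.SetPen(wx.Pen(wx.NamedColor("black"), 10, wx.SOLID))
        dc.DrawCircle(0, 0, 30)
        dc.DrawLine(40, 40, 70, 70)
        dc.SelectObject(wx.NullBitmap)  # release the bitmap from the DC
        self.Show()

    def on_paint(self, e=None):
        dc = wx.PaintDC(self.panel)
        dc.DrawBitmap(self.bitmap, 0, 0)

if __name__ == "__main__":
    app = wx.PySimpleApp()
    MyFrame(parent=None, id=-1)
    app.MainLoop()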
|
answers_scores: [1, 0] | non_answers: [] | non_answers_scores: [] | tags: [graphics, python, wxpython] | name: stackoverflow_0000845071_graphics_python_wxpython.txt
|
Q:
Why can't I pass a direct reference to a dictionary value to a function?
Earlier today I asked a question
about passing dictionary values to a function. While I now understand how to accomplish what I was trying to do, the why question (which was not asked) was never answered. So my follow-up is: why can't I write
def myFunction(newDict['bubba']):
    some code to process the parameter
Is it simply because the parser rules do not allow this? I Googled for +Python +function +"allowable parameters" and did not find anything useful, so I would appreciate any information.
I am oversimplifying what is going on. I have a dictionary that is structured like
myDict={outerkey1:(innerkey1:value,innerkey2:value,. . .innerkeyn:value),outerkey2:(innerkey1:value,innerkey2:value,. . .innerkeyn:value),. . .}
As I said, I know how to do what I wanted - I got a very helpful answer. But I started wondering why
def myFunction(outerkey,myDict[outerkey]):
gives a syntax error, until finally it occurred to me that it has to be a parsing problem.
A:
Yes, the parser will reject this code.
Parameter lists are used in function definitions to bind identifiers within the function to arguments that are passed in from the outside on invocation.
Since newDict['bubba'] is not a valid identifier, this doesn't make any sense -- you need to provide it as an invocation argument instead of a function parameter, since function parameters can only be identifiers.
Because you seem interested in the formal grammar, here are the relevant parts:
funcdef ::= [decorators] "def" funcname "(" [parameter_list] ")" ":" suite

parameter_list ::= (defparameter ",")*
                   ( "*" identifier [, "**" identifier]
                   | "**" identifier
                   | defparameter [","] )

defparameter ::= parameter ["=" expression]

identifier ::= (letter | "_") (letter | digit | "_")*
In fact, the construct you are trying to use as an identifier is a subscription:
subscription ::= primary "[" expression_list "]"
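To make the distinction concrete, here is a minimal sketch (the dictionary and function are made up): the parameter must be a plain identifier, while the subscription belongs at the call site:
newDict = {'bubba': 42}

def myFunction(value):        # 'value' is an identifier, so this parses
    print value * 2

myFunction(newDict['bubba'])  # the subscription is an argument expression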
A:
It looks like you might be confused between the definition of a function and calling that function. As a simple example:
def f(x):
    y = x * x
    print "x squared is", y
By itself, this function doesn't do anything (i.e. it doesn't print anything). If you put this in a source file and run it, nothing will be output. But then if you add:
f(5)
you will get the output "x squared is 25". When Python encounters the name of the function followed by actual parameters, it substitutes the value(s) of those actual parameters for the formal parameter(s) in the function definition. In this case, the formal parameter is called x. Then Python runs the function f with the value of x as 5. Now, the big benefit is you can use the same function again with a different value:
f(10)
prints "x squared is 100".
I hope I've understood the source of your difficulty correctly.
A:
You can only pass references to objects.
a = { 'a' : 123, 'b' : [ 1, 2, 3 ] }

def f(dictionary_value):
    dictionary_value.append(4)

f( a['b'] )
Does what you expect. Dictionary element a['b'] is passed "by reference" to the function f which updates the dictionary element.
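The flip side, sketched for contrast: rebinding the parameter (rather than mutating the object it refers to) does not touch the dictionary:
def g(dictionary_value):
    dictionary_value = [9, 9]  # rebinds the local name only

g(a['b'])
print a['b']  # still [1, 2, 3, 4]; the dictionary element is unchanged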
|
answers_scores: [6, 2, 2] | non_answers: [] | non_answers_scores: [] | tags: [dictionary, function, parameters, python] | name: stackoverflow_0000855514_dictionary_function_parameters_python.txt
|
Q:
Is this Python producer-consumer lockless approach thread-safe?
I recently wrote a program that used a simple producer/consumer pattern. It initially had a bug related to improper use of threading.Lock that I eventually fixed. But it made me think whether it's possible to implement producer/consumer pattern in a lockless manner.
Requirements in my case were simple:
One producer thread.
One consumer thread.
Queue has place for only one item.
Producer can produce next item before the current one is consumed. The current item is therefore lost, but that's OK for me.
Consumer can consume current item before the next one is produced. The current item is therefore consumed twice (or more), but that's OK for me.
So I wrote this:
QUEUE_ITEM = None

# this is executed in one threading.Thread object
def producer():
    global QUEUE_ITEM
    while True:
        i = produce_item()
        QUEUE_ITEM = i

# this is executed in another threading.Thread object
def consumer():
    global QUEUE_ITEM
    while True:
        i = QUEUE_ITEM
        consume_item(i)
My question is: Is this code thread-safe?
Immediate comment: this code isn't really lockless - I use CPython and it has the GIL.
I tested the code a little and it seems to work. It translates to some LOAD and STORE ops which are atomic because of the GIL. But I also know that the del x operation isn't atomic when x implements a __del__ method. So if my item has a __del__ method and some nasty scheduling happens, things may break. Or not?
Another question is: what kind of restrictions (for example on produced items' type) do I have to impose to make the above code work fine?
My questions are only about the theoretical possibility of exploiting CPython's and the GIL's quirks to come up with a lockless (i.e. no locks like threading.Lock explicitly in the code) solution.
A:
Trickery will bite you. Just use Queue to communicate between threads.
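For reference, a minimal sketch of the Queue-based version (produce_item and consume_item are the placeholders from the question). Note that unlike the lockless version, put() blocks when the one-slot queue is full instead of overwriting:
import threading
import Queue  # renamed to queue in Python 3

q = Queue.Queue(maxsize=1)  # one-slot queue, as in the question

def producer():
    while True:
        q.put(produce_item())   # blocks while the slot is full

def consumer():
    while True:
        consume_item(q.get())   # blocks while the slot is empty

threading.Thread(target=producer).start()
threading.Thread(target=consumer).start()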
A:
Yes this will work in the way that you described:
That the producer may produce a skippable element.
That the consumer may consume the same element.
But I also know that the del x operation isn't atomic when x implements a __del__ method. So if my item has a __del__ method and some nasty scheduling happens, things may break.
I don't see a "del" here. If a del happens in consume_item then the del may occur in the producer thread. I don't think this would be a "problem".
Don't bother using this though. You will end up using up CPU on pointless polling cycles, and it is not any faster than using a queue with locks since Python already has a global lock.
A:
This is not really thread safe because producer could overwrite QUEUE_ITEM before consumer has consumed it and consumer could consume QUEUE_ITEM twice. As you mentioned, you're OK with that but most people aren't.
Someone with more knowledge of CPython internals will have to answer your more theoretical questions.
A:
I think it's possible that a thread is interrupted while producing/consuming, especially if the items are big objects.
Edit: this is just a wild guess. I'm no expert.
Also the threads may produce/consume any number of items before the other one starts running.
A:
You can use a list as the queue as long as you stick to append/pop since both are atomic.
QUEUE = []

# this is executed in one threading.Thread object
def producer():
    global QUEUE
    while True:
        i = produce_item()
        QUEUE.append(i)

# this is executed in another threading.Thread object
def consumer():
    global QUEUE
    while True:
        try:
            i = QUEUE.pop(0)
        except IndexError:
            # queue is empty
            continue
        consume_item(i)
In a class scope like below, you can even clear the queue.
class Atomic(object):
    def __init__(self):
        self.queue = []

    # this is executed in one threading.Thread object
    def producer(self):
        while True:
            i = produce_item()
            self.queue.append(i)

    # this is executed in another threading.Thread object
    def consumer(self):
        while True:
            try:
                i = self.queue.pop(0)
            except IndexError:
                # queue is empty
                continue
            consume_item(i)

    # There's the possibility the producer is still working on its current item.
    def clear_queue(self):
        self.queue = []
You'll have to find out which list operations are atomic by looking at the bytecode generated.
A:
The __del__ could be a problem, as you said. It could be avoided if only there was a way to prevent the garbage collector from invoking the __del__ method on the old object before we finish assigning the new one to `QUEUE_ITEM`. We would need something like:
increase the reference counter on the old object
assign a new one to `QUEUE_ITEM`
decrease the reference counter on the old object
I'm afraid I don't know if it is possible, though.
|
answers_scores: [6, 2, 1, 0, 0, 0] | non_answers: [] | non_answers_scores: [] | tags: [locking, producer_consumer, python, thread_safety] | name: stackoverflow_0000854906_locking_producer_consumer_python_thread_safety.txt
|
Q:
A ListView of checkboxes in PyQt
I want to display a QListView where each item is a checkbox with some label. The checkboxes should be visible at all times. One way I can think of is using a custom delegate and QAbstractListModel. Are there simpler ways? Can you provide the simplest snippet that does this?
Thanks in advance
A:
I ended up using the method provided by David Boddie in the PyQt mailing list. Here's a working snippet based on his code:
from PyQt4.QtCore import *
from PyQt4.QtGui import *
import sys
from random import randint

app = QApplication(sys.argv)

model = QStandardItemModel()

for n in range(10):
    item = QStandardItem('Item %s' % randint(1, 100))
    check = Qt.Checked if randint(0, 1) == 1 else Qt.Unchecked
    item.setCheckState(check)
    item.setCheckable(True)
    model.appendRow(item)

view = QListView()
view.setModel(model)

view.show()
app.exec_()
Note: I changed the call to setData with a check role into a call to setCheckState, and used setCheckable instead of setting flags.
A:
If you are writing your own model, just include the Qt.ItemIsUserCheckable
flag in the return value from the flags() method, and ensure that you return
a valid value for the Qt.CheckStateRole from the data() method.
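For example, a minimal sketch of such a custom model (the class name and internal data layout are made up for illustration, using the PyQt4 API v1 QVariant conventions):
from PyQt4.QtCore import *
from PyQt4.QtGui import *

class CheckListModel(QAbstractListModel):
    def __init__(self, labels, parent=None):
        QAbstractListModel.__init__(self, parent)
        self._items = [[label, Qt.Unchecked] for label in labels]

    def rowCount(self, parent=QModelIndex()):
        return len(self._items)

    def flags(self, index):
        # include Qt.ItemIsUserCheckable so the view shows a checkbox
        return Qt.ItemIsEnabled | Qt.ItemIsSelectable | Qt.ItemIsUserCheckable

    def data(self, index, role=Qt.DisplayRole):
        if role == Qt.DisplayRole:
            return QVariant(self._items[index.row()][0])
        if role == Qt.CheckStateRole:
            return QVariant(self._items[index.row()][1])
        return QVariant()

    def setData(self, index, value, role=Qt.EditRole):
        if role == Qt.CheckStateRole:
            self._items[index.row()][1] = value.toInt()[0]
            self.emit(SIGNAL("dataChanged(QModelIndex,QModelIndex)"),
                      index, index)
            return True
        return False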
If you use the QStandardItemModel class, include the Qt.ItemIsUserCheckable
flag in those you pass to each item's setFlags() method, and set the check
state for the Qt.CheckStateRole with its setData() method.
In an interactive Python session, type the following:
from PyQt4.QtGui import *
model = QStandardItemModel()
item = QStandardItem("Item")
item.setFlags(Qt.ItemIsUserCheckable | Qt.ItemIsEnabled)
item.setData(QVariant(Qt.Checked), Qt.CheckStateRole)
model.appendRow(item)
view = QListView()
view.setModel(model)
view.show()
|
answers_scores: [23, 13] | non_answers: [] | non_answers_scores: [] | tags: [pyqt, python, qitemdelegate, qlistview, qt] | name: stackoverflow_0000846684_pyqt_python_qitemdelegate_qlistview_qt.txt
|
Q:
launch a process off a mysql row insert
I need to launch a server side process off a mysql row insert. I'd appreciate some feedback/suggestions. So far I can think of three options:
1st (least attractive): My preliminary understanding is that I can write a kind of "custom trigger" in C that could fire off a row insert. In addition to having to renew my C skills, this would require a (custom?) recompile of MySQL ... yuck!
2nd (slightly more attractive): I could schedule a server-side cron task that runs a program I write, which would query the table for new rows periodically. This has the benefit of being DB- and language-independent. The problem with this is that I suffer the delay of the cron schedule.
3rd (the option I'm leading with): I could write a multi-threaded program that would query the table for changes on a single thread, spawning new threads to process the newly inserted rows as needed. This has all the benefits of option 2 with less delay.
I'll also mention that I'm leaning towards Python for this task, as easy access to the system (Linux) commands, as well as some in-house Perl scripts, is going to be very, very useful.
I'd appreciate any feedback/suggestion
Thanks in advance.
A:
Write an insert trigger which duplicates inserted rows to a secondary table. Periodically poll the secondary table for rows with an external application/cronjob; if any rows are in the table, delete them and do your processing (or set a 'processing started' flag and only delete from the secondary table upon successful processing).
This will work very nicely for low to medium insert volumes. If you have a ton of data coming at your table, some kind of custom trigger in C is probably your only choice.
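For illustration, a minimal sketch of the polling side in Python, assuming the MySQLdb driver, a made-up secondary table named pending_rows, and a process() stand-in for the real handler:
import time
import MySQLdb

def process(payload):
    pass  # stand-in for the real processing logic

def poll_forever(interval=5):
    db = MySQLdb.connect(host="localhost", user="app",
                         passwd="secret", db="mydb")
    while True:
        cursor = db.cursor()
        cursor.execute("SELECT id, payload FROM pending_rows")
        for row_id, payload in cursor.fetchall():
            process(payload)
            # delete only after successful processing
            cursor.execute("DELETE FROM pending_rows WHERE id = %s",
                           (row_id,))
        db.commit()
        time.sleep(interval)

if __name__ == "__main__":
    poll_forever()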
A:
I had this issue about 2 years ago in .NET, and I went with the 3rd approach. However, looking back at it, I'm wondering whether triggers with phpMyAdmin & MySQL aren't the approach to look into.
|
answers_scores: [4, 0] | non_answers: [] | non_answers_scores: [] | tags: [linux, mysql, perl, python] | name: stackoverflow_0000856173_linux_mysql_perl_python.txt
|
Q:
Is there a "one-liner" way to get a list of keys from a dictionary in sorted order?
The list sort() method is a modifier function that returns None.
So if I want to iterate through all of the keys in a dictionary I cannot do:
for k in somedictionary.keys().sort():
    dosomething()

Instead, I must:
keys = somedictionary.keys()
keys.sort()
for k in keys:
    dosomething()
Is there a pretty way to iterate through these keys in sorted order without having to break it up in to multiple steps?
A:
for k in sorted(somedictionary.keys()):
    doSomething(k)

Note that you can also get all of the keys and values sorted by keys like this:
for k, v in sorted(somedictionary.iteritems()):
    doSomething(k, v)
A:
Can I answer my own question?
I have just discovered the handy function "sorted" which does exactly what I was looking for.
for k in sorted(somedictionary.keys()):
    dosomething()
It shows up in Python 2.5 dictionary 2 key sort
A:
Actually, .keys() is not necessary:
for k in sorted(somedictionary):
    doSomething(k)

or

[doSomething(k) for k in sorted(somedict)]
|
answers_scores: [20, 8, 7] | non_answers: [] | non_answers_scores: [] | tags: [iterator, python, syntactic_sugar] | name: stackoverflow_0000327191_iterator_python_syntactic_sugar.txt
|
Q:
how are exceptions compared in an except clause
In the following code segment:
try:
    raise Bob()
except Fred:
    print "blah"
How is the comparison of Bob and Fred implemented?
From playing around it seems to be calling isinstance underneath, is this correct?
I'm asking because I am attempting to subvert the process. Specifically, I want to be able to construct a Bob such that it gets caught by except Fred even though it isn't actually an instance of Fred or any of its subclasses.
A couple of people have asked why I'm trying to do this...
We have a RMI system, that is built around the philosophy of making it as seamless as possible, here's a quick example of it in use, note that there is no socket specific code in the RMI system, sockets just provided a convenient example.
import remobj
socket = remobj.RemObj("remote_server_name").getImport("socket")
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("", 0))
print "listening on port:", s.getsockname()[1]
s.settimeout(10)
try:
    print "received:", s.recv(2048)
except socket.timeout:
    print "timeout"
Now in this particular example the except doesn't work as expected because the raised object is not an instance of socket.timeout, it's an instance of one of our proxy helper classes.
A:
I believe that your guess is correct in how the comparison works, and the only way to intercept that is to add Fred as a base class to Bob. For example:
# Assume both Bob and Fred are derived from Exception
>>> class Bob(Bob, Fred):
...     pass
...
>>> try:
...     raise Bob()
... except Fred:
...     print 'blah'
blah
As far as I know, this is the only way to make it work as you wrote it. However, if you simply rewrote the except: line as
... except (Bob, Fred):
It would catch both Bob and Fred, without requiring the modification of the definition of Bob.
A:
I want to be able to construct a Bob
such that it gets caught by execpt
Fred even though it isn't actually an
instance of Fred or any of its
subclasses.
Well, you can just catch 'Exception' - but this is not very pythonic. You should attempt to catch the correct exception and then fall back on the general Exception (which all exceptions are subclassed from) as a last resort. If this does not work for you then something has gone terribly wrong in your design phase.
See this note from Code Like A Pythonista
Note: Always specify the exceptions to
catch. Never use bare except clauses.
Bare except clauses will catch
unexpected exceptions, making your
code exceedingly difficult to debug.
However, one of the idioms in the Zen of Python is
Special cases aren't special enough to break the rules.
Although practicality beats purity.
A:
>>> class Fred(Exception):
...     pass
>>> class Bob(Fred):
...     pass
>>> issubclass(Bob, Fred)
True
>>> issubclass(Fred, Bob)
False
>>> try:
...     raise Bob()
... except Fred:
...     print("blah")
blah
So basically the exception is caught because Bob is a subclass of Fred; I am assuming they must have implemented logic similar to issubclass(Bob, Fred).
See if this is how you want to implement whatever you want - of course not in __init__, but in some other method.
>>> class Bob(Exception):
...     def __init__(self):
...         raise Fred
...
>>> try:
...     b = Bob()
... except Fred:
...     print('blah')
blah
A:
I'm not clear on how hiding your exception in socket.timeout adheres to the "seamless" philosophy. What's wrong with catching the expected exception as it's defined?
try:
    print "received:", s.recv(2048)
except socket.timeout:
    print "timeout"
except our_proxy_helper_class:
    print 'crap!'
Or, if you really want to catch it as socket.timeout, why not just raise socket.timeout in our_proxy_helper_class?
raise socket.timeout('Method x timeout')
So when you raise socket.timeout in our_proxy_helper_class it should be caught by 'except socket.timeout'.
A:
In CPython at least, it looks like there's a COMPARE_OP operation with type 10 (exception match). There's unlikely anything you can do to hack around that calculation.
>>> import dis
>>> def foo():
...     try:
...         raise OSError()
...     except Exception, e:
...         pass
...
>>> dis.dis(foo)
  2           0 SETUP_EXCEPT            13 (to 16)

  3           3 LOAD_GLOBAL              0 (OSError)
              6 CALL_FUNCTION            0
              9 RAISE_VARARGS            1
             12 POP_BLOCK
             13 JUMP_FORWARD            21 (to 37)

  4     >>   16 DUP_TOP
             17 LOAD_GLOBAL              1 (Exception)
             20 COMPARE_OP              10 (exception match)
             23 JUMP_IF_FALSE            9 (to 35)
             26 POP_TOP
             27 POP_TOP
             28 STORE_FAST               0 (e)
             31 POP_TOP

  5          32 JUMP_FORWARD             2 (to 37)
        >>   35 POP_TOP
             36 END_FINALLY
        >>   37 LOAD_CONST               0 (None)
             40 RETURN_VALUE
|
answers_scores: [6, 1, 1, 1, 0] | non_answers: [] | non_answers_scores: [] | tags: [exception, python] | name: stackoverflow_0000851012_exception_python.txt
|
Q:
Subclassing ctypes - Python
This is some code I found on the internet. I'm not sure how it is meant to be used. I simply filled members with the enum keys/values and it works, but I'm curious what this metaclass is all about. I am assuming it has something to do with ctypes, but I can't find much information on subclassing ctypes. I know EnumerationType isn't doing anything the way I'm using Enumeration.
from ctypes import *

class EnumerationType(type(c_uint)):
    def __new__(metacls, name, bases, dict):
        if not "_members_" in dict:
            _members_ = {}
            for key,value in dict.items():
                if not key.startswith("_"):
                    _members_[key] = value
            dict["_members_"] = _members_
        cls = type(c_uint).__new__(metacls, name, bases, dict)
        for key,value in cls._members_.items():
            globals()[key] = value
        return cls

    def __contains__(self, value):
        return value in self._members_.values()

    def __repr__(self):
        return "<Enumeration %s>" % self.__name__

class Enumeration(c_uint):
    __metaclass__ = EnumerationType
    _members_ = {}

    def __init__(self, value):
        for k,v in self._members_.items():
            if v == value:
                self.name = k
                break
        else:
            raise ValueError("No enumeration member with value %r" % value)
        c_uint.__init__(self, value)

    @classmethod
    def from_param(cls, param):
        if isinstance(param, Enumeration):
            if param.__class__ != cls:
                raise ValueError("Cannot mix enumeration members")
            else:
                return param
        else:
            return cls(param)

    def __repr__(self):
        return "<member %s=%d of %r>" % (self.name, self.value, self.__class__)
And an enumeration probably done the wrong way.
class TOKEN(Enumeration):
_members_ = {'T_UNDEF':0, 'T_NAME':1, 'T_NUMBER':2, 'T_STRING':3, 'T_OPERATOR':4, 'T_VARIABLE':5, 'T_FUNCTION':6}
A:
A metaclass is a class used to create classes. Think of it this way: all objects have a class, a class is also an object, therefore, it makes sense that a class can have a class.
http://www.ibm.com/developerworks/linux/library/l-pymeta.html
To understand what this is doing, you can look at a few points in the code.
_members_ = {'T_UNDEF':0, 'T_NAME':1, 'T_NUMBER':2, 'T_STRING':3, 'T_OPERATOR':4, 'T_VARIABLE':5, 'T_FUNCTION':6}
globals()[key] = value
Here it takes every defined key in your dictionary: "T_UNDEF" "T_NUMBER" and makes them available in your globals dictionary.
def __init__(self, value):
    for k,v in self._members_.items():
        if v == value:
            self.name = k
            break
Whenever you make an instance of your enum, it will check to see if the "value" is in your list of allowable enum names when you initialized the class. When the value is found, it sets the string name to self.name.
c_uint.__init__(self, value)
This is the actual line which sets the "ctypes value" to an actual c unsigned integer.
A:
That is indeed a weird class.
The way you are using it is correct, although another way would be:
class TOKEN(Enumeration):
    T_UNDEF = 0
    T_NAME = 1
    T_NUMBER = 2
    T_STRING = 3
    T_OPERATOR = 4
    T_VARIABLE = 5
    T_FUNCTION = 6
(That's what the first 6 lines in __new__ are for)
Then you can use it like so:
>>> TOKEN
<Enumeration TOKEN>
>>> TOKEN(T_NAME)
<member T_NAME=1 of <Enumeration TOKEN>>
>>> T_NAME in TOKEN
True
>>> TOKEN(1).name
'T_NAME'
The from_param method seems to be for convenience, for writing methods that accept either an int or an Enumeration object. Not really sure if that's really its purpose.
I think this class is meant to be used when working with external APIs that use C-style enums, but it looks like a whole lot of work for very little gain.
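For instance, a minimal sketch of where from_param pays off with ctypes argtypes (the shared library and function names are made up):
from ctypes import CDLL, c_int

lib = CDLL("libtokenizer.so")        # hypothetical shared library
lib.next_token.argtypes = [TOKEN]    # ctypes calls TOKEN.from_param on each arg
lib.next_token.restype = c_int

lib.next_token(T_NAME)   # an Enumeration member passes through unchanged
lib.next_token(3)        # a plain int is wrapped as TOKEN(3)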
|
answers_scores: [4, 3] | non_answers: [] | non_answers_scores: [] | tags: [ctypes, python] | name: stackoverflow_0000855941_ctypes_python.txt
|
Q:
Getting local dictionary for function scope only in Python
I keep ending up at this situation where I want to use a dictionary very much like the one 'locals' gives back, but that only contains the variables in the limited scope of the function. Is there a way to do this in python?
A bit more about why I want to do this: I'm playing with Django, and when I go to give my templates context, I am forced either to manually make a dictionary (in violation of DRY principles) or to pass in locals(), which contains far more entries than are needed (wasteful). Is there perhaps something I'm missing with Django which would alleviate the need for a Python-level solution?
To Clarify:
So, the case that I've hit repeatedly is where I have:
@render_to('my_template.html')
def myview(request):
    var1 = #blahblah
    var2 = #...
    # do stuff with vars
    return {'var1': var1, 'var2': var2}
So instead of repeating those variables and naming conventions, I'll do:
@render_to('my_template.html')
def myview(request):
    var1 = #blahblah
    var2 = #...
    # do stuff with vars
    return locals()
Which I find cleaner, but I know it's kind of sloppy since there are about 30 more entries in locals() than I actually need.
A:
I'm not sure I agree that making a dictionary is a violation of DRY, but if you really don't want to repeat anything at all, you could just define a 'context' dictionary at the top of the view and use dictionary keys instead of variables throughout the view.
def my_view(request):
    context = {}
    context['items'] = Item.objects.all()
    context['anothervalue'] = context['items'][2].name
    return render_to_response('template.html', context)
A:
How is passing a dictionary a violation of DRY? Django is all about DRY, so I doubt the standard behavior of it would directly violate it. In either case, however, I use a modified version of django-annoying to make the whole thing easier:
@render_to('my_template.html')
def myview(request):
    # figure stuff out...
    return {'var1': 'val1', 'var2': 'val2'}
The render_to decorator takes care of the request context and all that good stuff. Works well.
If this doesn't help, I suggest rephrasing your question. Whatever you want to do messing around with locals() and such is rarely necessary especially in this kind of situation with Django.
A:
You say you don't like using locals() because it is "wasteful". Wasteful of what? I believe the dictionary it returns already exists, it's just giving you a reference to it. And even if it has to create the dictionary, this is one of the most highly optimized operations in Python, so don't worry about it.
You should focus on the code structure that best expresses your intention, with the fewest possibilities for error. The waste you are worried about is nothing to worry about.
A:
While I agree with many other respondents that passing either locals() or a fully specified dict {'var1':var1, 'var2': var2} is most likely OK, if you specifically want to "subset" a dict such as locals() that's far from hard either, e.g.:
loc = locals()
return dict((k,loc[k]) for k in 'var1 var2'.split())
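On Python 2.7 or later, the same subsetting can also be written as a dict comprehension (an equivalent sketch):
loc = locals()
return {k: loc[k] for k in ('var1', 'var2')}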
|
answers_scores: [5, 4, 2, 2] | non_answers: [] | non_answers_scores: [] | tags: [django, django_views, locals, python, scope] | name: stackoverflow_0000855259_django_django_views_locals_python_scope.txt
|
Q:
Running django on OSX
I've just completed the very very nice django tutorial and it all went swimmingly. One of the first parts of the tutorial is that it says not to use their example server thingie in production, my first act after the tutorial was thus to try to run my app on apache.
I'm running OSX 10.5 and have the standard apache (which refuses to run python) and MAMP (which begrudgingly allows it in cgi-bin). The problem is that I've no idea which script to call, in the tutorial it was always localhost:8000/polls but I've no idea how that's meant to map to a specific file.
Have I missed something blatantly obvious about what to do with a .htaccess file or does the tutorial not actually explain how to use it somewhere else?
A:
You probably won't find much joy using .htaccess to configure Django through Apache (though I confess you probably could do it if you're determined enough... but for production I suspect it will be more complicated than necessary). I develop and run Django in OS X, and it works quite seamlessly.
The secret is that you must configure httpd.conf to pass requests to Django via one of three options: mod_wsgi (the most modern approach), mod_python (second best, but works fine on OS X's Python 2.5), fastcgi (well... if you must to match your production environment).
Django's deployment docs have good advice and instruction for all three options.
If you are using the default OS X apache install, edit /etc/apache2/httpd.conf with the directives found in the Django docs above. I can't speak for MAMP, but if you build Apache from source (which is so easy on OS X I do wonder why anyone bothers with MAMP... my insecurities are showing), edit /usr/local/apache2/conf/httpd.conf.
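As an illustration, a minimal sketch of the mod_wsgi wiring for a Django 1.0-era project (the paths and the project name mysite are made up; the deployment docs linked above are the authoritative reference, and the matching httpd.conf directive is shown in the comment):
# /Users/you/projects/mysite/django.wsgi  (hypothetical path)
#
# Matching httpd.conf directive:
#   WSGIScriptAlias / /Users/you/projects/mysite/django.wsgi
import os
import sys

sys.path.append('/Users/you/projects')  # make the project package importable
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'

import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()  # what mod_wsgi serves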
A:
Unless you are planning on going to production with OS X you might not want to bother. If you must do it, go straight to mod_wsgi. Don't bother with mod_python or older solutions. I did mod_python on Apache and while it runs great now, it took countless hours to set up.
Also, just to clarify something based on what you said: You're not going to find a mapping between the url path (like /polls) and a script that is being called. Django doesn't work like that. With Django your application is loaded into memory waiting for requests. Once a request comes in it gets dispatched through the url map that you created in urls.py. That boils down to a function call somewhere in your code.
That's why for a webserver like Apache you need a module like mod_wsgi, which gives your app a spot in memory in which to live. Compare that with something like CGI where the webserver executes a specific script on demand at a location that is physically mapped between the url and the filesystem.
I hope that's helpful and not telling you something you already knew. :)
A:
If you're planning on running it for production (or non-development) mode another option is to do away with Apache and go with something lightweight like nginx. But if you're doing development work, you'll want to stick with the built-in server and PyDev in Eclipse. It's so much easier to walk through the server code line-by-line in the Eclipse debugger.
Installation instructions for nginx/Django here: http://snippets.dzone.com/tag/nginx
A:
Yet another option is to consider using a virtual machine for your development. You can install a full version of whatever OS your production server will be running - say, Debian - and run your Apache and DB in the VM.
You can connect to the virtual disk in the Finder, so you can still use TextMate (or whatever) on OSX to do your editing.
I've had good experiences doing this via VMWare Fusion.
[
5,
5,
3,
2
] |
[] |
[] |
[
"django",
"macos",
"python"
] |
stackoverflow_0000855408_django_macos_python.txt
|
Q:
How do I upload files to a Google App Engine app when the field name is not known
I have tried a few options, none of which seem to work (if I have a simple multipart form with a named field, it works well, but when I don't know the name I can't just grab all files in the request...).
I have looked at Upload files in Google App Engine and it doesn't seem suitable (or to actually work, as someone mentioned the code snippet is untested).
A:
Check out the documentation on the Webob request object. File uploads are treated the same as other form fields, except they're a file upload object, rather than a string - so you can iterate over the available fields the same as any other POST.
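A minimal sketch using App Engine's webapp framework (the handler name and response formatting are illustrative): file uploads arrive in request.POST as cgi.FieldStorage objects, so you can spot them by their filename attribute while iterating:
from google.appengine.ext import webapp

class UploadHandler(webapp.RequestHandler):
    def post(self):
        for name, value in self.request.POST.items():
            if hasattr(value, 'filename'):   # a file upload object
                data = value.file.read()     # the raw file contents
                self.response.out.write('%s: %d bytes\n' % (value.filename, len(data)))
            else:                            # an ordinary form field
                self.response.out.write('%s = %s\n' % (name, value))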
|
How do I upload files to a Google App Engine app when the field name is not known
|
I have tried a few options, none of which seem to work (if I have a simple multipart form with a named field, it works well, but when I don't know the name I can't just grab all files in the request...).
I have looked at Upload files in Google App Engine and it doesn't seem suitable (or to actually work, as someone mentioned the code snippet is untested).
|
[
"Check out the documentation on the Webob request object. File uploads are treated the same as other form fields, except they're a file upload object, rather than a string - so you can iterate over the available fields the same as any other POST.\n"
] |
[
0
] |
[] |
[] |
[
"google_app_engine",
"python"
] |
stackoverflow_0000855667_google_app_engine_python.txt
|
Q:
Does python have a call_user_func() like PHP?
Does python have a function like call_user_func() in PHP?
PHP Version:
call_user_func(array($object,$methodName),$parameters)
How do I achieve the above in Python?
A:
I don't see the problem, unless methodName is a string. In that case getattr does the job:
>>> class A:
... def func(self, a, b):
... return a + b
...
>>> a = A()
>>> getattr(a, 'func')(2, 3)
5
If object is also a string, then this would work, using globals or locals (but then you may have other, bigger, problems):
>>> getattr(locals()['a'], 'func')(2, 3)
5
>>> getattr(globals()['a'], 'func')(2, 3)
5
Edit: re your clarification. To initialise an object based on a string:
>>> class A:
... def __init__(self): print('a')
...
>>> class B:
... def __init__(self): print('b')
...
>>> clsStr = 'A'
>>> myObj = locals()[clsStr]()
a
I am not sure if this is really what you want though... unless you have many different classes, why not just perform string matching?
Another edit: Though the above works, you should seriously consider going with a solution such as provided by Ignacio Vazquez-Abrams. For one thing, by storing all possible classes in a dict, you avoid strange behaviour that may result from passing an incorrect string argument which just happens to match the name of a non-related class in the current scope.
A:
If you need to use classes from far-off places (and in fact, if you need any classes at all) then you're best off creating and using a dictionary for them:
funcs = {'Eggs': foo.Eggs, 'Spam': bar.Spam}
def call_func(func_name, *args, **kwargs):
if not func_name in funcs:
raise ValueError('Function %r not available' % (func_name,))
return funcs[func_name](*args, **kwargs)
A:
Picking which object to instantiate isn't that hard. A class is a first-class object and can be assigned to a variable or passed as an argument to a function.
class A(object):
def __init__( self, arg1, arg2 ):
etc.
class B(object):
def __init__( self, arg1, arg2 ):
etc.
thing_to_make = A
argList= ( some, pair )
thing_to_make( *argList )
thing_to_make = B
argList= ( another, pair )
thing_to_make( *argList )
def doSomething( class_, arg1, arg2 ):
thing= class_( arg1, arg2 )
thing.method()
print thing
All works nicely without much pain. You don't need a "call_user_function" sort of thing in Python
A:
For a given object 'obj', a given method name 'meth', and a given set of arguments 'args':
obj.__getattribute__(meth)(*args)
Should get you there.
Of course, it goes without saying - What the heck is it you want to do?
A:
You can call use getattr with locals() or globals() to do something similar
# foo.method() is called if it's available in global scope
getattr(globals()['foo'], 'method')(*args)
# Same thing, but for 'foo' in local scope
getattr(locals()['foo'], 'method')(*args)
See also: Dive Into Python on getattr(), on locals() and globals()
|
Does python have a call_user_func() like PHP?
|
Does python have a function like call_user_func() in PHP?
PHP Version:
call_user_func(array($object,$methodName),$parameters)
How do I achieve the above in Python?
|
[
"I don't see the problem, unless methodName is a string. In that case getattr does the job:\n>>> class A:\n... def func(self, a, b):\n... return a + b\n... \n>>> a = A()\n>>> getattr(a, 'func')(2, 3)\n5\n\nIf object is also a string, then this would work, using globals or locals (but then you may have other, bigger, problems):\n>>> getattr(locals()['a'], 'func')(2, 3)\n5\n>>> getattr(globals()['a'], 'func')(2, 3)\n5\n\n\nEdit: re your clarification. To initialise an object based on a string:\n>>> class A:\n... def __init__(self): print('a')\n... \n>>> class B:\n... def __init__(self): print('b')\n... \n>>> clsStr = 'A'\n>>> myObj = locals()[clsStr]()\na\n\nI am not sure if this is really what you want though... unless you have many different classes, why not just perform string matching?\n\nAnother edit: Though the above works, you should seriously consider going with a solution such as provided by Ignacio Vazquez-Abrams. For one thing, by storing all possible classes in a dict, you avoid strange behaviour that may result from passing an incorrect string argument which just happens to match the name of a non-related class in the current scope.\n",
"If you need to use classes from far-off places (and in fact, if you need any classes at all) then you're best off creating and using a dictionary for them:\nfuncs = {'Eggs': foo.Eggs, 'Spam': bar.Spam}\n\ndef call_func(func_name, *args, **kwargs):\n if not func_name in funcs:\n raise ValueError('Function %r not available' % (func_name,))\n return funcs[func_name](*args, **kwargs)\n\n",
"Picking which object to instantiate isn't that hard. A class is a first-class object and can be assigned to a variable or passed as an argument to a function.\nclass A(object):\n def __init__( self, arg1, arg2 ):\n etc.\n\nclass B(object):\n def __init__( self, arg1, arg2 ):\n etc.\n\nthing_to_make = A\nargList= ( some, pair )\nthing_to_make( *argList )\n\nthing_to_make = B\nargList- ( another, pair )\nthing_to_make( *argList )\n\ndef doSomething( class_, arg1, arg2 ):\n thing= class_( arg1, arg2 )\n thing.method()\n print thing\n\nAll works nicely without much pain. You don't need a \"call_user_function\" sort of thing in Python\n",
"For a given object 'obj', a given method name 'meth', and a given set of arguments 'args':\n obj.__getattribute__(meth)(*args)\n\nShould get you there.\nOf course, it goes without saying - What the heck is it you want to do?\n",
"You can call use getattr with locals() or globals() to do something similar\n# foo.method() is called if it's available in global scope\ngetattr(globals()['foo'], 'method')(*args)\n\n# Same thing, but for 'foo' in local scope\ngetattr(locals()['foo'], 'method')(*args)\n\nSee also: Dive Into Python on getattr(), on locals() and globals()\n"
] |
[
10,
4,
2,
1,
0
] |
[] |
[] |
[
"php",
"python"
] |
stackoverflow_0000856992_php_python.txt
|
Q:
Infinite recursion trying to check all elements of a TreeCtrl
I have a TreeCtrl in which more than one Item can be assigned the same object as PyData. When the object is updated, I want to update all of the items in the tree which have that object as their PyData.
I thought the following code would solve the problem quite neatly, but for some reason the logical test (current != self.GetFirstVisibleItem()) always returns true leading to infinite recursion. Can anyone explain why?
def RefreshNodes(self, obj, current=None):
print "Entered refresh"
current = current or self.GetFirstVisibleItem()
if current.IsOk():
print self.GetPyData(current).name
if self.GetPyData(current) == obj:
self.RefreshNode(current)
current = self.GetNextVisible(current)
if current != self.GetFirstVisibleItem():
self.RefreshNodes(obj, current)
Edit: the above is obviously part of a class based on wx.TreeCtrl
A:
How is the "next" item ever going to be the first item?
This appears to be a tautology. The next is never the first.
current = self.GetNextVisible(current)
current != self.GetFirstVisibleItem()
It doesn't appear that next wraps around to the beginning. It appears that next should return an invalid item (IsOk is False) at the end.
See http://wxpython.org/onlinedocs.php for information on this.
A:
There is no way for current != self.GetFirstVisibleItem() to be false. See comments below
def RefreshNodes(self, obj, current=None):
print "Entered refresh"
current = current or self.GetFirstVisibleItem()
if current.IsOk():
print self.GetPyData(current).name
if self.GetPyData(current) == obj:
self.RefreshNode(current)
#current = next visible item
current = self.GetNextVisible(current)
#current can't equal the first visible item because
# it was just set to the next visible item, which
# logically cannot be first
if current != self.GetFirstVisibleItem():
self.RefreshNodes(obj, current)
A:
Just realised the problem: if current is not a valid item, its logical value is False.
Hence the line current = current or self.GetFirstVisibleItem() wraps back to the first item before current.IsOk() is called...
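Given that, a hedged fix is to test for None explicitly, so an invalid-but-falsy item at the end of the walk isn't silently swapped for the first visible item again, and to stop as soon as IsOk() fails (assuming, as the first answer notes, that GetNextVisible returns an invalid item at the end):
def RefreshNodes(self, obj, current=None):
    if current is None:                      # only default on the first call
        current = self.GetFirstVisibleItem()
    if current.IsOk():
        if self.GetPyData(current) == obj:
            self.RefreshNode(current)
        self.RefreshNodes(obj, self.GetNextVisible(current))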
|
Infinite recursion trying to check all elements of a TreeCtrl
|
I have a TreeCtrl in which more than one Item can be assigned the same object as PyData. When the object is updated, I want to update all of the items in the tree which have that object as their PyData.
I thought the following code would solve the problem quite neatly, but for some reason the logical test (current != self.GetFirstVisibleItem()) always returns true leading to infinite recursion. Can anyone explain why?
def RefreshNodes(self, obj, current=None):
print "Entered refresh"
current = current or self.GetFirstVisibleItem()
if current.IsOk():
print self.GetPyData(current).name
if self.GetPyData(current) == obj:
self.RefreshNode(current)
current = self.GetNextVisible(current)
if current != self.GetFirstVisibleItem():
self.RefreshNodes(obj, current)
Edit: the above is obviously part of a class based on wx.TreeCtrl
|
[
"How is the \"next\" item ever going to be the first item? \nThis appears to be a tautology. The next is never the first.\n current = self.GetNextVisible(current)\n\n current != self.GetFirstVisibleItem()\n\nIt doesn't appear that next wraps around to the beginning. It appears that next should return an invalid item (IsOk is False) at the end.\nSee http://wxpython.org/onlinedocs.php for information on this.\n",
"There is no way for current != self.GetFirstVisibleItem() to be false. See comments below\ndef RefreshNodes(self, obj, current=None):\n print \"Entered refresh\"\n current = current or self.GetFirstVisibleItem()\n if current.IsOk():\n print self.GetPyData(current).name\n if self.GetPyData(current) == obj:\n self.RefreshNode(current)\n\n #current = next visible item\n current = self.GetNextVisible(current)\n\n #current can't equal the first visible item because\n # it was just set to the next visible item, which \n # logically cannot be first\n if current != self.GetFirstVisibleItem(): \n self.RefreshNodes(obj, current)\n\n",
"Just realised the problem: if current is not a valid item, it's logical value is False.\nHence the line current = current or self.GetFirstVisibleItem() wraps back to the first item before current.IsOk() is called...\n"
] |
[
3,
1,
0
] |
[] |
[] |
[
"python",
"wxpython"
] |
stackoverflow_0000857560_python_wxpython.txt
|
Q:
List Element without iteration
I want to know how to find an element in a list without iteration
A:
The mylist.index("blah") method of a list will return the index of the first occurrence of the item "blah":
>>> ["item 1", "blah", "item 3"].index("blah")
1
>>> ["item 1", "item 2", "blah"].index("blah")
2
It will raise ValueError if it cannot be found:
>>> ["item 1", "item 2", "item 3"].index("not found")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: list.index(x): x not in list
You can also use the in keyword to determine if an item is in a list (but not the location):
>>> "not found" in ["item 1", "blah", "item 3"]
False
>>> "item 3" in ["item 1", "blah", "item 3"]
True
As Harper Shelby commented, Python will still have to iterate internally to find the items, but the index or in methods may be slightly quicker than doing it in Python, as the data-structures are implemented in C.. but more importantly,
"x" in mylist
..is much tidier than..
found = False
for cur in mylist:
if cur == "x":
found = True
A:
This is a somewhat strange question. Do you want a hotline with $someDeity, or do you want another function to do the iteration for you, or do you want a more efficient way to test membership?
In the first case I cannot help you. In the second case, have a look at the documentation on lists, specifically the index method. In the third case, create a set and use the in keyword. Note that set creation only pays off if you intend to search through the list many times.
>>> l = [1, 2, 3]
>>> l.index(2)
1
>>> 3 in l
True
>>> ls = set(l)
>>> 3 in ls
True
A:
You can also use in syntax:
>>> l = [1,2,3]
>>> 1 in l
True
|
List Element without iteration
|
I want to know how to find an element in a list without iteration
|
[
"The mylist.index(\"blah\") method of a list will return the index of the first occurrence of the item \"blah\":\n>>> [\"item 1\", \"blah\", \"item 3\"].index(\"blah\")\n1\n>>> [\"item 1\", \"item 2\", \"blah\"].index(\"blah\")\n2\n\nIt will raise ValueError if it cannot be found:\n>>> [\"item 1\", \"item 2\", \"item 3\"].index(\"not found\")\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nValueError: list.index(x): x not in list\n\nYou can also use the in keyword to determine if an item is in a list (but not the location):\n>>> \"not found\" in [\"item 1\", \"blah\", \"item 3\"]\nFalse\n>>> \"item 3\" in [\"item 1\", \"blah\", \"item 3\"]\nTrue\n\nAs Harper Shelby commented, Python will still have to iterate internally to find the items, but the index or in methods may be slightly quicker than doing it in Python, as the data-structures are implemented in C.. but more importantly,\n\"x\" in mylist\n\n..is much tidier than..\nfound = False\nfor cur in mylist:\n if cur == \"x\":\n found = True\n\n",
"This is a somewhat strange question. Do you want a hotline with $someDeity, or do you want another function to do the iteration for you, or do you want a more efficient way to test membership?\nIn the first case I cannot help you. In the second case, have a look at the documentation on lists, specifically the index method. In the third case, create a set and use the in keyword. Note that set creation only pays off if you intend to search through the list many times.\n>>> l = [1, 2, 3]\n>>> l.index(2)\n1\n>>> 3 in l\nTrue\n>>> ls = set(l)\n>>> 3 in ls\nTrue\n\n",
"You can also use in syntax:\n>>> l = [1,2,3]\n>>> 1 in l\nTrue\n\n"
] |
[
9,
5,
4
] |
[] |
[] |
[
"find",
"iteration",
"list",
"python"
] |
stackoverflow_0000858109_find_iteration_list_python.txt
|
Q:
Dead-simple web authentication for a single user
I wrote a small internal web app using (a subset of) pylons. As it turns out, I now need to allow a user to access it from the web. This is not an application that was written to be web facing, and it has a bunch of gaping security holes.
What is the simplest way I can make sure this site is securely available to that user, but no one else?
I'm thinking something like apache's simple HTTP authentication, but more secure. (Is OpenID a good match?)
There is only one user. No need for any user management, not even to change password. Also, I trust the user not to damage the server (it's actually his).
If it was for me, I would just keep it behind the firewall and use ssh port forwarding, but I would like to have something simpler for this user.
EDIT: Hmm... judging by the answers, this should have been on serverfault. If a moderator is reading this, consider migrating it.
A:
if there's only a single user, using a certificate would probably be easiest.
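For example, with Apache's mod_ssl a handful of directives make the server demand a client certificate — a minimal sketch, with all paths as placeholders:
SSLEngine on
SSLCertificateFile    /etc/apache2/ssl/server.crt
SSLCertificateKeyFile /etc/apache2/ssl/server.key
SSLCACertificateFile  /etc/apache2/ssl/my-ca.crt
SSLVerifyClient require
SSLVerifyDepth  1
Only a browser holding a certificate signed by that CA gets through; the user installs his certificate once and never types a password.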
A:
How about VPN? There should be plenty of user-friendly VPN clients. He might already be familiar with the technology since many corporations use them to grant workers access to internal network while on the road.
A:
Basic HTTP authentication can be brute-forced easily by tools like Brutus. If his IP is static you can allow his IP and deny all others with .htaccess.
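If you do go the static-IP route, the .htaccess sketch is only a few lines (Apache 2.2 syntax; the address is a placeholder):
Order deny,allow
Deny from all
Allow from 203.0.113.45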
|
Dead-simple web authentication for a single user
|
I wrote a small internal web app using (a subset of) pylons. As it turns out, I now need to allow a user to access it from the web. This is not an application that was written to be web facing, and it has a bunch of gaping security holes.
What is the simplest way I can make sure this site is securely available to that user, but no one else?
I'm thinking something like apache's simple HTTP authentication, but more secure. (Is OpenID a good match?)
There is only one user. No need for any user management, not even to change password. Also, I trust the user not to damage the server (it's actually his).
If it was for me, I would just keep it behind the firewall and use ssh port forwarding, but I would like to have something simpler for this user.
EDIT: Hmm... judging by the answers, this should have been on serverfault. If a moderator is reading this, consider migrating it.
|
[
"if there's only a single user, using a certificate would probably be easiest.\n",
"How about VPN? There should be plenty of user-friendly VPN clients. He might already be familiar with the technology since many corporations use them to grant workers access to internal network while on the road.\n",
"Basic HTTP authentication can be bruteforced easily by tools like brutus. If his ip is static you can allow his ip and deny all others with htaccess. \n"
] |
[
8,
4,
2
] |
[] |
[] |
[
"authentication",
"python",
"security",
"web_applications"
] |
stackoverflow_0000858149_authentication_python_security_web_applications.txt
|
Q:
Datetime issue in Django
I am trying to add the datetime object of a person. Whenever the birth year is earlier than 1942, I get a strange error DataError: unable to parse time when reading the data back from the DB.
class Person(models.Model):
"""A simple class to hold the person info
"""
name = models.CharField(max_length=100)
born = models.DateTimeField(blank=True, null=True)
Whenever I try to add the born datetime object of a person born in the year 1929 and then try to read it, it fails.
Let me re-iterate that the data insertion works fine, but it fails during the read. I am assuming that something wrong is going on inside the DB.
I did a set of tests and found that it fails whenever I add a person born in or before the year 1940. It gives DataError: unable to parse time
I am using PostgreSQL.
Any sort of help would be appreciated. Thanks.
A:
The only thing I could come up with here can be found in the PostgreSQL docs. My guess is that Django is storing your date in a "reltime" field, which can only go back 68 years. My calculator verifies that 2009-68 == 1941, which seems very close to what you reported.
I would recommend looking over the schema of your table, by running the following command:
$ python manage.py sql appname
See if the datefield in question is a "reltime" field as I suspect. If this is the case, I'm at a loss about how to change it to a more compatible field without messing everything up.
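You can also inspect the live table straight from psql — manage.py dbshell opens it for you, and the table name below assumes Django's usual appname_modelname convention:
$ python manage.py dbshell
appdb=> \d appname_person
The \d output lists each column with its actual PostgreSQL type, so a reltime column will show up immediately.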
|
Datetime issue in Django
|
I am trying to add the datetime object of a person. Whenever the birth year is earlier than 1942, I get a strange error DataError: unable to parse time when reading the data back from the DB.
class Person(models.Model):
"""A simple class to hold the person info
"""
name = models.CharField(max_length=100)
born = models.DateTimeField(blank=True, null=True)
Whenever I try to add the born datetime object of a person born in the year 1929 and then try to read it, it fails.
Let me re-iterate that the data insertion works fine, but it fails during the read. I am assuming that something wrong is going on inside the DB.
I did a set of tests and found that it fails whenever I add a person born in or before the year 1940. It gives DataError: unable to parse time
I am using PostgreSQL.
Any sort of help would be appreciated. Thanks.
|
[
"The only thing I could come up with here can be found in the PostgreSQL docs. My guess is that Django is storing your date in a \"reltime\" field, which can only go back 68 years. My calculator verifies that 2009-68 == 1941, which seems very close to what you reported. \nI would recommend looking over the schema of your table, by running the following command:\n$ python manage.py sql appname\n\nSee if the datefield in question is a \"reltime\" field as I suspect. If this is the case, I'm at a loss about how to change it to a more compatable field without messing everything up.\n"
] |
[
6
] |
[] |
[] |
[
"datetime",
"django",
"postgresql",
"python"
] |
stackoverflow_0000858470_datetime_django_postgresql_python.txt
|
Q:
How to recognize whether a script is running on a tty?
I would like my script to act differently in an interactive shell session and when running with redirected stdout (for example when piped to some other command).
How do I recognize which of these two happen in a Python script?
Example of such behavior in existing program: grep --color=auto highlights matches when running in interactive shell, but doesn't when piped to something else.
A:
import os, sys
os.isatty(sys.stdout.fileno())
or
sys.stdout.isatty()
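A short sketch of the grep --color=auto behaviour from the question, using an ANSI escape when attached to a terminal (the colour code is illustrative):
import sys

def emit(line):
    if sys.stdout.isatty():                  # interactive: highlight
        print '\033[31m%s\033[0m' % line
    else:                                    # piped/redirected: plain
        print line

emit('hello')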
|
How to recognize whether a script is running on a tty?
|
I would like my script to act differently in an interactive shell session and when running with redirected stdout (for example when piped to some other command).
How do I recognize which of these two happen in a Python script?
Example of such behavior in existing program: grep --color=auto highlights matches when running in interactive shell, but doesn't when piped to something else.
|
[
"import os, sys\nos.isatty(sys.stdout.fileno())\n\nor\nsys.stdout.isatty()\n\n"
] |
[
87
] |
[] |
[] |
[
"python",
"shell"
] |
stackoverflow_0000858623_python_shell.txt
|
Q:
getopts Values class and Template.Substitute don't (immediately) work together
I have python code something like:
from string import Template
import optparse
def main():
usage = "usage: %prog options outputname"
p = optparse.OptionParser(usage)
p.add_option('--optiona', '-a', default="")
p.add_option('--optionb', '-b', default="")
options, arguments = p.parse_args()
t = Template('Option a is ${optiona} option b is ${optionb}')
print t.substitute(options)
But that gives me
AttributeError: Values instance has no attribute '__getitem__'
Because options is a Values and not a dictionary.
How do I neatly make this work?
(any other suggestions welcome, my pythonic sense is still being nurtured...)
A:
OptionParser.parse_args returns an object with the option variable names as attributes, rather than as dictionary keys. The error you're getting means that options does not support subscripting, which it would normally do by implementing __getitem__.
So, in other words, your options are at:
options.optiona
options.optionb
Rather than:
options['optiona']
options['optionb']
Template variable substitution expects a dict-like interface, so it's trying to find optiona and optionb using the latter approach.
Use vars as RoadieRich suggests in his answer to make the template substitution approach work. Alternatively, unless you really need a Template object, I'd recommend using a simple print:
print 'Option a is %s and option b is %s' % (options.optiona, options.optionb)
You can also combine the two approaches if you feel that named string parameters are better:
print 'Option a is %(optiona)s and option b is %(optionb)s' % vars(options)
A:
The following appears to work for me:
from string import Template
import optparse
def main():
usage = "usage: %prog options outputname"
p = optparse.OptionParser(usage)
p.add_option('--optiona', '-a', default="")
p.add_option('--optionb', '-b', default="")
options, arguments = p.parse_args()
t = Template('Option a is ${optiona} option b is ${optionb}')
print t.substitute(vars(options)) #I'm assuming the uppercase S was a typo.
|
getopts Values class and Template.Substitute don't (immediately) work together
|
I have python code something like:
from string import Template
import optparse
def main():
usage = "usage: %prog options outputname"
p = optparse.OptionParser(usage)
p.add_option('--optiona', '-a', default="")
p.add_option('--optionb', '-b', default="")
options, arguments = p.parse_args()
t = Template('Option a is ${optiona} option b is ${optionb}')
print t.substitute(options)
But that gives me
AttributeError: Values instance has no attribute '__getitem__'
Because options is a Values and not a dictionary.
How do I neatly make this work?
(any other suggestions welcome, my pythonic sense is still being nurtured...)
|
[
"OptionParser.parse_args returns an object with the option variable names as attributes, rather than as dictionary keys. The error you're getting means that options does not support subscripting, which it would normally do by implementing __getitem__.\nSo, in other words, your options are at:\noptions.optiona\noptions.optionb\n\nRather than:\noptions['optiona']\noptions['optionb']\n\nTemplate variable substitution expects a dict-like interface, so it's trying to find optiona and optionb using the latter approach.\nUse vars as RoadieRich suggests in his answer to make the template substitution approach work. Alternatively, unless you really need a Template object, I'd recommend using a simple print:\nprint 'Option a is %s and option b is %s' % (options.optiona, options.optionb)\n\nYou can also combine the two approaches if you feel that named string parameters are better:\nprint 'Option a is %(optiona)s and option b is %(optionb)s' % vars(options)\n\n",
"The following appears to work for me:\nfrom string import Template\nimport optparse\n\ndef main():\n usage = \"usage: %prog options outputname\"\n p = optparse.OptionParser(usage)\n p.add_option('--optiona', '-a', default=\"\")\n p.add_option('--optionb', '-b', default=\"\")\n options, arguments = p.parse_args()\n t = Template('Option a is ${optiona} option b is ${optionb}')\n print t.substitute(vars(options)) #I'm assuming the uppercase S was a typo.\n\n"
] |
[
6,
4
] |
[] |
[] |
[
"getopts",
"python",
"templates"
] |
stackoverflow_0000858784_getopts_python_templates.txt
|
Q:
How to redirect python warnings to a custom stream?
Let's say I have a file-like object like StringIO and want Python's warnings module to write all warning messages to it. How do I do that?
A:
Try reassigning warnings.showwarning i.e.
#!/sw/bin/python2.5
import warnings, sys
def customwarn(message, category, filename, lineno, file=None, line=None):
sys.stdout.write(warnings.formatwarning(message, category, filename, lineno))
warnings.showwarning = customwarn
warnings.warn("test warning")
will redirect all warnings to stdout.
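To capture them in a StringIO instead, as the question asks, point the same hook at any file-like object — a minimal sketch:
import warnings, StringIO

log = StringIO.StringIO()

def customwarn(message, category, filename, lineno, file=None, line=None):
    log.write(warnings.formatwarning(message, category, filename, lineno))

warnings.showwarning = customwarn
warnings.warn("test warning")
print log.getvalue()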
A:
I think something like this would work, although it's untested code and the interface looks like there is a cleaner way which eludes me at present:
import warnings
# defaults to the 'myStringIO' file
def my_warning_wrapper(message, category, filename, lineno, file=myStringIO, line=None):
warnings.show_warning(message, category, filename, lineno, file, line)
warnings._show_warning = my_warning_wrapper
A look inside Lib\warnings.py should help put you on the right track if that isn't enough.
A:
import sys
import StringIO
sys.stdout = StringIO.StringIO()
|
How to redirect python warnings to a custom stream?
|
Let's say I have a file-like object like StringIO and want Python's warnings module to write all warning messages to it. How do I do that?
|
[
"Try reassigning warnings.showwarning i.e.\n#!/sw/bin/python2.5\n\nimport warnings, sys\n\ndef customwarn(message, category, filename, lineno, file=None, line=None):\n sys.stdout.write(warnings.formatwarning(message, category, filename, lineno))\n\nwarnings.showwarning = customwarn\nwarnings.warn(\"test warning\")\n\nwill redirect all warnings to stdout.\n",
"I think something like this would work, although it's untested code and the interface looks like there is a cleaner way which eludes me at present:\nimport warnings\n\n# defaults to the 'myStringIO' file\ndef my_warning_wrapper(message, category, filename, lineno, file=myStringIO, line=None):\n warnings.show_warning(message, category, filename, lineno, file, line) \n\nwarnings._show_warning = my_warning_wrapper\n\nA look inside Lib\\warnings.py should help put you on the right track if that isn't enough.\n",
"import sys\nimport StringIO\n\nsys.stdout = StringIO.StringIO()\n\n"
] |
[
20,
0,
0
] |
[] |
[] |
[
"io",
"python",
"warnings"
] |
stackoverflow_0000858916_io_python_warnings.txt
|
Q:
Getting pdb-style caller information in python
Let's say I have the following method (in a class or a module, I don't think it matters):
def someMethod():
pass
I'd like to access the caller's state at the time this method is called.
traceback.extract_stack just gives me some strings about the call stack.
I'd like something like pdb in which I can set a breakpoint in someMethod() and then type 'u' to go up the call stack and then examine the state of the system then.
A:
I figured it out:
import inspect
def callMe():
tag = ''
frame = inspect.currentframe()
try:
tag = frame.f_back.f_locals['self']._tag
finally:
del frame
return tag
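More generally, everything pdb shows after 'u' lives on that caller frame — a minimal sketch (per the inspect docs, the del avoids reference cycles):
import inspect

def someMethod():
    frame = inspect.currentframe().f_back    # the caller's frame
    try:
        print frame.f_locals                 # the caller's local variables
        print frame.f_lineno                 # the line the call came from
    finally:
        del frame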
|
Getting pdb-style caller information in python
|
Let's say I have the following method (in a class or a module, I don't think it matters):
def someMethod():
pass
I'd like to access the caller's state at the time this method is called.
traceback.extract_stack just gives me some strings about the call stack.
I'd like something like pdb in which I can set a breakpoint in someMethod() and then type 'u' to go up the call stack and then examine the state of the system then.
|
[
"I figured it out:\nimport inspect\n\ndef callMe():\n tag = ''\n frame = inspect.currentframe()\n try:\n tag = frame.f_back.f_locals['self']._tag\n finally:\n del frame\n\n return tag\n\n"
] |
[
1
] |
[] |
[] |
[
"pdb",
"python",
"stack"
] |
stackoverflow_0000859280_pdb_python_stack.txt
|
Q:
whoami in python
What is the best way to find out the user that a python process is running under?
I could do this:
name = os.popen('whoami').read()
But that has to start a whole new process.
os.environ["USER"]
works sometimes, but sometimes that environment variable isn't set.
A:
import getpass
print(getpass.getuser())
See the documentation of the getpass module.
getpass.getuser()
Return the “login name” of the user. Availability: Unix, Windows.
This function checks the environment variables LOGNAME, USER,
LNAME and USERNAME, in order, and
returns the value of the first one
which is set to a non-empty string. If
none are set, the login name from the
password database is returned on
systems which support the pwd module,
otherwise, an exception is raised.
A:
This should work under Unix.
import os
print(os.getuid()) # numeric uid
import pwd
print(pwd.getpwuid(os.getuid())) # full /etc/passwd info
|
whoami in python
|
What is the best way to find out the user that a python process is running under?
I could do this:
name = os.popen('whoami').read()
But that has to start a whole new process.
os.environ["USER"]
works sometimes, but sometimes that environment variable isn't set.
|
[
"import getpass\nprint(getpass.getuser())\n\nSee the documentation of the getpass module.\n\ngetpass.getuser()\nReturn the “login name” of the user. Availability: Unix, Windows.\nThis function checks the environment variables LOGNAME, USER,\nLNAME and USERNAME, in order, and\nreturns the value of the first one\nwhich is set to a non-empty string. If\nnone are set, the login name from the\npassword database is returned on\nsystems which support the pwd module,\notherwise, an exception is raised.\n\n",
"This should work under Unix.\nimport os\nprint(os.getuid()) # numeric uid\nimport pwd\nprint(pwd.getpwuid(os.getuid())) # full /etc/passwd info\n\n"
] |
[
94,
20
] |
[] |
[] |
[
"posix",
"python"
] |
stackoverflow_0000860140_posix_python.txt
|
Q:
How do I GROUP BY on every given increment of a field value?
I have a Python application. It has an SQLite database, full of data about things that happen, retrieved by a Web scraper from the Web. This data includes time-date groups, as Unix timestamps, in a column reserved for them. I want to retrieve the names of organisations that did things and count how often they did them, but to do this for each week (i.e. 604,800 seconds) I have data for.
Pseudocode:
for each 604800-second increment in time:
select count(time), org from table group by org
Essentially what I'm trying to do is iterate through the database like a list sorted on the time column, with a step value of 604800. The aim is to analyse how the distribution of different organisations in the total changed over time.
If at all possible, I'd like to avoid pulling all the rows from the db and processing them in Python as this seems a) inefficient and b) probably pointless given that the data is in a database.
A:
Create a table listing all weeks since the epoch, and JOIN it to your table of events.
CREATE TABLE Weeks (
week INTEGER PRIMARY KEY
);
INSERT INTO Weeks (week) VALUES (200919); -- e.g. this week
SELECT w.week, e.org, COUNT(*)
FROM Events e JOIN Weeks w ON (w.week = strftime('%Y%W', e.time))
GROUP BY w.week, e.org;
There are only 52-53 weeks per year. Even if you populate the Weeks table for 100 years, that's still a small table.
A:
To do this in a set-based manner (which is what SQL is good at) you will need a set-based representation of your time increments. That can be a temporary table, a permanent table, or a derived table (i.e. subquery). I'm not too familiar with SQLite and it's been awhile since I've worked with UNIX. Timestamps in UNIX are just # seconds since some set date/time? Using a standard Calendar table (which is useful to have in a database)...
SELECT
C1.start_time,
C2.end_time,
T.org,
COUNT(time)
FROM
Calendar C1
INNER JOIN Calendar C2 ON
C2.start_time = DATEADD(dy, 6, C1.start_time)
INNER JOIN My_Table T ON
T.time BETWEEN C1.start_time AND C2.end_time -- You'll need to convert to timestamp here
WHERE
DATEPART(dw, C1.start_time) = 1 AND -- Basically, only get dates that are a Sunday or whatever other day starts your intervals
C1.start_time BETWEEN @start_range_date AND @end_range_date -- Period for which you're running the report
GROUP BY
C1.start_time,
C2.end_time,
T.org
The Calendar table can take whatever form you want, so you could use UNIX timestamps in it for the start_time and end_time. You just pre-populate it with all of the dates in any conceivable range that you might want to use. Even going from 1900-01-01 to 9999-12-31 won't be a terribly large table. It can come in handy for a lot of reporting type queries.
Finally, this code is T-SQL, so you'll probably need to convert the DATEPART and DATEADD to whatever the equivalent is in SQLite.
A:
Not being familiar with SQLite I think this approach should work for most databases, as it finds the weeknumber and subtracts the offset
SELECT org, ROUND(time/604800) - week_offset, COUNT(*)
FROM table
GROUP BY org, ROUND(time/604800) - week_offset
In Oracle I would use the following if time was a date column:
SELECT org, TO_CHAR(time, 'YYYY-IW'), COUNT(*)
FROM table
GROUP BY org, TO_CHAR(time, 'YYYY-IW')
SQLite probably has similar functionality that allows this kind of SELECT which is easier on the eye.
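For what it's worth, SQLite's strftime accepts Unix timestamps directly via the 'unixepoch' modifier, so a hedged SQLite version of the same idea (keeping the question's placeholder table name) would be:
SELECT org, strftime('%Y-%W', time, 'unixepoch') AS week, COUNT(*)
FROM table
GROUP BY org, week;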
|
How do I GROUP BY on every given increment of a field value?
|
I have a Python application. It has an SQLite database, full of data about things that happen, retrieved by a Web scraper from the Web. This data includes time-date groups, as Unix timestamps, in a column reserved for them. I want to retrieve the names of organisations that did things and count how often they did them, but to do this for each week (i.e. 604,800 seconds) I have data for.
Pseudocode:
for each 604800-second increment in time:
select count(time), org from table group by org
Essentially what I'm trying to do is iterate through the database like a list sorted on the time column, with a step value of 604800. The aim is to analyse how the distribution of different organisations in the total changed over time.
If at all possible, I'd like to avoid pulling all the rows from the db and processing them in Python as this seems a) inefficient and b) probably pointless given that the data is in a database.
|
[
"Create a table listing all weeks since the epoch, and JOIN it to your table of events.\nCREATE TABLE Weeks (\n week INTEGER PRIMARY KEY\n);\n\nINSERT INTO Weeks (week) VALUES (200919); -- e.g. this week\n\nSELECT w.week, e.org, COUNT(*)\nFROM Events e JOIN Weeks w ON (w.week = strftime('%Y%W', e.time))\nGROUP BY w.week, e.org;\n\nThere are only 52-53 weeks per year. Even if you populate the Weeks table for 100 years, that's still a small table.\n",
"To do this in a set-based manner (which is what SQL is good at) you will need a set-based representation of your time increments. That can be a temporary table, a permanent table, or a derived table (i.e. subquery). I'm not too familiar with SQLite and it's been awhile since I've worked with UNIX. Timestamps in UNIX are just # seconds since some set date/time? Using a standard Calendar table (which is useful to have in a database)...\nSELECT\n C1.start_time,\n C2.end_time,\n T.org,\n COUNT(time)\nFROM\n Calendar C1\nINNER JOIN Calendar C2 ON\n C2.start_time = DATEADD(dy, 6, C1.start_time)\nINNER JOIN My_Table T ON\n T.time BETWEEN C1.start_time AND C2.end_time -- You'll need to convert to timestamp here\nWHERE\n DATEPART(dw, C1.start_time) = 1 AND -- Basically, only get dates that are a Sunday or whatever other day starts your intervals\n C1.start_time BETWEEN @start_range_date AND @end_range_date -- Period for which you're running the report\nGROUP BY\n C1.start_time,\n C2.end_time,\n T.org\n\nThe Calendar table can take whatever form you want, so you could use UNIX timestamps in it for the start_time and end_time. You just pre-populate it with all of the dates in any conceivable range that you might want to use. Even going from 1900-01-01 to 9999-12-31 won't be a terribly large table. It can come in handy for a lot of reporting type queries.\nFinally, this code is T-SQL, so you'll probably need to convert the DATEPART and DATEADD to whatever the equivalent is in SQLite.\n",
"Not being familiar with SQLite I think this approach should work for most databases, as it finds the weeknumber and subtracts the offset \nSELECT org, ROUND(time/604800) - week_offset, COUNT(*)\nFROM table\nGROUP BY org, ROUND(time/604800) - week_offset\n\nIn Oracle I would use the following if time was a date column:\nSELECT org, TO_CHAR(time, 'YYYY-IW'), COUNT(*)\nFROM table\nGROUP BY org, TO_CHAR(time, 'YYYY-IW')\n\nSQLite probably has similar functionality that allows this kind of SELECT which is easier on the eye.\n"
] |
[
1,
1,
1
] |
[] |
[] |
[
"increment",
"iteration",
"python",
"sql",
"sqlite"
] |
stackoverflow_0000859489_increment_iteration_python_sql_sqlite.txt
|
Q:
How to debug a weird threaded open fifo issue?
A web service is configured to expose some of its data when receiving a USR1 signal. The signal will be sent by a xinetd server when it receives a request from a remote client, e.g. nc myserver 50666. When the web server receives USR1 signal, it opens a dedicated fifo pipe, writes its data to the pipe, and then close the pipe. In the meantime, the xinetd server reads the pipe and feeds to the remote client.
Most of the time they work nicely, but occasionally, for some reason, the client will receive duplicate records. From the log, it seems like the pipe did not get closed properly and leftover data was cached, so the next time it serves, both the previous and the current data are sent to the client. The problem is that it doesn't happen consistently; unluckily, I wasn't able to reproduce it even once.
The following are the simple snippets to demonstrate the process:
The web server: (webserver.py)
def SendStream(data, pipe):
try:
for i in data:
pipe.write(i + '\n')
pipe.flush()
finally:
pipe.close()
def Serve():
threading.Thread(target=SendStream, args=(data, pipe)).start()
The xinetd.d server: (spitter.py)
def Serve():
if not os.path.exists(PIPE_FILE):
os.mkfifo(PIPE_FILE)
os.kill(server_pid, signal.SIGUSR1)
for i in open(PIPE_FILE):
print i,
So what exactly happened to cause the dup? How do I trigger it? My current fix is to unlink the pipe file and recreate it every time to avoid any leftovers, but I don't know if that's a proper solution.
A:
If you get two copies of spitter.py running at the same time, there will be trouble and almost anything that happens to you is legal. Try adding a process id value to webserver.py, i.e.:
pipe.write(str(os.getpid()) + i + '\n')
That might be illuminating.
A:
There isn't enough to debug here. You don't show how the server handles signals, or opens the pipe.
If at all possible I would recommend not using signals. They're hairy enough in C, nevermind with python's own peculiarities added on top.
A:
So the real problem is that multiple clients exist. The server has been queried/abused by other unknown clients that weren't part of the initial agreement with customers, so of course it breaks under the current design. A fix has been deployed to address the issue. So Andy's suspicion is right. Thanks guys!
|
How to debug a weird threaded open fifo issue?
|
A web service is configured to expose some of its data when receiving a USR1 signal. The signal will be sent by a xinetd server when it receives a request from a remote client, e.g. nc myserver 50666. When the web server receives USR1 signal, it opens a dedicated fifo pipe, writes its data to the pipe, and then close the pipe. In the meantime, the xinetd server reads the pipe and feeds to the remote client.
Most of the time they work nicely, but occasionally, for some reason, the client will receive duplicate records. From the log, it seems like the pipe did not get closed properly and leftover data was cached, so the next time it serves, both the previous and the current data are sent to the client. The problem is that it doesn't happen consistently; unluckily, I wasn't able to reproduce it even once.
The following are the simple snippets to demonstrate the process:
The web server: (webserver.py)
def SendStream(data, pipe):
try:
for i in data:
pipe.write(i + '\n')
pipe.flush()
finally:
pipe.close()
def Serve():
threading.Thread(target=SendStream, args=(data, pipe)).start()
The xinetd.d server: (spitter.py)
def Serve():
if not os.path.exists(PIPE_FILE):
os.mkfifo(PIPE_FILE)
os.kill(server_pid, signal.SIGUSR1)
for i in open(PIPE_FILE):
print i,
So what exactly happened to cause the dup? How do I trigger it? My current fix is to unlink the pipe file and recreate it every time to avoid any leftovers, but I don't know if that's a proper solution.
|
[
"If you get two copies of splitter.py running at the same time, there will be trouble and almost anything that happens to you is legal. Try adding a process id value to webserver.py, ie:\npipe.write(str(os.getpid()) + i + '\\n')\nThat might be illuminating.\n",
"There isn't enough to debug here. You don't show how the server handles signals, or opens the pipe.\nIf at all possible I would recommend not using signals. They're hairy enough in C, nevermind with python's own peculiarities added on top.\n",
"So the real problem is that there are multiple clients exist. The server has been queried/abused from other unknown clients which weren't initially being agreed with customers and sure it will break under the current design. A fix has been deployed to address the issue. So Andy's suspicion is right. Thanks guys! \n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"fifo",
"inetd",
"multithreading",
"python",
"stream"
] |
stackoverflow_0000667500_fifo_inetd_multithreading_python_stream.txt
|
Q:
Emacs function to add symbol to __all__ in Python mode?
Is there an existing Emacs function that adds the symbol currently under the point to __all__ when editing Python code?
E.g., say the cursor was on the first o in foo:
# v---- cursor is on that 'o'
def foo():
return 42
If you did M-x python-add-to-all (or whatever) it would add 'foo' to __all__.
I didn't see one when I googled around, but, as always, maybe I'm missing something obvious.
Update
I tried out Trey Jackson's answer (thanks, Trey!) and made a few fixes/enhancements, in case anyone's interested (won't double-insert anymore, and puts __all__ in a more typical spot if it doesn't already exist):
(defun python-add-to-all ()
"Take the symbol under the point and add it to the __all__
list, if it's not already there."
(interactive)
(save-excursion
(let ((thing (thing-at-point 'symbol)))
(if (progn (goto-char (point-min))
(let (found)
(while (and (not found)
(re-search-forward (rx symbol-start "__all__" symbol-end
(0+ space) "=" (0+ space)
(syntax open-parenthesis))
nil t))
(setq found (not (python-in-string/comment))))
found))
(when (not (looking-at (rx-to-string
`(and (0+ (not (syntax close-parenthesis)))
(syntax string-quote) ,thing (syntax string-quote)))))
(insert (format "\'%s\', " thing)))
(beginning-of-buffer)
;; Put before any import lines, or if none, the first class or
;; function.
(when (not (re-search-forward (rx bol (or "import" "from") symbol-end) nil t))
(re-search-forward (rx symbol-start (or "def" "class") symbol-end) nil t))
(forward-line -1)
(insert (format "\n__all__ = [\'%s\']\n" thing))))))
A:
Not being a python programmer, I'm not sure this covers all the cases, but works for me in a simple case. It'll add the symbol to the array if the array exists, and create __all__ if it doesn't exist. Note: it does not parse the array to avoid double insertion.
(defun python-add-to-all ()
"take the symbol under the point and add to the __all__ routine"
(interactive)
(save-excursion
(let ((thing (thing-at-point 'word))
p)
(if (progn (goto-char (point-min))
(re-search-forward "^__all__ = \\[" nil t))
(insert (format "\"%s\", " thing))
(goto-char (point-min))
(insert (format "__all__ = [\"%s\"]\n" thing))))))
|
Emacs function to add symbol to __all__ in Python mode?
|
Is there an existing Emacs function that adds the symbol currently under the point to __all__ when editing Python code?
E.g., say the cursor was on the first o in foo:
# v---- cursor is on that 'o'
def foo():
return 42
If you did M-x python-add-to-all (or whatever) it would add 'foo' to __all__.
I didn't see one when I googled around, but, as always, maybe I'm missing something obvious.
Update
I tried out Trey Jackson's answer (thanks, Trey!) and made a few fixes/enhancements, in case anyone's interested (won't double-insert anymore, and puts __all__ in a more typical spot if it doesn't already exist):
(defun python-add-to-all ()
"Take the symbol under the point and add it to the __all__
list, if it's not already there."
(interactive)
(save-excursion
(let ((thing (thing-at-point 'symbol)))
(if (progn (goto-char (point-min))
(let (found)
(while (and (not found)
(re-search-forward (rx symbol-start "__all__" symbol-end
(0+ space) "=" (0+ space)
(syntax open-parenthesis))
nil t))
(setq found (not (python-in-string/comment))))
found))
(when (not (looking-at (rx-to-string
`(and (0+ (not (syntax close-parenthesis)))
(syntax string-quote) ,thing (syntax string-quote)))))
(insert (format "\'%s\', " thing)))
(beginning-of-buffer)
;; Put before any import lines, or if none, the first class or
;; function.
(when (not (re-search-forward (rx bol (or "import" "from") symbol-end) nil t))
(re-search-forward (rx symbol-start (or "def" "class") symbol-end) nil t))
(forward-line -1)
(insert (format "\n__all__ = [\'%s\']\n" thing))))))
|
[
"Not being a python programmer, I'm not sure this covers all the cases, but works for me in a simple case. It'll add the symbol to the array if the array exists, and create __all__ if it doesn't exist. Note: it does not parse the array to avoid double insertion.\n(defun python-add-to-all ()\n \"take the symbol under the point and add to the __all__ routine\"\n (interactive)\n (save-excursion\n (let ((thing (thing-at-point 'word))\n p)\n (if (progn (goto-char (point-min))\n (re-search-forward \"^__all__ = \\\\[\" nil t))\n (insert (format \"\\\"%s\\\", \" thing))\n (goto-char (point-min))\n (insert (format \"__all__ = [\\\"%s\\\"]\\n\" thing))))))\n\n"
] |
[
10
] |
[] |
[] |
[
"emacs",
"python"
] |
stackoverflow_0000860357_emacs_python.txt
|
Q:
How use __setattr__ & __getattr__ for map INI values?
I want to map a INI file as a python object. So if the file have:
[UserOptions]
SampleFile = sample.txt
SamplePort = 80
SampleInt = 1
Sample = Aja
SampleDate = 10/02/2008
Then I want:
c = Configuration('sample.ini')
c.UserOptions.SamplePort = 90
I'm looking at __setattr__ but I get a recursion error.
This is what I have:
class Configuration:
def __init__ (self, fileName):
cp = SafeConfigParser()
cp.read(fileName)
self.__parser = cp
self.fileName = fileName
def __getattr__ (self, name):
if name in self.__parser.sections():
return Section(name, self.__parser)
else:
return None
def __str__ (self):
p = self.__parser
result = []
result.append('<Configuration from %s>' % self.fileName)
for s in p.sections():
result.append('[%s]' % s)
for o in p.options(s):
result.append('%s=%s' % (o, p.get(s, o)))
return '\n'.join(result)
class Section:
def __init__ (self, name, parser):
self.__name = name
self.__parser = parser
def __getattr__ (self, name):
if self.__dict__.has_key(name): # any normal attributes are handled normally
return __getattr__(self, item)
else:
return self.__parser.get(self.name, name)
def __setattr__(self, item, value):
"""Maps attributes to values.
Only if we are initialised
"""
if self.__dict__.has_key(item): # any normal attributes are handled normally
dict.__setattr__(self, item, value)
else:
self.__parser.set('UserOptions',item, value)
Now I wonder why I get the error in self.__parser.set('UserOptions', item, value). I read the Python docs and I don't get what to do. I suspect that I need to store a dict with the field names and look there first, but how?
A:
You are trying to get the sections on request, but it is much easier to iterate over the sections and options and add them as attributes in __init__. I edited my example to support setattr as well. Your problem is explained here: you are assigning the attributes in __setattr__ when you should use __dict__ instead
from ConfigParser import SafeConfigParser
class Section:
def __init__(self, name, parser):
self.__dict__['name'] = name
self.__dict__['parser'] = parser
def __setattr__(self, attr, value):
self.__dict__[attr] = str(value)
self.parser.set(self.name, attr, str(value))
class Configuration(object):
def __init__(self, fileName):
self.__parser = SafeConfigParser()
self.__parser.read(fileName)
self.fileName = fileName
for section in self.__parser.sections():
setattr(self, section, Section(section, self.__parser))
for option in self.__parser.options(section):
setattr(getattr(self, section), option,
self.__parser.get(section, option))
def __getattr__(self, attr):
self.__parser.add_section(attr)
setattr(self, attr, Section(attr, self.__parser))
return getattr(self, attr)
def save(self):
f = open(self.fileName, 'w')
self.__parser.write(f)
f.close()
c = Configuration('config.ini')
print dir(c) -> will print all sections
print dir(c.UserOptions) -> will print all user options
print c.UserOptions.sampledate
c.new.value = 10
c.save()
A:
Your problem is in Section.__init__. When you set self.__name = name it calls your __setattr__ method, doesn't find the key in __dict__ so it goes to
self.__parser.set('UserOptions',item, value)
So now it needs self.__parser.
Which hasn't been set yet. So it tries to get it using __getattr__. Which sends it looking for self.__parser. Which hasn't been set yet. So it tries to get it using __getattr__. So... you get the point :-)
One way to avoid this is by adding a condition in Section.__setattr__ like this
if item.startswith('_') or self.__dict__.has_key(item):
^^^^^^^^^^^^^^^^^^^^^^^
...
which will make sure __name and __parser are set properly on initialization.
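Another way — the one the other answer's code takes — is to write the initial attributes straight into __dict__ so __setattr__ never fires during initialisation. Note that because of name mangling, self.__name inside class Section is really stored under the key '_Section__name' (a minimal sketch):
class Section:
    def __init__(self, name, parser):
        # writing to __dict__ directly bypasses __setattr__
        self.__dict__['_Section__name'] = name      # mangled form of self.__name
        self.__dict__['_Section__parser'] = parser  # mangled form of self.__parser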
|
How use __setattr__ & __getattr__ for map INI values?
|
I want to map an INI file to a Python object. So if the file has:
[UserOptions]
SampleFile = sample.txt
SamplePort = 80
SampleInt = 1
Sample = Aja
SampleDate = 10/02/2008
Then I want:
c = Configuration('sample.ini')
c.UserOptions.SamplePort = 90
I'm looking at __setattr__ but I get a recursion error.
This is what I have:
class Configuration:
def __init__ (self, fileName):
cp = SafeConfigParser()
cp.read(fileName)
self.__parser = cp
self.fileName = fileName
def __getattr__ (self, name):
if name in self.__parser.sections():
return Section(name, self.__parser)
else:
return None
def __str__ (self):
p = self.__parser
result = []
result.append('<Configuration from %s>' % self.fileName)
for s in p.sections():
result.append('[%s]' % s)
for o in p.options(s):
result.append('%s=%s' % (o, p.get(s, o)))
return '\n'.join(result)
class Section:
def __init__ (self, name, parser):
self.__name = name
self.__parser = parser
def __getattr__ (self, name):
if self.__dict__.has_key(name): # any normal attributes are handled normally
return __getattr__(self, item)
else:
return self.__parser.get(self.name, name)
def __setattr__(self, item, value):
"""Maps attributes to values.
Only if we are initialised
"""
if self.__dict__.has_key(item): # any normal attributes are handled normally
dict.__setattr__(self, item, value)
else:
self.__parser.set('UserOptions',item, value)
Now I wonder why I get the error in self.__parser.set('UserOptions', item, value). I read the Python docs and I don't get what to do. I suspect that I need to store a dict with the field names and look there first, but how?
|
[
"You are trying to get the sections on request. But it is much easier to iterate over sections and options and add them as attribute in __init__. I edited my example to support setattr as well. You problem is explained here you are assigning the attributes in __setattr__ while you should use __dict__ instead\nfrom ConfigParser import SafeConfigParser\n\nclass Section:\n def __init__(self, name, parser):\n self.__dict__['name'] = name\n self.__dict__['parser'] = parser\n\n def __setattr__(self, attr, value):\n self.__dict__[attr] = str(value)\n self.parser.set(self.name, attr, str(value))\n\nclass Configuration(object):\n def __init__(self, fileName):\n self.__parser = SafeConfigParser()\n self.__parser.read(fileName)\n self.fileName = fileName\n for section in self.__parser.sections():\n setattr(self, section, Section(section, self.__parser))\n for option in self.__parser.options(section):\n setattr(getattr(self, section), option,\n self.__parser.get(section, option))\n\n def __getattr__(self, attr):\n self.__parser.add_section(attr)\n setattr(self, attr, Section(attr, self.__parser))\n return getattr(self, attr)\n\n def save(self):\n f = open(self.fileName, 'w')\n self.__parser.write(f)\n f.close()\n\nc = Configuration('config.ini')\n\nprint dir(c) -> will print all sections\nprint dir(c.UserOptions) -> will print all user options\nprint c.UserOptions.sampledate\n\nc.new.value = 10\nc.save()\n\n",
"Your problem is in Section.__init__. When you set self.__name = name it calls your __setattr__ method, doesn't find the key in __dict__ so it goes to\n self.__parser.set('UserOptions',item, value)\n\nSo now it needs self.__parser. \nWhich hasn't been set yet. So it tries to get it using __getattr__. Which sends it looking for self.__parser. Which hasn't been set yet. So it tries to get it using __getattr__. So... you get the point :-)\nOne way to avoid this is by adding a condition in Section.__setattr__ like this\nif item.startswith('_') or self.__dict__.has_key(item):\n ^^^^^^^^^^^^^^^^^^^^^^^\n ...\n\nwhich will make sure __name and __parser are set properly on initialization.\n"
] |
[
4,
4
] |
[] |
[] |
[
"configuration_files",
"python"
] |
stackoverflow_0000860744_configuration_files_python.txt
|
Q:
Lay out import pathing in Python, straight and simple?
If a group of Python developers wants to put their shared code somewhere, in a hierarchical structure, what's the structure, and what's the related "import" syntax?
Does java-style reference work in Python also? I.e., do directories correspond to dots?
What is standard setup for an internal-use-only library of Python code, and what's the syntax for imports from that library area, say 3 levels deep?
I've read Learning Python, saw PYTHONPATH, been fiddling with code for a few weeks now, love it, but I'm just dense on "import" beyond trivial cases. If too general send me back to the books.
A:
What we do.
Development
c:\someroot\project\thing\__init__.py # makes thing a package
c:\someroot\project\thing\foo.py
c:\someroot\project\thing\bar.py
Our "environment" (set in a variety of ways
SET PYTHONPATH="C:\someroot\project"
Some file we're working on
import thing.foo
import thing.bar
Deployment
/opt/someroot/project/project-1.1/thing/__init__.py # makes thing a package
/opt/someroot/project/project-1.1/thing/foo.py
/opt/someroot/project/project-1.1/thing/bar.py
Our "environment" (set in a variety of ways
SET PYTHONPATH="/opt/someroot/project/project-1.1"
This allows multiple versions to be deployed side-by-side.
Each of the various "things" are designed to be separate, reusable packages.
A:
If a group of Python developers wants to put their shared code somewhere, in a hierarchical structure, what's the structure, and what's the related "import" syntax?
You put it in your C:\python26\Lib\site-packages\ directory under your own folder.
Inside that folder you should include an __init__.py file, which will be run upon import; this can be empty.
Does java-style reference work in Python also? I.e., do directories correspond to dots?
Yes as long as the directories contain __init__.py files.
What is standard setup for an internal-use-only library of Python code, and what's the syntax for imports from that library area, say 3 levels deep?
MyCompany/MyProject/ -> import MyCompany.MyProject
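For a concrete picture, here is a hypothetical three-levels-deep layout (all of these names are made up) and the matching import:
# Somewhere on PYTHONPATH (or in site-packages):
#   mycompany/__init__.py
#   mycompany/mylib/__init__.py
#   mycompany/mylib/text/__init__.py
#   mycompany/mylib/text/helpers.py
from mycompany.mylib.text import helpers  # three levels deep
helpers.slugify("Some Title")  # slugify is a hypothetical function in helpers.py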
|
Lay out import pathing in Python, straight and simple?
|
If a group of Python developers wants to put their shared code somewhere, in a hierarchical structure, what's the structure, and what's the related "import" syntax?
Does java-style reference work in Python also? I.e., do directories correspond to dots?
What is standard setup for an internal-use-only library of Python code, and what's the syntax for imports from that library area, say 3 levels deep?
I've read Learning Python, saw PYTHONPATH, been fiddling with code for a few weeks now, love it, but I'm just dense on "import" beyond trivial cases. If too general send me back to the books.
|
[
"What we do.\nDevelopment\n\nc:\\someroot\\project\\thing__init__.py # makes thing a package\nc:\\someroot\\project\\thing\\foo.py\nc:\\someroot\\project\\thing\\bar.py\n\nOur \"environment\" (set in a variety of ways\nSET PYTHONPATH=\"C:\\someroot\\project\"\n\nSome file we're working on\n import thing.foo\n import thing.bar\n\nDeployment\n\n/opt/someroot/project/project-1.1/thing/init.py # makes thing a package\n/opt/someroot/project/project-1.1/thing/foo.py\n/opt/someroot/project/project-1.1/thing/bar.py\n\nOur \"environment\" (set in a variety of ways\nSET PYTHONPATH=\"/opt/someroot/project/project-1.1\"\n\nThis allows multiple versions to be deployed side-by-side.\nEach of the various \"things\" are designed to be separate, reusable packages.\n",
"\nIf a group of Python developers wants to put their shared code somewhere, in a hierarchical structure, what's the structure, and what's the related \"import\" syntax?\n\nYou put it in your C:\\python26\\Lib\\site-packages\\ directory under your own folder.\nInside that folder you should include an __init__.py file which will be run upon import, this can be empty.\n\nDoes java-style reference work in Python also? I.e., do directories correspond to dots? \n\nYes as long as the directories contain __init__.py files.\n\nWhat is standard setup for an internal-use-only library of Python code, and what's the syntax for imports from that library area, say 3 levels deep?\n\nMyCompany/MyProject/ -> import MyCompany.MyProject\n\n"
] |
[
7,
3
] |
[] |
[] |
[
"import",
"python"
] |
stackoverflow_0000860672_import_python.txt
|
Q:
Is there a Python equivalent to Java's AWT Robot class?
Does anyone know of a Python class similar to Java Robot?
Specifically I would like to perform a screen grab in Ubuntu, and eventually track mouse clicks and keyboard presses (although that's a slightly different question).
A:
If you have GTK, then you can use the gtk.gdk.Display class to do most of the work. It controls the keyboard/mouse pointer grabs and manages a set of gtk.gdk.Screen objects.
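For the screen grab part, a minimal PyGTK sketch (assumes an X display; the output path is just an example):
import gtk.gdk

root = gtk.gdk.get_default_root_window()
width, height = root.get_size()
pixbuf = gtk.gdk.Pixbuf(gtk.gdk.COLORSPACE_RGB, False, 8, width, height)
# copy the contents of the root window into the pixbuf
pixbuf = pixbuf.get_from_drawable(root, root.get_colormap(), 0, 0, 0, 0, width, height)
if pixbuf is not None:
    pixbuf.save("screenshot.png", "png")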
A:
Check out GNU LDTP:
GNU/Linux Desktop Testing Project (GNU
LDTP) is aimed at producing high
quality test automation framework
[...]
Especially Writing LDTP test scripts in Python scripting language
A:
As far as the screen grab, see this answer. That worked for me. Other answers to the same question might be of interest as well.
|
Is there a Python equivalent to Java's AWT Robot class?
|
Does anyone know of a Python class similar to Java Robot?
Specifically I would like to perform a screen grab in Ubuntu, and eventually track mouse clicks and keyboard presses (although that's a slightly different question).
|
[
"If you have GTK, then you can use the gtk.gdk.Display class to do most of the work. It controls the keyboard/mouse pointer grabs a set of gtk.gdk.Screen objects.\n",
"Check out GNU LDTP:\n\nGNU/Linux Desktop Testing Project (GNU\n LDTP) is aimed at producing high\n quality test automation framework\n [...]\n\nEspecially Writing LDTP test scripts in Python scripting language\n",
"As far as the screen grab, see this answer. That worked for me. Other answers to the same question might be of interest as well.\n"
] |
[
6,
3,
0
] |
[
"Check out the RobotFramework. I do not know if it will do the same things as JavaRobot, or if it will do more. But it is easy and very flexible to use.\n"
] |
[
-1
] |
[
"automation",
"awtrobot",
"linux",
"python",
"screenshot"
] |
stackoverflow_0000860013_automation_awtrobot_linux_python_screenshot.txt
|
Q:
How do capture groups work? (wrt python regular expressions)
While using regex to help solve a problem in the Python Challenge, I came across some behaviour that confused me.
from here:
(...) Matches whatever regular expression is inside the parentheses.
and
'+' Causes the resulting RE to match 1 or more repetitions of the preceding RE.
So this makes sense:
>>>import re
>>>re.findall(r"(\d+)", "1111112")
['1111112']
But this doesn't:
>>> re.findall(r"(\d)+", "1111112")
['2']
I realise that findall returns only groups when groups are present in the regex, but why is only the '2' returned? What happens to all the 1's in the match?
A:
Because you only have one capturing group, but it's "run" repeatedly, the new matches are repeatedly entered into the "storage space" for that group. In other words, the 1s were lost when they were "overwritten" by subsequent 1s and eventually the 2.
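A minimal sketch that shows the overwriting with re.search, where both the whole match and the group's final value are visible:
import re

m = re.search(r"(\d)+", "1111112")
print m.group(0)  # '1111112' - the whole match is intact
print m.group(1)  # '2' - the group kept only its last repetition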
A:
You are repeating the group itself by appending '+' after ')'. I do not know the implementation details, but it matches 7 times and returns only the last match.
In the first one, you are matching 7 digits, and making it a group.
|
How do capture groups work? (wrt python regular expressions)
|
While using regex to help solve a problem in the Python Challenge, I came across some behaviour that confused me.
from here:
(...) Matches whatever regular expression is inside the parentheses.
and
'+' Causes the resulting RE to match 1 or more repetitions of the preceding RE.
So this makes sense:
>>>import re
>>>re.findall(r"(\d+)", "1111112")
['1111112']
But this doesn't:
>>> re.findall(r"(\d)+", "1111112")
['2']
I realise that findall returns only groups when groups are present in the regex, but why is only the '2' returned? What happens to all the 1's in the match?
|
[
"Because you only have one capturing group, but it's \"run\" repeatedly, the new matches are repeatedly entered into the \"storage space\" for that group. In other words, the 1s were lost when they were \"overwritten\" by subsequent 1s and eventually the 2.\n",
"You are repeating the group itself by appending '+' after ')', I do not know the implementation details but it matches 7 times, and returns only the last match.\nIn the first one, you are matching 7 digits, and making it a group.\n"
] |
[
10,
1
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0000861060_python_regex.txt
|
Q:
How can I find the locations of an item in a Python list of lists?
I want to find the location(s) of a specific item in a list of lists. It should return a list of tuples, where each tuple represents the indexes for a specific instance of the item. For example:
list = [['1', '2', '4', '6'], ['7', '0', '1', '4']]
getPosition('1') #returns [(0, 0), (1, 2)]
and getPosition('7') #returns [(1,0)]
A:
If you want something that will both
find duplicates and
handle nested lists (lists of lists of lists of ...)
you can do something like the following:
def get_positions(xs, item):
if isinstance(xs, list):
for i, it in enumerate(xs):
for pos in get_positions(it, item):
yield (i,) + pos
elif xs == item:
yield ()
Testing this:
>>> xs = [['1', '2', '4', '6'],
... ['7', '0', '1', '4'],
... [ [ '0', '1', '1'], ['1']]
... ]
>>> print list(get_positions(xs, '1'))
[(0, 0), (1, 2), (2, 0, 1), (2, 0, 2), (2, 1, 0)]
A:
It looks likes you want, for a list of sublists and a given item, to return a list of pairs where each pair is (the index of the sublist, the index of the item within the sublist). You can do that using list comprehensions and Python's built in enumerate() function:
def getPosition(list, item):
return [(i, sublist.index(item)) for i, sublist in enumerate(list)]
Edit: See @scribble's answer above/below.
A:
def getPosition(list, item):
return [(i, sublist.index(item)) for i, sublist in enumerate(list)
if item in sublist]
A:
def get_positions(xs, target):
return [(i,e.index(target)) for i,e in enumerate(xs)]
That's a good starting point. Presumably you have some sort of class such as
class SomeClass:
def __init__(self):
self.xs = [['1','2','4','6'], ['7','0','1','4']]
def get_positions(self, target):
return [(i,e.index(target)) for i,e in enumerate(self.xs)]
which in this case would let you say
model = SomeClass()
model.get_position(1) # returns [(0,0), (1,2)]
Note that in both cases you'll get an exception if your target isn't in every one of your sublists. The question does not specify whether this is the desired behavior.
A:
If you don't want an exception when the item is not in the list, try this. Also as a generator, because they are cool and versatile.
xs = [['1', '2', '4', '6'], ['7', '0', '1', '4']]
def get_positions(xs, item):
for i, xt in enumerate( xs ):
try: # trying beats checking
yield (i, xt.index(item))
except ValueError:
pass
print list(get_positions(xs, '1'))
print list(get_positions(xs, '6'))
# Edit for fun: The one-line version, without try:
get_positions2 = lambda xs,item: ((i,xt.index(item)) for i, xt in enumerate(xs) if item in xt)
print list(get_positions2(xs, '1'))
print list(get_positions2(xs, '6'))
A:
A while ago I wrote a library for python to do list matching that would fit the bill pretty well. It used the tokens ?, +, and * as wildcards, where ? signifies a single atom, + is a non-greedy one-or-more, and * is greedy one-or-more. For example:
from matching import match
match(['?', 2, 3, '*'], [1, 2, 3, 4, 5])
=> [1, [4, 5]]
match([1, 2, 3], [1, 2, 4])
=> MatchError: broken at 4
match([1, [2, 3, '*']], [1, [2, 3, 4]])
=> [[4]]
match([1, [2, 3, '*']], [1, [2, 3, 4]], True)
=> [1, 2, 3, [4]]
Download it here: http://www.artfulcode.net/wp-content/uploads/2008/12/matching.zip
A:
Here is a version without try..except, returning an iterator, which for
[['1', '1', '1', '1'], ['7', '0', '4']]
returns
[(0, 0), (0, 1), (0, 2), (0, 3)]
def getPosition1(l, val):
for row_nb, r in enumerate(l):
for col_nb in (x for x in xrange(len(r)) if r[x] == val):
yield row_nb, col_nb
A:
The most straightforward and probably the slowest way to do it would be:
>>> value = '1'
>>> l = [['1', '2', '3', '4'], ['3', '4', '5', '1']]
>>> m = []
>>> for i in range(len(l)):
... for j in range(len(l[i])):
... if l[i][j] == value:
... m.append((i,j))
...
>>> m
[(0, 0), (1, 3)]
A:
Here is another straightforward method that doesn't use generators.
def getPosition(lists,item):
positions = []
for i,li in enumerate(lists):
j = -1
try:
while True:
j = li.index(item,j+1)
positions.append((i,j))
except ValueError:
pass
return positions
l = [['1', '2', '4', '6'], ['7', '0', '1', '4']]
getPosition(l,'1') #returns [(0, 0), (1, 2)]
getPosition(l,'9') # returns []
l = [['1', '1', '1', '1'], ['7', '0', '1', '4']]
getPosition(l,'1') #returns [(0, 0), (0, 1), (0,2), (0,3), (1,2)]
|
How can I find the locations of an item in a Python list of lists?
|
I want to find the location(s) of a specific item in a list of lists. It should return a list of tuples, where each tuple represents the indexes for a specific instance of the item. For example:
list = [['1', '2', '4', '6'], ['7', '0', '1', '4']]
getPosition('1') #returns [(0, 0), (1, 2)]
and getPosition('7') #returns [(1,0)]
|
[
"If you want something that will both \n\nfind duplicates and \nhandle nested lists (lists of lists of lists of ...)\n\nyou can do something like the following:\ndef get_positions(xs, item):\n if isinstance(xs, list):\n for i, it in enumerate(xs):\n for pos in get_positions(it, item):\n yield (i,) + pos\n elif xs == item:\n yield ()\n\nTesting this:\n>>> xs = [['1', '2', '4', '6'],\n... ['7', '0', '1', '4'],\n... [ [ '0', '1', '1'], ['1']]\n... ]\n>>> print list(get_positions(xs, '1'))\n[(0, 0), (1, 2), (2, 0, 1), (2, 0, 2), (2, 1, 0)]\n\n",
"It looks likes you want, for a list of sublists and a given item, to return a list of pairs where each pair is (the index of the sublist, the index of the item within the sublist). You can do that using list comprehensions and Python's built in enumerate() function:\ndef getPosition(list, item):\n return [(i, sublist.index(item)) for i, sublist in enumerate(list)]\n\nEdit: See @scribble's answer above/below.\n",
"def getPosition(list, item):\n return [(i, sublist.index(item)) for i, sublist in enumerate(list) \n if item in sublist]\n\n",
"def get_positions(xs, target):\n return [(i,e.index(target)) for i,e in enumerate(xs)]\n\nThat's a good starting point. Presumably you have some sort of class such as\nclass SomeClass:\n def __init__(self):\n self.xs = [['1','2','4','6'], ['7','0','1','4']]\n\n def get_positions(self, target):\n return [(i,e.index(target)) for i,e in enumerate(self.xs)]\n\nwhich in this case would let you say\nmodel = SomeClass()\nmodel.get_position(1) # returns [(0,0), (1,2)]\n\nNote that in both cases you'll get an exception if your target isn't in every one of your sublists. The question does not specify whether this is the desired behavior.\n",
"If you don't want a exception if the item is not in the list try this. Also as a generator because they are cool and versatile.\nxs = [['1', '2', '4', '6'], ['7', '0', '1', '4']]\ndef get_positions(xs, item):\n for i, xt in enumerate( xs ):\n try: # trying beats checking\n yield (i, xt.index(item))\n except ValueError: \n pass\n\nprint list(get_positions(xs, '1'))\nprint list(get_positions(xs, '6'))\n\n# Edit for fun: The one-line version, without try:\n\nget_positions2 = lambda xs,item: ((i,xt.index(item)) for i, xt in enumerate(xs) if item in xt)\n\nprint list(get_positions2(xs, '1'))\nprint list(get_positions2(xs, '6'))\n\n",
"A while ago I wrote a library for python to do list matching that would fit the bill pretty well. It used the tokens ?, +, and * as wildcards, where ? signifies a single atom, + is a non-greedy one-or-more, and * is greedy one-or-more. For example:\nfrom matching import match\n\nmatch(['?', 2, 3, '*'], [1, 2, 3, 4, 5])\n=> [1, [4, 5]]\n\nmatch([1, 2, 3], [1, 2, 4])\n=> MatchError: broken at 4\n\nmatch([1, [2, 3, '*']], [1, [2, 3, 4]])\n=> [[4]]\n\nmatch([1, [2, 3, '*']], [1, [2, 3, 4]], True)\n=> [1, 2, 3, [4]]\n\nDownload it here: http://www.artfulcode.net/wp-content/uploads/2008/12/matching.zip\n",
"Here is a version without try..except, returning an iterator and that for\n[['1', '1', '1', '1'], ['7', '0', '4']]\n\nreturns \n[(0, 0), (0, 1), (0, 2), (0, 3)] \n\n\ndef getPosition1(l, val):\n for row_nb, r in enumerate(l):\n for col_nb in (x for x in xrange(len(r)) if r[x] == val):\n yield row_nb, col_nb\n\n",
"The most strainghtforward and probably the slowest way to do it would be:\n >>> value = '1'\n >>> l = [['1', '2', '3', '4'], ['3', '4', '5', '1']]\n >>> m = []\n >>> for i in range(len(l)):\n ... for j in range(len(l[i])):\n ... if l[i][j] == value:\n ... m.append((i,j))\n ...\n >>> m\n [(0, 0), (1, 3)]\n\n",
"Here is another straight forward method that doesn't use generators.\ndef getPosition(lists,item):\n positions = []\n for i,li in enumerate(lists):\n j = -1\n try:\n while True:\n j = li.index(item,j+1)\n positions.append((i,j))\n except ValueError:\n pass\n return positions\n\nl = [['1', '2', '4', '6'], ['7', '0', '1', '4']]\ngetPosition(l,'1') #returns [(0, 0), (1, 2)]\ngetPosition(l,'9') # returns []\n\nl = [['1', '1', '1', '1'], ['7', '0', '1', '4']]\ngetPosition(l,'1') #returns [(0, 0), (0, 1), (0,2), (0,3), (1,2)]\n\n"
] |
[
8,
6,
4,
3,
3,
1,
1,
1,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000853023_python.txt
|
Q:
A class method which behaves differently when called as an instance method?
I'm wondering if it's possible to make a method which behaves differently when called as a class method than when called as an instance method.
For example, as a skills-improvement project, I'm writing a Matrix class (yes, I know there are perfectly good matrix classes already out there). I've created a class method for it called identity which returns an identity matrix of a specified size.
Now, when called on an instance of Matrix, it seems logical that the size shouldn't need to be specified; it should return an identity matrix of the same size as the Matrix it's called on.
In other words, I'd like to define a method which can determine whether it was called via an instance and, if so, access that instance's attributes. Unfortunately, even after digging through the documentation and a few Google searches, I haven't found anything which suggests this is possible. Does anyone know differently?
Edit:
Wow! Clearly, I'm still not quite used to first-class functions. Here's what I ended up with — thanks to Unknown for providing the key!
class Foo(object):
def __init__(self, bar):
self.baz = bar
self.bar = MethodType(lambda self: self.__class__.bar(self.baz), self, self.__class__)
@classmethod
def bar(cls, baz):
return 5 * baz
Foo.bar(3) # returns 15
foo = Foo(7)
foo.bar() # returns 35
Edit 2:
Just a quick note — this technique (and most of those presented below) won't work on classes which define __slots__, as you cannot reassign the method.
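A minimal sketch of why __slots__ gets in the way (Slotted is a made-up class):
class Slotted(object):
    __slots__ = ('baz',)  # only 'baz' may ever be assigned on instances

s = Slotted()
s.baz = 7          # fine: declared in __slots__
s.bar = lambda: 7  # AttributeError: 'Slotted' object has no attribute 'bar'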
A:
Questionably useful Python hacks are my forte.
from types import *
class Foo(object):
def __init__(self):
self.bar = methodize(bar, self)
self.baz = 999
@classmethod
def bar(cls, baz):
return 2 * baz
def methodize(func, instance):
return MethodType(func, instance, instance.__class__)
def bar(self):
return 4*self.baz
>>> Foo.bar(5)
10
>>> a=Foo()
>>> a.bar()
3996
A:
[edited: use attribute to be a more direct answer; see the helpful comment by John Fouhy]
You can use a descriptor to do what you want:
class cls_or_inst_method(object):
def __init__(self, class_method, instance_method):
self.class_method = class_method
self.instance_method = instance_method
def __get__(self, obj, objtype):
if obj is None:
return self.class_method
else:
return lambda: self.instance_method(obj)
def my_class_method(baz):
return baz + 1
def my_instance_method(self):
return self.baz * 2
class Foo(object):
baz = 10
bar = cls_or_inst_method(my_class_method, my_instance_method)
Using the above:
>>> print Foo.bar(5)
6
>>> my_foo = Foo()
>>> print my_foo.bar()
20
A:
@Unknown What's the difference between yours and this:
class Foo(object):
def _bar(self, baz):
print "_bar, baz:", baz
def __init__(self, bar):
self.bar = self._bar
self.baz = bar
@classmethod
def bar(cls, baz):
print "bar, baz:", baz
In [1]: import foo
In [2]: f = foo.Foo(42)
In [3]: f.bar(1)
_bar, baz: 1
In [4]: foo.Foo.bar(1)
bar, baz: 1
In [5]: f.__class__.bar(1)
bar, baz: 1
A:
I think the larger problem is that you are overloading the name 'bar' on the class 'Foo', something Python doesn't allow. The second definition of 'bar' clobbers the first definition of 'bar'.
Try to think of unique names for your class method and instance method, e.g.
@classmethod
def create(cls, baz):
...
def rubber_stamp(self):
...
A:
You can reassign your identity method in __init__ with a short lambda function:
class Matrix(object):
def __init__(self):
self.identity = lambda s=self:s.__class__.identity(s)
#...whatever initialization code you have...
self.size = 10
@classmethod
def identity(self, other):
#...always do you matrix calculations on 'other', not 'self'...
return other.size
m = Matrix()
print m.identity()
print Matrix.identity(m)
If you're not familiar with lambda, it creates an anonymous function. It's rarely necessary, but it can make your code more concise. The lambda line above could be rewritten:
def identity(self):
    return self.__class__.identity(self)
self.identity = identity
|
A class method which behaves differently when called as an instance method?
|
I'm wondering if it's possible to make a method which behaves differently when called as a class method than when called as an instance method.
For example, as a skills-improvement project, I'm writing a Matrix class (yes, I know there are perfectly good matrix classes already out there). I've created a class method for it called identity which returns an identity matrix of a specified size.
Now, when called on an instance of Matrix, it seems logical that the size shouldn't need to be specified; it should return an identity matrix of the same size as the Matrix it's called on.
In other words, I'd like to define a method which can determine whether it was called via an instance and, if so, access that instance's attributes. Unfortunately, even after digging through the documentation and a few Google searches, I haven't found anything which suggests this is possible. Does anyone know differently?
Edit:
Wow! Clearly, I'm still not quite used to first-class functions. Here's what I ended up with — thanks to Unknown for providing the key!
class Foo(object):
def __init__(self, bar):
self.baz = bar
self.bar = MethodType(lambda self: self.__class__.bar(self.baz), self, self.__class__)
@classmethod
def bar(cls, baz):
return 5 * baz
Foo.bar(3) # returns 15
foo = Foo(7)
foo.bar() # returns 35
Edit 2:
Just a quick note — this technique (and most of those presented below) won't work on classes which define __slots__, as you cannot reassign the method.
|
[
"Questionably useful Python hacks are my forte.\nfrom types import *\n\nclass Foo(object):\n def __init__(self):\n self.bar = methodize(bar, self)\n self.baz = 999\n\n @classmethod\n def bar(cls, baz):\n return 2 * baz\n\n\ndef methodize(func, instance):\n return MethodType(func, instance, instance.__class__)\n\ndef bar(self):\n return 4*self.baz\n\n\n>>> Foo.bar(5)\n10\n>>> a=Foo()\n>>> a.bar()\n3996\n\n",
"[edited: use attribute to be a more direct answer; see the helpful comment by John Fouhy]\nYou can use a descriptor to do what you want:\nclass cls_or_inst_method(object):\n def __init__(self, class_method, instance_method):\n self.class_method = class_method\n self.instance_method = instance_method\n\n def __get__(self, obj, objtype):\n if obj is None:\n return self.class_method\n else:\n return lambda: self.instance_method(obj)\n\ndef my_class_method(baz):\n return baz + 1\n\ndef my_instance_method(self):\n return self.baz * 2\n\nclass Foo(object):\n baz = 10\n bar = cls_or_inst_method(my_class_method, my_instance_method)\n\nUsing the above:\n>>> print Foo.bar(5)\n6\n>>> my_foo = Foo()\n>>> print my_foo.bar()\n20\n\n",
"@Unknown What's the difference between your's and this:\nclass Foo(object):\n\n def _bar(self, baz):\n print \"_bar, baz:\", baz\n\n def __init__(self, bar):\n self.bar = self._bar\n self.baz = bar\n\n @classmethod\n def bar(cls, baz):\n print \"bar, baz:\", baz\n\nIn [1]: import foo\n\nIn [2]: f = foo.Foo(42)\n\nIn [3]: f.bar(1)\n_bar, baz: 1\n\nIn [4]: foo.Foo.bar(1)\nbar, baz: 1\n\nIn [5]: f.__class__.bar(1)\nbar, baz: 1\n\n",
"I think the larger problem is that you are overloading the name 'bar' on class 'Foo', something python doesn't allow. The second definition of 'bar' clobbers the first definition of 'bar'. \nTry to think of unique names for your classmethod and instance method. i.e.\n@classmethod\ndef create(cls, baz):\n ...\n\ndef rubber_stamp(self):\n ...\n\n",
"You can reassign your identity method in init with short lambda function:\nclass Matrix(object):\n def __init__(self):\n self.identity = lambda s=self:s.__class__.identity(s)\n\n #...whatever initialization code you have...\n self.size = 10 \n\n @classmethod\n def identity(self, other):\n #...always do you matrix calculations on 'other', not 'self'...\n return other.size\n\n\nm = Matrix()\nprint m.identity()\nprint Matrix.identity(m)\n\nIf you're not familiar with lambda, it creates an anonymous function. It's rarely necessary, but it can make your code more concise. The lambda line above could be rewritten:\n def identity(self):\n self.__class__.indentity(self)\n self.identity = identity\n\n"
] |
[
39,
7,
7,
3,
2
] |
[] |
[] |
[
"class",
"methods",
"python"
] |
stackoverflow_0000861055_class_methods_python.txt
|
Q:
Is python automagically parallelizing IO- and CPU- or memory-bound sections?
This is a follow-up questions on a previous one.
Consider this code, which is less toyish than the one in the previous question (but still much simpler than my real one)
import sys
data=[]
for line in open(sys.argv[1]):
data.append(line[-1])
print data[-1]
Now, I was expecting a longer run time (my benchmark file is 65150224 lines long), possibly much longer. This was not the case, it runs in ~ 2 minutes on the same hw as before!
Is data.append() very lightweight? I don't believe so, thus I wrote this fake code to test it:
data=[]
counter=0
string="a\n"
for counter in xrange(65150224):
data.append(string[-1])
print data[-1]
This runs in 1.5 to 3 minutes (there is strong variability among runs)
Why don't I get 3.5 to 5 minutes in the former program? Obviously data.append() is happening in parallel with the IO.
This is good news!
But how does it work? Is it a documented feature? Is there any requirement on my code that I should follow to make it work as well as possible (besides load-balancing IO and memory/CPU activities)? Or is it just plain buffering/caching in action?
Again, I tagged this question "linux" because I'm interested only in linux-specific answers. Feel free to give OS-agnostic, or even other-OS answers, if you think it's worth doing.
A:
Obviously data.append() is happening in parallel with the IO.
I'm afraid not. It is possible to parallelize IO and computation in Python, but it doesn't happen magically.
One thing you could do is use posix_fadvise(2) to give the OS a hint that you plan to read the file sequentially (POSIX_FADV_SEQUENTIAL).
In some rough tests doing "wc -l" on a 600 meg file (an ISO) the performance increased by about 20%. Each test was done immediately after clearing the disk cache.
For a Python interface to fadvise see python-fadvise.
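On newer Pythons (3.3 and later) the call is in the standard library; a rough sketch, assuming Linux:
import os

f = open('bigfile.txt', 'rb')
# hint that we'll read the file front to back, so the kernel can read ahead
os.posix_fadvise(f.fileno(), 0, 0, os.POSIX_FADV_SEQUENTIAL)
for line in f:
    pass  # process each line here
f.close()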
A:
How big are the lines in your file? If they're not very long (anything under about 1K probably qualifies) then you're likely seeing performance gains because of input buffering.
A:
Why do you think list.append() would be a slower operation? It is extremely fast, considering the internal pointer arrays used by lists to hold references to the objects in them are allocated in increasingly large blocks, so that every append does not actually re-allocate the array, and most can simply increment the length counter and set a pointer and incref.
A:
I don't see any evidence that "data.append() is happening in parallel with the IO." Like Benji, I don't think this is automatic in the way you think. You showed that doing data.append(line[-1]) takes about the same amount of time as lc = lc + 1 (essentially no time at all, compared to the IO and line splitting). It's not really surprising that data.append(line[-1]) is very fast. One would expect the whole line to be in a fast cache, and as noted append prepares buffers ahead of time and only rarely has to reallocate. Moreover, line[-1] will always be '\n', except possibly for the last line of the file (no idea if Python optimizes for this).
The only part I'm a little surprised about is that the xrange is so variable. I would expect it to always be faster, since there's no IO, and you're not actually using the counter.
A:
If your run times are varying by that amount for the second example, I'd suspect your method of timing or outside influences (other processes / system load) to be skewing the times to the point where they don't give any reliable information.
|
Is python automagically parallelizing IO- and CPU- or memory-bound sections?
|
This is a follow-up questions on a previous one.
Consider this code, which is less toyish than the one in the previous question (but still much simpler than my real one)
import sys
data=[]
for line in open(sys.argv[1]):
data.append(line[-1])
print data[-1]
Now, I was expecting a longer run time (my benchmark file is 65150224 lines long), possibly much longer. This was not the case, it runs in ~ 2 minutes on the same hw as before!
Is data.append() very lightweight? I don't believe so, thus I wrote this fake code to test it:
data=[]
counter=0
string="a\n"
for counter in xrange(65150224):
data.append(string[-1])
print data[-1]
This runs in 1.5 to 3 minutes (there is strong variability among runs)
Why don't I get 3.5 to 5 minutes in the former program? Obviously data.append() is happening in parallel with the IO.
This is good news!
But how does it work? Is it a documented feature? Is there any requirement on my code that I should follow to make it work as well as possible (besides load-balancing IO and memory/CPU activities)? Or is it just plain buffering/caching in action?
Again, I tagged this question "linux" because I'm interested only in linux-specific answers. Feel free to give OS-agnostic, or even other-OS answers, if you think it's worth doing.
|
[
"\nObviously data.append() is happening in parallel with the IO.\n\nI'm afraid not. It is possible to parallelize IO and computation in Python, but it doesn't happen magically.\nOne thing you could do is use posix_fadvise(2) to give the OS a hint that you plan to read the file sequentially (POSIX_FADV_SEQUENTIAL).\nIn some rough tests doing \"wc -l\" on a 600 meg file (an ISO) the performance increased by about 20%. Each test was done immediately after clearing the disk cache.\nFor a Python interface to fadvise see python-fadvise.\n",
"How big are the lines in your file? If they're not very long (anything under about 1K probably qualifies) then you're likely seeing performance gains because of input buffering.\n",
"Why do you think list.append() would be a slower operation? It is extremely fast, considering the internal pointer arrays used by lists to hold references to the objects in them are allocated in increasingly large blocks, so that every append does not actually re-allocate the array, and most can simply increment the length counter and set a pointer and incref.\n",
"I don't see any evidence that \"data.append() is happening in parallel with the IO.\" Like Benji, I don't think this is automatic in the way you think. You showed that doing data.append(line[-1]) takes about the same amount of time as lc = lc + 1 (essentially no time at all, compared to the IO and line splitting). It's not really surprising that data.append(line[-1]) is very fast. One would expect the whole line to be in a fast cache, and as noted append prepares buffers ahead of time and only rarely has to reallocate. Moreover, line[-1] will always be '\\n', except possibly for the last line of the file (no idea if Python optimizes for this).\nThe only part I'm a little surprised about is that the xrange is so variable. I would expect it to always be faster, since there's no IO, and you're not actually using the counter.\n",
"If your run times are varying by that amount for the second example, I'd suspect your method of timing or outside influences (other processes / system load) to be skewing the times to the point where they don't give any reliable information.\n"
] |
[
8,
1,
1,
1,
1
] |
[] |
[] |
[
"linux",
"performance",
"python",
"text_files"
] |
stackoverflow_0000860893_linux_performance_python_text_files.txt
|
Q:
django and mod_wsgi having database connection issues
I've noticed that whenever I enable the database settings on my django project (starting to notice a trend in my questions?) it gives me an internal server error. Setting the database settings to be blank makes the error go away. Here are the apache error logs that it outputs.
mod_wsgi (pid=770): Exception occurred processing WSGI script '/Users/teifionjordan/rob2/apache/django.wsgi'.
Traceback (most recent call last):
File "/Library/Python/2.5/site-packages/django/core/handlers/wsgi.py", line 239, in __call__
response = self.get_response(request)
File "/Library/Python/2.5/site-packages/django/core/handlers/base.py", line 67, in get_response
response = middleware_method(request)
File "/Library/Python/2.5/site-packages/django/contrib/sessions/middleware.py", line 9, in process_request
engine = __import__(settings.SESSION_ENGINE, {}, {}, [''])
File "/Library/Python/2.5/site-packages/django/contrib/sessions/backends/db.py", line 2, in <module>
from django.contrib.sessions.models import Session
File "/Library/Python/2.5/site-packages/django/contrib/sessions/models.py", line 4, in <module>
from django.db import models
File "/Library/Python/2.5/site-packages/django/db/__init__.py", line 16, in <module>
backend = __import__('%s%s.base' % (_import_path, settings.DATABASE_ENGINE), {}, {}, [''])
File "/Library/Python/2.5/site-packages/django/db/backends/mysql/base.py", line 10, in <module>
import MySQLdb as Database
File "build/bdist.macosx-10.5-i386/egg/MySQLdb/__init__.py", line 19, in <module>
File "build/bdist.macosx-10.5-i386/egg/_mysql.py", line 7, in <module>
File "build/bdist.macosx-10.5-i386/egg/_mysql.py", line 4, in __bootstrap__
File "/Library/Python/2.5/site-packages/setuptools-0.6c9-py2.5.egg/pkg_resources.py", line 841, in resource_filename
self, resource_name
File "/Library/Python/2.5/site-packages/setuptools-0.6c9-py2.5.egg/pkg_resources.py", line 1310, in get_resource_filename
self._extract_resource(manager, self._eager_to_zip(name))
File "/Library/Python/2.5/site-packages/setuptools-0.6c9-py2.5.egg/pkg_resources.py", line 1332, in _extract_resource
self.egg_name, self._parts(zip_path)
File "/Library/Python/2.5/site-packages/setuptools-0.6c9-py2.5.egg/pkg_resources.py", line 921, in get_cache_path
self.extraction_error()
File "/Library/Python/2.5/site-packages/setuptools-0.6c9-py2.5.egg/pkg_resources.py", line 887, in extraction_error
raise err
ExtractionError: Can't extract file(s) to egg cache
The following error occurred while trying to extract file(s) to the Python egg
cache:
[Errno 20] Not a directory: '/Library/WebServer/.python-eggs/MySQL_python-1.2.2-py2.5-macosx-10.5-i386.egg-tmp'
The Python egg cache directory is currently set to:
/Library/WebServer/.python-eggs
Perhaps your account does not have write access to this directory? You can
change the cache directory by setting the PYTHON_EGG_CACHE environment
variable to point to an accessible directory.
And here is the django.wsgi file
import os
import sys
os.environ['DJANGO_SETTINGS_MODULE'] = 'rob2.settings'
sys.path.append('/Users/teifionjordan')
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
I have several other scripts that all connect to a mysql database just fine, if I run the tutorial server then it displays the admin panel correctly. I have tried changing the environ variables for eggs but still nothing changes.
A:
You need to set the PYTHON_EGG_CACHE environment variable. Apache/mod_wsgi is trying to extract the egg into a directory that Apache doesn't have write access to....or that doesn't exist.
It's explained in the Django docs here.
Does /Library/WebServer/.python-eggs exist? What does your Apache config file look like?
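One common fix is to point the cache at a directory the Apache user can write to, before anything imports MySQLdb - e.g. at the top of django.wsgi (the /tmp path here is only an example):
import os
os.environ['PYTHON_EGG_CACHE'] = '/tmp/python-eggs'  # must be writable by the Apache user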
|
django and mod_wsgi having database connection issues
|
I've noticed that whenever I enable the database settings on my django project (starting to notice a trend in my questions?) it gives me an internal server error. Setting the database settings to be blank makes the error go away. Here are the apache error logs that it outputs.
mod_wsgi (pid=770): Exception occurred processing WSGI script '/Users/teifionjordan/rob2/apache/django.wsgi'.
Traceback (most recent call last):
File "/Library/Python/2.5/site-packages/django/core/handlers/wsgi.py", line 239, in __call__
response = self.get_response(request)
File "/Library/Python/2.5/site-packages/django/core/handlers/base.py", line 67, in get_response
response = middleware_method(request)
File "/Library/Python/2.5/site-packages/django/contrib/sessions/middleware.py", line 9, in process_request
engine = __import__(settings.SESSION_ENGINE, {}, {}, [''])
File "/Library/Python/2.5/site-packages/django/contrib/sessions/backends/db.py", line 2, in <module>
from django.contrib.sessions.models import Session
File "/Library/Python/2.5/site-packages/django/contrib/sessions/models.py", line 4, in <module>
from django.db import models
File "/Library/Python/2.5/site-packages/django/db/__init__.py", line 16, in <module>
backend = __import__('%s%s.base' % (_import_path, settings.DATABASE_ENGINE), {}, {}, [''])
File "/Library/Python/2.5/site-packages/django/db/backends/mysql/base.py", line 10, in <module>
import MySQLdb as Database
File "build/bdist.macosx-10.5-i386/egg/MySQLdb/__init__.py", line 19, in <module>
File "build/bdist.macosx-10.5-i386/egg/_mysql.py", line 7, in <module>
File "build/bdist.macosx-10.5-i386/egg/_mysql.py", line 4, in __bootstrap__
File "/Library/Python/2.5/site-packages/setuptools-0.6c9-py2.5.egg/pkg_resources.py", line 841, in resource_filename
self, resource_name
File "/Library/Python/2.5/site-packages/setuptools-0.6c9-py2.5.egg/pkg_resources.py", line 1310, in get_resource_filename
self._extract_resource(manager, self._eager_to_zip(name))
File "/Library/Python/2.5/site-packages/setuptools-0.6c9-py2.5.egg/pkg_resources.py", line 1332, in _extract_resource
self.egg_name, self._parts(zip_path)
File "/Library/Python/2.5/site-packages/setuptools-0.6c9-py2.5.egg/pkg_resources.py", line 921, in get_cache_path
self.extraction_error()
File "/Library/Python/2.5/site-packages/setuptools-0.6c9-py2.5.egg/pkg_resources.py", line 887, in extraction_error
raise err
ExtractionError: Can't extract file(s) to egg cache
The following error occurred while trying to extract file(s) to the Python egg
cache:
[Errno 20] Not a directory: '/Library/WebServer/.python-eggs/MySQL_python-1.2.2-py2.5-macosx-10.5-i386.egg-tmp'
The Python egg cache directory is currently set to:
/Library/WebServer/.python-eggs
Perhaps your account does not have write access to this directory? You can
change the cache directory by setting the PYTHON_EGG_CACHE environment
variable to point to an accessible directory.
And here is the django.wsgi file
import os
import sys
os.environ['DJANGO_SETTINGS_MODULE'] = 'rob2.settings'
sys.path.append('/Users/teifionjordan')
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
I have several other scripts that all connect to a mysql database just fine, if I run the tutorial server then it displays the admin panel correctly. I have tried changing the environ variables for eggs but still nothing changes.
|
[
"You need to set the PYTHON_EGG_CACHE environment variable. Apache/mod_wsgi is trying to extract the egg into a directory that Apache doesn't have write access to....or that doesn't exist.\nIt's explained in the Django docs here.\nDoes /Library/WebServer/.python-eggs exist? What does your Apache config file look like?\n"
] |
[
8
] |
[] |
[] |
[
"django",
"mod_wsgi",
"python"
] |
stackoverflow_0000860169_django_mod_wsgi_python.txt
|
Q:
Decoding problems in Django and lxml
I have a strange problem with lxml when using the deployed version of my Django application. I use lxml to parse another HTML page which I fetch from my server. This works perfectly well on my development server on my own computer, but for some reason it gives me UnicodeDecodeError on the server.
('utf8', "\x85why hello there!", 0, 1, 'unexpected code byte')
I have made sure that Apache (with mod_python) runs with LANG='en_US.UTF-8'.
I've tried googling for this problem and tried different approaches to decoding the string correctly, but I can't figure it out.
In your answer, you may assume that my string is called hello or something.
A:
"\x85why hello there!" is not a utf-8 encoded string. You should try decoding the webpage before passing it to lxml. Check what encoding it uses by looking at the http headers when you fetch the page maybe you find the problem there.
A:
Doesn't syntax such as u"\x85why hello there!" help?
You may find the following resources from the official Python documentation helpful:
Python introduction, Unicode Strings
Sequence Types — str, unicode, list, tuple, buffer, xrange
|
Decoding problems in Django and lxml
|
I have a strange problem with lxml when using the deployed version of my Django application. I use lxml to parse another HTML page which I fetch from my server. This works perfectly well on my development server on my own computer, but for some reason it gives me UnicodeDecodeError on the server.
('utf8', "\x85why hello there!", 0, 1, 'unexpected code byte')
I have made sure that Apache (with mod_python) runs with LANG='en_US.UTF-8'.
I've tried googling for this problem and tried different approaches to decoding the string correctly, but I can't figure it out.
In your answer, you may assume that my string is called hello or something.
|
[
"\"\\x85why hello there!\" is not a utf-8 encoded string. You should try decoding the webpage before passing it to lxml. Check what encoding it uses by looking at the http headers when you fetch the page maybe you find the problem there.\n",
"Doesn't syntax such as u\"\\x85why hello there!\" help? \nYou may find the following resources from the official Python documentation helpful:\n\nPython introduction, Unicode Strings\nSequence Types — str, unicode, list, tuple, buffer, xrange\n\n"
] |
[
3,
0
] |
[
"Since modifying site.py is not an ideal solution try this at the start of your program:\nimport sys\nreload(sys)\nsys.setdefaultencoding(\"utf-8\")\n\n"
] |
[
-2
] |
[
"decoding",
"django",
"lxml",
"python",
"utf_8"
] |
stackoverflow_0000808275_decoding_django_lxml_python_utf_8.txt
|
Q:
tokenize module
Please help
There are many tokens in the tokenize module, like STRING, BACKQUOTE, AMPEREQUAL, etc.
>>> import cStringIO
>>> import tokenize
>>> source = "{'test':'123','hehe':['hooray',0x10]}"
>>> src = cStringIO.StringIO(source).readline
>>> src = tokenize.generate_tokens(src)
>>> src
<generator object at 0x00BFBEE0>
>>> src.next()
(51, '{', (1, 0), (1, 1), "{'test':'123','hehe':['hooray',0x10]}")
>>> token = src.next()
>>> token
(3, "'test'", (1, 1), (1, 7), "{'test':'123','hehe':['hooray',0x10]}")
>>> token[0]
3
>>> tokenize.STRING
3
>>> tokenize.AMPER
19
>>> tokenize.AMPEREQUAL
42
>>> tokenize.AT
50
>>> tokenize.BACKQUOTE
25
This is what I experimented with, but I was not able to find out what they mean.
Where can I learn about this? I need an immediate solution.
A:
The various AMPER, BACKQUOTE etc. values correspond to the token number of the appropriate symbol for Python tokens/operators, i.e. AMPER = & (ampersand), AMPEREQUAL = "&=".
However, you don't actually have to care about these. They're used by the internal C tokeniser, but the python wrapper simplifies the output, translating all operator symbols to the OP token. You can translate the symbolic token ids (the first value in each token tuple) to the symbolic name using the token module's tok_name dictionary. For example:
>>> import tokenize, token
>>> s = "{'test':'123','hehe':['hooray',0x10]}"
>>> for t in tokenize.generate_tokens(iter([s]).next):
print token.tok_name[t[0]],
OP STRING OP STRING OP STRING OP OP STRING OP NUMBER OP OP ENDMARKER
As a quick debug statement to describe the tokens a bit better, you could also use tokenize.printtoken. This is undocumented, and looks like it isn't present in python3, so don't rely on it for production code, but as a quick peek at what the tokens mean, you may find it useful:
>>> for t in tokenize.generate_tokens(iter([s]).next):
tokenize.printtoken(*t)
1,0-1,1: OP '{'
1,1-1,7: STRING "'test'"
1,7-1,8: OP ':'
1,8-1,13: STRING "'123'"
1,13-1,14: OP ','
1,14-1,20: STRING "'hehe'"
1,20-1,21: OP ':'
1,21-1,22: OP '['
1,22-1,30: STRING "'hooray'"
1,30-1,31: OP ','
1,31-1,35: NUMBER '0x10'
1,35-1,36: OP ']'
1,36-1,37: OP '}'
2,0-2,0: ENDMARKER ''
The various values in the tuple you get back for each token are, in order:
token Id (corresponds to the type, eg STRING, OP, NAME etc)
The string - the actual token text for this token, eg "&" or "'a string'"
The start (line, column) in your input
The end (line, column) in your input
The full text of the line the token is on.
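A small sketch that unpacks those five fields by name:
import tokenize, token

src = "{'test':'123'}"
for tok_id, text, start, end, line in tokenize.generate_tokens(iter([src]).next):
    print token.tok_name[tok_id], repr(text), start, end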
A:
You will need to read Python's tokenizer.c source to understand the details.
Just search for the keyword you want to know about; it should not be hard.
A:
Python's lexical analysis (including tokens) is documented at http://docs.python.org/reference/lexical_analysis.html . As http://docs.python.org/library/token.html#module-token says, "Refer to the file Grammar/Grammar in the Python distribution for the definitions of the names in the context of the language grammar.".
|
tokenize module
|
Please help
There are many tokens in the tokenize module, like STRING, BACKQUOTE, AMPEREQUAL, etc.
>>> import cStringIO
>>> import tokenize
>>> source = "{'test':'123','hehe':['hooray',0x10]}"
>>> src = cStringIO.StringIO(source).readline
>>> src = tokenize.generate_tokens(src)
>>> src
<generator object at 0x00BFBEE0>
>>> src.next()
(51, '{', (1, 0), (1, 1), "{'test':'123','hehe':['hooray',0x10]}")
>>> token = src.next()
>>> token
(3, "'test'", (1, 1), (1, 7), "{'test':'123','hehe':['hooray',0x10]}")
>>> token[0]
3
>>> tokenize.STRING
3
>>> tokenize.AMPER
19
>>> tokenize.AMPEREQUAL
42
>>> tokenize.AT
50
>>> tokenize.BACKQUOTE
25
This is what I experimented with, but I was not able to find out what they mean.
Where can I learn about this? I need an immediate solution.
|
[
"The various AMPER, BACKQUOTE etc values correspond to the token number of the appropriate symbol for python tokens / operators. ie AMPER = & (ampersand), AMPEREQUAL = \"&=\".\nHowever, you don't actually have to care about these. They're used by the internal C tokeniser, but the python wrapper simplifies the output, translating all operator symbols to the OP token. You can translate the symbolic token ids (the first value in each token tuple) to the symbolic name using the token module's tok_name dictionary. For example:\n>>> import tokenize, token\n>>> s = \"{'test':'123','hehe':['hooray',0x10]}\"\n>>> for t in tokenize.generate_tokens(iter([s]).next):\n print token.tok_name[t[0]],\n\nOP STRING OP STRING OP STRING OP OP STRING OP NUMBER OP OP ENDMARKER\n\nAs a quick debug statement to describe the tokens a bit better, you could also use tokenize.printtoken. This is undocumented, and looks like it isn't present in python3, so don't rely on it for production code, but as a quick peek at what the tokens mean, you may find it useful:\n>>> for t in tokenize.generate_tokens(iter([s]).next):\n tokenize.printtoken(*t)\n\n1,0-1,1: OP '{'\n1,1-1,7: STRING \"'test'\"\n1,7-1,8: OP ':'\n1,8-1,13: STRING \"'123'\"\n1,13-1,14: OP ','\n1,14-1,20: STRING \"'hehe'\"\n1,20-1,21: OP ':'\n1,21-1,22: OP '['\n1,22-1,30: STRING \"'hooray'\"\n1,30-1,31: OP ','\n1,31-1,35: NUMBER '0x10'\n1,35-1,36: OP ']'\n1,36-1,37: OP '}'\n2,0-2,0: ENDMARKER ''\n\nThe various values in the tuple you get back for each token are, in order:\n\ntoken Id (corresponds to the type, eg STRING, OP, NAME etc)\nThe string - the actual token text for this token, eg \"&\" or \"'a string'\"\nThe start (line, column) in your input\nThe end (line, column) in your input\nThe full text of the line the token is on.\n\n",
"You will need to read python's code tokenizer.c to understand the detail.\nJust search the keyword you want to know. Should be not hard.\n",
"Python's lexical analysis (including tokens) is documented at http://docs.python.org/reference/lexical_analysis.html . As http://docs.python.org/library/token.html#module-token says, \"Refer to the file Grammar/Grammar in the Python distribution for the definitions of the names in the context of the language grammar.\".\n"
] |
[
4,
3,
2
] |
[] |
[] |
[
"python",
"tokenize"
] |
stackoverflow_0000856769_python_tokenize.txt
|
Q:
Extracting bitmap from a file
given a somewhat complex file of unknown specification that among other things contains an uncompressed bitmap file (.BMP), how would you extract it in Python?
Scan for the "BM" tag and see if the following bytes "resemble" a BMP header?
A:
I'd use the Python Imaging Library (PIL) and let it have a go at the data. If it can parse it, then it's a valid image. When it throws an exception, then it isn't.
You need to search for the beginning of the image; if you're lucky, the image reader will ignore garbage after the image data. When it doesn't, use a binary search to locate the end of the image.
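A rough sketch of that scan-and-validate loop (PIL assumed; treat it as a starting point, not a robust parser):
import StringIO
from PIL import Image

def find_bmp(data):
    pos = data.find('BM')
    while pos != -1:
        try:
            Image.open(StringIO.StringIO(data[pos:])).verify()  # cheap header sanity check
            return Image.open(StringIO.StringIO(data[pos:]))    # reopen: verify() consumes the image
        except Exception:
            pass  # not a real bitmap here, keep scanning
        pos = data.find('BM', pos + 1)
    return None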
A:
Yes, about the only thing you can do is search through the file for the 'BM' marker, pull out the following data into a BITMAPFILEHEADER and corresponding BITMAPINFO, and see if the values in it look valid (i.e. that the dimensions are sensible, colour depth is reasonable, etc).
Once you have found something that looks reasonable, pull that data out and pass it to the library mentioned in another answer.
|
Extracting bitmap from a file
|
Given a somewhat complex file of unknown specification that among other things contains an uncompressed bitmap file (.BMP), how would you extract it in Python?
Scan for the "BM" tag and see if the following bytes "resemble" a BMP header?
|
[
"I'd use the Python Imaging Library PIL and have it a go at the data. If it can parse it, then it's a valid image. When it throws an exception, then it isn't.\nYou need to search for the begining of the image; if you're lucky, the image reader will ignore garbage after the image data. When it doesn't, use a binary search to locate the end of the image.\n",
"Yes, about the only thing you can do is search through the file for the 'BM' marker, pull out the following data into a BITMAPFILEHEADER and corresponding BITMAPINFO, and see if the values in it look valid (i.e. that the dimensions are sensible, colour depth is reasonable, etc).\nOnce you have found something that looks reasonable, pull that data out and pass it to the library mentioned in another answer.\n"
] |
[
4,
4
] |
[] |
[] |
[
"heuristics",
"python"
] |
stackoverflow_0000862487_heuristics_python.txt
|
Q:
Pointers in Python on variables with None value
I have a method that creates a new node in a tree - either left or right. If the value is lower than my current value it is inserted on the left, otherwise on the right side.
I want to refactor this code, so that I first see on which side I have to insert my element, and then insert it. Before I implemented this twice: Once for the left side and once for the right side.
It currently looks like this:
def neu(self, sortByValue, secondValue):
child = self.left if(sortByValue.lower() < self.value[0].lower()) else self.right
if(child == None):
child = MyTree(sortByValue,secondValue)
else: child.neu(sortByValue,secondValue)
My problem is, though, that self.left is None, and self.right is None. So when I create child as a variable and set it to MyTree(...), self.left and self.right are not receiving the value.
Is there anything I can do to improve this? Thanks!
A:
In Python variables are names not locations. For example:
>>> a = 1
>>> b = a
>>> a = 2
>>> print b
1
In your code you're simply rebinding the name child to a different value (your new node) and that has no effect on the previously bound value (None).
Here's a reworking of your code that should do what you want (untested):
def neu(self, sortByValue, secondValue):
    def child(node):
        if(node is None):
            return MyTree(sortByValue, secondValue)
        else:
            node.neu(sortByValue, secondValue)
            return node

    if(sortByValue.lower() < self.value[0].lower()):
        self.left = child(self.left)
    else:
        self.right = child(self.right)
A:
Hello ;-)
self.left and self.right don't receive the value because you assign to child, which just holds a copy of the value and not a reference to the attribute itself.
You want a pointer, which doesn't exist directly in Python.
You could express this using a class wrapper, but I think it's more comprehensible when you just write out both possibilities in the if clause.
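For completeness, a minimal sketch of such a wrapper (Ref is a made-up class):
class Ref(object):
    """A one-slot mutable box that can stand in for a pointer."""
    def __init__(self, value=None):
        self.value = value

left = Ref()
child = left          # both names now refer to the same box
child.value = 'node'  # mutating the box is visible through either name
print left.value      # prints 'node'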
A:
Why are you using a tree?
I would use a dictionary:
Initialization:
tree = {}
Adding new node:
tree[sortByValue] = secondValue
Extracting things
print tree[sortByValue]
Not the straight answer but a more pythonic way of doing it
|
Pointers in Python on variables with None value
|
I have a method that creates a new node in a tree - either left or right. If the value is lower than my current value it is inserted on the left, otherwise on the right side.
I want to refactor this code, so that I first see on which side I have to insert my element, and then insert it. Before I implemented this twice: Once for the left side and once for the right side.
It currently looks like this:
def neu(self, sortByValue, secondValue):
child = self.left if(sortByValue.lower() < self.value[0].lower()) else self.right
if(child == None):
child = MyTree(sortByValue,secondValue)
else: child.neu(sortByValue,secondValue)
My problem is, though, that self.left is None, and self.right is None. So when I create child as a variable and set it to MyTree(...), self.left and self.right are not receiving the value.
Is there anything I can do to improve this? Thanks!
|
[
"In Python variables are names not locations. For example:\n>>> a = 1\n>>> b = a\n>>> a = 2\n>>> print b\n1\n\nIn your code you're simply rebinding the name child to a different value (your new node) and that has no affect on the previously bound value (None).\nHere's a reworking of your code that should do what you want (untested):\ndef neu(self, sortByValue, secondValue):\n def child(node):\n if(node is None):\n return MyTree(sortByValue, secondValue)\n else:\n child.neu(sortByValue, secondValue)\n return node\n\n if(sortByValue.lower() < self.value[0].lower()):\n self.left = child(self.left)\n else:\n self.right = child(self.right)\n\n",
"Hallo ;-)\nself.left or self.right don't receive the value because you assign to child which just holds a copy of the destination value and no reference to it.\nYou want to have a pointer- This doesn't exist directly in Python.\nYou could express this using a class wrapper, but I think it's more comprehensible when you just write both possibilities in the if clause.\n",
"Why are you using a tree?\nI would use a dictionary:\nInitialization:\n\ntree = {}\n\nAdding new node:\n\ntree[sortByValue] = secondValue\n\nExtracting things\n\nprint tree[sortByValue]\n\nNot the straight answer but a more pythonic way of doing it\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"python",
"reference"
] |
stackoverflow_0000862652_python_reference.txt
|
Q:
Txt file parse to get a list of .o file names
I have a txt file like :
test.txt
Symbols from __ctype_tab.o:
Name Value Class Type Size Line Section
__ctype |00000000| D | OBJECT|00000004| |.data
__ctype_tab |00000000| r | OBJECT|00000101| |.rodata
Symbols from _ashldi3.o:
Name Value Class Type Size Line Section
__ashldi3 |00000000| T | FUNC|00000050| |.text
Symbols from _ashrdi3.o:
Name Value Class Type Size Line Section
__ashrdi3 |00000000| T | FUNC|00000058| |.text
Symbols from _fixdfdi.o:
Name Value Class Type Size Line Section
__fixdfdi |00000000| T | FUNC|0000004c| |.text
__fixunsdfdi | | U | NOTYPE| | |*UND*
Symbols from _fixsfdi.o:
Name Value Class Type Size Line Section
__fixsfdi |00000000| T | FUNC|0000004c| |.text
__fixunssfdi | | U | NOTYPE| | |*UND*
Symbols from _fixunssfdi.o:
Name Value Class Type Size Line Section
__cmpdi2 | | U | NOTYPE| | |*UND*
__fixunssfdi |00000000| T | FUNC|00000228| |.text
__floatdidf | | U | NOTYPE| | |*UND*
What I want to do: I will be given a function whose type is NOTYPE. I need to search the txt file and find under which .o it is defined (i.e., with type FUNC). When I get that .o file, I may see other functions marked NOTYPE; then I have to search where those are defined, and so on. Finally I want to return a list of the names of all .o files that contained the functions.
My piece of code:
notypeDict , funcDict = {} , {}
notypeList , funcList = [] , []
currObj , prevObj = '' , ''
fp = open(r'C:\test.txt','r') # file path cms here
fileList = fp.readlines()
for line in fileList:
if '.o' in line: # line containg .o
currObj=line.split()[-1][0:-1]
if '|' not in line: # line containg |
pass
else: # other lines
dataList=[dataItem.strip() for dataItem in line.strip().split('|')] # a list of each word in line
name=dataList[0].strip() # name of the function
notypeDict[prevObj] = notypeList # notypeDict is a dictionary which contains .o as key and a list of NOTYPE function name
funcDict[prevObj] = funcList # funcDict is a dictionary which contains .o as key and a list of FUNC function names
if prevObj == currObj :
pass
if prevObj != currObj :
notypeList , funcList = [] , []
if dataList[3] == 'NOTYPE' :
notypeList.append(name)
if dataList[3] == 'FUNC' :
funcList.append(name)
prevObj = currObj
print 'notypeDict' , notypeDict
print '\n\nfuncDict' , funcDict
Here i will get two dictionaries, notypeDict and funcDict.
notypeDict has .o as key and a list of NOTYPE functions as value
funcDict has .o as key and a list of FUNC functions as value.
I have got this far.
But I am not getting ideas on how to proceed to achieve my target.
I think my question is clear.
Please help me out.
A:
I would use regular expressions with capture groups for the different kinds of interesting lines in your file; I'd go through the file line by line, and as I found an interesting line (i.e. matched the regex), I'd process the captured data from the regex appropriately.
After having built up dictionaries etc., answering questions based on the data is easy.
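A minimal sketch of that approach (the two regular expressions are assumptions based on the sample file, not a tested grammar):
import re

obj_re = re.compile(r'^Symbols from (\S+\.o):')
sym_re = re.compile(r'^(\S+)\s*\|[^|]*\|[^|]*\|\s*(\w+)\|')

current = None
for line in open(r'C:\test.txt'):
    m = obj_re.match(line)
    if m:
        current = m.group(1)
        continue
    m = sym_re.match(line.strip())
    if m and current:
        name, symtype = m.group(1), m.group(2)
        # record the (current, name, symtype) triple in your NOTYPE/FUNC dictionaries here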
A:
What do you think the following does?
if '.o' in line: # line containg .o
currObj=line.split()[-1][0:-1]
if '|' not in line: # line containg |
pass
else: # other lines
Does it find lines with '.o' or '|' or other?
No. Actually, it doesn't.
It finds lines that contain '.o'. And does something with them.
Then it checks that line again for '|' or "other". All of your '.o' lines are processed two times.
Once as a '.o', then again as a "not |".
You might mean elif instead of if.
This code
if prevObj == currObj :
pass
if prevObj != currObj :
notypeList , funcList = [] , []
Is rather more complex than it needs to be. Doesn't cause a problem, per se, but it is silly-looking.
This code
if dataList[3] == 'NOTYPE' :
notypeList.append(name)
if dataList[3] == 'FUNC' :
funcList.append(name)
is probably good. However, it looks bad because the conditions are exclusive and would look better as elif.
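Putting those points together, the loop body might look like this (a sketch of the if/elif structure only; the dictionary bookkeeping from the question still applies):
for line in fileList:
    if '.o' in line:
        currObj = line.split()[-1][0:-1]
    elif '|' in line:
        dataList = [dataItem.strip() for dataItem in line.strip().split('|')]
        name = dataList[0]
        if dataList[3] == 'NOTYPE':
            notypeList.append(name)
        elif dataList[3] == 'FUNC':
            funcList.append(name)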
A:
What about this code? It is based on your two dictionaries. Just call find_dep_for_func(notype_funcname).
def find_ofile(funcname):
"""This will find .o file for given function."""
for ofile, fns in funcDict.iteritems():
if funcname in fns:
return ofile
raise Exception("Cannot find function "+funcname)
def find_dependencies(ofile, deps = None):
"""This will find dependent .o files for given .o file."""
olist = deps if deps else set([])
for fn in notypeDict[ofile]:
ofile = find_ofile(fn)
if not ofile in olist:
olist.add(ofile)
olist = find_dependencies(ofile, olist)
return olist
def find_dep_for_func(notype_funcname):
    return find_dependencies(find_ofile(notype_funcname))
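Assuming the two dictionaries from the question are already populated, usage would look like:
print find_dep_for_func('__fixunssfdi')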
|
Txt file parse to get a list of .o file names
|
I have a txt file like :
test.txt
Symbols from __ctype_tab.o:
Name Value Class Type Size Line Section
__ctype |00000000| D | OBJECT|00000004| |.data
__ctype_tab |00000000| r | OBJECT|00000101| |.rodata
Symbols from _ashldi3.o:
Name Value Class Type Size Line Section
__ashldi3 |00000000| T | FUNC|00000050| |.text
Symbols from _ashrdi3.o:
Name Value Class Type Size Line Section
__ashrdi3 |00000000| T | FUNC|00000058| |.text
Symbols from _fixdfdi.o:
Name Value Class Type Size Line Section
__fixdfdi |00000000| T | FUNC|0000004c| |.text
__fixunsdfdi | | U | NOTYPE| | |*UND*
Symbols from _fixsfdi.o:
Name Value Class Type Size Line Section
__fixsfdi |00000000| T | FUNC|0000004c| |.text
__fixunssfdi | | U | NOTYPE| | |*UND*
Symbols from _fixunssfdi.o:
Name Value Class Type Size Line Section
__cmpdi2 | | U | NOTYPE| | |*UND*
__fixunssfdi |00000000| T | FUNC|00000228| |.text
__floatdidf | | U | NOTYPE| | |*UND*
What I want to do: I will be given a function whose type is NOTYPE. I need to search the txt file and find under which .o it is defined (i.e., with type FUNC). When I get that .o file, I may see other functions marked NOTYPE; then I have to search where those are defined, and so on. Finally I want to return a list of the names of all .o files that contained the functions.
My piece of code:
notypeDict , funcDict = {} , {}
notypeList , funcList = [] , []
currObj , prevObj = '' , ''
fp = open(r'C:\test.txt','r') # file path cms here
fileList = fp.readlines()
for line in fileList:
if '.o' in line: # line containg .o
currObj=line.split()[-1][0:-1]
if '|' not in line: # line containg |
pass
else: # other lines
dataList=[dataItem.strip() for dataItem in line.strip().split('|')] # a list of each word in line
name=dataList[0].strip() # name of the function
notypeDict[prevObj] = notypeList # notypeDict is a dictionary which contains .o as key and a list of NOTYPE function name
funcDict[prevObj] = funcList # funcDict is a dictionary which contains .o as key and a list of FUNC function names
if prevObj == currObj :
pass
if prevObj != currObj :
notypeList , funcList = [] , []
if dataList[3] == 'NOTYPE' :
notypeList.append(name)
if dataList[3] == 'FUNC' :
funcList.append(name)
prevObj = currObj
print 'notypeDict' , notypeDict
print '\n\nfuncDict' , funcDict
Here i will get two dictionaries, notypeDict and funcDict.
notypeDict has .o as key and a list of NOTYPE functions as value
funcDict has .o as key and a list of FUNC functions as value.
I have got this far.
But I am not getting ideas on how to proceed to achieve my target.
I think my question is clear.
Please help me out.
|
[
"I would use regular expressions with capture groups for the different kinds of interesting lines in your file; I'd go through the file line by line, and as I found an interesting line (i.e. matched the regex), I'd process the captured data from the regex appropriately.\nAfter having built up dictionaries etc., answering questions based on the data is easy.\n",
"What do you think the following does?\n if '.o' in line: # line containg .o\n currObj=line.split()[-1][0:-1] \n if '|' not in line: # line containg |\n pass\n else: # other lines\n\nDoes it find lines with '.o' or '|' or other?\nNo. Actually, it doesn't.\nIt finds lines that contain '.o'. And does something with them.\nThen it checks that line again for '|' or \"other\". All of your '.o' lines are processed two times.\nOnce as a '.o', then again as a \"not |\".\nYou might mean elif instead of if.\n\nThis code\n if prevObj == currObj :\n pass\n if prevObj != currObj : \n notypeList , funcList = [] , []\n\nIs rather more complex than it needs to be. Doesn't cause a problem, per se, but it is silly-looking.\n\nThis code \n if dataList[3] == 'NOTYPE' : \n notypeList.append(name)\n if dataList[3] == 'FUNC' :\n funcList.append(name)\n\nis probably good. However, it looks bad because the conditions are exclusive and would look better as elif. \n",
"What about this code? It is based on your two dictionaries. Just call find_dep_for_func(notype_funcname).\ndef find_ofile(funcname):\n \"\"\"This will find .o file for given function.\"\"\"\n for ofile, fns in funcDict.iteritems():\n if funcname in fns:\n return ofile \n raise Exception(\"Cannot find function \"+funcname)\n\ndef find_dependencies(ofile, deps = None):\n \"\"\"This will find dependent .o files for given .o file.\"\"\"\n olist = deps if deps else set([])\n for fn in notypeDict[ofile]:\n ofile = find_ofile(fn)\n if not ofile in olist:\n olist.add(ofile)\n olist = find_dependencies(ofile, olist)\n return olist\n\ndef find_dep_for_func(notype_funcname):\n return find_dependencies(find_ofile(funcname))\n\n"
] |
[
1,
1,
1
] |
[] |
[] |
[
"parsing",
"python"
] |
stackoverflow_0000862203_parsing_python.txt
|
Q:
Python Module by Path
I am writing a minimal replacement for mod_python's publisher.py
The basic premise is that it is loading modules based on a URL scheme:
/foo/bar/a/b/c/d
Whereby /foo/ might be a directory and 'bar' is a method ExposedBar in a publishable class in /foo/index.py. Likewise /foo might map to /foo.py and bar is a method in the exposed class. The semantics of this aren't really important. I have a line:
sys.path.insert(0, path_to_file) # /var/www/html/{bar|foo}
mod_obj = __import__(module_name)
mod_obj.__name__ = req.filename
Then the module is inspected for the appropriate class/functions/methods. When the process gets as far as it can the remaining URI data, /a/b/c is passed to that method or function.
This was working fine until I had /var/www/html/foo/index.py and /var/www/html/bar/index.py
When viewing in the browser, it is fairly random which 'index.py' gets selected, even though I set the first search path to '/var/www/html/foo' or '/var/www/html/bar' and then loaded __import__('index'). I have no idea why it is finding either by seemingly random choice. This is shown by:
__name__ is "/var/www/html/foo/index.py"
req.filename is "/var/www/html/foo/index.py"
__file__ is "/var/www/html/bar/index.py"
This question then is, why would the __import__ be randomly selecting either index. I would understand this if the path was '/var/www/html' but it isn't. Secondly:
Can I load a module by its absolute path into a module object, without modification of sys.path? I can't find any docs on __import__ or new.module() for this.
A:
Can I load a module by its
absolute path into a module object? Without
modification of sys.path. I can't find
any docs on __import__ or new.module()
for this.
import imp
import os
def module_from_path(path):
filename = os.path.basename(path)
modulename = os.path.splitext(filename)[0]
with open(path) as f:
return imp.load_module(modulename, f, path, ('py', 'U', imp.PY_SOURCE))
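Usage then looks something like this (the path is hypothetical):
mod = module_from_path('/var/www/html/foo/index.py')
print mod.__name__   # 'index'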
|
Python Module by Path
|
I am writing a minimal replacement for mod_python's publisher.py
The basic premise is that it is loading modules based on a URL scheme:
/foo/bar/a/b/c/d
Whereby /foo/ might be a directory and 'bar' is a method ExposedBar in a publishable class in /foo/index.py. Likewise /foo might map to /foo.py and bar is a method in the exposed class. The semantics of this aren't really important. I have a line:
sys.path.insert(0, path_to_file) # /var/www/html/{bar|foo}
mod_obj = __import__(module_name)
mod_obj.__name__ = req.filename
Then the module is inspected for the appropriate class/functions/methods. When the process gets as far as it can the remaining URI data, /a/b/c is passed to that method or function.
This was working fine until I had /var/www/html/foo/index.py and /var/www/html/bar/index.py
When viewing in the browser, it is fairly random which 'index.py' gets selected, even though I set the first search path to '/var/www/html/foo' or '/var/www/html/bar' and then loaded __import__('index'). I have no idea why it is finding either by seemingly random choice. This is shown by:
__name__ is "/var/www/html/foo/index.py"
req.filename is "/var/www/html/foo/index.py"
__file__ is "/var/www/html/bar/index.py"
This question then is, why would the __import__ be randomly selecting either index. I would understand this if the path was '/var/www/html' but it isn't. Secondly:
Can I load a module by its absolute path into a module object, without modification of sys.path? I can't find any docs on __import__ or new.module() for this.
|
[
"\nCan I load a module by it's absolute\n path into a module object? Without\n modification of sys.path. I can't find\n any docs on __import__ or new.module()\n for this.\n\nimport imp\nimport os\n\ndef module_from_path(path):\n filename = os.path.basename(path)\n modulename = os.path.splitext(filename)[0]\n\n with open(path) as f:\n return imp.load_module(modulename, f, path, ('py', 'U', imp.PY_SOURCE))\n\n"
] |
[
3
] |
[] |
[] |
[
"mod_python",
"python"
] |
stackoverflow_0000863234_mod_python_python.txt
|
Q:
Which PEP governs the ordering of dict.values()?
When you call dict.values() the order of the returned items is dependent on the hash values of the keys. This seems to be very consistent in all versions of CPython; however, the Python manual for dict simply states that the ordering is "arbitrary".
I remember reading somewhere that there is actually a PEP which specifically states the expected ordering of the items() and values() methods.
FYI, if this is indeed guaranteed behavior, I could greatly simplify and speed up a class I am working on. On the other hand, if this is merely an accidental and undocumented feature of CPython, then it's probably best not to trust it.
A:
From http://docs.python.org/library/stdtypes.html:
Keys and values are listed in an
arbitrary order which is non-random,
varies across Python implementations,
and depends on the dictionary’s
history of insertions and deletions.
A:
I suppose PEP-3106 is as close as it gets:
The specification implies that the order in which items are returned by
.keys(), .values() and .items() is the
same (just as it was in Python 2.x),
because the order is all derived from
the dict iterator (which is presumably
arbitrary but stable as long as a dict
isn't modified). This can be expressed
by the following invariant:
list(d.items()) == list(zip(d.keys(),
d.values()))
A:
"arbitrary" is not the same as "accidental".
But it is the same as "undocumented". Since the dictionary is based on hashes, you can't -- really -- guarantee ordering based on the hashing algorithm and collisions which occur.
To guarantee an order, you use the sorted function.
Or you can find a good ordered dictionary implementation that you want to use instead of a dict.
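For example, using sorted to walk a dict in guaranteed key order:
d = {'b': 2, 'a': 1, 'c': 3}
for key in sorted(d):
    print key, d[key]   # a 1, then b 2, then c 3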
|
Which PEP governs the ordering of dict.values()?
|
When you call dict.values() the order of the returned items is dependent on the hash values of the keys. This seems to be very consistent in all versions of CPython; however, the Python manual for dict simply states that the ordering is "arbitrary".
I remember reading somewhere that there is actually a PEP which specifically states the expected ordering of the items() and values() methods.
FYI, if this is indeed guaranteed behavior, I could greatly simplify and speed up a class I am working on. On the other hand, if this is merely an accidental and undocumented feature of CPython, then it's probably best not to trust it.
|
[
"From http://docs.python.org/library/stdtypes.html:\n\nKeys and values are listed in an\n arbitrary order which is non-random,\n varies across Python implementations,\n and depends on the dictionary’s\n history of insertions and deletions.\n\n",
"I suppose PEP-3106 is as close as it gets:\n\nThe specification implies that the order in which items are returned by\n .keys(), .values() and .items() is the\n same (just as it was in Python 2.x),\n because the order is all derived from\n the dict iterator (which is presumably\n arbitrary but stable as long as a dict\n isn't modified). This can be expressed\n by the following invariant:\nlist(d.items()) == list(zip(d.keys(),\n d.values()))\n\n",
"\"arbitrary\" is not the same as \"accidental\". \nBut it is the same as \"undocumented\". Since the dictionary is based on hashes, you can't -- really -- guarantee ordering based on the hashing algorithm and collisions which occur.\nTo guarantee an order, you use the sorted function. \nOr you can find a good ordered dictionary implementation that you want to use instead of a dict.\n"
] |
[
7,
6,
2
] |
[] |
[] |
[
"dictionary",
"python"
] |
stackoverflow_0000863446_dictionary_python.txt
|
Q:
Ordering a list of dictionaries in python
I've got a python list of dictionaries:
mylist = [
{'id':0, 'weight':10, 'factor':1, 'meta':'ABC'},
{'id':1, 'weight':5, 'factor':1, 'meta':'ABC'},
{'id':2, 'weight':5, 'factor':2, 'meta':'ABC'},
{'id':3, 'weight':1, 'factor':1, 'meta':'ABC'}
]
Whats the most efficient/cleanest way to order that list by weight then factor (numerically). The resulting list should look like:
mylist = [
{'id':3, 'weight':1, 'factor':1, 'meta':'ABC'},
{'id':1, 'weight':5, 'factor':1, 'meta':'ABC'},
{'id':2, 'weight':5, 'factor':2, 'meta':'ABC'},
{'id':0, 'weight':10, 'factor':1, 'meta':'ABC'},
]
A:
mylist.sort(key=lambda d: (d['weight'], d['factor']))
or
import operator
mylist.sort(key=operator.itemgetter('weight', 'factor'))
A:
Something along the lines of the following ought to work:
def cmp_dict(x, y):
    # return negative when x sorts before y, so ascending order needs x - y
    weight_diff = x['weight'] - y['weight']
    if weight_diff == 0:
        return x['factor'] - y['factor']
    else:
        return weight_diff

mylist.sort(cmp_dict)
A:
I accepted dF's answer for the inspiration, but here is what I ultimately settled on for my scenario:
@staticmethod
def ordered_list(mylist):
def sort_func(d):
return (d['weight'], d['factor'])
mylist.sort(key=sort_func)
|
Ordering a list of dictionaries in python
|
I've got a python list of dictionaries:
mylist = [
{'id':0, 'weight':10, 'factor':1, 'meta':'ABC'},
{'id':1, 'weight':5, 'factor':1, 'meta':'ABC'},
{'id':2, 'weight':5, 'factor':2, 'meta':'ABC'},
{'id':3, 'weight':1, 'factor':1, 'meta':'ABC'}
]
Whats the most efficient/cleanest way to order that list by weight then factor (numerically). The resulting list should look like:
mylist = [
{'id':3, 'weight':1, 'factor':1, 'meta':'ABC'},
{'id':1, 'weight':5, 'factor':1, 'meta':'ABC'},
{'id':2, 'weight':5, 'factor':2, 'meta':'ABC'},
{'id':0, 'weight':10, 'factor':1, 'meta':'ABC'},
]
|
[
"mylist.sort(key=lambda d: (d['weight'], d['factor']))\n\nor\nimport operator\nmylist.sort(key=operator.itemgetter('weight', 'factor'))\n\n",
"Something along the lines of the following ought to work:\ndef cmp_dict(x, y):\n weight_diff = y['weight'] - x['weight']\n if weight_diff == 0:\n return y['factor'] - x['factor']\n else:\n return weight_diff\n\nmyList.sort(cmp_dict)\n\n",
"I accepted dF's answer for the inspiration, but here is what I ultimately settled on for my scenario:\n@staticmethod\ndef ordered_list(mylist):\n def sort_func(d):\n return (d['weight'], d['factor'])\n\n mylist.sort(key=sort_func)\n\n"
] |
[
23,
1,
1
] |
[
"decoratedlist = [(item[weight], item) for item in mylist]\ndecoratedlist.sort()\nresults = [item for (key, item) in decoratedlist]\n\n"
] |
[
-1
] |
[
"dictionary",
"list",
"python"
] |
stackoverflow_0000861190_dictionary_list_python.txt
|
Q:
Python: adding namespaces in lxml
I'm trying to specify a namespace using lxml similar to this example (taken from here):
<TreeInventory xsi:noNamespaceSchemaLocation="Trees.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
</TreeInventory>
I'm not sure how to add the Schema instance to use and also the Schema location.
The documentation got me started, by doing something like:
>>> NS = 'http://www.w3.org/2001/XMLSchema-instance'
>>> TREE = '{%s}' % NS
>>> NSMAP = {None: NS}
>>> tree = etree.Element(TREE + 'TreeInventory', nsmap=NSMAP)
>>> etree.tostring(tree, pretty_print=True)
'<TreeInventory xmlns="http://www.w3.org/2001/XMLSchema-instance"/>\n'
I'm not sure how to specify it an instance though, and then also specify a location. It seems like this can be done with the nsmap keyword-arg in etree.Element, but I don't see how.
A:
In some more steps, for clarity:
>>> NS = 'http://www.w3.org/2001/XMLSchema-instance'
As far as I can see, it is the attribute noNamespaceSchemaLocation that you want namespaced, not the TreeInventory element. So:
>>> location_attribute = '{%s}noNamespaceSchemaLocation' % NS # f-string doesn't work in this case
>>> elem = etree.Element('TreeInventory', attrib={location_attribute: 'Trees.xsd'})
>>> etree.tostring(elem, pretty_print=True)
'<TreeInventory xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="Trees.xsd"/>\n'
This looks like what you wanted...
You could of course also create the element first, without attributes, and then set the attribute, like this:
>>> elem = etree.Element('TreeInventory')
>>> elem.set(location_attribute, 'Trees.xsd')
As for the nsmap parameter: I believe it is only used to define which prefixes to use on serialization. In this case, it is not needed, because lxml knows the commonly used prefix for the namespace in question is "xsi". If it were not some well-known namespace, you would probably see prefixes like "ns0", "ns1" etc..., unless you specified which prefix you preferred. (remember: the prefix is not supposed to matter)
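For example, to pin the prefix yourself via nsmap (namespace URI is just the one from the question):
>>> elem = etree.Element('{http://example.net/ns}el', nsmap={'ex': 'http://example.net/ns'})
>>> etree.tostring(elem)
'<ex:el xmlns:ex="http://example.net/ns"/>'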
|
Python: adding namespaces in lxml
|
I'm trying to specify a namespace using lxml similar to this example (taken from here):
<TreeInventory xsi:noNamespaceSchemaLocation="Trees.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
</TreeInventory>
I'm not sure how to add the Schema instance to use and also the Schema location.
The documentation got me started, by doing something like:
>>> NS = 'http://www.w3.org/2001/XMLSchema-instance'
>>> TREE = '{%s}' % NS
>>> NSMAP = {None: NS}
>>> tree = etree.Element(TREE + 'TreeInventory', nsmap=NSMAP)
>>> etree.tostring(tree, pretty_print=True)
'<TreeInventory xmlns="http://www.w3.org/2001/XMLSchema-instance"/>\n'
I'm not sure how to specify it an instance though, and then also specify a location. It seems like this can be done with the nsmap keyword-arg in etree.Element, but I don't see how.
|
[
"In some more steps, for clarity:\n>>> NS = 'http://www.w3.org/2001/XMLSchema-instance'\n\nAs far as I can see, it is the attribute noNamespaceSchemaLocation that you want namespaced, not the TreeInventory element. So:\n>>> location_attribute = '{%s}noNamespaceSchemaLocation' % NS # f-string doesn't work in this case\n>>> elem = etree.Element('TreeInventory', attrib={location_attribute: 'Trees.xsd'})\n>>> etree.tostring(elem, pretty_print=True)\n'<TreeInventory xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:noNamespaceSchemaLocation=\"Trees.xsd\"/>\\n'\n\nThis looks like what you wanted...\nYou could of course also create the element first, without attributes, and then set the attribute, like this:\n>>> elem = etree.Element('TreeInventory')\n>>> elem.set(location_attribute, 'Trees.xsd')\n\nAs for the nsmap parameter: I believe it is only used to define which prefixes to use on serialization. In this case, it is not needed, because lxml knows the commonly used prefix for the namespace in question is \"xsi\". If it were not some well-known namespace, you would probably see prefixes like \"ns0\", \"ns1\" etc..., unless you specified which prefix you preferred. (remember: the prefix is not supposed to matter)\n"
] |
[
10
] |
[] |
[] |
[
"lxml",
"python",
"xml_namespaces"
] |
stackoverflow_0000863183_lxml_python_xml_namespaces.txt
|
Q:
Python While Loop Condition Evaluation
Say I have the following loop:
i = 0
l = [0, 1, 2, 3]
while i < len(l):
    if something_happens:
        l.append(something)
    i += 1
Will the len(l) condition evaluated in the while loop be updated when something is appended to l?
A:
Yes it will.
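A quick demonstration:
>>> l = [0]
>>> i = 0
>>> while i < len(l):
...     if len(l) < 3:
...         l.append(0)
...     i += 1
...
>>> i
3
The loop ran three times even though len(l) was 1 at the start, because the condition re-reads len(l) on every iteration.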
A:
Your code will work, but using a loop counter is often not considered very "pythonic". Using for works just as well and eliminates the counter:
>>> foo = [0, 1, 2]
>>> for bar in foo:
if bar % 2: # append to foo for every odd number
foo.append(len(foo))
print bar
0
1
2
3
4
If you need to know how "far" into the list you are, you can use enumerate:
>>> foo = ["wibble", "wobble", "wubble"]
>>> for i, bar in enumerate(foo):
if i % 2: # append to foo for every odd number
foo.append("appended")
print bar
wibble
wobble
wubble
appended
appended
|
Python While Loop Condition Evaluation
|
Say I have the following loop:
i = 0
l = [0, 1, 2, 3]
while i < len(l):
    if something_happens:
        l.append(something)
    i += 1
Will the len(l) condition evaluated in the while loop be updated when something is appended to l?
|
[
"Yes it will.\n",
"Your code will work, but using a loop counter is often not considered very \"pythonic\". Using for works just as well and eliminates the counter:\n>>> foo = [0, 1, 2]\n>>> for bar in foo:\n if bar % 2: # append to foo for every odd number\n foo.append(len(foo))\n print bar\n\n0\n1\n2\n3\n4\n\nIf you need to know how \"far\" into the list you are, you can use enumerate:\n>>> foo = [\"wibble\", \"wobble\", \"wubble\"]\n>>> for i, bar in enumerate(foo):\n if i % 2: # append to foo for every odd number\n foo.append(\"appended\")\n print bar\n\nwibble\nwobble\nwubble\nappended\nappended\n\n"
] |
[
14,
3
] |
[] |
[] |
[
"python",
"while_loop"
] |
stackoverflow_0000864603_python_while_loop.txt
|
Q:
Beginning Windows Mobile 6.1 Development With Python
I've wanted to get into Python development for a while and most of my programming experience has been in .NET, with no mobile development. I recently thought of a useful app to make for my Windows Mobile phone and thought this could be a great first Python project.
I did a little research online and found PyCe which I think is what I would need to get started on the app? Can anyone with some experience in this area point me in the right direction? What to download to get started, what lightweight database I could use, etc?
Thanks in advance!
A:
Can't help you much with Python\CE, but if you want a great db for mobile devices, SQLite will do the job for you. If you do a quick google you'll find there are libraries for connecting to SQLite with Python too.
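If the Python build on your device ships the standard sqlite3 module (an assumption -- check your PythonCE distribution), basic usage is just:
import sqlite3

conn = sqlite3.connect('app.db')
conn.execute('CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)')
conn.execute('INSERT INTO notes (body) VALUES (?)', ('hello',))
conn.commit()
print conn.execute('SELECT body FROM notes').fetchone()[0]
conn.close()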
|
Beginning Windows Mobile 6.1 Development With Python
|
I've wanted to get into Python development for a while and most of my programming experience has been in .NET, with no mobile development. I recently thought of a useful app to make for my Windows Mobile phone and thought this could be a great first Python project.
I did a little research online and found PyCe which I think is what I would need to get started on the app? Can anyone with some experience in this area point me in the right direction? What to download to get started, what lightweight database I could use, etc?
Thanks in advance!
|
[
"Can't help you much with Python\\CE but if you want a great db for mobile devices SQLLite will do the job for you. If you do a quick google you'll find there are libraries for connecting to SQLLite with Python too.\n"
] |
[
1
] |
[] |
[] |
[
"mobile_phones",
"python",
"windows_mobile"
] |
stackoverflow_0000864887_mobile_phones_python_windows_mobile.txt
|
Q:
Python os.forkpty why can't I make it work
import pty
import os
import sys
import time
pid, fd = os.forkpty()
if pid == 0:
# Slave
os.execlp("su","su","MYUSERNAME","-c","id")
# Master
print os.read(fd, 1000)
os.write(fd,"MYPASSWORD\n")
time.sleep(1)
print os.read(fd, 1000)
os.waitpid(pid,0)
print "Why have I not seen any output from id?"
A:
You are sleeping for too long. Your best bet is to start reading as soon as you can one byte at a time.
#!/usr/bin/env python
import os
import sys
pid, fd = os.forkpty()
if pid == 0:
# child
os.execlp("ssh","ssh","hostname","uname")
else:
# parent
print os.read(fd, 1000)
os.write(fd,"password\n")
    c = os.read(fd, 1)
    while c:
        sys.stdout.write(c)   # write the byte we already have before reading the next
        c = os.read(fd, 1)
|
Python os.forkpty why can't I make it work
|
import pty
import os
import sys
import time
pid, fd = os.forkpty()
if pid == 0:
# Slave
os.execlp("su","su","MYUSERNAME","-c","id")
# Master
print os.read(fd, 1000)
os.write(fd,"MYPASSWORD\n")
time.sleep(1)
print os.read(fd, 1000)
os.waitpid(pid,0)
print "Why have I not seen any output from id?"
|
[
"You are sleeping for too long. Your best bet is to start reading as soon as you can one byte at a time.\n#!/usr/bin/env python\n\nimport os\nimport sys\n\npid, fd = os.forkpty()\n\nif pid == 0:\n # child\n os.execlp(\"ssh\",\"ssh\",\"hostname\",\"uname\")\nelse:\n # parent\n print os.read(fd, 1000)\n os.write(fd,\"password\\n\")\n\n c = os.read(fd, 1)\n while c:\n c = os.read(fd, 1)\n sys.stdout.write(c)\n\n"
] |
[
5
] |
[] |
[] |
[
"pty",
"python"
] |
stackoverflow_0000864826_pty_python.txt
|
Q:
How can I get the name of a python class?
When I have an object foo, I can get its class object via
str(foo.__class__)
What I would need however is only the name of the class ("Foo" for example), the above would give me something along the lines of
"<class 'my.package.Foo'>"
I know I can get it quite easily with a regexp, but I would like to know if there's a more "clean" way.
A:
Try
__class__.__name__
A:
foo.__class__.__name__ should give you result you need.
A:
Python 3.0.1 (r301:69561, Feb 13 2009, 20:04:18) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> class foo:
... x = 1
...
>>> f = foo()
>>> f.__class__.__name__
'foo'
>>>
|
How can I get the name of a python class?
|
When I have an object foo, I can get its class object via
str(foo.__class__)
What I would need however is only the name of the class ("Foo" for example), the above would give me something along the lines of
"<class 'my.package.Foo'>"
I know I can get it quite easily with a regexp, but I would like to know if there's a more "clean" way.
|
[
"Try\n__class__.__name__\n\n",
"foo.__class__.__name__ should give you result you need.\n",
"Python 3.0.1 (r301:69561, Feb 13 2009, 20:04:18) [MSC v.1500 32 bit (Intel)] on win32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> class foo:\n... x = 1\n...\n>>> f = foo()\n>>> f.__class__.__name__\n'foo'\n>>>\n\n"
] |
[
4,
3,
1
] |
[] |
[] |
[
"inspection",
"python",
"reflection"
] |
stackoverflow_0000865384_inspection_python_reflection.txt
|
Q:
how to isinstance(x, module)?
I need to test if a variable is a module or not. How to do this in the cleanest way?
I need this for initializing some dispatcher function and I want that the function can accept either dict or module as an argument.
A:
>>> import os, types
>>> isinstance(os, types.ModuleType)
True
(It also works for your own Python modules, as well as built-in ones like os.)
A:
I like to use this so you don't have to import the types module:
isinstance(amodule, __builtins__.__class__)
|
how to isinstance(x, module)?
|
I need to test if a variable is a module or not. How to do this in the cleanest way?
I need this for initializing some dispatcher function and I want that the function can accept either dict or module as an argument.
|
[
">>> import os, types\n>>> isinstance(os, types.ModuleType)\nTrue\n\n(It also works for your own Python modules, as well as built-in ones like os.)\n",
"I like to use this so you don't have to import the types module:\nisinstance(amodule, __builtins__.__class__)\n\n"
] |
[
38,
6
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000865503_python.txt
|
Q:
How can I perform division on a datetime.timedelta in python?
I'd like to be able to do the following:
num_intervals = (cur_date - previous_date) / interval_length
or
print (datetime.now() - (datetime.now() - timedelta(days=5)))
/ timedelta(hours=12)
# won't run, would like it to print '10'
but the division operation is unsupported on timedeltas. Is there a way that I can implement division for timedeltas?
Edit: Looks like this was added to Python 3.2 (thanks rincewind!): http://bugs.python.org/issue2706
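(For reference, on Python 3.2 and later this now works out of the box:
>>> from datetime import timedelta
>>> timedelta(days=5) / timedelta(hours=12)
10.0
)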
A:
Division and multiplication by integers seems to work out of the box:
>>> from datetime import timedelta
>>> timedelta(hours=6)
datetime.timedelta(0, 21600)
>>> timedelta(hours=6) / 2
datetime.timedelta(0, 10800)
A:
Sure, just convert to a number of seconds (minutes, milliseconds, hours, take your pick of units) and do the division.
EDIT (again): so you can't assign to timedelta.__div__. Try this, then:
divtdi = datetime.timedelta.__div__
def divtd(td1, td2):
if isinstance(td2, (int, long)):
return divtdi(td1, td2)
us1 = td1.microseconds + 1000000 * (td1.seconds + 86400 * td1.days)
us2 = td2.microseconds + 1000000 * (td2.seconds + 86400 * td2.days)
return us1 / us2 # this does integer division, use float(us1) / us2 for fp division
And to incorporate this into nadia's suggestion:
class MyTimeDelta:
__div__ = divtd
Example usage:
>>> divtd(datetime.timedelta(hours = 12), datetime.timedelta(hours = 2))
6
>>> divtd(datetime.timedelta(hours = 12), 2)
datetime.timedelta(0, 21600)
>>> MyTimeDelta(hours = 12) / MyTimeDelta(hours = 2)
6
etc. Of course you could even name (or alias) your custom class timedelta so it gets used in place of the real timedelta, at least in your code.
A:
You can override the division operator like this:
class MyTimeDelta(timedelta):
def __div__(self, value):
        # Do something with the object
|
How can I perform division on a datetime.timedelta in python?
|
I'd like to be able to do the following:
num_intervals = (cur_date - previous_date) / interval_length
or
print (datetime.now() - (datetime.now() - timedelta(days=5)))
/ timedelta(hours=12)
# won't run, would like it to print '10'
but the division operation is unsupported on timedeltas. Is there a way that I can implement division for timedeltas?
Edit: Looks like this was added to Python 3.2 (thanks rincewind!): http://bugs.python.org/issue2706
|
[
"Division and multiplication by integers seems to work out of the box:\n>>> from datetime import timedelta\n>>> timedelta(hours=6)\ndatetime.timedelta(0, 21600)\n>>> timedelta(hours=6) / 2\ndatetime.timedelta(0, 10800)\n\n",
"Sure, just convert to a number of seconds (minutes, milliseconds, hours, take your pick of units) and do the division.\nEDIT (again): so you can't assign to timedelta.__div__. Try this, then:\ndivtdi = datetime.timedelta.__div__\ndef divtd(td1, td2):\n if isinstance(td2, (int, long)):\n return divtdi(td1, td2)\n us1 = td1.microseconds + 1000000 * (td1.seconds + 86400 * td1.days)\n us2 = td2.microseconds + 1000000 * (td2.seconds + 86400 * td2.days)\n return us1 / us2 # this does integer division, use float(us1) / us2 for fp division\n\nAnd to incorporate this into nadia's suggestion:\nclass MyTimeDelta:\n __div__ = divtd\n\nExample usage:\n>>> divtd(datetime.timedelta(hours = 12), datetime.timedelta(hours = 2))\n6\n>>> divtd(datetime.timedelta(hours = 12), 2)\ndatetime.timedelta(0, 21600)\n>>> MyTimeDelta(hours = 12) / MyTimeDelta(hours = 2)\n6\n\netc. Of course you could even name (or alias) your custom class timedelta so it gets used in place of the real timedelta, at least in your code.\n",
"You can override the division operator like this:\nclass MyTimeDelta(timedelta):\n def __div__(self, value):\n # Dome something about the object\n\n"
] |
[
16,
11,
4
] |
[] |
[] |
[
"date",
"datetime",
"division",
"python",
"timedelta"
] |
stackoverflow_0000865618_date_datetime_division_python_timedelta.txt
|
Q:
Can I get rows from SQLAlchemy that are plain arrays, rather than dictionaries?
I'm trying to optimize some Python code. The profiler tells me that SQLAlchemy's _get_col() is what's killing performance. The code looks something like this:
lots_of_rows = get_lots_of_rows()
for row in lots_of_rows:
if row.x == row.y:
print row.z
I was about to go through the code and make it more like this...
lots_of_rows = get_lots_of_rows()
for row in lots_of_rows:
if row[0] == row[1]:
print row[2]
...but I've found some documentation that seems to indicate that when accessing row objects like arrays, you're actually still pulling dictionary keys. In other words, the row object looks like this:
'x': (x object)
'0': (x object)
'y': (y object)
'1': (y object)
'z': (z object)
'2': (z object)
If that's the case, I doubt I'll see any performance improvement from accessing columns by number rather than name. Is there any way to get SA to return results as a list of tuples, or a list of lists, rather than a list of dictionaries? Alternatively, can anyone suggest any other optimizations?
A:
Forgive the obvious answer, but why isn't row.x == row.y in your query? For example:
mytable.select().where(mytable.c.x==mytable.c.y)
Should give you a huge performance boost. Read the rest of the documentation.
A:
I think row.items() is what you're looking for. It returns a list of (key, value) tuples for the row.
Link
A:
SQLAlchemy proxies all access to the underlying database cursor to map named keys to positions in the row tuple and perform any necessary type conversions. The underlying implementation is quite heavily optimized, caching almost everything. Looking over the disassembly the only ways to further optimize seem to be to throw out extensibility and get rid of a couple of attribute lookups or to resort to dynamic code generation for smaller gains, or to gain more, implement the corresponding ResultProxy and RowProxy classes in C.
Some quick profiling shows that the overhead is around 5us per lookup on my laptop. That will be significant if only trivial processing is done with the data. In those kind of cases it might be reasonable to drop down to dbapi level. This doesn't mean that you have to lose the query building functionality of SQLAlchemy. Just execute the statement as you usually would and get the dbapi cursor from the ResultProxy by accessing result.cursor.cursor. (result.cursor is an SQLAlchemy CursorFairy object) Then you can use the regular dbapi fetchall(), fetchone() and fetchmany() methods.
But if you really are doing trivial processing, it might be useful to do it, or at least the filtering part, on the database server. You probably lose database portability, but that might not be an issue.
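A sketch of dropping to the dbapi cursor as described above (the statement and column positions are assumptions):
result = conn.execute(my_select)      # ordinary SQLAlchemy execution
cursor = result.cursor.cursor         # raw dbapi cursor behind SQLAlchemy's proxies
for x, y, z in cursor.fetchall():     # plain tuples, no RowProxy overhead
    if x == y:
        print z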
A:
You should post your profiler results as well as stack traces around the '_get_col' call so we know which _get_col is being called. (and whether _get_col really is the bottleneck).
I looked at the sqlalchemy source, looks like it may be calling 'lookup_key' (in engine/base.py) each time and it looks like this caches the column value locally, i guess lazily (via PopulateDict).
You can try bypassing that by directly using row.__props (not recommended since it's private); maybe you can use row.cursor, but it looks like you would gain more by bypassing SQLAlchemy (except the SQL generation) and working directly with a cursor.
-- J
|
Can I get rows from SQLAlchemy that are plain arrays, rather than dictionaries?
|
I'm trying to optimize some Python code. The profiler tells me that SQLAlchemy's _get_col() is what's killing performance. The code looks something like this:
lots_of_rows = get_lots_of_rows()
for row in lots_of_rows:
if row.x == row.y:
print row.z
I was about to go through the code and make it more like this...
lots_of_rows = get_lots_of_rows()
for row in lots_of_rows:
if row[0] == row[1]:
print row[2]
...but I've found some documentation that seems to indicate that when accessing row objects like arrays, you're actually still pulling dictionary keys. In other words, the row object looks like this:
'x': (x object)
'0': (x object)
'y': (y object)
'1': (y object)
'z': (z object)
'2': (z object)
If that's the case, I doubt I'll see any performance improvement from accessing columns by number rather than name. Is there any way to get SA to return results as a list of tuples, or a list of lists, rather than a list of dictionaries? Alternately, can anyone suggest any other optimizations?
|
[
"Forgive the obvious answer, but why isn't row.x == row.y in your query? For example:\nmytable.select().where(mytable.c.x==mytable.c.y)\n\nShould give you a huge performance boost. Read the rest of the documentation.\n",
"I think row.items() is what you're looking for. It returns a list of (key, value) tuples for the row.\nLink\n",
"SQLAlchemy proxies all access to the underlying database cursor to map named keys to positions in the row tuple and perform any necessary type conversions. The underlying implementation is quite heavily optimized, caching almost everything. Looking over the disassembly the only ways to further optimize seem to be to throw out extensibility and get rid of a couple of attribute lookups or to resort to dynamic code generation for smaller gains, or to gain more, implement the corresponding ResultProxy and RowProxy classes in C.\nSome quick profiling shows that the overhead is around 5us per lookup on my laptop. That will be significant if only trivial processing is done with the data. In those kind of cases it might be reasonable to drop down to dbapi level. This doesn't mean that you have to lose the query building functionality of SQLAlchemy. Just execute the statement as you usually would and get the dbapi cursor from the ResultProxy by accessing result.cursor.cursor. (result.cursor is an SQLAlchemy CursorFairy object) Then you can use the regular dbapi fetchall(), fetchone() and fetchmany() methods.\nBut if you really are doing trivial processing it might be useful to do it, or at least the filtering part on the database server. You probably lose database portability, but that might not be an issue.\n",
"You should post your profiler results as well as stack traces around the '_get_col' call so we know which _get_col is being called. (and whether _get_col really is the bottleneck).\nI looked at the sqlalchemy source, looks like it may be calling 'lookup_key' (in engine/base.py) each time and it looks like this caches the column value locally, i guess lazily (via PopulateDict). \nYou can try bypassing that by directly using row.__props (not recommended since it's private), maybe you can row.cursor, but it looks like you would gain much by bypassing sqlalchemy (except the sql generation) and working directly w/ a cursor. \n-- J\n"
] |
[
2,
1,
1,
0
] |
[] |
[] |
[
"python",
"sqlalchemy"
] |
stackoverflow_0000656050_python_sqlalchemy.txt
|
Q:
Converting an ImageMagick FX operator to pure Python code with PIL
I'm trying to port some image processing functionality from an Image Magick
command (using the Fx Special Effects Image Operator) to Python using PIL. My
issue is that I'm not entirely understanding what this fx operator is doing:
convert input.png gradient.png -fx "v.p{0,u*v.h}" output.png
From a high level, this command takes the colors from a gradient image
(gradient.png) and applies them as the color palette of the input image
(input.png), writing to an output image (output.png).
From what I've figured out, u is the input image, v is the gradient, and it is
going through each left-most pixel in the gradient from top to bottom,
somehow applying its colors to the input image.
I can't wrap my head around how to do this same thing programmatically with
PIL. The best thing I've come up with was to convert the image to a paletted
image (down-sampling to a measly 256 colors) and grabbing colors individually
from the gradient with a pixel access object.
import Image
# open the input image
input_img = Image.open('input.png')
# open gradient image and resize to 256px height
gradient_img = Image.open('gradient.png')
gradient_img = gradient_img.resize( (gradient_img.size[0], 256,) )
# get pixel access object (significantly quicker than getpixel method)
gradient_pix = gradient_img.load()
# build a sequence of 256 palette values (going from bottom to top)
sequence = []
for i in range(255, 0, -1):
# from rgb tuples for each pixel row
sequence.extend(gradient_pix[0, i])
# convert to "P" mode in order to use putpalette() with built sequence
output_img = input_img.convert("P")
output_img.putpalette(sequence)
# save output file
output_img = output_img.convert("RGBA")
output_img.save('output.png')
This works, but like I said, it down-samples to 256 colors. Not only is
this a ham-handed way of doing things, it results in a really crappy output
image. How could I duplicate the Magick functionality without cramming the
result into 256 colors?
addendum: forgot to cite the blog where I found the original Magick command
A:
I know it's been about a month and you might have already figured it out. But here is the answer.
From ImageMagicK documentation I was able to understand what the effect is actually doing.
convert input.png gradient.png -fx "v.p{0,u*v.h}" output.png
v is the second image (gradient.png)
u is the first image (input.png)
v.p will get a pixel value
v.p{0, 0} -> first pixel in the image
v.h -> the hight of the second image
v.p{0, u * v.h} -> will read the Nth pixel where N = u * v.h
I converted that into PIL, and the result looks exactly like you want it to be:
import Image
# open the input image
input_img = Image.open('input.png')
# open gradient image and resize to 256px height
gradient_img = Image.open('gradient.png')
gradient_img = gradient_img.resize( (gradient_img.size[0], 256,) )
# get pixel access object (significantly quicker than getpixel method)
gradient_pix = gradient_img.load()
data = input_img.getdata()
input_img.putdata([gradient_pix[0, r] for (r, g, b, a) in data])
input_img.save('output.png')
|
Converting an ImageMagick FX operator to pure Python code with PIL
|
I'm trying to port some image processing functionality from an Image Magick
command (using the Fx Special Effects Image Operator) to Python using PIL. My
issue is that I'm not entirely understanding what this fx operator is doing:
convert input.png gradient.png -fx "v.p{0,u*v.h}" output.png
From a high level, this command takes the colors from a gradient image
(gradient.png) and applies them as the color palette of the input image
(input.png), writing to an output image (output.png).
From what I've figured out, u is the input image, v is the gradient, and it is
going through each left-most pixel in the gradient from top to bottom,
somehow applying its colors to the input image.
I can't wrap my head around how to do this same thing programmatically with
PIL. The best thing I've come up with was to convert the image to a paletted
image (down-sampling to a measly 256 colors) and grabbing colors individually
from the gradient with a pixel access object.
import Image
# open the input image
input_img = Image.open('input.png')
# open gradient image and resize to 256px height
gradient_img = Image.open('gradient.png')
gradient_img = gradient_img.resize( (gradient_img.size[0], 256,) )
# get pixel access object (significantly quicker than getpixel method)
gradient_pix = gradient_img.load()
# build a sequence of 256 palette values (going from bottom to top)
sequence = []
for i in range(255, 0, -1):
# from rgb tuples for each pixel row
sequence.extend(gradient_pix[0, i])
# convert to "P" mode in order to use putpalette() with built sequence
output_img = input_img.convert("P")
output_img.putpalette(sequence)
# save output file
output_img = output_img.convert("RGBA")
output_img.save('output.png')
This works, but like I said, it down-samples to 256 colors. Not only is
this a ham-handed way of doing things, it results in a really crappy output
image. How could I duplicate the Magick functionality without cramming the
result into 256 colors?
addendum: forgot to cite the blog where I found the original Magick command
|
[
"I know it's been about a month and you might has already figured it out. But here is the answer.\nFrom ImageMagicK documentation I was able to understand what the effect is actually doing.\nconvert input.png gradient.png -fx \"v.p{0,u*v.h}\" output.png\n\nv is the second image (gradient.png)\nu is the first image (input.png)\nv.p will get a pixel value\nv.p{0, 0} -> first pixel in the image\nv.h -> the hight of the second image\nv.p{0, u * v.h} -> will read the Nth pixel where N = u * v.h\n\nI converted that into PIL, and the result looks exactly like you want it to be:\nimport Image\n\n# open the input image\ninput_img = Image.open('input.png')\n\n# open gradient image and resize to 256px height\ngradient_img = Image.open('gradient.png')\ngradient_img = gradient_img.resize( (gradient_img.size[0], 256,) )\n\n# get pixel access object (significantly quicker than getpixel method)\ngradient_pix = gradient_img.load()\n\ndata = input_img.getdata()\ninput_img.putdata([gradient_pix[0, r] for (r, g, b, a) in data])\ninput_img.save('output.png')\n\n"
] |
[
1
] |
[] |
[] |
[
"imagemagick",
"python",
"python_imaging_library"
] |
stackoverflow_0000789401_imagemagick_python_python_imaging_library.txt
|
Q:
Elixir (SqlAlchemy): relations between 3 tables with composite primary keys
I've 3 tables:
A Company table with (company_id) primary key
A Page table with (company_id, url) primary key & a foreign key back to Company
An Attr table with (company_id, attr_key) primary key & a foreign key back to Company.
My question is how to construct the ManyToOne relation from Attr back to Page using the existing columns in Attr, i.e. company_id and url?
from elixir import Entity, has_field, setup_all, ManyToOne, OneToMany, Field, Unicode, using_options
from sqlalchemy.orm import relation
class Company(Entity):
using_options(tablename='company')
company_id = Field(Unicode(32), primary_key=True)
has_field('display_name', Unicode(255))
pages = OneToMany('Page')
class Page(Entity):
using_options(tablename='page')
company = ManyToOne('Company', colname='company_id', primary_key=True)
url = Field(Unicode(255), primary_key=True)
class Attr(Entity):
using_options(tablename='attr')
company = ManyToOne('Company', colname='company_id', primary_key=True)
attr_key = Field(Unicode(255), primary_key=True)
url = Field(Unicode(255)) #, ForeignKey('page.url'))
# page = ManyToOne('Page', colname=["company_id", "url"])
# page = relation(Page, backref='attrs', foreign_keys=["company_id", "url"], primaryjoin=and_(url==Page.url_part, company_id==Page.company_id))
I've commented out some failed attempts.
In the end, Attr.company_id will need to be a foreignkey to both Page and Company (as well as a primary key in Attr).
Is this possible?
A:
Yes, you can do this. Elixir doesn't have a built-in way to do this, but because it's a thin wrapper on SQLAlchemy you can convince it to do this. Because Elixir doesn't have a concept of a many-to-one relation that reuses existing columns, you need to use GenericProperty with the SQLAlchemy relation property and add the foreign key using table options. The following code should do what you want:
from elixir import Entity, has_field, setup_all, ManyToOne, OneToMany, Field, Unicode, using_options, using_table_options, GenericProperty
from sqlalchemy.orm import relation
from sqlalchemy import ForeignKeyConstraint
class Company(Entity):
using_options(tablename='company')
company_id = Field(Unicode(32), primary_key=True)
display_name = Field(Unicode(255))
pages = OneToMany('Page')
class Page(Entity):
using_options(tablename='page')
company = ManyToOne('Company', colname='company_id', primary_key=True)
url = Field(Unicode(255), primary_key=True)
attrs = OneToMany('Attr')
class Attr(Entity):
using_options(tablename='attr')
page = ManyToOne('Page', colname=['company_id', 'url'], primary_key=True)
attr_key = Field(Unicode(255), primary_key=True)
using_table_options(ForeignKeyConstraint(['company_id'], ['company.company_id']))
company = GenericProperty(relation(Company))
|
Elixir (SqlAlchemy): relations between 3 tables with composite primary keys
|
I've 3 tables:
A Company table with (company_id) primary key
A Page table with (company_id, url) primary key & a foreign key back to Company
An Attr table with (company_id, attr_key) primary key & a foreign key back to Company.
My question is how to construct the ManyToOne relation from Attr back to Page using the existing columns in Attr, i.e. company_id and url?
from elixir import Entity, has_field, setup_all, ManyToOne, OneToMany, Field, Unicode, using_options
from sqlalchemy.orm import relation
class Company(Entity):
using_options(tablename='company')
company_id = Field(Unicode(32), primary_key=True)
has_field('display_name', Unicode(255))
pages = OneToMany('Page')
class Page(Entity):
using_options(tablename='page')
company = ManyToOne('Company', colname='company_id', primary_key=True)
url = Field(Unicode(255), primary_key=True)
class Attr(Entity):
using_options(tablename='attr')
company = ManyToOne('Company', colname='company_id', primary_key=True)
attr_key = Field(Unicode(255), primary_key=True)
url = Field(Unicode(255)) #, ForeignKey('page.url'))
# page = ManyToOne('Page', colname=["company_id", "url"])
# page = relation(Page, backref='attrs', foreign_keys=["company_id", "url"], primaryjoin=and_(url==Page.url_part, company_id==Page.company_id))
I've commented out some failed attempts.
In the end, Attr.company_id will need to be a foreignkey to both Page and Company (as well as a primary key in Attr).
Is this possible?
|
[
"Yes you can do this. Elixir doesn't have a built in way to do this, but because it's a thin wrapper on SQLAlchemy you can convince it to do this. Because Elixir doesn't have a concept of a many-to-one relation that reuses existing columns you need to use the GenericProperty with the SQLAlchemy relation property and add the foreign key using table options. The following code should do what you want:\nfrom elixir import Entity, has_field, setup_all, ManyToOne, OneToMany, Field, Unicode, using_options, using_table_options, GenericProperty\nfrom sqlalchemy.orm import relation\nfrom sqlalchemy import ForeignKeyConstraint\n\nclass Company(Entity):\n using_options(tablename='company')\n\n company_id = Field(Unicode(32), primary_key=True)\n display_name = Field(Unicode(255))\n pages = OneToMany('Page')\n\nclass Page(Entity):\n using_options(tablename='page')\n\n company = ManyToOne('Company', colname='company_id', primary_key=True)\n url = Field(Unicode(255), primary_key=True)\n attrs = OneToMany('Attr')\n\nclass Attr(Entity):\n using_options(tablename='attr')\n\n page = ManyToOne('Page', colname=['company_id', 'url'], primary_key=True)\n attr_key = Field(Unicode(255), primary_key=True)\n\n using_table_options(ForeignKeyConstraint(['company_id'], ['company.company_id']))\n company = GenericProperty(relation(Company))\n\n"
] |
[
2
] |
[] |
[] |
[
"python",
"python_elixir",
"sqlalchemy"
] |
stackoverflow_0000835834_python_python_elixir_sqlalchemy.txt
|
Q:
How to generate XML documents with namespaces in Python
I'm trying to generate an XML document with namespaces, currently with Python's xml.dom.minidom:
import xml.dom.minidom
doc = xml.dom.minidom.Document()
el = doc.createElementNS('http://example.net/ns', 'el')
doc.appendChild(el)
print(doc.toprettyxml())
The namespace is saved (doc.childNodes[0].namespaceURI is 'http://example.net/ns'), but why is it missing in the output?
<?xml version="1.0" ?>
<el/>
I expect:
<?xml version="1.0" ?>
<el xmlns="http://example.net/ns" />
or
<?xml version="1.0" ?>
<randomid:el xmlns:randomid="http://example.net/ns" />
A:
createElementNS() is defined as:
def createElementNS(self, namespaceURI, qualifiedName):
prefix, localName = _nssplit(qualifiedName)
e = Element(qualifiedName, namespaceURI, prefix)
e.ownerDocument = self
return e
so…
import xml.dom.minidom
doc = xml.dom.minidom.Document()
el = doc.createElementNS('http://example.net/ns', 'ex:el')
#--------------------------------------------------^^^^^
doc.appendChild(el)
print(doc.toprettyxml())
yields:
<?xml version="1.0" ?>
<ex:el/>
…not quite there…
import xml.dom.minidom
doc = xml.dom.minidom.Document()
el = doc.createElementNS('http://example.net/ns', 'ex:el')
el.setAttribute("xmlns:ex", "http://example.net/ns")
doc.appendChild(el)
print(doc.toprettyxml())
yields:
<?xml version="1.0" ?>
<ex:el xmlns:ex="http://example.net/ns"/>
alternatively:
import xml.dom.minidom
doc = xml.dom.minidom.Document()
el = doc.createElementNS('http://example.net/ns', 'el')
el.setAttribute("xmlns", "http://example.net/ns")
doc.appendChild(el)
print(doc.toprettyxml())
which produces:
<?xml version="1.0" ?>
<el xmlns="http://example.net/ns"/>
It looks like you'd have to do it manually. Element.writexml() shows no indication that namespaces would get any special treatment.
EDIT: This answer is targeted at xml.dom.minidom only, since the OP used it in the question. I do not mean to suggest that it is impossible to use XML namespaces in Python generally. ;-)
A:
This feature is already proposed; a patch is slumbering in the Python bug database. See Tomalak's answer (in short: Manually add the xmlns attribute) for a workaround.
|
How to generate XML documents with namespaces in Python
|
I'm trying to generate an XML document with namespaces, currently with Python's xml.dom.minidom:
import xml.dom.minidom
doc = xml.dom.minidom.Document()
el = doc.createElementNS('http://example.net/ns', 'el')
doc.appendChild(el)
print(doc.toprettyxml())
The namespace is saved (doc.childNodes[0].namespaceURI is 'http://example.net/ns'), but why is it missing in the output?
<?xml version="1.0" ?>
<el/>
I expect:
<?xml version="1.0" ?>
<el xmlns="http://example.net/ns" />
or
<?xml version="1.0" ?>
<randomid:el xmlns:randomid="http://example.net/ns" />
|
[
"createElementNS() is defined as:\ndef createElementNS(self, namespaceURI, qualifiedName):\n prefix, localName = _nssplit(qualifiedName)\n e = Element(qualifiedName, namespaceURI, prefix)\n e.ownerDocument = self\n return e\n\nso…\nimport xml.dom.minidom\ndoc = xml.dom.minidom.Document()\nel = doc.createElementNS('http://example.net/ns', 'ex:el')\n#--------------------------------------------------^^^^^\ndoc.appendChild(el)\nprint(doc.toprettyxml())\n\nyields:\n<?xml version=\"1.0\" ?>\n<ex:el/>\n\n…not quite there…\nimport xml.dom.minidom\ndoc = xml.dom.minidom.Document()\nel = doc.createElementNS('http://example.net/ns', 'ex:el')\nel.setAttribute(\"xmlns:ex\", \"http://example.net/ns\")\ndoc.appendChild(el)\nprint(doc.toprettyxml())\n\nyields:\n<?xml version=\"1.0\" ?>\n<ex:el xmlns:ex=\"http://example.net/ns\"/>\n\nalternatively:\nimport xml.dom.minidom\ndoc = xml.dom.minidom.Document()\nel = doc.createElementNS('http://example.net/ns', 'el')\nel.setAttribute(\"xmlns\", \"http://example.net/ns\")\ndoc.appendChild(el)\nprint(doc.toprettyxml())\n\nwich produces:\n<?xml version=\"1.0\" ?>\n<el xmlns=\"http://example.net/ns\"/>\n\nIt looks like you'd have to do it manually. Element.writexml() shows no indication that namespaces would get any special treatment.\nEDIT: This answer is targeted at xml.dom.minidom only, since the OP used it in the question. I do not indicate that it was impossible to use XML namespaces in Python generally. ;-)\n",
"This feature is already proposed; a patch is slumbering in the Python bug database. See Tomalak's answer (in short: Manually add the xmlns attribute) for a workaround.\n"
] |
[
21,
5
] |
[] |
[] |
[
"dom",
"namespaces",
"python",
"xml"
] |
stackoverflow_0000863774_dom_namespaces_python_xml.txt
|
Q:
In Django, how can you change the User class to work with a different db table?
We're running django alongside - and sharing a database with - an existing application. And we want to use an existing "user" table (not Django's own) to store user information.
It looks like it's possible to change the name of the table that Django uses, in the Meta class of the User definition.
But we'd prefer not to change the Django core itself.
So we were thinking that we could sub-class the core auth.User class like this :
class OurUser(User) :
objects = UserManager()
class Meta:
db_table = u'our_user_table'
Here, the aim is not to add any extra fields to the customized User class. But just to use the alternative table.
However, this fails (likely because the ORM is assuming that the our_user_table should have a foreign key referring back to the original User table, which it doesn't).
So, is this sensible way to do what we want to do? Have I missed out on some easier way to map classes onto tables? Or, if not, can this be made to work?
Update :
I think I might be able to make the change I want just by "monkey-patching" the _meta of User in a local_settings.py
User._meta.db_table = 'our_user_table'
Can anyone think of anything bad that could happen if I do this? (Particularly in the context of a fairly typical Django / Pinax application?)
A:
You might find it useful to set up your old table as an alternative authentication source and sidestep all these issues.
Another option is to subclass the user and have the subclass point to your user-model. Override the save function to ensure that everything you need to do to preserve your old functionality is there.
I haven't done either of these myself but hopefully they are useful pointers.
Update
What I mean by alternative authentication in this case is a small python script that says "Yes, this is a valid username / password" - It then creates an instance of model in the standard Django table, copies fields across from the legacy table and returns the new user to the caller.
If you need to keep the two tables in sync, you could decide to have your alternative authentication never create a standard django user and just say "Yes, this is a valid password and username"
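To make the "alternative authentication source" idea concrete, here is a minimal sketch of a custom authentication backend. The backend interface (authenticate/get_user, wired up via AUTHENTICATION_BACKENDS) is Django's; the LegacyBackend name, the LegacyUser model, and its password check are hypothetical stand-ins for the existing table:
from django.contrib.auth.models import User

class LegacyBackend(object):
    def authenticate(self, username=None, password=None):
        try:
            legacy = LegacyUser.objects.get(username=username)  # hypothetical legacy model
        except LegacyUser.DoesNotExist:
            return None
        if not legacy.check_password(password):  # hypothetical helper
            return None
        # Mirror the legacy row into Django's standard user table.
        user, created = User.objects.get_or_create(username=username)
        return user

    def get_user(self, user_id):
        try:
            return User.objects.get(pk=user_id)
        except User.DoesNotExist:
            return None

# settings.py:
# AUTHENTICATION_BACKENDS = ('myapp.backends.LegacyBackend',)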
|
In Django, how can you change the User class to work with a different db table?
|
We're running django alongside - and sharing a database with - an existing application. And we want to use an existing "user" table (not Django's own) to store user information.
It looks like it's possible to change the name of the table that Django uses, in the Meta class of the User definition.
But we'd prefer not to change the Django core itself.
So we were thinking that we could sub-class the core auth.User class like this :
class OurUser(User) :
objects = UserManager()
class Meta:
db_table = u'our_user_table'
Here, the aim is not to add any extra fields to the customized User class. But just to use the alternative table.
However, this fails (likely because the ORM is assuming that the our_user_table should have a foreign key referring back to the original User table, which it doesn't).
So, is this sensible way to do what we want to do? Have I missed out on some easier way to map classes onto tables? Or, if not, can this be made to work?
Update :
I think I might be able to make the change I want just by "monkey-patching" the _meta of User in a local_settings.py
User._meta.db_table = 'our_user_table'
Can anyone think of anything bad that could happen if I do this? (Particularly in the context of a fairly typical Django / Pinax application?)
|
[
"You might find it useful to set up your old table as an alternative authentication source and sidestep all these issues. \nAnother option is to subclass the user and have the subclass point to your user-model. Override the save function to ensure that everything you need to do to preserve your old functionality is there.\nI haven't done either of these myself but hopefully they are useful pointers.\nUpdate\nWhat I mean by alternative authentication in this case is a small python script that says \"Yes, this is a valid username / password\" - It then creates an instance of model in the standard Django table, copies fields across from the legacy table and returns the new user to the caller. \nIf you need to keep the two tables in sync, you could decide to have your alternative authentication never create a standard django user and just say \"Yes, this is a valid password and username\"\n"
] |
[
6
] |
[] |
[] |
[
"django",
"django_models",
"monkeypatching",
"python"
] |
stackoverflow_0000866418_django_django_models_monkeypatching_python.txt
|
Q:
Algorithm: How to Delete every other file
I have a folder with thousands of images.
I want to delete every other image.
What is the most effective way to do this?
Going through each one with i%2==0 is still O(n).
Is there a fast way to do this (preferably in Python)?
Thx
A:
To delete half the N images you cannot be faster than O(N)! You do know that the O() notation means (among other things) that constant multiplicative factors are irrelevant, yes?
A:
import os
d = '/some/dir/with/files'
l = os.listdir(d)
for n in l[::2]:
    os.unlink(os.path.join(d, n))
A:
Going through each one with i%2==0 is still O(n). Is there a fast way to do this (preferably in Python)?
The only way to be faster than O(n) is if your files are already sorted, and you only want to delete 1 file.
You said i%2==0, this means you are deleting every "even" file. O(n/2) is still O(n)
A:
I fail to see any conceivable way in which deleting n/2 files could be faster than O(n), unless the filesystem has some special feature for deleting large numbers of files (but I don't think that actually exists in practice, if it's even possible)
A:
If you wanted to delete Log(n) files, there would be... You can store images in a database, though ( MySQL has a "blob" type, among several others, that will store your images). Then you could do it in O(1) if you named them smartly.
/edit
i hate how i have to use shorthand and bad grammar to get my answers in quickly!!!
if you're looking for a python equivalent of rm -rf *2.img *4.img *6.img *8.img *0.img, know that the computer still has to go through the entire list of files
A:
You could use islice from the itertools module. Here goes your example:
import os, itertools
dirContent = os.listdir('/some/dir/with/files')
toBeDeleted = itertools.islice(dirContent, 0, len(dirContent), 2)
# Now remove the files
[os.unlink(file) for file in toBeDeleted]
This is another form of doing what you want, although I'm not sure if it'll be faster. Hope this helps.
A:
"Going through each one with i%2==0 is still O(n)"
Increment by 2 instead of incrementing by 1?
for(i = 0; i < numFiles; i += 2) {
deleteFile(files[i]);
}
Seriously though: iterating through a list of files probably isn't the slowest part of your file deletion algo. The actual deletion likely takes several orders of magnitude more time.
A:
I would try to use something operating-system specific like:
linux:
@files = grep { -f "$dir/$_" && /*.H$/ }
unlink @files
Win:
$file_delete =~ /H$/;
rm $file_delete
to see if your OS can do it faster than iterating in Python.
Use os.system(...) or subprocess.call(...) to run these from Python.
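A minimal sketch of that suggestion from Python (assuming POSIX, a hypothetical directory, and the even-numbered naming convention from the rm example; the shell still visits every matching name, so this is O(n) as well):
import subprocess

subprocess.call("rm -f /some/dir/*[02468].img", shell=True)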
|
Algorithm: How to Delete every other file
|
I have a folder with thousands of images.
I want to delete every other image.
What is the most effective way to do this?
Going through each one with i%2==0 is still O(n).
Is there a fast way to do this (preferably in Python)?
Thx
|
[
"To delete half the N images you cannot be faster than O(N)! You do know that the O() notation means (among other things) that constant multiplicative factors are irrelevant, yes?\n",
"import os\nl = os.listdir('/some/dir/with/files')\n\nfor n in l[::2]:\n os.unlink(n)\n\n",
"\nGoing through each one with i%2==0 is still O(n). Is there a fast way to do this (preferably in Python)?\n\nThe only way to be faster than O(n) is if your files are already sorted, and you only want to delete 1 file.\nYou said i%2==0, this means you are deleting every \"even\" file. O(n/2) is still O(n)\n",
"I fail to see any conceivable way in which deleting n/2 files could be faster than O(n), unless the filesystem has some special feature for deleting large numbers of files (but I don't think that actually exists in practice, if it's even possible)\n",
"If you wanted to delete Log(n) files, there would be... You can store images in a database, though ( MySQL has a \"blob\" type, among several others, that will store your images). Then you could do it in O(1) if you named them smartly.\n/edit\ni hate how i have to use shorthand and bad grammar to get my answers in quickly!!!\nif you're looking for a python equivalent of rm -rf *2.img *4.img *6.img *8.img *0.img, know that the computer still has to go through the entire list of files\n",
"You could use islice from the itertools module. Here goes your example:\nimport os, itertools\ndirContent = os.listdir('/some/dir/with/files')\ntoBeDeleted = itertools.islice(dirContent, 0, len(dirContent), 2)\n# Now remove the files\n[os.unlink(file) for file in toBeDeleted]\n\nThis is another form of doing what you want, although I'm not sure if it'll be faster. Hope this helps.\n",
"\"Going through each one with i%2==0 is still O(n)\"\nIncrement by 2 instead of incrementing by 1?\nfor(i = 0; i < numFiles; i += 2) {\n deleteFile(files[i]);\n}\n\nSeriously though: iterating through a list of files probably isn't the slowest part of your file deletion algo. The actual deletion likely takes several orders of magnitude more time.\n",
"I would try to use something operating-system specific like:\nlinux:\n@files = grep { -f \"$dir/$_\" && /*.H$/ }\nunlink @files\n\nWin:\n$file_delete =~ /H$/;\nrm $file_delete\n\nto see if your os can do it faster than iterating in python.\nuse os.system(...) or subprocess.call(...) to run these from python.\n"
] |
[
21,
13,
3,
2,
1,
1,
0,
0
] |
[] |
[] |
[
"algorithm",
"python"
] |
stackoverflow_0000865973_algorithm_python.txt
|
Q:
Defining a table with sqlalchemy with a mysql unix timestamp
Background: there are several ways to store dates in MySQL.
As a string, e.g. "09/09/2009".
As an integer, using the function UNIX_TIMESTAMP(); this is supposedly the traditional unix time representation (you know, seconds since the epoch plus/minus leap seconds).
As a MySQL TIMESTAMP, a MySQL-specific data type, not the same as unix timestamps.
As a MySQL Date field, another MySQL-specific data type.
It's very important not to confuse case 2 with case 3 (or case 4).
I have an existing table with an integer date field (case 2) how can I define it in sqlalchemy in a way I don't have to access mysql's "FROM_UNIXTIME" function?
For the record, just using sqlalchemy.types.DateTime and hoping it does the right thing when it detects an integer column doesn't work, it works for timestamp fields and date fields.
A:
I think there are a couple of issues with the type decorator you showed.
impl should be sqlalchemy.types.Integer instead of DateTime.
The decorator should allow nullable columns.
Here's the what I have in mind:
import datetime, time
from sqlalchemy.types import TypeDecorator, DateTime, Integer
class IntegerDateTime(TypeDecorator):
"""a type that decorates DateTime, converts to unix time on
the way in and to datetime.datetime objects on the way out."""
impl = Integer # In schema, you want these datetimes to
# be stored as integers.
def process_bind_param(self, value, _):
"""Assumes a datetime.datetime"""
if value is None:
return None # support nullability
elif isinstance(value, datetime.datetime):
return int(time.mktime(value.timetuple()))
raise ValueError("Can operate only on datetime values. "
"Offending value type: {0}".format(type(value).__name__))
def process_result_value(self, value, _):
if value is not None: # support nullability
return datetime.datetime.fromtimestamp(float(value))
A:
So yeah, this approach works. And I ended up answering my own question :/, hope somebody finds this useful.
import datetime, time
from sqlalchemy.types import TypeDecorator, DateTime
class IntegerDateTime(TypeDecorator):
"""a type that decorates DateTime, converts to unix time on
the way in and to datetime.datetime objects on the way out."""
impl = DateTime
def process_bind_param(self, value, engine):
"""Assumes a datetime.datetime"""
assert isinstance(value, datetime.datetime)
return int(time.mktime(value.timetuple()))
def process_result_value(self, value, engine):
return datetime.datetime.fromtimestamp(float(value))
def copy(self):
return IntegerDateTime(timezone=self.timezone)
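A hypothetical usage sketch, assuming the IntegerDateTime type from either answer above is in scope (the table and column names are made up):
from sqlalchemy import Table, Column, Integer, MetaData

metadata = MetaData()
events = Table('event', metadata,
    Column('id', Integer, primary_key=True),
    # Stored as a plain integer unix timestamp in MySQL,
    # surfaced as datetime.datetime objects in Python.
    Column('created_on', IntegerDateTime),
)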
|
Defining a table with sqlalchemy with a mysql unix timestamp
|
Background: there are several ways to store dates in MySQL.
As a string, e.g. "09/09/2009".
As an integer, using the function UNIX_TIMESTAMP(); this is supposedly the traditional unix time representation (you know, seconds since the epoch plus/minus leap seconds).
As a MySQL TIMESTAMP, a MySQL-specific data type, not the same as unix timestamps.
As a MySQL Date field, another MySQL-specific data type.
It's very important not to confuse case 2 with case 3 (or case 4).
I have an existing table with an integer date field (case 2) how can I define it in sqlalchemy in a way I don't have to access mysql's "FROM_UNIXTIME" function?
For the record, just using sqlalchemy.types.DateTime and hoping it does the right thing when it detects an integer column doesn't work, it works for timestamp fields and date fields.
|
[
"I think there is a couple of issues with the type decorator you showed.\n\nimpl should be sqlalchemy.types.Integer instead of DateTime.\nThe decorator should allow nullable columns.\n\nHere's the what I have in mind:\n\nimport datetime, time\nfrom sqlalchemy.types import TypeDecorator, DateTime, Integer\n\nclass IntegerDateTime(TypeDecorator):\n \"\"\"a type that decorates DateTime, converts to unix time on\n the way in and to datetime.datetime objects on the way out.\"\"\"\n impl = Integer # In schema, you want these datetimes to\n # be stored as integers.\n def process_bind_param(self, value, _):\n \"\"\"Assumes a datetime.datetime\"\"\"\n if value is None:\n return None # support nullability\n elif isinstance(value, datetime.datetime):\n return int(time.mktime(value.timetuple()))\n raise ValueError(\"Can operate only on datetime values. \"\n \"Offending value type: {0}\".format(type(value).__name__))\n def process_result_value(self, value, _):\n if value is not None: # support nullability\n return datetime.datetime.fromtimestamp(float(value))\n\n",
"So yeah, this approach works. And I ended up answering my own question :/, hope somebody finds this useful.\nimport datetime, time\nfrom sqlalchemy.types import TypeDecorator, DateTime\nclass IntegerDateTime(TypeDecorator):\n \"\"\"a type that decorates DateTime, converts to unix time on\n the way in and to datetime.datetime objects on the way out.\"\"\"\n impl = DateTime\n def process_bind_param(self, value, engine):\n \"\"\"Assumes a datetime.datetime\"\"\"\n assert isinstance(value, datetime.datetime)\n return int(time.mktime(value.timetuple()))\n def process_result_value(self, value, engine):\n return datetime.datetime.fromtimestamp(float(value))\n def copy(self):\n return IntegerDateTime(timezone=self.timezone)\n\n"
] |
[
8,
3
] |
[] |
[] |
[
"mysql",
"python",
"sqlalchemy"
] |
stackoverflow_0000762750_mysql_python_sqlalchemy.txt
|
Q:
Python: File IO - Disable incremental flush
Kind of the opposite of this question.
Is there a way to tell Python "Do not write to disk until I tell you to." (by closing or flushing the file)? I'm writing to a file on the network, and would rather write the entire file at once.
In the meantime, I'm writing to a StringIO buffer, and then writing that to the disk at the end.
A:
No, a glance at the python manual does not indicate an option to set the buffer size to infinity.
Your current solution is basically the same concept.
You could use Alex's idea, but I would hazard against it for the following reasons:
The buffer size on open is limited to 2^31-1 or 2 gigs. Any larger will result in "OverflowError: long int too large to convert to int"
It doesn't seem to work:
a = open("blah.txt", "w", 2 ** 31 - 1)
for i in xrange(10000):
a.write("a")
Open up the file without closing python, and you will see the text
A:
You can open your file with as large a buffer as you want. For example, to use up to a billion bytes for buffering, x=open('/tmp/za', 'w', 1000*1000*1000) -- if you have a hundred billion bytes of memory and want to use them all, just add another *100...;-). Memory will only be consumed in the amount actually needed, so, no worry...
A:
I would say this partly depends on what you're trying to do.
The case where I came across this issue was when my application was a bit slow creating a file that was used by another application; the other application would get incomplete versions of the file.
I solved it by writing the file to a different place, then renaming it into the correct
place once I'd finished writing.
If you want this for other reasons then maybe that doesn't help.
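A minimal sketch of that write-then-rename approach (the paths are hypothetical; on POSIX the rename is atomic only when both paths are on the same filesystem):
import os

data = "everything built up in memory"
tmp_path = "/some/dir/report.txt.tmp"
final_path = "/some/dir/report.txt"

f = open(tmp_path, "w")
f.write(data)
f.close()
os.rename(tmp_path, final_path)  # readers only ever see a complete file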
|
Python: File IO - Disable incremental flush
|
Kind of the opposite of this question.
Is there a way to tell Python "Do not write to disk until I tell you to." (by closing or flushing the file)? I'm writing to a file on the network, and would rather write the entire file at once.
In the meantime, I'm writing to a StringIO buffer, and then writing that to the disk at the end.
|
[
"No, a glance at the python manual does not indicate an option to set the buffer size to infinity. \nYour current solution is basically the same concept.\nYou could use Alex's idea, but I would hazard against it for the following reasons:\n\nThe buffer size on open is limited to 2^31-1 or 2 gigs. Any larger will result in \"OverflowError: long int too large to convert to int\"\nIt doesn't seem to work:\na = open(\"blah.txt\", \"w\", 2 ** 31 - 1)\nfor i in xrange(10000): \n a.write(\"a\")\n\n\nOpen up the file without closing python, and you will see the text\n",
"You can open your file with as large a buffer as you want. For example, to use up to a billion bytes for buffering, x=open('/tmp/za', 'w', 1000*1000*1000) -- if you have a hundred billion bytes of memory and want to use them all, just add another *100...;-). Memory will only be consumed in the amount actually needed, so, no worry...\n",
"I would say this partly depends on what you're trying to do. \nThe case where I came across this issue was when my application was a bit slow \ncreating a file that was used by another application, the other application \nwould get incomplete versions of the file.\nI solved it by writing the file to a different place, then renaming it into the correct\nplace once I'd finished writing. \nIf you want this for other reasons then maybe that doesn't help.\n"
] |
[
3,
3,
1
] |
[] |
[] |
[
"file_io",
"python"
] |
stackoverflow_0000865957_file_io_python.txt
|
Q:
Need Help Understanding how to use less complex regex in Python
I am trying to learn more about regular expressions. I have one below that I believe finds cases where there is a missing close paren on a number up to 999 billion. The one below it I thought should do the same, but I do not get similar results.
missingParenReg=re.compile(r"^\([$]*[0-9]{1,3}[,]?[0-9]{0,3}[,]?[0-9]{0,3}[,]?[0-9]{0,3}[.]*[0-9]*[^)]$")
missingParenReg2=re.compile(r"^\([$]?([0-9]{1,3}[,]?)+[.]*[0-9]*[^)]$")
I think the second one says:
There must be an open paren to start
There may or may not be as many as one dollar sign
The next group must exist at least once but can exist an unlimited number of times
The group should have at least one digit but may have as many as three
The group may have as few as 0 and as many as 1 commas
Following this group there may or may not be a decimal point
If there is a decimal point it will be followed by as few as 0 but as many as an uncounted number of occurrences of digits
At the end there should not be a closing paren.
I am trying to understand this magic stuff so I would appreciate a correction to my regex (if it can be corrected) in addition to a more elegant solution if you have it.
A:
Are there nested parentheses (your regexps assume there are not)? If not:
whether_paren_is_missing = (astring[0] == '(' and not astring[-1] == ')')
To validate a dollar amount part:
import re
cents = r"(?:\.\d\d)" # cents
re_dollar_amount = re.compile(r"""(?x)
^ # match at the very begining of the string
\$? # optional dollar sign
(?: # followed by
(?: # integer part
0 # zero
| # or
[1-9]\d{,2} # 1 to 3 digits (no leading zero)
(?: # followed by
(?:,\d{3})* # zero or more three-digits groups with commas
| # or
\d* # zero or more digits without commas (no leading zero)
)
)
(?:\.|%(cents)s)? # optional f.p. part
| # or
%(cents)s # pure f.p. '$.01'
)
$ # match end of string
""" % vars())
Allow:
$0
0
$234
22
$0.01
10000.12
$99.90
2,010,123
1.00
2,103.45
$.10
$1.
Forbid:
01234
00
123.4X
1.001
.
A:
The trickier part about regular expressions isn't making them accept valid input, it's making them reject invalid input. For example, the second expression accepts input that is clearly wrong, including:
(1,2,3,4 -- one digit between each comma
(12,34,56 -- two digits between each comma
(1234......5 -- unlimited number of decimal points
(1234,.5 -- comma before decimal point
(123,456789,012 -- if there are some commas, they should be between each triple
(01234 -- leading zero is not conventional
(123.4X -- last char is not a closing paren
Here's an alternative regular expression that should reject the examples above:
[-+]?[$]?(0|[1-9]\d*|[1-9]\d{0,2}(,\d{3})*)(\.\d+)?
Optional leading plus/minus.
Optional dollar sign.
Three choices separated by |:
Single zero digit (for numbers like 0.5 or simply 0).
Any number of digits with no commas. The first digit must not be zero.
Comma-separated digits. The first digit must not be zero. Up to three digits before the first comma. Each comma must be followed by exactly three digits.
Optional single decimal point, which must be followed by one or more digits.
Regarding the parens, if all you care about is whether the parens are balanced, then you can disregard parsing out the numeric format precisely; just trust that any combination of digits, decimal points, and commas between the parens are valid. Then use the (?!...) construct that evaluates as a match if the input doesn't match the regular expression inside.
(?!\([$\d.,]+\))
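A quick hedged demo of that lookahead idea (this particular combination is an illustration, not the asker's original pattern):
import re

# Match strings that open a paren around number-ish text but never close it.
unbalanced = re.compile(r'^(?!\([$\d.,]+\)$)\([$\d.,]+$')
print bool(unbalanced.match('($1,234.56'))    # True:  close paren missing
print bool(unbalanced.match('($1,234.56)'))   # False: properly closed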
A:
I've found it very helpful to use kiki when tailoring a regex. It shows you visually what's going on with your regexes. It is a huge time-saver.
|
Need Help Understanding how to use less complex regex in Python
|
I am trying to learn more about regular expressions. I have one below that I believe finds cases where there is a missing close paren on a number up to 999 billion. The one below it I thought should do the same, but I do not get similar results.
missingParenReg=re.compile(r"^\([$]*[0-9]{1,3}[,]?[0-9]{0,3}[,]?[0-9]{0,3}[,]?[0-9]{0,3}[.]*[0-9]*[^)]$")
missingParenReg2=re.compile(r"^\([$]?([0-9]{1,3}[,]?)+[.]*[0-9]*[^)]$")
I think the second one says:
There must be an open paren to start
There may or may not be as many as one dollar sign
The next group must exist at least once but can exist an unlimited number of times
The group should have at least one digit but may have as many as three
The group may have as few as 0 and as many as 1 commas
Following this group there may or may not be a decimal point
If there is a decimal point it will be followed by as few as 0 but as many as an uncounted number of occurrences of digits
At the end there should not be a closing paren.
I am trying to understand this magic stuff so I would appreciate a correction to my regex (if it can be corrected) in addition to a more elegant solution if you have it.
|
[
"Are there nested parentheses (your regexps assume there are not)? If not:\nwhether_paren_is_missing = (astring[0] == '(' and not astring[-1] == ')')\n\nTo validate a dollar amount part:\nimport re\n\ncents = r\"(?:\\.\\d\\d)\" # cents \nre_dollar_amount = re.compile(r\"\"\"(?x)\n ^ # match at the very begining of the string\n \\$? # optional dollar sign\n (?: # followed by\n (?: # integer part \n 0 # zero\n | # or\n [1-9]\\d{,2} # 1 to 3 digits (no leading zero) \n (?: # followed by\n (?:,\\d{3})* # zero or more three-digits groups with commas \n | # or\n \\d* # zero or more digits without commas (no leading zero)\n )\n )\n (?:\\.|%(cents)s)? # optional f.p. part \n | # or\n %(cents)s # pure f.p. '$.01'\n )\n $ # match end of string\n \"\"\" % vars())\n\nAllow:\n\n $0\n 0\n $234\n 22\n $0.01\n 10000.12\n $99.90\n 2,010,123\n 1.00\n 2,103.45\n $.10\n $1.\n\nForbid:\n\n 01234\n 00\n 123.4X\n 1.001\n .\n\n",
"The trickier part about regular expressions isn't making them accept valid input, it's making them reject invalid input. For example, the second expression accepts input that is clearly wrong, including:\n\n(1,2,3,4 -- one digit between each comma\n(12,34,56 -- two digits between each comma\n(1234......5 -- unlimited number of decimal points\n(1234,.5 -- comma before decimal point\n(123,456789,012 -- if there are some commas, they should be between each triple\n(01234 -- leading zero is not conventional\n(123.4X -- last char is not a closing paren\n\nHere's an alternative regular expression that should reject the examples above:\n[-+]?[$]?(0|[1-9]\\d*|[1-9]\\d{0,2}(,\\d{3})*)(\\.\\d+)?\n\nOptional leading plus/minus.\nOptional dollar sign.\nThree choices separated by |:\n\n\nSingle zero digit (for numbers like 0.5 or simply 0).\nAny number of digits with no commas. The first digit must not be zero.\nComma-separated digits. The first digit must not be zero. Up to three digits before the first comma. Each comma must be followed by exactly three digits. \n\nOptional single decimal point, which must be followed by one or more digits.\n\nRegarding the parens, if all you care about is whether the parens are balanced, then you can disregard parsing out the numeric format precisely; just trust that any combination of digits, decimal points, and commas between the parens are valid. Then use the (?!...) construct that evaluates as a match if the input doesn't match the regular expression inside.\n(?!\\([$\\d.,]+\\))\n",
"I've found very helpful to use kiki when tailoring a regex. It shows you visually what's going on with your regexes. It is a huge time-saver.\n"
] |
[
4,
3,
0
] |
[
"One difference I see at a glance is that your regex will not find strings like:\n(123,,,\n\nThat's because the corrected version requires at least one digit between commas. (A reasonable requirement, I'd say.) \n"
] |
[
-1
] |
[
"python",
"regex"
] |
stackoverflow_0000361443_python_regex.txt
|
Q:
keyerror inside django model class __init__
Here's a Django model class I wrote. This class gets a KeyError when I call get_object_or_404 from Django (I conceive that the KeyError is raised due to no kwargs being passed to __init__ by the get function; arguments are all positional). Interestingly, it does not get an error when I call get_object_or_404 from the console.
I wonder why, and if the below code is the correct way (ie, using init to populate the link field) to construct this class.
class Link(models.Model)
event_type = models.IntegerField(choices=EVENT_TYPES)
user = models.ForeignKey(User)
created_on = models.DateTimeField(auto_now_add = True)
link = models.CharField(max_length=30)
isActive = models.BooleanField(default=True)
def _generate_link(self):
prelink = str(self.user.id)+str(self.event_type)+str(self.created_on)
m = md5.new()
m.update(prelink)
return m.hexdigest()
def __init__(self, *args, **kwargs):
self.user = kwargs['user'].pop()
self.event_type = kwargs['event_type'].pop()
self.link = self._generate_link()
super(Link,self).__init__(*args,**kwargs)
A:
self.user = kwargs['user'].pop()
self.event_type = kwargs['event_type'].pop()
You're trying to retrieve an entry from the dictionary, and then call its pop method. If you want to remove and return an object from a dictionary, call dict.pop():
self.user = kwargs.pop('user')
Of course, this will fail with a KeyError when "user" is not present in kwargs. You'll want to provide a default value to pop:
self.user = kwargs.pop('user', None)
This means "if "user" is in the dictionary, remove and return it. Otherwise, return None".
Regarding the other two lines:
self.link = self._generate_link()
super(Link,self).__init__(*args,**kwargs)
super().__init__() will set link to something, probably None. I would reverse the lines, to something like this:
super(Link,self).__init__(*args,**kwargs)
self.link = self._generate_link()
You might want to add a test before setting the link, to see if it already exists (if self.link is not None: ...). That way, links you pass into the constructor won't be overwritten.
A:
There's no reason to write your own __init__ for Django model classes. I think you'll be a lot happier without it.
Almost anything you think you want to do in __init__ can be better done in save.
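For instance, a hedged sketch of the save-override approach on the question's Link model, reusing the asker's _generate_link (note that created_on is auto_now_add, so it is only populated by the first save):
# added inside the Link model from the question
def save(self, *args, **kwargs):
    if not self.link:
        self.link = self._generate_link()
    super(Link, self).save(*args, **kwargs)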
A:
I don't think you need the __init__ here at all.
You are always calculating the value of link when the class is instantiated. This means you ignore whatever is stored in the database. Since this is the case, why bother with a model field at all? You would be better making link a property, with the getter using the code from _generate_link.
@property
def link(self):
....
A:
wonder why, and if the below code is the correct way (ie, using __init__ to populate the link field) to construct this class.
I once got some problems when I tried to overload __init__
In the mailing list I got this answer
It's best not to overload it with your own
__init__. A better option is to hook into the post_init signal with a
custom method and in that method do your process() and
make_thumbnail() calls.
In your case the post_init-signal should do the trick and implementing __init__ shouldn't be necessary at all.
You could write something like this:
class Link(models.Model)
event_type = models.IntegerField(choices=EVENT_TYPES)
user = models.ForeignKey(User)
created_on = models.DateTimeField(auto_now_add = True)
link = models.CharField(max_length=30)
isActive = models.BooleanField(default=True)
def create_link(self):
prelink = str(self.user.id)+str(self.event_type)+str(self.created_on)
m = md5.new()
m.update(prelink)
return m.hexdigest()
def post_link_init(sender, **kwargs):
kwargs['instance'].create_link()
post_init.connect(post_link_init, sender=Link)
>>> link = Link(event_type=1, user=aUser, created_on=datetime.now(), link='foo', isActive=True)
providing the keyword unique for link = models.CharField(max_length=30, unique=True) could be helpful, too. If it is not provided, get_object_or_404 may not work in case the same value in the link field exists several times.
signals and unique in the django-docs
|
keyerror inside django model class __init__
|
Here's a Django model class I wrote. This class gets a KeyError when I call get_object_or_404 from Django (I conceive that the KeyError is raised due to no kwargs being passed to __init__ by the get function; arguments are all positional). Interestingly, it does not get an error when I call get_object_or_404 from the console.
I wonder why, and if the below code is the correct way (ie, using init to populate the link field) to construct this class.
class Link(models.Model)
event_type = models.IntegerField(choices=EVENT_TYPES)
user = models.ForeignKey(User)
created_on = models.DateTimeField(auto_now_add = True)
link = models.CharField(max_length=30)
isActive = models.BooleanField(default=True)
def _generate_link(self):
prelink = str(self.user.id)+str(self.event_type)+str(self.created_on)
m = md5.new()
m.update(prelink)
return m.hexdigest()
def __init__(self, *args, **kwargs):
self.user = kwargs['user'].pop()
self.event_type = kwargs['event_type'].pop()
self.link = self._generate_link()
super(Link,self).__init__(*args,**kwargs)
|
[
"self.user = kwargs['user'].pop()\nself.event_type = kwargs['event_type'].pop()\n\nYou're trying to retrieve an entry from the dictionary, and then call its pop method. If you want to remove and return an object from a dictionary, call dict.pop():\nself.user = kwargs.pop('user')\n\nOf course, this will fail with a KeyError when \"user\" is not present in kwargs. You'll want to provide a default value to pop:\nself.user = kwargs.pop('user', None)\n\nThis means \"if \"user\" is in the dictionary, remove and return it. Otherwise, return None\".\nRegarding the other two lines:\nself.link = self._generate_link()\nsuper(Link,self).__init__(*args,**kwargs)\n\nsuper().__init__() will set link to something, probably None. I would reverse the lines, to something like this:\nsuper(Link,self).__init__(*args,**kwargs)\nself.link = self._generate_link()\n\nYou might want to add a test before setting the link, to see if it already exists (if self.link is not None: ...). That way, links you pass into the constructor won't be overwritten.\n",
"There's no reason to write your own __init__ for Django model classes. I think you'll be a lot happier without it. \nAlmost anything you think you want to do in __init__ can be better done in save.\n",
"I don't think you need the __init__ here at all. \nYou are always calculating the value of link when the class is instantiated. This means you ignore whatever is stored in the database. Since this is the case, why bother with a model field at all? You would be better making link a property, with the getter using the code from _generate_link.\n@property\ndef link(self): \n ....\n\n",
"\nwonder why, and if the below code is the correct way (ie, using __init__ to populate the link field) to construct this class.\n\nI once got some problems when I tried to overload __init__\nIn the maillist i got this answer\n\nIt's best not to overload it with your own\n __init__. A better option is to hook into the post_init signal with a\n custom method and in that method do your process() and\n make_thumbnail() calls.\n\nIn your case the post_init-signal should do the trick and implementing __init__ shouldn't be necessary at all.\nYou could write something like this:\nclass Link(models.Model)\n event_type = models.IntegerField(choices=EVENT_TYPES)\n user = models.ForeignKey(User)\n created_on = models.DateTimeField(auto_now_add = True)\n link = models.CharField(max_length=30)\n isActive = models.BooleanField(default=True)\n\n def create_link(self):\n prelink = str(self.user.id)+str(self.event_type)+str(self.created_on)\n m = md5.new()\n m.update(prelink)\n return m.hexdigest()\n\ndef post_link_init(sender, **kwargs):\n kwargs['instance'].create_link()\npost_init.connect(post_link_init, sender=Link)\n\n>>> link = Link(event_type=1, user=aUser, created_on=datetime.now(), link='foo', isActive=True)\n\nproviding keyword unique for link = models.CharField(max_length=30, unique=True) could be helpful, too. If it is not provided, get_object_or_404 may won't work in case the same value in the link-field exists several times.\nsignals and unique in the django-docs \n"
] |
[
7,
2,
2,
1
] |
[] |
[] |
[
"django",
"django_models",
"python"
] |
stackoverflow_0000866399_django_django_models_python.txt
|
Q:
db connection in python
I am writing code in Python in which I establish a connection with a database. I have queries in a loop. While the queries are being executed in the loop, if I unplug the network cable the program should stop with an exception. But this does not happen: when I plug the network cable back in after 2 minutes, it starts again from where it ended. I am using Linux and psycopg2. It is not showing an exception.
A:
Your database connection will almost certainly be based on a TCP socket. TCP sockets will hang around for a long time retrying before failing and (in Python) raising an exception. Not to mention any retries/automatic reconnection attempts in the database layer.
A:
As Douglas's answer said, it won't raise an exception, due to TCP.
You may try to use socket.setdefaulttimeout() to set a shorter timeout value.
setdefaulttimeout(...)
setdefaulttimeout(timeout)
Set the default timeout in floating seconds for new socket objects.
A value of None indicates that new socket objects have no timeout.
When the socket module is first imported, the default is None.
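For example (a sketch; the call only affects sockets created through Python's socket module afterwards):
import socket

socket.setdefaulttimeout(15.0)  # seconds; applies to newly created sockets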
However, it may not work if your database connection is not built by a Python socket, for example, a native socket.
A:
If you want to implement timeouts that work no matter how the client library is connecting to the server, it's best to attempt the DB operations in a separate thread, or, better, a separate process, which a "monitor" thread/process can kill if needed; see the multiprocessing module in Python 2.6 standard library (there's a backported version for 2.5 if you need that). A process is better because when it's killed the operating system will take care of deallocating and cleaning up resources, while killing a thread is always a pretty unsafe and messy business.
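A minimal sketch of that monitor-process idea using multiprocessing (the worker function and its DSN argument are hypothetical):
import multiprocessing

def run_queries(dsn):
    # connect with psycopg2 and run the query loop here
    pass

p = multiprocessing.Process(target=run_queries, args=("dbname=test",))
p.start()
p.join(30)            # give the worker at most 30 seconds
if p.is_alive():
    p.terminate()     # kill it; the OS reclaims sockets and memory
    p.join()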
|
db connection in python
|
I am writing code in Python in which I establish a connection with a database. I have queries in a loop. While the queries are being executed in the loop, if I unplug the network cable the program should stop with an exception. But this does not happen: when I plug the network cable back in after 2 minutes, it starts again from where it ended. I am using Linux and psycopg2. It is not showing an exception.
|
[
"Your database connection will almost certainly be based on a TCP socket. TCP sockets will hang around for a long time retrying before failing and (in python) raising an exception. Not to mention and retries/automatic reconnection attempts in the database layer.\n",
"As Douglas's answer said, it won't raise exception due to TCP.\nYou may try to use socket.setdefaulttimeout() to set a shorter timeout value.\n\nsetdefaulttimeout(...)\n setdefaulttimeout(timeout)\n\n Set the default timeout in floating seconds for new socket objects.\n A value of None indicates that new socket objects have no timeout.\n When the socket module is first imported, the default is None.\n\n\nHowever, it may not work if your database connection is not build by python socket, for example, native socket.\n",
"If you want to implement timeouts that work no matter how the client library is connecting to the server, it's best to attempt the DB operations in a separate thread, or, better, a separate process, which a \"monitor\" thread/process can kill if needed; see the multiprocessing module in Python 2.6 standard library (there's a backported version for 2.5 if you need that). A process is better because when it's killed the operating system will take care of deallocating and cleaning up resources, while killing a thread is always a pretty unsafe and messy business.\n"
] |
[
2,
2,
1
] |
[] |
[] |
[
"database_connection",
"python",
"tcp"
] |
stackoverflow_0000867175_database_connection_python_tcp.txt
|
Q:
Numbers Comparison - Python Bug?
Deep inside my code, in a nested if inside a nested for inside a class method, I'm comparing a certain index value to the length of a certain list, to validate that I can access that index. The code looks something like this:
if t.index_value < len(work_list):
... do stuff ...
else:
... print some error ...
For clarification, index_value is at least zero (validated somewhere else). To my surprise, even though I know the index_value data is valid, the code keeps going to the "else:" clause. I added some ad-hoc debug code:
print('Checking whether '+str(t.index_value)+"<"+str(len(work_list)))
x = t.index_value
y = len(work_list)
print(x)
print(y)
print(x<y)
if t.index_value < len(work_list):
... do stuff ...
else:
... print some error ...
Following is the output:
>> Checking whether 3<4
>> 3
>> 4
>> False
Can anyone help me understand what's going on here?
Further clarifications:
work_list is a local variable instantiated within the method
t is a class instance, instantiated within the method ( t = SomeClass() )
Update: The problem was that the type of t.index_value was unicode and not int. The reason was that I deserialized the contents of t from a text file, where the value of index_value is represented by a single-digit character. After I extracted it from the text, I immediately assigned it to index_value without passing it through int(), which is what I should have done, and that solved the problem.
I decided to keep the "controversial" title despite the fact it's clearly my bug and not Python's, because people with the same problem may find it using this title.
A:
In my experience, what is the type of "t.index_value"? Maybe it is a string "3".
>>> print '3' < 4
False
A:
To display values which might be of different types than you expect (e.g. a string rather than a number, as kcwu suggests), use repr(x) and the like.
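For example, mirroring the bug from the update, where index_value arrived as text:
x = u'3'
print repr(x), repr(4)  # u'3' 4  -- the quotes expose the string
print x < 4             # False in CPython 2 (mixed-type comparison, not numeric)
print int(x) < 4        # True once the value is converted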
|
Numbers Comparison - Python Bug?
|
Deep inside my code, in a nested if inside a nested for inside a class method, I'm comparing a certain index value to the length of a certain list, to validate that I can access that index. The code looks something like this:
if t.index_value < len(work_list):
... do stuff ...
else:
... print some error ...
For clarification, index_value is at least zero (validated somewhere else). To my surprise, even though I know the index_value data is valid, the code keeps going to the "else:" clause. I added some ad-hoc debug code:
print('Checking whether '+str(t.index_value)+"<"+str(len(work_list)))
x = t.index_value
y = len(work_list)
print(x)
print(y)
print(x<y)
if t.index_value < len(work_list):
... do stuff ...
else:
... print some error ...
Following is the output:
>> Checking whether 3<4
>> 3
>> 4
>> False
Can anyone help me understand what's going on here?
Further clarifications:
work_list is a local variable instantiated within the method
t is a class instance, instantiated within the method ( t = SomeClass() )
Update: The problem was that the type of t.index_value was unicode and not int. The reason was that I deserialized the contents of t from a text file, where the value of index_value is represented by a single-digit character. After I extracted it from the text, I immediately assigned it to index_value without passing it through int(), which is what I should have done, and that solved the problem.
I decided to keep the "controversial" title despite the fact it's clearly my bug and not Python's, because people with the same problem may find it using this title.
|
[
"In my experience, what is the type of \"t.index_value\"? Maybe it is a string \"3\".\n>>> print '3' < 4\nFalse\n\n",
"To display values which might be of different types than you expect (e.g. a string rather than a number, as kcwu suggests), use repr(x) and the like.\n"
] |
[
8,
2
] |
[] |
[] |
[
"numbers",
"python"
] |
stackoverflow_0000867436_numbers_python.txt
|
Q:
python dealing with Nonetype before cast\addition
I'm pulling a row from a db and adding up the fields (approx 15) to get a total. But some field values will be Null, which causes an error in the addition of the fields (TypeError: unsupported operand type(s) for +: 'NoneType' and 'int')
Right now, with each field, I get the field value and set it to 'x#', then check if it is None and if so set 'x#' to 0.
Not very elegant... any advice on a better way to deal with this in Python?
cc
A:
You can do it easily like this:
result = sum(field for field in row if field)
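For example, with a hypothetical fetched row where SQL NULLs arrive as None:
row = (3, None, 7, None, 5)
print sum(field for field in row if field)  # 15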
A:
Another (better?) option is to do this in the database. You can alter your db query to map NULL to 0 using COALESCE.
Say you have a table with integer columns named col1, col2, col3 that can accept NULLs.
Option 1:
SELECT coalesce(col1, 0) as col1, coalesce(col2, 0) as col2, coalesce(col3, 0) as col3
FROM your_table;
Then use sum() in Python on the returned row without having to worry about the presence of None.
Option 2:
Sum the columns in the database and return the total in the query:
SELECT coalesce(col1, 0) + coalesce(col2, 0) + coalesce(col3, 0) as total
FROM your_table;
Nothing more to do in Python. One advantage of the second option is that you can select other columns in your query that are not part of your sum (you probably have other columns in your table and are making multiple queries to get different columns of the table?)
A:
Here's a clunkier version.
total = (a if a is not None else 0) + (b if b is not None else 0) + ...
Here's another choice.
def ifnull(col,replacement=0):
return col if col is not None else replacement
total = ifnull(a) + ifnull(b) + ifnull(c) + ...
Here's another choice.
def non_null( *fields ):
for f in fields:
if f is not None:
yield f
total = sum( non_null( a, b, c, d, e, f, g ) )
A:
total = 0.0
for f in fields:
total += f or 0.0
|
python dealing with Nonetype before cast\addition
|
I'm pulling a row from a db and adding up the fields (approx 15) to get a total. But some field values will be Null, which causes an error in the addition of the fields (TypeError: unsupported operand type(s) for +: 'NoneType' and 'int')
Right now, with each field, I get the field value and set it to 'x#', then check if it is None and if so set 'x#' to 0.
Not very elegant... any advice on a better way to deal with this in Python?
cc
|
[
"You can do it easily like this:\nresult = sum(field for field in row if field)\n\n",
"Another (better?) option is to do this in the database. You can alter your db query to map NULL to 0 using COALESCE.\nSay you have a table with integer columns named col1, col2, col3 that can accept NULLs.\nOption 1:\nSELECT coalesce(col1, 0) as col1, coalesce(col2, 0) as col2, coalesce(col3, 0) as col3\nFROM your_table;\n\nThen use sum() in Python on the returned row without having to worry about the presence of None.\nOption 2:\nSum the columns in the database and return the total in the query:\nSELECT coalesce(col1, 0) + coalesce(col2, 0) + coalesce(col3, 0) as total\nFROM your_table;\n\nNothing more to do in Python. One advantage of the second option is that you can select other columns in your query that are not part of your sum (you probably have other columns in your table and are making multiple queries to get different columns of the table?)\n",
"Here's a clunkier version.\ntotal = (a if a is not None else 0) + (b if b is not None else 0) + ...\n\nHere's another choice.\ndef ifnull(col,replacement=0):\n return col if col is not None else replacement\n\ntotal = ifnull(a) + ifnull(b) + ifnull(c) + ...\n\nHere's another choice.\ndef non_null( *fields ):\n for f in fields:\n if f is not None:\n yield f\n\ntotal = sum( non_null( a, b, c, d, e, f, g ) )\n\n",
"total = 0.0\nfor f in fields:\n total += f or 0.0\n\n"
] |
[
13,
1,
0,
0
] |
[] |
[] |
[
"python",
"types"
] |
stackoverflow_0000866208_python_types.txt
|
Q:
What host to use when making a UDP socket in python?
I want to receive some data that is sent as a UDP packet over VPN. So I wrote (mostly copied) this program in Python:
import socket
import sys
HOST = ???????
PORT = 80
# SOCK_DGRAM is the socket type to use for UDP sockets
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((HOST,PORT))
data,addr = sock.recv(1024)
print "Received: %s" % data
print "Addr: %s" % addr
What should I use as host? I know the IP of the sender but it seems anything that's not local gives me socket.error: [Errno 10049]. The IP that the VPN gives me (the same IP that the sender sends to, that is)? Or just localhost?
A:
The host argument is the host IP you want to bind to. Specify the IP of one of your interfaces (Eg, your public IP, or 127.0.0.1 for localhost), or use 0.0.0.0 to bind to all interfaces. If you bind to a specific interface, your service will only be available on that interface - for example, if you want to run something that can only be accessed via localhost, or if you have multiple IPs and need to run different servers on each.
A:
"0.0.0.0" will listen for all incoming hosts. For example,
sock.bind(("0.0.0.0", 999))
data, addr = sock.recvfrom(1024)
A:
Use:
sock.bind(("", 999))
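A side note that is not in the original answers: for the two-value unpack in the question's snippet to work, UDP code needs recvfrom() rather than recv(), which returns only the data:
data, addr = sock.recvfrom(1024)  # returns (bytes, (host, port))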
|
What host to use when making a UDP socket in python?
|
I want to receive some data that is sent as a UDP packet over VPN. So I wrote (mostly copied) this program in Python:
import socket
import sys
HOST = ???????
PORT = 80
# SOCK_DGRAM is the socket type to use for UDP sockets
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((HOST,PORT))
data,addr = sock.recv(1024)
print "Received: %s" % data
print "Addr: %s" % addr
What should I use as host? I know the IP of the sender but it seems anything that's not local gives me socket.error: [Errno 10049]. The IP that the VPN gives me (the same IP that the sender sends to, that is)? Or just localhost?
|
[
"The host argument is the host IP you want to bind to. Specify the IP of one of your interfaces (Eg, your public IP, or 127.0.0.1 for localhost), or use 0.0.0.0 to bind to all interfaces. If you bind to a specific interface, your service will only be available on that interface - for example, if you want to run something that can only be accessed via localhost, or if you have multiple IPs and need to run different servers on each.\n",
"\"0.0.0.0\" will listen for all incoming hosts. For example,\nsock.bind((\"0.0.0.0\", 999))\ndata,addr = sock.recv(1024)\n\n",
"Use:\nsock.bind((\"\", 999))\n\n"
] |
[
10,
3,
3
] |
[] |
[] |
[
"python",
"sockets",
"udp"
] |
stackoverflow_0000868173_python_sockets_udp.txt
|
Q:
How to catch str exception?
import sys
try:
raise "xxx"
except str,e:
print "1",e
except:
print "2",sys.exc_type,sys.exc_value
In the above code a string exception is raised. String exceptions are deprecated, but a 3rd party library I use still raises them.
So how can I catch such an exception without relying on a catch-all, which could be bad?
Why doesn't except str,e: catch it?
system:
Python 2.5.2 (r252:60911, Jul 31 2008, 17:28:52)
[GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] on linux2
A:
The generic except: clause is the only way to catch all str exceptions.
str exceptions are a legacy Python feature. In new code you should use raise Exception("xxx") or raise your own Exception subclass, or assert 0, "xxx".
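For example, a minimal sketch of the subclassing suggestion:
class XxxError(Exception):
    pass

try:
    raise XxxError("xxx")
except XxxError, e:
    print "1", e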
A:
Here is the solution from the Python mailing list; it's not very elegant, but it will work if you can't avoid the need for such a hack:
import sys
try:
raise "a string exception"
except:
e, t, tb = sys.exc_info()
if not isinstance(e, str):
raise
print "caught", e
A:
try:
raise "xxx"
except "xxx":
print "xxx caught"
except <class> only works with classes that are a subclass of Exception i think. Oh btw, use basestring when checking the type of strings, works with unicode strings too.
A:
Raising raw strings is just wrong. It's a deprecated feature (and as such should have raised warnings). Catching the explicit string will work if you really need it, and so will catching everything. Since catching everything puts the ugliness in your code, I recommend catching the string explicitly, or even better: fixing the broken library.
try:
#code_that_throws_string()
raise("spam")
except "spam":
pass
The pass statement will be reached. There are loads of good reasons not to use strings as exceptions, and this is one of them (the other being: I don't believe you can get tracebacks, so they're largely useless for bug fixing).
So, fix the library (good). Or catch the string explicitly (bad). Or catch everything (very bad) and do some isinstance(e, str) checking (even worse).
|
How to catch str exception?
|
import sys
try:
raise "xxx"
except str,e:
print "1",e
except:
print "2",sys.exc_type,sys.exc_value
In the above code a string exception is raised. String exceptions are deprecated, but a 3rd party library I use still raises them.
So how can I catch such an exception without relying on a catch-all, which could be bad?
Why doesn't except str,e: catch it?
system:
Python 2.5.2 (r252:60911, Jul 31 2008, 17:28:52)
[GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] on linux2
|
[
"The generic except: clause is the only way to catch all str exceptions.\nstr exceptions are a legacy Python feature. In new code you should use raise Exception(\"xxx\") or raise your own Exception subclass, or assert 0, \"xxx\".\n",
"Here is the solution from python mailing list, not very elegant but will work if can't avoid the need for such hack\nimport sys\ntry:\n raise \"a string exception\"\nexcept:\n e, t, tb = sys.exc_info()\n if not isinstance(e, str):\n raise \n print \"caught\", e\n\n",
"try:\n raise \"xxx\"\n\nexcept \"xxx\":\n print \"xxx caught\"\n\nexcept <class> only works with classes that are a subclass of Exception i think. Oh btw, use basestring when checking the type of strings, works with unicode strings too.\n",
"Raising raw strings is just wrong. It's a deprecated feature (and as such should have raised warnings). Catching the explicit string will work if you really need it, and so will catching everything. Since catching everything puts the ugliness in your code, I recommend catching the string explicitly, or even better: fixing the broken library.\ntry:\n #code_that_throws_string()\n raise(\"spam\")\nexcept \"spam\":\n pass\n\nThe pass statement will be reached. There are loads of good reasons not to use strings as exceptions, and this is one of them (the other being: I don't believe you can get tracebacks, so they're largely useless for bug fixing).\nSo, fix the library (good). Or catch the string explicitly (bad). Or catch everything (very bad) and do some isinstance(e, str) checking (even worse).\n"
] |
[
6,
4,
2,
2
] |
[] |
[] |
[
"exception",
"python",
"string"
] |
stackoverflow_0000867522_exception_python_string.txt
|
Q:
Loading files into variables
I am trying to write a small function that gets a variable name, checks if it exists, and if not loads it from a file (using pickle) to the global namespace.
I tried using this in a file:
import cPickle
#
# Load if necessary
#
def loadfile(variable, filename):
if variable not in globals():
cmd = "%s = cPickle.load(file('%s','r'))" % (variable, filename)
print cmd
exec(cmd) in globals()
But it doesn't work - the variable doesn't get defined. What am I doing wrong?
A:
You could always avoid exec entirely:
import cPickle
#
# Load if necessary
#
def loadfile(variable, filename):
g=globals()
if variable not in g:
g[variable]=cPickle.load(file(filename,'r'))
EDIT: of course that only loads the globals into the current module's globals.
If you want to load the stuff into the globals of another module you'd be best to pass them in as a parameter:
import cPickle
#
# Load if necessary
#
def loadfile(variable, filename, g=None):
if g is None:
g=globals()
if variable not in g:
g[variable]=cPickle.load(file(filename,'r'))
# then in another module do this
loadfile('myvar','myfilename',globals())
A:
Using 'globals' has the problem that it only works for the current module. Rather than passing 'globals' around, a better way is to use the 'setattr' builtin directly on a namespace. This means you can then reuse the function on instances as well as modules.
import cPickle
#
# Load if necessary
#
def loadfile(variable, filename, namespace=None):
    if namespace is None:
import __main__ as namespace
setattr(namespace, variable, cPickle.load(file(filename,'r')))
# From the main script just do:
loadfile('myvar','myfilename')
# To set the variable in module 'mymodule':
import mymodule
...
loadfile('myvar', 'myfilename', mymodule)
Be careful about the module name: the main script is always the module __main__. If you are running script.py and do 'import script' you'll get a separate copy of your code, which is usually not what you want.
|
Loading files into variables
|
I am trying to write a small function that gets a variable name, checks whether it exists, and if not loads it from a file (using pickle) into the global namespace.
I tried using this in a file:
import cPickle
#
# Load if neccesary
#
def loadfile(variable, filename):
if variable not in globals():
cmd = "%s = cPickle.load(file('%s','r'))" % (variable, filename)
print cmd
exec(cmd) in globals()
But it doesn't work - the variable doesn't get defined. What am I doing wrong?
|
[
"You could alway avoid exec entirely:\n\n\nimport cPickle\n\n#\n# Load if neccesary\n#\ndef loadfile(variable, filename):\n g=globals()\n if variable not in g:\n g[variable]=cPickle.load(file(filename,'r'))\n\n\n\nEDIT: of course that only loads the globals into the current module's globals.\nIf you want to load the stuff into the globals of another module you'd be best to pass in them in as a parameter:\n\n\nimport cPickle\n\n#\n# Load if neccesary\n#\ndef loadfile(variable, filename, g=None):\n if g is None:\n g=globals()\n if variable not in g:\n g[variable]=cPickle.load(file(filename,'r'))\n\n# then in another module do this\nloadfile('myvar','myfilename',globals())\n\n\n",
"Using 'globals' has the problem that it only works for the current module. Rather than passing 'globals' around, a better way is to use the 'setattr' builtin directly on a namespace. This means you can then reuse the function on instances as well as modules.\nimport cPickle\n\n#\n# Load if neccesary\n#\ndef loadfile(variable, filename, namespace=None):\n if module is None:\n import __main__ as namespace\n setattr(namespace, variable, cPickle.load(file(filename,'r')))\n\n# From the main script just do:\nloadfile('myvar','myfilename')\n\n# To set the variable in module 'mymodule':\nimport mymodule\n...\nloadfile('myvar', 'myfilename', mymodule)\n\nBe careful about the module name: the main script is always a module main. If you are running script.py and do 'import script' you'll get a separate copy of your code which is usually not what you want.\n"
] |
[
2,
2
] |
[] |
[] |
[
"namespaces",
"pickle",
"python"
] |
stackoverflow_0000868112_namespaces_pickle_python.txt
|
Q:
Python Newbie: Returning Multiple Int/String Results in Python
I have a function that has several outputs, all of which are "native", i.e. integers and strings. For example, let's say I have a function that analyzes a string, and finds both the number of words and the average length of a word.
In C/C++ I would use & to pass 2 parameters to the function. In Python I'm not sure what's the right solution, because integers and strings are not passed by reference but by value (at least this is what I understand from trial-and-error), so the following code won't work:
def analyze(string, number_of_words, average_length):
... do some analysis ...
number_of_words = ...
average_length = ...
If I do the above, the values outside the scope of the function don't change. What I currently do is use a dictionary like so:
def analyze(string, result):
... do some analysis ...
result['number_of_words'] = ...
result['average_length'] = ...
And I use the function like this:
s = "hello goodbye"
result = {}
analyze(s, result)
However, that does not feel right. What's the correct Pythonic way to achieve this? Please note I'm referring only to cases where the function returns 2-3 results, not tens of results. Also, I'm a complete newbie to Python, so I know I may be missing something trivial here...
Thanks
A:
Python has a return statement, which allows you to do the following:
def func(input):
# do calculation on input
return result
s = "hello goodbye"
res = func(s) # res now a result dictionary
but you don't need to have result at all, you can return a few values like so:
def func(input):
# do work
return length, something_else # one might be an integer another string, etc.
s = "hello goodbye"
length, something = func(s)
A:
If you return the variables in your function like this:
def analyze(s, num_words, avg_length):
# do something
return s, num_words, avg_length
Then you can call it like this to update the parameters that were passed:
s, num_words, avg_length = analyze(s, num_words, avg_length)
But, for your example function, this would be better:
def analyze(s):
# do something
return num_words, avg_length
A:
In Python you don't modify parameters in the C/C++ way (passing them by reference or through a pointer and doing modifications in situ). There are some reasons for this, such as the fact that string objects are immutable in Python. The right thing to do is to return the modified parameters in a tuple (as SilentGhost suggested) and rebind the variables to the new values.
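For instance, a minimal sketch of that pattern (the body of analyze is just a placeholder):
def analyze(s):
    words = s.split()
    num_words = len(words)
    avg_length = sum(len(w) for w in words) / float(num_words)
    return num_words, avg_length

num_words, avg_length = analyze("hello goodbye")  # rebinds both names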
A:
If you need to use method arguments in both directions, you can encapsulate the arguments in a class, pass the object to the method, and let the method use its attributes.
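A minimal sketch of that idea (the class and attribute names are made up):
class AnalysisResult(object):
    def __init__(self):
        self.number_of_words = 0
        self.average_length = 0.0

def analyze(s, result):
    words = s.split()
    result.number_of_words = len(words)
    result.average_length = sum(len(w) for w in words) / float(len(words))

result = AnalysisResult()
analyze("hello goodbye", result)
print result.number_of_words, result.average_length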
|
Python Newbie: Returning Multiple Int/String Results in Python
|
I have a function that has several outputs, all of which are "native", i.e. integers and strings. For example, let's say I have a function that analyzes a string, and finds both the number of words and the average length of a word.
In C/C++ I would use & to pass 2 parameters to the function. In Python I'm not sure what's the right solution, because integers and strings are not passed by reference but by value (at least this is what I understand from trial-and-error), so the following code won't work:
def analyze(string, number_of_words, average_length):
... do some analysis ...
number_of_words = ...
average_length = ...
If I do the above, the values outside the scope of the function don't change. What I currently do is use a dictionary like so:
def analyze(string, result):
... do some analysis ...
result['number_of_words'] = ...
result['average_length'] = ...
And I use the function like this:
s = "hello goodbye"
result = {}
analyze(s, result)
However, that does not feel right. What's the correct Pythonic way to achieve this? Please note I'm referring only to cases where the function returns 2-3 results, not tens of results. Also, I'm a complete newbie to Python, so I know I may be missing something trivial here...
Thanks
|
[
"python has a return statement, which allows you to do the follwing:\ndef func(input):\n # do calculation on input\n return result\n\ns = \"hello goodbye\"\nres = func(s) # res now a result dictionary\n\nbut you don't need to have result at all, you can return a few values like so:\ndef func(input):\n # do work\n return length, something_else # one might be an integer another string, etc.\n\ns = \"hello goodbye\"\nlength, something = func(s)\n\n",
"If you return the variables in your function like this:\ndef analyze(s, num_words, avg_length):\n # do something\n return s, num_words, avg_length\n\nThen you can call it like this to update the parameters that were passed:\ns, num_words, avg_length = analyze(s, num_words, avg_length)\n\nBut, for your example function, this would be better:\ndef analyze(s):\n # do something\n return num_words, avg_length\n\n",
"In python you don't modify parameters in the C/C++ way (passing them by reference or through a pointer and doing modifications in situ).There are some reasons such as that the string objects are inmutable in python. The right thing to do is to return the modified parameters in a tuple (as SilentGhost suggested) and rebind the variables to the new values.\n",
"If you need to use method arguments in both directions, you can encapsulate the arguments to the class and pass object to the method and let the method use its properties.\n"
] |
[
18,
3,
1,
0
] |
[] |
[] |
[
"parameters",
"python",
"reference"
] |
stackoverflow_0000868325_parameters_python_reference.txt
|
Q:
On GAE, how may I show a date according to right client TimeZone?
On my Google App Engine application, I'm storing an auto-updated date/time in my model like this:
class MyModel(db.Model):
date = db.DateTimeProperty(auto_now_add=True)
But that date/time is the local time on the server, according to its time zone.
So, when I want to display it on my web page, how may I format it according to the client's time zone?
Also, how may I know which time zone the Google server is in?
A:
With respect to the second part of your question:
Python's time.time() returns seconds since the epoch (UTC-based) regardless of what time zone the server is in. time.timezone and time.tzname give you, respectively, the offset from UTC to local time on the server and the names of the standard and DST timezones as a tuple. GAE uses Python 2.5.x as of the time of this posting; see the documentation for Python time functions here.
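For instance, a minimal check on the server (the printed values are illustrative, not guaranteed):
import time

print time.time()      # seconds since the epoch, UTC-based
print time.timezone    # seconds west of UTC for the local (non-DST) zone
print time.tzname      # e.g. ('UTC', 'UTC')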
For the first part of the question:
You can either format the date with code on the server, or with code on the client.
If you format on the server, you can
Use the timezone of the requester's IP address
Require a user to register and give you a timezone, store it, and use the stored value
Use a (hidden?) field on the GET or POST so the client's desired timezone is part of the request
If you format on the client, you'll need to write a few lines of JavaScript. The procedure is something like "make a date from UTC using Date(utctime), then stuff it into the document." By default, JavaScript dates display as local time regardless of how they were initialized - awesome!
I recommend formatting on the client, because what's a webpage like without a bit of JavaScript? It's 2009! Marcelo has some examples.
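If you do take the server-side route with a posted offset, a minimal sketch might look like this (the function name and format string are assumptions):
import datetime

def format_for_client(utc_dt, offset_minutes):
    # offset_minutes as reported by JavaScript's getTimezoneOffset(),
    # i.e. the minutes to add to local time to reach UTC
    local = utc_dt - datetime.timedelta(minutes=offset_minutes)
    return local.strftime('%Y-%m-%d %H:%M')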
A:
OK, so thanks to Thomas L Holaday, I have to send that UTC date to the client, for example using JSON:
json = '{"serverUTCDate":new Date("%s")}' % date.ctime()
And then, on the client side, adjust by the user's time-zone offset (in milliseconds) like this:
var localDate = serverUTCDate.getTime() - (new Date()).getTimezoneOffset()*60000;
A:
You can find out the server's timezone by asking for local and UTC time and seeing what the difference is. To find out the client timezone you need a little bit of JavaScript in your template that will tell you, e.g. in an AJAX exchange. To manipulate timezones once you've discovered the deltas, I suggest pytz, pytz.sourceforge.net/ .
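A rough sketch of both steps (the client zone name is a made-up example, and pytz must be bundled with the app):
from datetime import datetime
import pytz

# Server's UTC offset: compare the local and UTC clocks
server_offset = datetime.now() - datetime.utcnow()

# Convert a stored naive-UTC datetime to a client zone discovered via JavaScript
utc_dt = datetime.utcnow().replace(tzinfo=pytz.utc)
client_dt = utc_dt.astimezone(pytz.timezone('America/New_York'))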
|
On GAE, how may I show a date according to right client TimeZone?
|
On my Google App Engine application, I'm storing an auto-updated date/time in my model like this:
class MyModel(db.Model):
date = db.DateTimeProperty(auto_now_add=True)
But that date/time is the local time on the server, according to its time zone.
So, when I want to display it on my web page, how may I format it according to the client's time zone?
Also, how may I know which time zone the Google server is in?
|
[
"With respect to the second part of your question:\nPython time() returns UTC regardless of what time zone the server is in. timezone() and tzname() will give you, respectively, the offset to local time on the server and the name of the timezone and the DST timezone as a tuple. GAE uses Python 2.5.x as of the time of this posting; see the documentation for Python time functions here.\nFor the first part of the question:\nYou can either format the date with code on the server, or with code on the client. \n\nIf you format on the server, you can \n\nUse the timezone of the requester's IP address\nRequire a user to register and give you a timezone, store it, and use the stored value\nUse a (hidden?) field on the GET or POST so the client's desired timezone is part of the request\n\nIf you format on the client, you'll need to write a few lines of JavaScript. The procedure is something like \"make a date from UTC using Date(utctime), then stuff it into the document.\" By default, JavaScript dates display as local time regardless of how they were initialized - awesome! \n\nI recommend formatting on the client, because what's a webpage like without a bit of JavaScript? It's 2009! Marcelo has some examples.\n",
"Ok, so thanks to Thomas L Holaday, I have to sent that UTC date to the client, for example using JSON : \njson = '{\"serverUTCDate\":new Date(\"%s\")}' % date.ctime()\n\nAnd then, on the client side, add/remove the number of seconds according to the user time zone like that :\nvar localDate = serverUTCDate.getTime() - (new Date()).getTimezoneOffset()*60000;\n\n",
"You can find out the server's timezone by asking for local and UTC time and seeing what the difference is. To find out the client timezone you need to have in your template a little bit of Javascript that will tell you e.g. in an AJAX exchange. To manipulate timezones once you've discovered the deltas I suggest pytz, pytz.sourceforge.net/ .\n"
] |
[
5,
2,
1
] |
[] |
[] |
[
"django",
"google_app_engine",
"python",
"timezone"
] |
stackoverflow_0000868708_django_google_app_engine_python_timezone.txt
|
Q:
Writing Digg like system in django/python
I am trying to write a Digg-, Hacker News-, or http://collectivesys.com/-like application where users submit something and other users can vote up or down, mark items as favorites, etc.
I was just wondering if there are some open-source Django/Python implementations that I could use as a starting point, instead of reinventing the wheel by starting from scratch.
A:
Check out Pinax and Django Pluggables for some pre-made Django apps to help you out.
A:
reddit is open source, written mostly in python. Apart from the code, there might be some algorithms you may find helpful.
A:
I'd recommend taking a close look at the django-voting project on Google Code.
They claim to be a Django implementation of "Reddit Style Voting".
A:
Though not written in django, reddit is written in python and is open source. From the code you could get some ideas and see how they overcame certain hurdles etc. They've open sourced everything bar their antispam measures.
|
Writing Digg like system in django/python
|
I am trying to write a Digg-, Hacker News-, or http://collectivesys.com/-like application where users submit something and other users can vote up or down, mark items as favorites, etc.
I was just wondering if there are some open-source Django/Python implementations that I could use as a starting point, instead of reinventing the wheel by starting from scratch.
|
[
"Check out Pinax and Django Pluggables for some pre-made Django apps to help you out.\n",
"reddit is open source, written mostly in python. Apart from the code, there might be some algorithms you may find helpful.\n",
"I'd recommend taking a close look at the django-voting project on Google Code.\nThey claim to be an django implementation of \"Reddit Style Voting\"\n",
"Though not written in django, reddit is written in python and is open source. From the code you could get some ideas and see how they overcame certain hurdles etc. They've open sourced everything bar their antispam measures.\n"
] |
[
6,
5,
3,
1
] |
[] |
[] |
[
"digg",
"django",
"python"
] |
stackoverflow_0000867251_digg_django_python.txt
|
Q:
HTML Newbie Question: Colored Background for Characters in Django HttpResponse
I would like to generate an HttpResponse that contains a certain string. For each of the characters in the string I have a background color I want to use.
For simplification, let's assume I can only have shades of green in the background, and that the "background colors" data represents "level of brightness" in the green domain.
For example, a response may be 'abcd', and my "background colors" data may be:
[0.0, 1.0, 0.5, 1.0]
This would mean the first character 'a' needs to have a background of dark green (e.g. 004000), the second character 'b' needs to have a background of bright green (e.g. 00ff00), the third character 'c' needs to have a "middle" brightness (e.g. 00A000), and so on.
I don't want to use a template but rather just return a "plain text" response. Is that possible?
If not - what would be the simplest template I could use for that?
Thanks
A:
It could be something like this:
aString = 'abcd'
newString = ''
colors = [0.0, 1.0, 0.5, 1.0]
for i in aString:
    newString = newString + '<span style="background-color: rgb(0,%d,0)">%s</span>' % (int(colors.pop(0) * 255), i)

response = HttpResponse(newString)
untested
A:
You can use something like this to generate HTML in the Django view itself
and return it as text/html:
data = "abcd"
greenShades = [0.0, 1.0, 0.5, 1.0]
out = "<html>"
for d, clrG in zip(data,greenShades):
out +=""" <div style="background-color:RGB(0,%s,0);color:white;">%s</div> """%(int(clrG*255), d)
out += "</html>"
A:
Your best bet here would be to use the span element, as well as a stylesheet. If you don't want to use a template, then you'd have to render this inline. An example:
string_data = 'asdf'
color_data = [0.0, 1.0, 0.5, 1.0]
response = []
for char, color in zip(string_data, color_data):
    response.append('<span style="background-color:rgb(0,%d,0);">%s</span>' % (int(color * 255), char))
response = HttpResponse(''.join(response))
I'd imagine that this could also be done in a template if you wanted.
|
HTML Newbie Question: Colored Background for Characters in Django HttpResponse
|
I would like to generate an HttpResponse that contains a certain string. For each of the characters in the string I have a background color I want to use.
For simplification, let's assume I can only have shades of green in the background, and that the "background colors" data represents "level of brightness" in the green domain.
For example, a response may be 'abcd', and my "background colors" data may be:
[0.0, 1.0, 0.5, 1.0]
This would mean the first character 'a' needs to have a background of dark green (e.g. 004000), the second character 'b' needs to have a background of bright green (e.g. 00ff00), the third character 'c' needs to have a "middle" brightness (e.g. 00A000), and so on.
I don't want to use a template but rather just return a "plain text" response. Is that possible?
If not - what would be the simplest template I could use for that?
Thanks
|
[
"It could be something like this:\naString = 'abcd'\nnewString =''\ncolors= [0.0, 1.0, 0.5, 1.0]\nfor i in aString:\n newString = newString + '<span style=\"background-color: rgb(0,%s,0)\">%s</span>'%(colors.pop(0)*255,i)\n\n\n\nresponse = HttpResponse(newString)\n\nuntested\n",
"you can use something like this to generate html in the django view itself\nand return it as text/html\ndata = \"abcd\"\ngreenShades = [0.0, 1.0, 0.5, 1.0]\n\nout = \"<html>\"\nfor d, clrG in zip(data,greenShades):\n out +=\"\"\" <div style=\"background-color:RGB(0,%s,0);color:white;\">%s</div> \"\"\"%(int(clrG*255), d)\nout += \"</html>\"\n\n",
"Your best bet here would be to use the span element, as well as a stylesheet. If you don't want to use a template, then you'd have to render this inline. An example: \nstring_data = 'asdf'\ncolor_data = [0.0, 1.0, 0.5, 1.0]\nresponse = []\nfor char, color in zip(string_data, color_data):\n response.append('<span style=\"background-color:rgb(0,%s,0);\">%s</span>' % (color, char)\nresponse = HttpResponse(''.join(response))\n\nI'd imagine that this could also be done in a template if you wanted.\n"
] |
[
3,
2,
1
] |
[] |
[] |
[
"colors",
"django",
"html",
"python"
] |
stackoverflow_0000868871_colors_django_html_python.txt
|
Q:
Downloading file using post method and python
I need a little help getting a tar file to download from a website. The website is set up as a form where you pick the file you want and click submit, and then the download window opens up for you to pick the location.
I'm trying to do the same thing in code (so I don't have to manually pick each file). So far I have gotten Python 2.5.2 to return a response, but it's in a socket._fileobject and I'm now not sure how to convert that into a file on my computer.
Below is output of the python shell and the steps I have taken
Python 2.5.2 on win32
IDLE 1.2.2
>>> import urllib
>>> import urllib2
>>> url = 'http://website/getdata.php'
>>> values = {'data[]': 'filex.tar'}
>>> data = urllib.urlencode(values)
>>> req = urllib2.Request(url,data)
>>> response = urllib2.urlopen(req)
>>> response
addinfourl at 22250280 whose fp = <socket._fileobject object at 0x01534570
>>> response.fp
socket._fileobject object at 0x01534570
A:
Append this:
import shutil

myfile = open('myfile.tar', 'wb')
shutil.copyfileobj(response.fp, myfile)
myfile.close()
response.fp is a file-like object that you can read from, just like an open file. shutil.copyfileobj() is a simple function that reads from one file-like object and writes its contents to another.
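Alternatively, a minimal sketch that reads the whole response into memory (fine for small files, less so for huge tarballs):
data = response.read()
out = open('myfile.tar', 'wb')
out.write(data)
out.close()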
|
Downloading file using post method and python
|
I need a little help getting a tar file to download from a website. The website is set up as a form where you pick the file you want and click submit, and then the download window opens up for you to pick the location.
I'm trying to do the same thing in code (so I don't have to manually pick each file). So far I have gotten Python 2.5.2 to return a response, but it's in a socket._fileobject and I'm now not sure how to convert that into a file on my computer.
Below is output of the python shell and the steps I have taken
Python 2.5.2 on win32
IDLE 1.2.2
>>> import urllib
>>> import urllib2
>>> url = 'http://website/getdata.php'
>>> values = {'data[]': 'filex.tar'}
>>> data = urllib.urlencode(values)
>>> req = urllib2.Request(url,data)
>>> response = urllib2.urlopen(req)
>>> response
addinfourl at 22250280 whose fp = <socket._fileobject object at 0x01534570
>>> response.fp
socket._fileobject object at 0x01534570
|
[
"Append this:\nmyfile = open('myfile.tar', 'wb')\nshutil.copyfileobj(response.fp, myfile)\nmyfile.close()\n\nresponse.fp is a file-like object that you can read from, just like an open file. shutil.copyfileobj() is a simple function that reads from one file-like object and writes its contents to another.\n"
] |
[
2
] |
[] |
[] |
[
"python",
"urllib2"
] |
stackoverflow_0000869679_python_urllib2.txt
|
Q:
Running unittest.main() from a module?
I wrote a little function that dynamically defines unittest.TestCase classes (trivial version below).
When I moved it out of the same source file into its own module, I can't figure out how to get unittest to discover the new classes. Calling unittest.main() from either file doesn't execute any tests.
factory.py:
import unittest
_testnum = 0
def test_factory(a, b):
global _testnum
testname = 'dyntest' + str(_testnum)
globals()[testname] = type(testname, (unittest.TestCase,), {'testme': lambda self: self.assertEqual(a, b)})
_testnum += 1
def finish():
unittest.main()
someotherfile.py:
from factory import test_factory, finish
test_factory(1, 1)
test_factory(1, 2)
if __name__ == '__main__':
finish()
Output:
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
So it doesn't execute any tests.
Note that keeping it all in the same file works as expected:
import unittest
_testnum = 0
def test_factory(a, b):
global _testnum
testname = 'dyntest' + str(_testnum)
globals()[testname] = type(testname, (unittest.TestCase,), {'testme': lambda self: self.assertEqual(a, b)})
_testnum += 1
test_factory(1, 1)
test_factory(1, 2)
if __name__ == '__main__':
unittest.main()
Output (as expected):
.F
======================================================================
FAIL: testme (__main__.dyntest1)
----------------------------------------------------------------------
Traceback (most recent call last):
File "partb.py", line 11, in <lambda>
globals()[testname] = type(testname, (unittest.TestCase,), {'testme': lambda self: self.assertEqual(a, b)})
AssertionError: 1 != 2
----------------------------------------------------------------------
Ran 2 tests in 0.008s
FAILED (failures=1)
How do I use my test_factory() function such that I can execute all of the TestCase objects it defines from a separate source file?
A:
The general idea (what unittest.main does for you) is:
suite = unittest.TestLoader().loadTestsFromTestCase(SomeTestCase)
unittest.TextTestRunner(verbosity=2).run(suite)
as per http://docs.python.org/library/unittest.html?highlight=unittest#module-unittest . Your test cases are hidden in the factory module's globals() by the test_factory function, so just do a dir(), find the globals that are subclasses of unittest.TestCase (or ones with names starting with 'dyntest', etc), and just build your suite that way and run it.
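A minimal sketch of that approach against the factory module from the question (relying on the 'dyntest' naming convention):
import unittest
import factory

factory.test_factory(1, 1)
factory.test_factory(1, 2)

loader = unittest.TestLoader()
suite = unittest.TestSuite(
    loader.loadTestsFromTestCase(obj)
    for name, obj in vars(factory).items() if name.startswith('dyntest'))
unittest.TextTestRunner(verbosity=2).run(suite)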
A:
By default, unittest.main() looks for TestCase classes in the __main__ module. test_factory creates its TestCase classes in its own module. That's why moving it outside of the main module causes the behavior you see.
Try:
def finish():
unittest.main(module=__name__)
|
Running unittest.main() from a module?
|
I wrote a little function that dynamically defines unittest.TestCase classes (trivial version below).
When I moved it out of the same source file into its own module, I can't figure out how to get unittest to discover the new classes. Calling unittest.main() from either file doesn't execute any tests.
factory.py:
import unittest
_testnum = 0
def test_factory(a, b):
global _testnum
testname = 'dyntest' + str(_testnum)
globals()[testname] = type(testname, (unittest.TestCase,), {'testme': lambda self: self.assertEqual(a, b)})
_testnum += 1
def finish():
unittest.main()
someotherfile.py:
from factory import test_factory, finish
test_factory(1, 1)
test_factory(1, 2)
if __name__ == '__main__':
finish()
Output:
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
So it doesn't execute any tests.
Note that keeping it all in the same file works as expected:
import unittest
_testnum = 0
def test_factory(a, b):
global _testnum
testname = 'dyntest' + str(_testnum)
globals()[testname] = type(testname, (unittest.TestCase,), {'testme': lambda self: self.assertEqual(a, b)})
_testnum += 1
test_factory(1, 1)
test_factory(1, 2)
if __name__ == '__main__':
unittest.main()
Output (as expected):
.F
======================================================================
FAIL: testme (__main__.dyntest1)
----------------------------------------------------------------------
Traceback (most recent call last):
File "partb.py", line 11, in <lambda>
globals()[testname] = type(testname, (unittest.TestCase,), {'testme': lambda self: self.assertEqual(a, b)})
AssertionError: 1 != 2
----------------------------------------------------------------------
Ran 2 tests in 0.008s
FAILED (failures=1)
How do I use my test_factory() function such that I can execute all of the TestCase objects it defines from a separate source file?
|
[
"The general idea (what unittest.main does for you) is:\nsuite = unittest.TestLoader().loadTestsFromTestCase(SomeTestCase)\nunittest.TextTestRunner(verbosity=2).run(suite)\n\nas per http://docs.python.org/library/unittest.html?highlight=unittest#module-unittest . Your test cases are hidden in globals() by the test_factory function, so just do a dir(), find the globals that are instances of unittest.TestCase (or ones with names starting with 'dyntest', etc), and just build your suite that way and run it.\n",
"By default, unittest.main() looks for unit TestCase objects in the main module. The test_factory creates the TestCase objects in its own module. That's why moving it outside of the main module causes the behavior you see.\nTry:\ndef finish():\n unittest.main(module=__name__)\n\n"
] |
[
9,
8
] |
[] |
[] |
[
"python",
"unit_testing"
] |
stackoverflow_0000869519_python_unit_testing.txt
|
Q:
How do I disassemble a Python script?
Earlier today, I asked a question about the way Python handles certain kinds of loops. One of the answers contained disassembled versions of my examples.
I'd like to know more. How can I disassemble my own Python code?
A:
Look at the dis module:
def myfunc(alist):
return len(alist)
>>> import dis
>>> dis.dis(myfunc)
2 0 LOAD_GLOBAL 0 (len)
3 LOAD_FAST 0 (alist)
6 CALL_FUNCTION 1
9 RETURN_VALUE
A:
Besides using dis as a module, you can also run it as a command-line tool.
For example, on Windows you can run:
c:\Python25\Lib\dis.py test.py
And it will output the disassembled result to the console.
A:
Use the dis module from the Python standard library (import dis e.g. in an interactive interpreter, then dis.dis any function you care about!-).
|
How do I disassemble a Python script?
|
Earlier today, I asked a question about the way Python handles certain kinds of loops. One of the answers contained disassembled versions of my examples.
I'd like to know more. How can I disassemble my own Python code?
|
[
"Look at the dis module:\ndef myfunc(alist):\n return len(alist)\n\n>>> dis.dis(myfunc)\n 2 0 LOAD_GLOBAL 0 (len)\n 3 LOAD_FAST 0 (alist)\n 6 CALL_FUNCTION 1\n 9 RETURN_VALUE\n\n",
"Besides using dis as module, you can also run it as command line tool\nFor example, on windows you can run:\nc:\\Python25\\Lib\\dis.py test.py\n\nAnd it will output the disassembed result to console.\n",
"Use the dis module from the Python standard library (import dis e.g. in an interactive interpreter, then dis.dis any function you care about!-).\n"
] |
[
13,
3,
2
] |
[] |
[] |
[
"debugging",
"python",
"reverse_engineering"
] |
stackoverflow_0000869586_debugging_python_reverse_engineering.txt
|
Q:
Help translation PYTHON to VB.NET
I am coding an application in VB.NET that sends SMS messages.
Would you please post PYTHON->VB.NET
translation of this code and/or guidelines?
Thanks in advance!!!
import threading
class MessageThread(threading.Thread):
def __init__(self,msg,no):
threading.Thread.__init__(self)
self.msg = msg # text message
self.no = no # mobile number
def run(self):
# function that sends "msg" to "no"
        send_msg(self.msg, self.no)
# records of users are retrieved from database
# and (msg,no) tuples are generated
records = [(msg1,no1),(msg2, no2),...(msgN,noN)]
thread_list = []
for each in records:
    t = MessageThread(*each)
thread_list.append(t)
for each in thread_list:
each.start()
for each in thread_list:
each.join()
A:
This code creates a thread for every msg/no tuple and calls sendmsg. The first "for each ... each.start()" starts the thread (which only calls sendmsg) and the second "for each ... each.join()" waits for each thread to complete. Depending on the number of records, this could create a significant number of threads (what if you were sending 1000 SMS records) which is not necessarily efficient although it is asynchronous.
The code is relatively simple and pythonic, whereas for .NET you would probably want to use a ThreadPool or BackgroundWorker to do the sendmsg calls. You will need to create a .NET class that is equivalent to the tuple (msg,no), and probably put the sendmsg() function in the class itself. Then create .NET code to load the messages (which is not shown in the Python code). Typically you would use a generic List<> to hold the SMS records as well. Then the ThreadPool would queue all of the items and call sendmsg.
If you are trying to keep the code as equivalent to the original Python, then you should look at IronPython.
(The underscore in sendmsg caused the text to use italics, so I removed the underscore in my response.)
A:
This is IronPython-code ("Python for .NET"), hence the source code uses the .NET-Framework just as VB and all classes (even System.Threading.Thread) can be used in the same way shown.
Some tips:
MessageThread derives from Thread, msg and no must be declared as class variables, __init__ is the constructor, the self-parameter in the member functions isn't transcoded to VB (just leave it out). Use a List<Thread> for thread_list and define a little structure for the tuples in records.
|
Help translation PYTHON to VB.NET
|
I am coding an application in VB.NET that sends SMS messages.
Would you please post PYTHON->VB.NET
translation of this code and/or guidelines?
Thanks in advance!!!
import threading
class MessageThread(threading.Thread):
def __init__(self,msg,no):
threading.Thread.__init__(self)
self.msg = msg # text message
self.no = no # mobile number
def run(self):
# function that sends "msg" to "no"
        send_msg(self.msg, self.no)
# records of users are retrieved from database
# and (msg,no) tuples are generated
records = [(msg1,no1),(msg2, no2),...(msgN,noN)]
thread_list = []
for each in records:
    t = MessageThread(*each)
thread_list.append(t)
for each in thread_list:
each.start()
for each in thread_list:
each.join()
|
[
"This code creates a thread for every msg/no tuple and calls sendmsg. The first \"for each ... each.start()\" starts the thread (which only calls sendmsg) and the second \"for each ... each.join()\" waits for each thread to complete. Depending on the number of records, this could create a significant number of threads (what if you were sending 1000 SMS records) which is not necessarily efficient although it is asynchronous.\nThe code is relatively simple and pythonic, whereas for .NET you would probably want to use a ThreadPool or BackgroundWorker to do the sendmsg calls. You will need to create a .NET class that is equivalent to the tuple (msg,no), and probably put the sendmsg() function in the class itself. Then create .NET code to load the messages (which is not shown in the Python code). Typically you would use a generic List<> to hold the SMS records as well. Then the ThreadPool would queue all of the items and call sendmsg.\nIf you are trying to keep the code as equivalent to the original Python, then you should look at IronPython.\n(The underscore in sendmsg caused the text to use italics, so I removed the underscore in my response.)\n",
"This is IronPython-code (\"Python for .NET\"), hence the source code uses the .NET-Framework just as VB and all classes (even System.Threading.Thread) can be used in the same way shown.\nSome tips:\nMessageThread derives from Thread, msg and no must be declared as class variables, __init__ is the constructor, the self-parameter in the member functions isn't transcoded to VB (just leave it out). Use a List<Thread> for thread_list and define a little structure for the tuples in records.\n"
] |
[
1,
0
] |
[] |
[] |
[
"multithreading",
"python",
"translation",
"vb.net"
] |
stackoverflow_0000870116_multithreading_python_translation_vb.net.txt
|
Q:
AuthSub with Text_db in google app engine
I am trying to read a spreadsheet from app engine using text_db and authsub.
I read http://code.google.com/appengine/articles/gdata.html and got it to work. Then I read http://code.google.com/p/gdata-python-client/wiki/AuthSubWithTextDB and I tried to merge the two in the file below (step4.py) but when I run it locally I get:
Traceback (most recent call last):
File "/home/jmvidal/share/progs/googleapps/google_appengine/google/appengine/ext/webapp/__init__.py", line 498, in __call__
handler.get(*groups)
File "/home/jmvidal/share/progs/googleapps/google_appengine/glassboard/step4.py", line 56, in get
session_token = client._GetDocsClient().UpgradeToSessionToken(auth_token) #If I don't pass this argument I get a NonAuthSubToken
File "/home/jmvidal/share/progs/googleapps/google_appengine/glassboard/gdata/service.py", line 866, in UpgradeToSessionToken
self.SetAuthSubToken(self.upgrade_to_session_token(token))
File "/home/jmvidal/share/progs/googleapps/google_appengine/glassboard/gdata/service.py", line 885, in upgrade_to_session_token
headers={'Content-Type':'application/x-www-form-urlencoded'})
File "/home/jmvidal/share/progs/googleapps/google_appengine/glassboard/gdata/auth.py", line 678, in perform_request
return http_client.request(operation, url, data=data, headers=headers)
File "/home/jmvidal/share/progs/googleapps/google_appengine/glassboard/atom/http.py", line 163, in request
return connection.getresponse()
File "/home/jmvidal/share/progs/googleapps/google_appengine/google/appengine/dist/httplib.py", line 200, in getresponse
self._allow_truncated, self._follow_redirects)
File "/home/jmvidal/share/progs/googleapps/google_appengine/google/appengine/api/urlfetch.py", line 267, in fetch
raise DownloadError(str(e))
DownloadError: ApplicationError: 2 nonnumeric port: ''
Can anyone shed some light on this? Specifically, why is it that the original (step3.py from the first link) works but my call here to UpgradeToSessionToken fails?
# step4.py
#
# Trying to read spreadsheets from app engine using text_db and authsub.
#
# Merge of this code
# http://code.google.com/p/gdata-python-client/wiki/AuthSubWithTextDB
# with this one
# http://code.google.com/appengine/articles/gdata.html (step 3)
import wsgiref.handlers
import cgi
from google.appengine.ext import webapp
from google.appengine.api import users
import atom.url
import gdata.service
import gdata.alt.appengine
import gdata.spreadsheet.text_db
import settings
class Fetcher(webapp.RequestHandler):
def get(self):
# Write our pages title
self.response.out.write("""<html><head><title>
Google Data Feed Fetcher: read Google Data API Atom feeds</title>""")
self.response.out.write('</head><body>')
next_url = atom.url.Url('http', settings.HOST_NAME, path='/step4')
# Allow the user to sign in or sign out
if users.get_current_user():
self.response.out.write('<a href="%s">Sign Out</a><br>' % (
users.create_logout_url(str(next_url))))
else:
self.response.out.write('<a href="%s">Sign In</a><br>' % (
users.create_login_url(str(next_url))))
# Initialize a client to talk to Google Data API services.
# client = gdata.service.GDataService()
# auth_url = client.GenerateAuthSubURL(
# next_url,
# ('http://docs.google.com/feeds/',), secure=False, session=True)
client = gdata.spreadsheet.text_db.DatabaseClient()
auth_url = client._GetDocsClient().GenerateAuthSubURL(
next_url,
('http://spreadsheets.google.com/feeds/','http://docs.google.com/feeds/documents/'), secure=False, session=True)
gdata.alt.appengine.run_on_appengine(client)
feed_url = self.request.get('feed_url')
session_token = None
# Find the AuthSub token and upgrade it to a session token.
auth_token = gdata.auth.extract_auth_sub_token_from_url(self.request.uri)
if auth_token:
# Upgrade the single-use AuthSub token to a multi-use session token.
client._GetDocsClient().SetAuthSubToken(auth_token)
session_token = client._GetDocsClient().UpgradeToSessionToken(auth_token) #If I don't pass this argument I get a NonAuthSubToken
client._GetSpreadsheetsClient().SetAuthSubToken(client._GetDocsClient().GetAuthSubToken())
# session_token = client.upgrade_to_session_token(auth_token)
if session_token and users.get_current_user():
# If there is a current user, store the token in the datastore and
# associate it with the current user. Since we told the client to
# run_on_appengine, the add_token call will automatically store the
# session token if there is a current_user.
client.token_store.add_token(session_token)
elif session_token:
# Since there is no current user, we will put the session token
# in a property of the client. We will not store the token in the
# datastore, since we wouldn't know which user it belongs to.
# Since a new client object is created with each get call, we don't
# need to worry about the anonymous token being used by other users.
client.current_token = session_token
self.response.out.write('<div id="main">')
self.fetch_feed(client, feed_url)
self.response.out.write('</div>')
self.response.out.write(
'<div id="sidebar"><div id="scopes"><h4>Request a token</h4><ul>')
self.response.out.write('<li><a href="%s">Google Documents</a></li>' % (auth_url))
self.response.out.write('</ul></div><br/><div id="tokens">')
def fetch_feed(self, client, feed_url):
# Attempt to fetch the feed.
if not feed_url:
self.response.out.write(
'No feed_url was specified for the app to fetch.<br/>')
example_url = atom.url.Url('http', settings.HOST_NAME, path='/step4',
params={'feed_url':
'http://docs.google.com/feeds/documents/private/full'}
).to_string()
self.response.out.write('Here\'s an example query which will show the'
' XML for the feed listing your Google Documents <a '
'href="%s">%s</a>' % (example_url, example_url))
return
try:
response = client.Get(feed_url, converter=str)
self.response.out.write(cgi.escape(response))
except gdata.service.RequestError, request_error:
# If fetching fails, then tell the user that they need to login to
# authorize this app by logging in at the following URL.
if request_error[0]['status'] == 401:
# Get the URL of the current page so that our AuthSub request will
# send the user back to here.
next = atom.url.Url('http', settings.HOST_NAME, path='/step4',
params={'feed_url': feed_url})
# If there is a current user, we can request a session token, otherwise
# we should ask for a single use token.
auth_sub_url = client.GenerateAuthSubURL(next, feed_url,
secure=False, session=True)
self.response.out.write('<a href="%s">' % (auth_sub_url))
self.response.out.write(
'Click here to authorize this application to view the feed</a>')
else:
self.response.out.write(
'Something went wrong, here is the error object: %s ' % (
str(request_error[0])))
def main():
application = webapp.WSGIApplication([('/.*', Fetcher),], debug=True)
wsgiref.handlers.CGIHandler().run(application)
if __name__ == '__main__':
main()
A:
As always, I figure out the answer only after giving up and asking for help.
We need to add two more calls to run_on_appengine (to register the two clients that the text_db client holds):
gdata.alt.appengine.run_on_appengine(client)
gdata.alt.appengine.run_on_appengine(client._GetDocsClient())
gdata.alt.appengine.run_on_appengine(client._GetSpreadsheetsClient())
I would have expected the first call to run_on_appengine to result in the two other calls, but I guess not.
Oh, and change the auth_url line to:
auth_url = client._GetDocsClient().GenerateAuthSubURL(
next_url,scope='http://spreadsheets.google.com/feeds/ http://docs.google.com/feeds/documents/', secure=False, session=True)
Passing the scope urls in a list caused a "token does not have the correct scope" error.
|
AuthSub with Text_db in google app engine
|
I am trying to read a spreadsheet from app engine using text_db and authsub.
I read http://code.google.com/appengine/articles/gdata.html and got it to work. Then I read http://code.google.com/p/gdata-python-client/wiki/AuthSubWithTextDB and I tried to merge the two in the file below (step4.py) but when I run it locally I get:
Traceback (most recent call last):
File "/home/jmvidal/share/progs/googleapps/google_appengine/google/appengine/ext/webapp/__init__.py", line 498, in __call__
handler.get(*groups)
File "/home/jmvidal/share/progs/googleapps/google_appengine/glassboard/step4.py", line 56, in get
session_token = client._GetDocsClient().UpgradeToSessionToken(auth_token) #If I don't pass this argument I get a NonAuthSubToken
File "/home/jmvidal/share/progs/googleapps/google_appengine/glassboard/gdata/service.py", line 866, in UpgradeToSessionToken
self.SetAuthSubToken(self.upgrade_to_session_token(token))
File "/home/jmvidal/share/progs/googleapps/google_appengine/glassboard/gdata/service.py", line 885, in upgrade_to_session_token
headers={'Content-Type':'application/x-www-form-urlencoded'})
File "/home/jmvidal/share/progs/googleapps/google_appengine/glassboard/gdata/auth.py", line 678, in perform_request
return http_client.request(operation, url, data=data, headers=headers)
File "/home/jmvidal/share/progs/googleapps/google_appengine/glassboard/atom/http.py", line 163, in request
return connection.getresponse()
File "/home/jmvidal/share/progs/googleapps/google_appengine/google/appengine/dist/httplib.py", line 200, in getresponse
self._allow_truncated, self._follow_redirects)
File "/home/jmvidal/share/progs/googleapps/google_appengine/google/appengine/api/urlfetch.py", line 267, in fetch
raise DownloadError(str(e))
DownloadError: ApplicationError: 2 nonnumeric port: ''
Can anyone shed some light on this? Specifically, why is it that the original (step3.py from the first link) works but my call here to UpgradeToSessionToken fails?
# step4.py
#
# Trying to read spreadsheets from app engine using text_db and authsub.
#
# Merge of this code
# http://code.google.com/p/gdata-python-client/wiki/AuthSubWithTextDB
# with this one
# http://code.google.com/appengine/articles/gdata.html (step 3)
import wsgiref.handlers
import cgi
from google.appengine.ext import webapp
from google.appengine.api import users
import atom.url
import gdata.service
import gdata.alt.appengine
import gdata.spreadsheet.text_db
import settings
class Fetcher(webapp.RequestHandler):
def get(self):
# Write our pages title
self.response.out.write("""<html><head><title>
Google Data Feed Fetcher: read Google Data API Atom feeds</title>""")
self.response.out.write('</head><body>')
next_url = atom.url.Url('http', settings.HOST_NAME, path='/step4')
# Allow the user to sign in or sign out
if users.get_current_user():
self.response.out.write('<a href="%s">Sign Out</a><br>' % (
users.create_logout_url(str(next_url))))
else:
self.response.out.write('<a href="%s">Sign In</a><br>' % (
users.create_login_url(str(next_url))))
# Initialize a client to talk to Google Data API services.
# client = gdata.service.GDataService()
# auth_url = client.GenerateAuthSubURL(
# next_url,
# ('http://docs.google.com/feeds/',), secure=False, session=True)
client = gdata.spreadsheet.text_db.DatabaseClient()
auth_url = client._GetDocsClient().GenerateAuthSubURL(
next_url,
('http://spreadsheets.google.com/feeds/','http://docs.google.com/feeds/documents/'), secure=False, session=True)
gdata.alt.appengine.run_on_appengine(client)
feed_url = self.request.get('feed_url')
session_token = None
# Find the AuthSub token and upgrade it to a session token.
auth_token = gdata.auth.extract_auth_sub_token_from_url(self.request.uri)
if auth_token:
# Upgrade the single-use AuthSub token to a multi-use session token.
client._GetDocsClient().SetAuthSubToken(auth_token)
session_token = client._GetDocsClient().UpgradeToSessionToken(auth_token) #If I don't pass this argument I get a NonAuthSubToken
client._GetSpreadsheetsClient().SetAuthSubToken(client._GetDocsClient().GetAuthSubToken())
# session_token = client.upgrade_to_session_token(auth_token)
if session_token and users.get_current_user():
# If there is a current user, store the token in the datastore and
# associate it with the current user. Since we told the client to
# run_on_appengine, the add_token call will automatically store the
# session token if there is a current_user.
client.token_store.add_token(session_token)
elif session_token:
# Since there is no current user, we will put the session token
# in a property of the client. We will not store the token in the
# datastore, since we wouldn't know which user it belongs to.
# Since a new client object is created with each get call, we don't
# need to worry about the anonymous token being used by other users.
client.current_token = session_token
self.response.out.write('<div id="main">')
self.fetch_feed(client, feed_url)
self.response.out.write('</div>')
self.response.out.write(
'<div id="sidebar"><div id="scopes"><h4>Request a token</h4><ul>')
self.response.out.write('<li><a href="%s">Google Documents</a></li>' % (auth_url))
self.response.out.write('</ul></div><br/><div id="tokens">')
def fetch_feed(self, client, feed_url):
# Attempt to fetch the feed.
if not feed_url:
self.response.out.write(
'No feed_url was specified for the app to fetch.<br/>')
example_url = atom.url.Url('http', settings.HOST_NAME, path='/step4',
params={'feed_url':
'http://docs.google.com/feeds/documents/private/full'}
).to_string()
self.response.out.write('Here\'s an example query which will show the'
' XML for the feed listing your Google Documents <a '
'href="%s">%s</a>' % (example_url, example_url))
return
try:
response = client.Get(feed_url, converter=str)
self.response.out.write(cgi.escape(response))
except gdata.service.RequestError, request_error:
# If fetching fails, then tell the user that they need to login to
# authorize this app by logging in at the following URL.
if request_error[0]['status'] == 401:
# Get the URL of the current page so that our AuthSub request will
# send the user back to here.
next = atom.url.Url('http', settings.HOST_NAME, path='/step4',
params={'feed_url': feed_url})
# If there is a current user, we can request a session token, otherwise
# we should ask for a single use token.
auth_sub_url = client.GenerateAuthSubURL(next, feed_url,
secure=False, session=True)
self.response.out.write('<a href="%s">' % (auth_sub_url))
self.response.out.write(
'Click here to authorize this application to view the feed</a>')
else:
self.response.out.write(
'Something went wrong, here is the error object: %s ' % (
str(request_error[0])))
def main():
application = webapp.WSGIApplication([('/.*', Fetcher),], debug=True)
wsgiref.handlers.CGIHandler().run(application)
if __name__ == '__main__':
main()
|
[
"As always, I figure out the answer only after giving up and asking for help.\nwe need to add two more calls to run_on_appengine (to register the two clients that the text_db client has):\ngdata.alt.appengine.run_on_appengine(client)\ngdata.alt.appengine.run_on_appengine(client._GetDocsClient())\ngdata.alt.appengine.run_on_appengine(client._GetSpreadsheetsClient()) here\n\nI would have expected the first call to run_on_appengine to result in the two other calls, but I guess not.\nOh, and change the auth_url line to:\nauth_url = client._GetDocsClient().GenerateAuthSubURL(\n next_url,scope='http://spreadsheets.google.com/feeds/ http://docs.google.com/feeds/documents/', secure=False, session=True)\n\nPassing the scope urls in a list caused a \"token does not have the correct scope\" error.\n"
] |
[
1
] |
[] |
[] |
[
"authentication",
"gdata_api",
"google_app_engine",
"python"
] |
stackoverflow_0000870192_authentication_gdata_api_google_app_engine_python.txt
|
Q:
How do languages such as Python overcome C's Integral data limits?
While doing some random experimentation with a factorial program in C, Python and Scheme, I came across this fact:
In C, using the 'unsigned long long' data type, the largest factorial I can print is that of 65, which is '9223372036854775808', i.e. 19 digits, as specified here.
In Python, I can find the factorial of a number as large as 999, which consists of far more than 19 digits.
How does CPython achieve this? Does it use a data type like 'octaword'?
I might be missing some fundamental facts here. So, I would appreciate some insights and/or references to read. Thanks!
UPDATE: Thank you all for the explanation. Does that mean CPython is using the GNU Multiple Precision (GMP) library (or some other similar library)?
UPDATE 2: I am looking for Python's 'bignum' implementation in the sources. Where exactly is it? It's here: http://svn.python.org/view/python/trunk/Objects/longobject.c?view=markup. Thanks Baishampayan.
A:
It's called Arbitrary Precision Arithmetic. There's more here: http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic
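For instance, in the interpreter (Python 2 shown):
>>> 2 ** 64            # silently promoted from int to long
18446744073709551616L
>>> type(2 ** 64)
<type 'long'>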
A:
Looking at the Python source code, it seems the long type (at least in pre-Python 3 code) is defined in longintrepr.h like this -
/* Long integer representation.
The absolute value of a number is equal to
SUM(for i=0 through abs(ob_size)-1) ob_digit[i] * 2**(SHIFT*i)
Negative numbers are represented with ob_size < 0;
zero is represented by ob_size == 0.
In a normalized number, ob_digit[abs(ob_size)-1] (the most significant
digit) is never zero. Also, in all cases, for all valid i,
0 <= ob_digit[i] <= MASK.
The allocation function takes care of allocating extra memory
so that ob_digit[0] ... ob_digit[abs(ob_size)-1] are actually available.
 CAUTION: Generic code manipulating subtypes of PyVarObject has to be
 aware that longs abuse ob_size's sign bit.
*/
struct _longobject {
PyObject_VAR_HEAD
digit ob_digit[1];
};
The actual usable interface of the long type is then defined in longobject.h by creating a new type PyLongObject like this -
typedef struct _longobject PyLongObject;
And so on.
There is more stuff happening inside longobject.c, you can take a look at those for more details.
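To make that representation concrete, here is a rough Python sketch of the same scheme (assuming SHIFT is 15, as in the CPython 2 series; sign handling via ob_size is omitted):
SHIFT = 15
MASK = (1 << SHIFT) - 1

def to_digits(n):
    # little-endian base-2**SHIFT digits of abs(n), like ob_digit
    n = abs(n)
    digits = []
    while n:
        digits.append(n & MASK)
        n >>= SHIFT
    return digits

def from_digits(digits):
    return sum(d << (SHIFT * i) for i, d in enumerate(digits))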
A:
Data types such as int in C are directly mapped (more or less) to the data types supported by the processor. So the limits on C's int are essentially the limits imposed by the processor hardware.
But one can implement one's own int data type entirely in software. You can for example use an array of digits as your underlying representation. May be like this:
class MyInt {
private int [] digits;
public MyInt(int noOfDigits) {
digits = new int[noOfDigits];
}
}
Once you do that you may use this class and store integers containing as many digits as you want, as long as you don't run out of memory.
Perhaps Python is doing something like this inside its virtual machine. You may want to read this article on Arbitrary Precision Arithmetic to get the details.
A:
Not an octaword. It implements a bignum structure to store arbitrary-precision numbers.
A:
Python assigns to long integers (all ints in Python 3) just as much space as they need -- an array of "digits" (base being a power of 2) allocated as needed.
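A quick demonstration (999! comes out at roughly 2565 digits):
def factorial(n):
    result = 1
    for i in xrange(2, n + 1):
        result *= i
    return result

print len(str(factorial(999)))   # far beyond the 19-20 digits of a 64-bit integer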
|
How do languages such as Python overcome C's Integral data limits?
|
While doing some random experimentation with a factorial program in C, Python and Scheme, I came across this fact:
In C, using the 'unsigned long long' data type, the largest factorial I can print is that of 65, which is '9223372036854775808', i.e. 19 digits, as specified here.
In Python, I can find the factorial of a number as large as 999, which consists of far more than 19 digits.
How does CPython achieve this? Does it use a data type like 'octaword'?
I might be missing some fundamental facts here. So, I would appreciate some insights and/or references to read. Thanks!
UPDATE: Thank you all for the explanation. Does that mean CPython is using the GNU Multiple Precision (GMP) library (or some other similar library)?
UPDATE 2: I am looking for Python's 'bignum' implementation in the sources. Where exactly is it? It's here: http://svn.python.org/view/python/trunk/Objects/longobject.c?view=markup. Thanks Baishampayan.
|
[
"It's called Arbitrary Precision Arithmetic. There's more here: http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic\n",
"Looking at the Python source code, it seems the long type (at least in pre-Python 3 code) is defined in longintrepr.h like this -\n/* Long integer representation.\n The absolute value of a number is equal to\n SUM(for i=0 through abs(ob_size)-1) ob_digit[i] * 2**(SHIFT*i)\n Negative numbers are represented with ob_size < 0;\n zero is represented by ob_size == 0.\n In a normalized number, ob_digit[abs(ob_size)-1] (the most significant\n digit) is never zero. Also, in all cases, for all valid i,\n 0 <= ob_digit[i] <= MASK.\n The allocation function takes care of allocating extra memory\n so that ob_digit[0] ... ob_digit[abs(ob_size)-1] are actually available.\n\n CAUTION: Generic code manipulating subtypes of PyVarObject has to\n aware that longs abuse ob_size's sign bit.\n*/\n\nstruct _longobject {\n PyObject_VAR_HEAD\n digit ob_digit[1];\n};\n\nThe actual usable interface of the long type is then defined in longobject.h by creating a new type PyLongObject like this -\ntypedef struct _longobject PyLongObject;\n\nAnd so on.\nThere is more stuff happening inside longobject.c, you can take a look at those for more details.\n",
"Data types such as int in C are directly mapped (more or less) to the data types supported by the processor. So the limits on C's int are essentially the limits imposed by the processor hardware.\nBut one can implement one's own int data type entirely in software. You can for example use an array of digits as your underlying representation. May be like this:\nclass MyInt {\n private int [] digits;\n public MyInt(int noOfDigits) {\n digits = new int[noOfDigits];\n }\n}\n\nOnce you do that you may use this class and store integers containing as many digits as you want, as long as you don't run out memory.\nPerhaps Python is doing something like this inside its virtual machine. You may want to read this article on Arbitrary Precision Arithmetic to get the details.\n",
"Not octaword. It implemented bignum structure to store arbitary-precision numbers.\n",
"Python assigns to long integers (all ints in Python 3) just as much space as they need -- an array of \"digits\" (base being a power of 2) allocated as needed.\n"
] |
[
9,
6,
4,
3,
1
] |
[] |
[] |
[
"c",
"integer",
"python",
"types"
] |
stackoverflow_0000867393_c_integer_python_types.txt
|
Q:
SOAPpy - reserved word in named parameter list
I'm using SOAPpy to access a SOAP Webservice. This call to the function findPathwaysByText works just fine:
server.findPathwaysByText (query= 'WP619', species = 'Mus musculus')
However, this call to the function login does not:
server.login (user='amarillion', pass='*****')
Because pass is a reserved word, Python won't run this. Is there a workaround?
A:
You could try:
d = {'user':'amarillion', 'pass':'*****' }
server.login(**d)
This passes in the given dictionary as though they were keyword arguments (the **)
A:
You can say
server.login(user='amarillion', **{'pass': '*****'})
The double-asterisk syntax here applies keyword arguments. Here's a simple example that shows what's happening:
def f(a, b):
    return a + b

kwargs = {"a": 5, "b": 6}
f(**kwargs)  # same as saying f(a=5, b=6)
|
SOAPpy - reserved word in named parameter list
|
I'm using SOAPpy to access a SOAP Webservice. This call to the function findPathwaysByText works just fine:
server.findPathwaysByText (query= 'WP619', species = 'Mus musculus')
However, this call to the function login does not:
server.login (user='amarillion', pass='*****')
Because pass is a reserved word, python won't run this. Is there a workaround?
|
[
"You could try:\nd = {'user':'amarillion', 'pass':'*****' }\nserver.login(**d)\n\nThis passes in the given dictionary as though they were keyword arguments (the **)\n",
"You can say\nserver.login(user='amarillion', **{'pass': '*****'})\n\nThe double-asterix syntax here applies keyword arguments. Here's a simple example that shows what's happening:\ndef f(a, b):\n return a + b\n\nkwargs = {\"a\": 5, \"b\": 6}\nreturn f(**kwargs) # same as saying f(a=5, b=6)\n\n"
] |
[
5,
1
] |
[] |
[] |
[
"python",
"reserved_words",
"soap",
"soappy"
] |
stackoverflow_0000870455_python_reserved_words_soap_soappy.txt
|
Q:
How does method resolution and invocation work internally in Python?
How does method invocation work in Python?
I mean, how does the Python virtual machine interpret it?
It's true that method resolution could be slower in Python than in Java.
What is late binding?
What are the differences on the reflection mechanism in these two languages?
Where to find good resources explaining these aspects?
A:
Method invocation in Python consists of two distinct separable steps. First an attribute lookup is done, then the result of that lookup is invoked. This means that the following two snippets have the same semantics:
foo.bar()
method = foo.bar
method()
Attribute lookup in Python is a rather complex process. Say we are looking up an attribute named attr on object obj, meaning the following expression in Python code: obj.attr
First obj's instance dictionary is searched for "attr", then the instance dictionary of the class of obj and the dictionaries of its parent classes are searched in method resolution order for "attr".
Normally if a value is found on the instance, that is returned. But if the lookup on the class results in a value that has both the __get__ and __set__ methods (to be exact, if a dictionary lookup on the value's class and parent classes has values for both those keys) then the class attribute is regarded as something called a "data descriptor". This means that the __get__ method on that value is called, passing in the object on which the lookup occurred, and the result of that call is returned. If the class attribute isn't found or isn't a data descriptor, the value from the instance's dictionary is returned.
If there is no value in the instance dictionary, then the value from the class lookup is returned. Unless it happens to be a "non-data descriptor", i.e. it has the __get__ method. Then the __get__ method is invoked and the resulting value returned.
There is one more special case, if the obj happens to be a class, (an instance of the type type), then the instance value is also checked if it's a descriptor and invoked accordingly.
If no value is found on the instance nor its class hierarchy, and the obj's class has a __getattr__ method, that method is called.
The following shows the algorithm as encoded in Python, effectively doing what the getattr() function would do. (excluding any bugs that have slipped in)
NotFound = object() # A singleton to signify not found values
def lookup_attribute(obj, attr):
class_attr_value = lookup_attr_on_class(obj, attr)
if is_data_descriptor(class_attr_value):
return invoke_descriptor(class_attr_value, obj, obj.__class__)
if attr in obj.__dict__:
instance_attr_value = obj.__dict__[attr]
if isinstance(obj, type) and is_descriptor(instance_attr_value):
return invoke_descriptor(instance_attr_value, None, obj)
return instance_attr_value
if class_attr_value is NotFound:
getattr_method = lookup_attr_on_class(obj, '__getattr__')
if getattr_method is NotFound:
raise AttributeError()
return getattr_method(obj, attr)
if is_descriptor(class_attr_value):
return invoke_descriptor(class_attr_value, obj, obj.__class__)
return class_attr_value
def lookup_attr_on_class(obj, attr):
for parent_class in obj.__class__.__mro__:
if attr in parent_class.__dict__:
return parent_class.__dict__[attr]
return NotFound
def is_descriptor(obj):
if lookup_attr_on_class(obj, '__get__') is NotFound:
return False
return True
def is_data_descriptor(obj):
if not is_descriptor(obj) or lookup_attr_on_class(obj, '__set__') is NotFound :
return False
return True
def invoke_descriptor(descriptor, obj, cls):
descriptormethod = lookup_attr_on_class(descriptor, '__get__')
return descriptormethod(descriptor, obj, cls)
What does all this descriptor nonsense have to do with method invocation, you ask? Well, the thing is that functions are also objects, and they happen to implement the descriptor protocol. If the attribute lookup finds a function object on the class, its __get__ method gets called and returns a "bound method" object. A bound method is just a small wrapper around the function object that stores the object that the function was looked up on, and when invoked, prepends that object to the argument list (which is where the self argument of a method comes from).
Here's some illustrative code:
class Function(object):
def __get__(self, obj, cls):
return BoundMethod(obj, cls, self.func)
# Init and call added so that it would work as a function
# decorator if you'd like to experiment with it yourself
def __init__(self, the_actual_implementation):
self.func = the_actual_implementation
def __call__(self, *args, **kwargs):
return self.func(*args, **kwargs)
class BoundMethod(object):
def __init__(self, obj, cls, func):
self.obj, self.cls, self.func = obj, cls, func
def __call__(self, *args, **kwargs):
if self.obj is not None:
return self.func(self.obj, *args, **kwargs)
elif isinstance(args[0], self.cls):
return self.func(*args, **kwargs)
raise TypeError("Unbound method expects an instance of %s as first arg" % self.cls)
For method resolution order (which in Python's case actually means attribute resolution order) Python uses the C3 algorithm from Dylan. It is too complicated to explain here, so if you're interested see this article. Unless you are doing some really funky inheritance hierarchies (and you shouldn't), it is enough to know that the lookup order is left to right, depth first, and all subclasses of a class are searched before that class is searched.
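A minimal sketch of the resulting order (the class names are made up for illustration); every new-style class exposes its linearization as the __mro__ attribute:
class A(object): pass
class B(A): pass
class C(A): pass
class D(B, C): pass

print D.__mro__   # (D, B, C, A, object): left to right, subclasses before their bases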
A:
Names (methods, functions, variables) are all resolved by looking at the namespace. Namespaces are implemented in CPython as dicts (hash maps).
When a name is not found in the instance namespace (dict), python goes for the class, and then for the base classes, following the method resolution order (MRO).
All resolving is made at runtime.
You can play around with the dis module to see how that happens in bytecode.
Simple example:
import dis
a = 1
class X(object):
def method1(self):
return 15
def test_namespace(b=None):
x = X()
x.method1()
print a
print b
dis.dis(test_namespace)
That prints:
9 0 LOAD_GLOBAL 0 (X)
3 CALL_FUNCTION 0
6 STORE_FAST 1 (x)
10 9 LOAD_FAST 1 (x)
12 LOAD_ATTR 1 (method1)
15 CALL_FUNCTION 0
18 POP_TOP
11 19 LOAD_GLOBAL 2 (a)
22 PRINT_ITEM
23 PRINT_NEWLINE
12 24 LOAD_FAST 0 (b)
27 PRINT_ITEM
28 PRINT_NEWLINE
29 LOAD_CONST 0 (None)
32 RETURN_VALUE
All LOADs are namespace lookups.
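As a small follow-up sketch using the same X class: the method lives in the class dict, not the instance dict, and an instance attribute will shadow it (functions are non-data descriptors, so the instance dict wins):
x = X()
print 'method1' in x.__dict__   # False: methods live on the class
print 'method1' in X.__dict__   # True
x.method1 = lambda: 42          # shadow the class attribute in the instance dict
print x.method1()               # 42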
A:
It's true that the python method
resolution could be slower in Python
than in Java. What is late binding?
Late binding describes a strategy of how an interpreter or compiler of a particular language decides how to map an identifier to a piece of code. For example, consider writing obj.Foo() in C#. When you compile this, the compiler tries to find the referenced object and insert a reference to the location of the Foo method that will be invoked at runtime. All of this method resolution happens at compile time; we say that names are bound "early".
By contrast, Python binds names "late". Method resolution happens at run time: the interpreter simply tries to find the referenced Foo method with the right signature, and if it's not there, a runtime error occurs.
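A minimal sketch of the difference (Greeter is a made-up name): because resolution happens at call time, patching the class changes what an existing call site does:
class Greeter(object):
    def greet(self):
        return "original"

g = Greeter()
print g.greet()                         # "original"
Greeter.greet = lambda self: "patched"
print g.greet()                         # "patched": the same call site is re-resolved at run time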
What are the differences on the
reflection mechanism in these two
languages?
Dynamic languages tend to have better reflection facilities than static languages, and Python is very powerful in this respect. Still, Java has pretty extensive ways to get at the internals of classes and methods. Nevertheless, you can't get around the verbosity of Java; you'll write much more code to do the same thing in Java than you would in Python. See the java.lang.reflect API.
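For a flavour of what that looks like in Python (a sketch; the Widget class is hypothetical):
import inspect

class Widget(object):
    def render(self):
        return "ok"

w = Widget()
print getattr(w, 'render')()   # look the method up by name at run time: "ok"
print [name for name, _ in inspect.getmembers(w, inspect.ismethod)]   # ['render']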
|
How does method resolution and invocation work internally in Python?
|
How does method invocation work in Python?
I mean, how does the Python virtual machine interpret it?
It's true that method resolution could be slower in Python than in Java.
What is late binding?
What are the differences on the reflection mechanism in these two languages?
Where to find good resources explaining these aspects?
|
[
"Method invocation in Python consists of two distinct separable steps. First an attribute lookup is done, then the result of that lookup is invoked. This means that the following two snippets have the same semantics:\nfoo.bar()\n\nmethod = foo.bar\nmethod()\n\nAttribute lookup in Python is a rather complex process. Say we are looking up attribute named attr on object obj, meaning the following expression in Python code: obj.attr\nFirst obj's instance dictionary is searched for \"attr\", then the instance dictionary of the class of obj and the dictionaries of its parent classes are searched in method resolution order for \"attr\".\nNormally if a value is found on the instance, that is returned. But if the lookup on the class results in a value that has both the __get__ and __set__ methods (to be exact, if a dictionary lookup on the values class and parent classes has values for both those keys) then the class attribute is regarded as something called a \"data descriptor\". This means that the __get__ method on that value is called, passing in the object on which the lookup occurred and the result of that value is returned. If the class attribute isn't found or isn't a data descriptor the value from the instances dictionary is returned.\nIf there is no value in the instance dictionary, then the value from the class lookup is returned. Unless it happens to be a \"non-data descriptor\", i.e. it has the __get__ method. Then the __get__ method is invoked and the resulting value returned.\nThere is one more special case, if the obj happens to be a class, (an instance of the type type), then the instance value is also checked if it's a descriptor and invoked accordingly.\nIf no value is found on the instance nor its class hierarchy, and the obj's class has a __getattr__ method, that method is called.\nThe following shows the algorithm as encoded in Python, effectively doing what the getattr() function would do. (excluding any bugs that have slipped in)\nNotFound = object() # A singleton to signify not found values\n\ndef lookup_attribute(obj, attr):\n class_attr_value = lookup_attr_on_class(obj, attr)\n\n if is_data_descriptor(class_attr_value):\n return invoke_descriptor(class_attr_value, obj, obj.__class__)\n\n if attr in obj.__dict__:\n instance_attr_value = obj.__dict__[attr]\n if isinstance(obj, type) and is_descriptor(instance_attr_value):\n return invoke_descriptor(instance_attr_value, None, obj)\n return instance_attr_value\n\n if class_attr_value is NotFound:\n getattr_method = lookup_attr_on_class(obj, '__getattr__')\n if getattr_method is NotFound:\n raise AttributeError()\n return getattr_method(obj, attr)\n\n if is_descriptor(class_attr_value):\n return invoke_descriptor(class_attr_value, obj, obj.__class__)\n\n return class_attr_value\n\ndef lookup_attr_on_class(obj, attr):\n for parent_class in obj.__class__.__mro__:\n if attr in parent_class.__dict__:\n return parent_class.__dict__[attr]\n return NotFound\n\ndef is_descriptor(obj):\n if lookup_attr_on_class(obj, '__get__') is NotFound:\n return False\n return True\n\ndef is_data_descriptor(obj):\n if not is_descriptor(obj) or lookup_attr_on_class(obj, '__set__') is NotFound :\n return False\n return True\n\ndef invoke_descriptor(descriptor, obj, cls):\n descriptormethod = lookup_attr_on_class(descriptor, '__get__')\n return descriptormethod(descriptor, obj, cls)\n\nWhat does all this descriptor nonsense have to with method invocation you ask? 
Well the thing is, that functions are also objects, and they happen to implement the descriptor protocol. If the attribute lookup finds a function object on the class, it's __get__ methods gets called and returns a \"bound method\" object. A bound method is just a small wrapper around the function object that stores the object that the function was looked up on, and when invoked, prepends that object to the argument list (where usually for functions that are meant to methods the self argument is).\nHere's some illustrative code:\nclass Function(object):\n def __get__(self, obj, cls):\n return BoundMethod(obj, cls, self.func)\n # Init and call added so that it would work as a function\n # decorator if you'd like to experiment with it yourself\n def __init__(self, the_actual_implementation):\n self.func = the_actual_implementation\n def __call__(self, *args, **kwargs):\n return self.func(*args, **kwargs)\n\nclass BoundMethod(object):\n def __init__(self, obj, cls, func):\n self.obj, self.cls, self.func = obj, cls, func\n def __call__(self, *args, **kwargs):\n if self.obj is not None:\n return self.func(self.obj, *args, **kwargs)\n elif isinstance(args[0], self.cls):\n return self.func(*args, **kwargs)\n raise TypeError(\"Unbound method expects an instance of %s as first arg\" % self.cls)\n\nFor method resolution order (which in Python's case actually means attribute resolution order) Python uses the C3 algorithm from Dylan. It is too complicated to explain here, so if you're interested see this article. Unless you are doing some really funky inheritance hierarchies (and you shouldn't), it is enough to know that the lookup order is left to right, depth first, and all subclasses of a class are searched before that class is searched.\n",
"Names (methods, functions, variables) are all resolved by looking at the namespace. Namespaces are implemented in CPython as dicts (hash maps).\nWhen a name is not found in the instance namespace (dict), python goes for the class, and then for the base classes, following the method resolution order (MRO).\nAll resolving is made at runtime.\nYou can play around with the dis module to see how that happens in bytecode.\nSimple example:\nimport dis\na = 1\n\nclass X(object):\n def method1(self):\n return 15\n\ndef test_namespace(b=None):\n x = X()\n x.method1()\n print a\n print b\n\ndis.dis(test_namespace)\n\nThat prints:\n 9 0 LOAD_GLOBAL 0 (X)\n 3 CALL_FUNCTION 0\n 6 STORE_FAST 1 (x)\n\n 10 9 LOAD_FAST 1 (x)\n 12 LOAD_ATTR 1 (method1)\n 15 CALL_FUNCTION 0\n 18 POP_TOP \n\n 11 19 LOAD_GLOBAL 2 (a)\n 22 PRINT_ITEM \n 23 PRINT_NEWLINE \n\n 12 24 LOAD_FAST 0 (b)\n 27 PRINT_ITEM \n 28 PRINT_NEWLINE \n 29 LOAD_CONST 0 (None)\n 32 RETURN_VALUE \n\nAll LOADs are namespace lookups.\n",
"\nIt's true that the python method\n resolution could be slower in Python\n that in Java. What is late binding?\n\nLate binding describes a strategy of how an interpreter or compiler of a particular language decides how to map an identifier to a piece of code. For example, consider writing obj.Foo() in C#. When you compile this, the compiler tries to find the referenced object and insert a reference to the location of the Foo method that will be invoked at runtime. All of this method resolution happens at compile time; we say that names are bound \"early\".\nBy contrast, Python binds names \"late\". Method resolution happens at run time: the interpreter simply tries to find the referenced Foo method with the right signature, and if it's not there, a runtime error occurs.\n\nWhat are the differences on the\n reflection mechanism in these two\n languages?\n\nDynamic languages tend to have better reflection facilities than static languages, and Python is very powerful in this respect. Still, Java has pretty extensive ways to get at the internals of classes and methods. Nevertheless, you can't get around the verbosity of Java; you'll write much more code to do the same thing in Java than you would in Python. See the java.lang.reflect API.\n"
] |
[
8,
4,
1
] |
[] |
[] |
[
"java",
"python"
] |
stackoverflow_0000852308_java_python.txt
|
Q:
How to generate a file with DDL in the engine's SQL dialect in SQLAlchemy?
Suppose I have an engine pointing at MySQL database:
engine = create_engine('mysql://arthurdent:answer42@localhost/dtdb', echo=True)
I can populate dtdb with tables, FKs, etc by:
metadata.create_all(engine)
Is there an easy way to generate the SQL file that contains all the DDL statements instead of actually applying these DDL statements to dtdb?
So far I have resorted to capturing SQLAlchemy log output produced by echo=True, and editing it by hand. But that's just too painful.
It looks like SA has pretty elaborate schema management API, but I haven't seen examples of simply streaming the schema definitions as text.
A:
The quick answer is in the SQLAlchemy 0.8 FAQ.
In SQLAlchemy 0.8 you need to do
engine = create_engine(
'mssql+pyodbc://./MyDb',
strategy='mock',
executor= lambda sql, *multiparams, **params: print (sql.compile(dialect=engine.dialect)))
In SQLAlchemy 0.9 the syntax is simplified.
engine = create_engine(
'mssql+pyodbc://./MyDb',
strategy='mock',
executor= lambda sql, *multiparams, **params: print(sql))
The longer answer is that capturing the output still has some slight issues, such as the encoding of typed literals. But this hasn't been a big enough issue for anyone to step up and scratch their itch. You could always let SQLAlchemy programmatically create an empty database and dump the SQL from there.
The more difficult problem is the handling of schema migrations. This is where SQLAlchemy-migrate can help you.
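If you want an actual .sql file rather than console output, here is an untested sketch building on the mock strategy above (the schema.sql filename is arbitrary):
ddl = []

def dump(sql, *multiparams, **params):
    # collect each compiled DDL statement instead of executing it
    ddl.append(str(sql.compile(dialect=engine.dialect)) + ';')

engine = create_engine('mysql://arthurdent:answer42@localhost/dtdb',
                       strategy='mock', executor=dump)
metadata.create_all(engine)   # triggers dump() for every CREATE statement

with open('schema.sql', 'w') as f:
    f.write('\n'.join(ddl))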
|
How to generate a file with DDL in the engine's SQL dialect in SQLAlchemy?
|
Suppose I have an engine pointing at MySQL database:
engine = create_engine('mysql://arthurdent:answer42@localhost/dtdb', echo=True)
I can populate dtdb with tables, FKs, etc by:
metadata.create_all(engine)
Is there an easy way to generate the SQL file that contains all the DDL statements instead of actually applying these DDL statements to dtdb?
So far I have resorted to capturing SQLAlchemy log output produced by echo=True, and editing it by hand. But that's just too painful.
It looks like SA has pretty elaborate schema management API, but I haven't seen examples of simply streaming the schema definitions as text.
|
[
"The quick answer is in the SQLAlchemy 0.8 FAQ.\nIn SQLAlchemy 0.8 you need to do\nengine = create_engine(\n'mssql+pyodbc://./MyDb',\nstrategy='mock',\nexecutor= lambda sql, *multiparams, **params: print (sql.compile(dialect=engine.dialect)))\n\nIn SQLAlchemy 0.9 the syntax is simplified.\nengine = create_engine(\n'mssql+pyodbc://./MyDb',\nstrategy='mock',\nexecutor= lambda sql, *multiparams, **params: print (sql)\n\nThe longer answer is that capturing the output still has some slight issues. Like with the encoding of literals of types. But this hasn't been a big enough of an issue for anyone to step up and scratch their itch. You could always let SQLAlchemy programmatically create an empty database and dump the SQL from there.\nThe more difficult problem is the handling of schema migrations. This is where SQLAlchemy-migrate can help you.\n"
] |
[
15
] |
[] |
[] |
[
"python",
"sqlalchemy"
] |
stackoverflow_0000870925_python_sqlalchemy.txt
|
Q:
Python: Elegant way of dual/multiple iteration over the same list
I've written a bit of code like the following to compare items with other items further on in a list. Is there a more elegant pattern for this sort of dual iteration?
jump_item_iter = (j for j in items if some_cond)
try:
jump_item = jump_item_iter.next()
except StopIteration:
return
for item in items:
if jump_item is item:
try:
jump_item = jump_item_iter.next()
except StopIteration:
return
# do lots of stuff with item and jump_item
I don't think the "except StopIteration" is very elegant.
Edit:
To hopefully make it clearer, I want to visit each item in a list and pair it with the next item further on in the list (jump_item) which satisfies some_cond.
A:
As far as I can see, none of the existing solutions work on a general one-shot, possibly infinite iterator; all of them seem to require an iterable.
Here's a solution to that.
def batch_by(condition, seq):
it = iter(seq)
batch = [it.next()]
for jump_item in it:
if condition(jump_item):
for item in batch:
yield item, jump_item
batch = []
batch.append(jump_item)
This will easily work on infinite iterators:
from itertools import count, islice
is_prime = lambda n: n >= 2 and all(n % div for div in xrange(2, n))
print list(islice(batch_by(is_prime, count()), 100))
This will print the first 100 integers, each with the prime number that follows it.
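As a sanity check against the finite example used in other answers on this page (pairing each number with the next odd one):
print list(batch_by(lambda x: x % 2, range(10)))
# [(0, 1), (1, 3), (2, 3), (3, 5), (4, 5), (5, 7), (6, 7), (7, 9), (8, 9)]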
A:
I have no idea what compare() is doing, but 80% of the time, you can do this with a trivial dictionary or pair of dictionaries. Jumping around in a list is a kind of linear search. Linear Search -- to the extent possible -- should always be replaced with either a direct reference (i.e., a dict) or a tree search (using the bisect module).
A:
paired_values = []
for elmt in reversed(items):
if <condition>:
current_val = elmt
try:
paired_values.append(current_val)
except NameError: # for the last elements of items that don't pass the condition
pass
paired_values.reverse()
for (item, jump_item) in zip(items, paired_values): # zip() truncates to len(paired_values)
# do lots of stuff
If the first element of items matches, then it is used as a jump_item. This is the only difference with your original code (and you might want this behavior).
A:
The following iterator is time and memory-efficient:
def jump_items(items):
number_to_be_returned = 0
for elmt in items:
if <condition(elmt)>:
for i in range(number_to_be_returned):
yield elmt
number_to_be_returned = 1
else:
number_to_be_returned += 1
for (item, jump_item) in zip(items, jump_items(items)):
# do lots of stuff
Note that you may actually want to set the first number_to_be_returned to 1...
A:
Write a generator function:
def myIterator(someValue):
yield (someValue[0], someValue[1])
for element1, element2 in myIterator(array):
# do something with those elements.
A:
I have no idea what you're trying to do with that code. But I'm 99% certain that whatever it is could probably be done in 2 lines. I also get the feeling that the '==' operator should be an 'is' operator, otherwise what is the compare() function doing? And what happens if the item returned from the second jump_iter.next call also equals 'item'? It seems like the algorithm would do the wrong thing since you'll compare the second and not the first.
A:
You could put the whole iteration into a single try structure, that way it would be clearer:
jump_item_iter = (j for j in items if some_cond)
try:
    jump_item = jump_item_iter.next()
    for item in items:
        if jump_item is item:
            jump_item = jump_item_iter.next()

        # do lots of stuff with item and jump_item
except StopIteration:
    pass
A:
So you want to compare pairs of items in the same list, the second item of the pair having to meet some condition. Normally, when you want to compare pairs in a list use zip (or itertools.izip):
for item1, item2 in zip(items, items[1:]):
compare(item1, item2)
Figure out how to fit your some_cond in this :)
A:
Are you basically trying to compare every item in the iterator with every other item in the original list?
To my mind this should just be a case of using two loops, rather than trying to fit it into one.
filtered_items = (j for j in items if some_cond)
for filtered in filtered_items:
for item in items:
if filtered != item:
compare(filtered, item)
A:
Here is one simple solution that might look a little cleaner:
for i, item in enumerate(items):
for next_item in items[i+1:]:
if some_cond(next_item):
break
# do some stuff with both items
The disadvantage is that you check the condition for next_item multiple times. But you can easily optimize this:
cond_items = [item if some_cond(item) else None for item in items]
for i, item in enumerate(items):
for next_item in cond_items[i+1:]:
if next_item is not None:
break
# do some stuff with both items
However, both solutions carry more overhead than the original solution from the question. And when you start using counters to work around this then I think it is better to use the iterator interface directly (as in the original solution).
A:
If I understand correctly, you want the main for item in items: to "chase" after an iterator that filters out some items. Well, there's not much you can do, except maybe wrap this into a chase_iterator(iterable, some_cond) generator, which would make your main code a little more readable.
Maybe a more readable approach would be an "accumulator approach" (if the order of the compare() calls doesn't matter), like:
others = []
for item in items:
if some_cond(item):
for other in others:
compare(item, other)
others = []
else:
others.append(item)
A:
for i in range( 0, len( items ) ):
for j in range( i+1, len( items ) ):
if some_cond:
#do something
#items[i] = item, items[j] = jump_item
A:
With just iterators
def pairwise_jump(lst, some_cond):
    jump_item_iter = (j for j in lst if some_cond(j))
pairs = itertools.izip(lst, lst[1:])
for last in jump_item_iter:
for start, start_next in itertools.takewhile(lambda pair: pair[0] < last, pairs):
yield start, last
pairs = itertools.chain([(start_next, 'dummy')], pairs)
with the input: range(10) and some_cond = lambda x : x % 2
gives [(0, 1), (1, 3), (2, 3), (3, 5), (4, 5), (5, 7), (6, 7), (7, 9), (8, 9)]
(same as your example)
A:
Even better using itertools.groupby:
def h(lst, cond):
remain = lst
for last in (l for l in lst if cond(l)):
group = itertools.groupby(remain, key=lambda x: x < last)
for start in group.next()[1]:
yield start, last
remain = list(group.next()[1])
Usage:
lst = range(10)
cond = lambda x: x%2
print list(h(lst, cond))
will print
[(0, 1), (1, 3), (2, 3), (3, 5), (4, 5), (5, 7), (6, 7), (7, 9), (8, 9)]
A:
l = [j for j in items if some_cond]
for item, jump_item in zip(l, l[1:]):
# do lots of stuff with item and jump_item
If l = [j for j in range(10) if j%2 ==0] then the iteration is over: [(0, 2),(2, 4),(4, 6),(6, 8)].
A:
You could write your loop body as:
import itertools, functools, operator
for item in items:
jump_item_iter = itertools.dropwhile(functools.partial(operator.is_, item),
jump_item_iter)
# do something with item and jump_item_iter
dropwhile will return an iterator that skips over all those which match the condition (here "is item").
A:
You could do something like:
import itertools
def matcher(iterable, compare):
iterator= iter(iterable)
while True:
try: item= iterator.next()
except StopIteration: break
iterator, iterator2= itertools.tee(iterator)
for item2 in iterator2:
if compare(item, item2):
yield item, item2
but it's quite elaborate (and actually not very efficient), and it would be simpler if you just did a
items= list(iterable)
and then just write two loops over items.
Obviously, this won't work with infinite iterables, but your specification can only work on finite iterables.
|
Python: Elegant way of dual/multiple iteration over the same list
|
I've written a bit of code like the following to compare items with other items further on in a list. Is there a more elegant pattern for this sort of dual iteration?
jump_item_iter = (j for j in items if some_cond)
try:
jump_item = jump_item_iter.next()
except StopIteration:
return
for item in items:
if jump_item is item:
try:
jump_item = jump_item_iter.next()
except StopIteration:
return
# do lots of stuff with item and jump_item
I don't think the "except StopIteration" is very elegant.
Edit:
To hopefully make it clearer, I want to visit each item in a list and pair it with the next item further on in the list (jump_item) which satisfies some_cond.
|
[
"As far as I can see any of the existing solutions work on a general one shot, possiboly infinite iterator, all of them seem to require an iterable.\nHeres a solution to that.\ndef batch_by(condition, seq):\n it = iter(seq)\n batch = [it.next()]\n for jump_item in it:\n if condition(jump_item):\n for item in batch:\n yield item, jump_item\n batch = []\n batch.append(jump_item)\n\nThis will easily work on infinite iterators:\nfrom itertools import count, islice\nis_prime = lambda n: n == 2 or all(n % div for div in xrange(2,n))\nprint list(islice(batch_by(is_prime, count()), 100))\n\nThis will print first 100 integers with the prime number that follows them.\n",
"I have no idea what compare() is doing, but 80% of the time, you can do this with a trivial dictionary or pair of dictionaries. Jumping around in a list is a kind of linear search. Linear Search -- to the extent possible -- should always be replaced with either a direct reference (i.e., a dict) or a tree search (using the bisect module).\n",
"paired_values = []\nfor elmt in reversed(items):\n if <condition>:\n current_val = elmt\n try:\n paired_values.append(current_val)\n except NameError: # for the last elements of items that don't pass the condition\n pass\npaired_values.reverse()\n\nfor (item, jump_item) in zip(items, paired_values): # zip() truncates to len(paired_values)\n # do lots of stuff\n\nIf the first element of items matches, then it is used as a jump_item. This is the only difference with your original code (and you might want this behavior).\n",
"The following iterator is time and memory-efficient:\ndef jump_items(items):\n number_to_be_returned = 0\n for elmt in items:\n if <condition(elmt)>:\n for i in range(number_to_be_returned):\n yield elmt\n number_to_be_returned = 1\n else:\n number_to_be_returned += 1\n\nfor (item, jump_item) in zip(items, jump_items(items)):\n # do lots of stuff\n\nNote that you may actually want to set the first number_to_be_returned to 1...\n",
"Write a generator function:\ndef myIterator(someValue):\n yield (someValue[0], someValue[1])\n\nfor element1, element2 in myIterator(array):\n # do something with those elements.\n\n",
"I have no idea what you're trying to do with that code. But I'm 99% certain that whatever it is could probably be done in 2 lines. I also get the feeling that the '==' operator should be an 'is' operator, otherwise what is the compare() function doing? And what happens if the item returned from the second jump_iter.next call also equals 'item'? It seems like the algorithm would do the wrong thing since you'll compare the second and not the first.\n",
"You could put the whole iteration into a single try structure, that way it would be clearer:\njump_item_iter = (j for j in items if some_cond)\ntry:\n jump_item = jump_item_iter.next()\n for item in items:\n if jump_item is item:\n jump_item = jump_iter.next()\n\n # do lots of stuff with item and jump_item\n\n except StopIteration:\n pass\n\n",
"So you want to compare pairs of items in the same list, the second item of the pair having to meet some condition. Normally, when you want to compare pairs in a list use zip (or itertools.izip):\nfor item1, item2 in zip(items, items[1:]):\n compare(item1, item2)\n\nFigure out how to fit your some_cond in this :)\n",
"Are you basically trying to compare every item in the iterator with every other item in the original list?\nTo my mind this should just be a case of using two loops, rather than trying to fit it into one.\n\n\nfiltered_items = (j for j in items if some_cond)\nfor filtered in filtered_items:\n for item in items:\n if filtered != item:\n compare(filtered, item)\n\n\n",
"Here is one simple solution that might look a little cleaner: \nfor i, item in enumerate(items):\n for next_item in items[i+1:]:\n if some_cond(next_item):\n break\n # do some stuff with both items\n\nThe disadvantage is that you check the condition for next_item multiple times. But you can easily optimize this:\ncond_items = [item if some_cond(item) else None for item in items]\nfor i, item in enumerate(items):\n for next_item in cond_items[i+1:]:\n if next_item is not None:\n break\n # do some stuff with both items\n\nHowever, both solutions carry more overhead than the original solution from the question. And when you start using counters to work around this then I think it is better to use the iterator interface directly (as in the original solution).\n",
"If I understand correctly, you want the main for item in items: to \"chase\" after an iterator that filters out some items. Well, there's not much you can do, except maybe wrap this into a chase_iterator(iterable, some_cond) generator, which would make your main code a little more readable.\nMaybe that a more readable approach would be an \"accumulator approach\" (if the order of the compare() don't matter), like:\nothers = []\nfor item in items:\n if some_cond(item):\n for other in others:\n compare(item, other)\n others = []\n else:\n others.append(item)\n\n",
"for i in range( 0, len( items ) ):\n for j in range( i+1, len( items ) ):\n if some_cond:\n #do something\n #items[i] = item, items[j] = jump_item\n\n",
"With just iterators\ndef(lst, some_cond):\n jump_item_iter = (j for j in lst if som_cond(j))\n pairs = itertools.izip(lst, lst[1:])\n for last in jump_item_iter:\n for start, start_next in itertools.takewhile(lambda pair: pair[0] < last, pairs):\n yield start, last\n pairs = itertools.chain([(start_next, 'dummy')], pairs)\n\nwith the input: range(10) and some_cond = lambda x : x % 2\ngives [(0, 1), (1, 3), (2, 3), (3, 5), (4, 5), (5, 7), (6, 7), (7, 9), (8, 9)]\n(same that your example)\n",
"Even better using itertools.groupby:\ndef h(lst, cond):\n remain = lst\n for last in (l for l in lst if cond(l)):\n group = itertools.groupby(remain, key=lambda x: x < last)\n for start in group.next()[1]:\n yield start, last\n remain = list(group.next()[1])\n\nUsage:\nlst = range(10)\ncond = lambda x: x%2\nprint list(h(lst, cond))\nwill print \n[(0, 1), (1, 3), (2, 3), (3, 5), (4, 5), (5, 7), (6, 7), (7, 9), (8, 9)]\n\n",
"l = [j for j in items if some_cond]\nfor item, jump_item in zip(l, l[1:]):\n # do lots of stuff with item and jump_item\n\nIf l = [j for j in range(10) if j%2 ==0] then the iteration is over: [(0, 2),(2, 4),(4, 6),(6, 8)].\n",
"You could write your loop body as:\nimport itertools, functools, operator\n\nfor item in items:\n jump_item_iter = itertools.dropwhile(functools.partial(operator.is_, item), \n jump_item_iter)\n\n # do something with item and jump_item_iter\n\ndropwhile will return an iterator that skips over all those which match the condition (here \"is item\").\n",
"You could do something like:\nimport itertools\n\ndef matcher(iterable, compare):\n iterator= iter(iterable)\n while True:\n try: item= iterator.next()\n except StopIteration: break\n iterator, iterator2= itertools.tee(iterator)\n for item2 in iterator2:\n if compare(item, item2):\n yield item, item2\n\nbut it's quite elaborate (and actually not very efficient), and it would be simpler if you just did a\nitems= list(iterable)\n\nand then just write two loops over items.\nObviously, this won't work with infinite iterables, but your specification can only work on finite iterables.\n"
] |
[
4,
1,
1,
1,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"iterator",
"python"
] |
stackoverflow_0000867936_iterator_python.txt
|
Q:
"else" considered harmful in Python?
In an answer (by S.Lott) to a question about Python's try...else statement:
Actually, even on an if-statement, the
else: can be abused in truly terrible
ways creating bugs that are very hard
to find. [...]
Think twice about else:. It is
generally a problem. Avoid it except
in an if-statement and even then
consider documenting the else-
condition to make it explicit.
Is this a widely held opinion? Is else considered harmful?
Of course you can write confusing code with it but that's true of any other language construct. Even Python's for...else seems to me a very handy thing to have (less so for try...else).
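(For readers unfamiliar with the for...else idiom mentioned above, a minimal sketch with placeholder names haystack and needle: the else block runs only when the loop completes without hitting break.)
for item in haystack:
    if item == needle:
        break
else:
    raise ValueError("needle not found")   # reached only if no break occurred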
A:
S.Lott has obviously seen some bad code out there. Haven't we all? I do not consider else harmful, though I've seen it used to write bad code. In those cases, all the surrounding code has been bad as well, so why blame poor else?
A:
No it is not harmful, it is necessary.
There should always be a catch-all statement. All switches should have a default. All pattern matching in an ML language should have a default.
The argument that it is impossible to reason what is true after a series of if statements is a fact of life. The computer is the biggest finite state machine out there, and it is silly to enumerate every single possibility in every situation.
If you are really afraid that unknown errors go unnoticed in else statements, is it really that hard to raise an exception there?
A:
Saying that else is considered harmful is a bit like saying that variables or classes are harmful. Heck, it's even like saying that goto is harmful. Sure, things can be misused. But at some point, you just have to trust programmers to be adults and be smart enough not to.
What it comes down to is this: if you're willing to not use something because an answer on SO or a blog post or even a famous paper by Dijkstra told you not to, you need to consider if programming is the right profession for you.
A:
I wouldn't say it is harmful, but there are times when the else statement can get you into trouble. For instance, if you need to do some processing based on an input value and there are only two valid input values. Only checking for one could introduce a bug.
eg:
The only valid inputs are 1 and 2:
if(input == 1)
{
//do processing
...
}
else
{
//do processing
...
}
In this case, using the else would allow all values other than 1 to be processed when it should only be for values 1 and 2.
A:
To me, the whole concept of certain popular language constructs being inherently bad is just plain wrong. Even goto has its place. I've seen very readable, maintainable code by the likes of Walter Bright and Linus Torvalds that uses it. It's much better to just teach programmers that readability counts and to use common sense than to arbitrarily declare certain constructs "harmful".
A:
If you write:
if foo:
# ...
elif bar:
# ...
# ...
then the reader may be left wondering: what if neither foo nor bar is true? Perhaps you know, from your understanding of the code, that it must be the case that either foo or bar. I would prefer to see:
if foo:
# ...
else:
# at this point, we know that bar is true.
# ...
# ...
or:
if foo:
# ...
else:
assert bar
# ...
# ...
This makes it clear to the reader how you expect control to flow, without requiring the reader to have intimate knowledge of where foo and bar come from.
(in the original case, you could still write a comment explaining what is happening, but I think I would then wonder: "Why not just use an else: clause?")
I think the point is not that you shouldn't use else:; rather, that an else: clause can allow you to write unclear code and you should try to recognise when this happens and add a little comment to help out any readers.
Which is true about most things in programming languages, really :-)
A:
Else is most useful when documenting assumptions about the code. It ensures that you have thought through both sides of an if statement.
Always using an else clause with each if statement is even a recommended practice in "Code Complete".
A:
Au contraire... In my opinion, there MUST be an else for every if. Granted, you can do stupid things, but you can abuse any construct if you try hard enough. You know the saying "a real programmer can write FORTRAN in every language".
What I do a lot of the time is write the else part as a comment, describing why there's nothing to be done.
A:
The rationale behind including the else statement (of try...else) in Python in the first place was to only catch the exceptions you really want to. Normally when you have a try...except block, there's some code that might raise an exception, and then there's some more code that should only run if the previous code was successful. Without an else block, you'd have to put all that code in the try block:
try:
something_that_might_raise_error()
do_this_only_if_that_was_ok()
except ValueError:
# whatever
The issue is, what if do_this_only_if_that_was_ok() raises a ValueError? It would get caught by the except statement, when you might not have wanted it to. That's the purpose of the else block:
try:
something_that_might_raise_error()
except ValueError:
# whatever
else:
do_this_only_if_that_was_ok()
I guess it's a matter of opinion to some extent, but I personally think this is a great idea, even though I use it very rarely. When I do use it, it just feels very appropriate (and besides, I think it helps clarify the code flow a bit)
A:
Seems to me that, for any language and any flow-control statement where there is a default scenario or side-effect, that scenario needs to have the same level of consideration. The logic in if or switch or while is only as good as the condition if(x) while(x) or for(...). Therefore the statement is not harmful but the logic in their condition is.
Therefore, as developers it is our responsibility to code with the wide scope of the else in mind. Too many developers treat it as an 'if not the above' when in fact it can ignore all common sense, because the only logic in it is the negation of the preceding logic, which is often incomplete. (an algorithm design error itself)
I don't then consider 'else' any more harmful than off-by-ones in a for() loop or bad memory management. It's all about the algorithms. If your automata is complete in its scope and possible branches, and all are concrete and understood then there is no danger. The danger is misuse of the logic behind the expressions by people not realizing the impact of wide-scope logic. Computers are stupid, they do what they are told by their operator(in theory)
I do consider try and catch to be dangerous because it can delegate handling to an unknown quantity of code. Branching above the raise may contain a bug, highlighted by the raise itself. This can be non-obvious. It is like turning a sequential set of instructions into a tree or graph of error handling, where each component is dependent on the branches in the parent. Odd. Mind you, I love C.
A:
There is a so called "dangling else" problem which is encountered in C family languages as follows:
if (a==4)
if (b==2)
printf("here!");
else
printf("which one");
This innocent code can be understood in two ways:
if (a==4)
if (b==2)
printf("here!");
else
printf("which one");
or
if (a==4)
if (b==2)
printf("here!");
else
printf("which one");
The problem is that the "else" is "dangling": one can easily confuse which if owns the else. Of course the compiler will not make this confusion, but the ambiguity is real for mortals.
Thanks to Python's significant indentation, we cannot have a dangling else problem in Python, since we have to write either
if a==4:
if b==2:
print "here!"
else:
print "which one"
or
if a==4:
if b==2:
print "here!"
else:
print "which one"
So that human eye catches it. And, nope, I do not think "else" is harmful, it is as harmful as "if".
A:
In the example posited as being hard to reason about, the conditions can be written explicitly, but the else is still necessary.
E.g.
if a < 10:
# condition stated explicitly
elif a > 10 and b < 10:
# condition confusing but at least explicit
else:
# Exactly what is true here?
# Can be hard to reason out what condition is true
Can be written
if a < 10:
# condition stated explicitly
elif a > 10 and b < 10:
# condition confusing but at least explicit
elif a > 10 and b >=10:
# else condition
else:
# Handle edge case with error?
A:
I think the point with respect to try...except...else is that it is an easy mistake to use it to create inconsistent state rather than fix it. It is not that it should be avoided at all costs, but it can be counter-productive.
Consider:
try:
file = open('somefile','r')
except IOError:
logger.error("File not found!")
else:
# Some file operations
file.close()
# Some code that no longer explicitly references 'file'
It would be real nice to say that the above block prevented code from trying to access a file that didn't exist, or a directory for which the user has no permissions, and to say that everything is encapsulated because it is within a try...except...else block. But in reality, a lot of code in the above form really should look like this:
try:
file = open('somefile','r')
except IOError:
logger.error("File not found!")
return False
# Some file operations
file.close()
# Some code that no longer explicitly references 'file'
You are often fooling yourself by saying that because file is no longer referenced in scope, it's okay to go on coding after the block, but in many cases something will come up where it just isn't okay. Or maybe a variable will later be created within the else block that isn't created in the except block.
This is how I would differentiate the if...else from try...except...else. In both cases, one must make the blocks parallel in most cases (variables and state set in one ought to be set in the other) but in the latter, coders often don't, likely because it's impossible or irrelevant. In such cases, it often will make a whole lot more sense to return to the caller than to try and keep working around what you think you will have in the best case scenario.
|
"else" considered harmful in Python?
|
In an answer (by S.Lott) to a question about Python's try...else statement:
Actually, even on an if-statement, the
else: can be abused in truly terrible
ways creating bugs that are very hard
to find. [...]
Think twice about else:. It is
generally a problem. Avoid it except
in an if-statement and even then
consider documenting the else-
condition to make it explicit.
Is this a widely held opinion? Is else considered harmful?
Of course you can write confusing code with it but that's true of any other language construct. Even Python's for...else seems to me a very handy thing to have (less so for try...else).
|
[
"S.Lott has obviously seen some bad code out there. Haven't we all? I do not consider else harmful, though I've seen it used to write bad code. In those cases, all the surrounding code has been bad as well, so why blame poor else?\n",
"No it is not harmful, it is necessary.\nThere should always be a catch-all statement. All switches should have a default. All pattern matching in an ML language should have a default.\nThe argument that it is impossible to reason what is true after a series of if statements is a fact of life. The computer is the biggest finite state machine out there, and it is silly to enumerate every single possibility in every situation.\nIf you are really afraid that unknown errors go unnoticed in else statements, is it really that hard to raise an exception there?\n",
"Saying that else is considered harmful is a bit like saying that variables or classes are harmful. Heck, it's even like saying that goto is harmful. Sure, things can be misused. But at some point, you just have to trust programmers to be adults and be smart enough not to.\nWhat it comes down to is this: if you're willing to not use something because an answer on SO or a blog post or even a famous paper by Dijkstra told you not to, you need to consider if programming is the right profession for you.\n",
"I wouldn't say it is harmful, but there are times when the else statement can get you into trouble. For instance, if you need to do some processing based on an input value and there are only two valid input values. Only checking for one could introduce a bug. \neg:\nThe only valid inputs are 1 and 2:\n\nif(input == 1)\n{\n //do processing\n ...\n}\nelse\n{\n //do processing \n ...\n}\n\nIn this case, using the else would allow all values other than 1 to be processed when it should only be for values 1 and 2.\n",
"To me, the whole concept of certain popular language constructs being inherently bad is just plain wrong. Even goto has its place. I've seen very readable, maintainable code by the likes of Walter Bright and Linus Torvalds that uses it. It's much better to just teach programmers that readability counts and to use common sense than to arbitrarily declare certain constructs \"harmful\".\n",
"If you write:\nif foo:\n # ...\nelif bar:\n # ...\n# ...\n\nthen the reader may be left wondering: what if neither foo nor bar is true? Perhaps you know, from your understanding of the code, that it must be the case that either foo or bar. I would prefer to see:\nif foo:\n # ...\nelse:\n # at this point, we know that bar is true.\n # ...\n# ...\n\nor:\nif foo:\n # ...\nelse:\n assert bar\n # ...\n# ...\n\nThis makes it clear to the reader how you expect control to flow, without requiring the reader to have intimate knowledge of where foo and bar come from.\n(in the original case, you could still write a comment explaining what is happening, but I think I would then wonder: \"Why not just use an else: clause?\")\nI think the point is not that you shouldn't use else:; rather, that an else: clause can allow you to write unclear code and you should try to recognise when this happens and add a little comment to help out any readers.\nWhich is true about most things in programming languages, really :-)\n",
"Else is most useful when documenting assumptions about the code. It ensures that you have thought through both sides of an if statement.\nAlways using an else clause with each if statement is even a recommended practice in \"Code Complete\".\n",
"Au contraire... In my opinion, there MUST be an else for every if. Granted, you can do stupid things, but you can abuse any construct if you try hard enough. You know the saying \"a real programer can write FORTRAN in every language\".\nWhat I do lots of time is to write the else part as a comment, describing why there's nothing to be done.\n",
"The rationale behind including the else statement (of try...else) in Python in the first place was to only catch the exceptions you really want to. Normally when you have a try...except block, there's some code that might raise an exception, and then there's some more code that should only run if the previous code was successful. Without an else block, you'd have to put all that code in the try block:\ntry:\n something_that_might_raise_error()\n do_this_only_if_that_was_ok()\nexcept ValueError:\n # whatever\n\nThe issue is, what if do_this_only_if_that_was_ok() raises a ValueError? It would get caught by the except statement, when you might not have wanted it to. That's the purpose of the else block:\ntry:\n something_that_might_raise_error()\nexcept ValueError:\n # whatever\nelse:\n do_this_only_if_that_was_ok()\n\nI guess it's a matter of opinion to some extent, but I personally think this is a great idea, even though I use it very rarely. When I do use it, it just feels very appropriate (and besides, I think it helps clarify the code flow a bit)\n",
"Seems to me that, for any language and any flow-control statement where there is a default scenario or side-effect, that scenario needs to have the same level of consideration. The logic in if or switch or while is only as good as the condition if(x) while(x) or for(...). Therefore the statement is not harmful but the logic in their condition is.\nTherefore, as developers it is our responsibility to code with the wide scope of the else in-mind. Too many developers treat it as a 'if not the above' when in-fact it can ignore all common sense because the only logic in it is the negation of the preceding logic, which is often incomplete. (an algorithm design error itself)\nI don't then consider 'else' any more harmful than off-by-ones in a for() loop or bad memory management. It's all about the algorithms. If your automata is complete in its scope and possible branches, and all are concrete and understood then there is no danger. The danger is misuse of the logic behind the expressions by people not realizing the impact of wide-scope logic. Computers are stupid, they do what they are told by their operator(in theory)\nI do consider the try and catch to be dangerous because it can negate handling to an unknown quantity of code. Branching above the raise may contain a bug, highlighted by the raise itself. This is can be non-obvious. It is like turning a sequential set of instructions into a tree or graph of error handling, where each component is dependent on the branches in the parent. Odd. Mind you, I love C.\n",
"There is a so called \"dangling else\" problem which is encountered in C family languages as follows:\nif (a==4)\nif (b==2)\nprintf(\"here!\");\nelse\nprintf(\"which one\");\n\nThis innocent code can be understood in two ways:\nif (a==4)\n if (b==2)\n printf(\"here!\");\n else\n printf(\"which one\");\n\nor\nif (a==4)\n if (b==2)\n printf(\"here!\");\nelse\n printf(\"which one\");\n\nThe problem is that the \"else\" is \"dangling\", one can confuse the owner of the else. Of course the compiler will not make this confusion, but it is valid for mortals.\nThanks to Python, we can not have a dangling else problem in Python since we have to write either\nif a==4:\n if b==2:\n print \"here!\"\nelse:\n print \"which one\"\n\nor\nif a==4:\n if b==2:\n print \"here!\"\n else:\n print \"which one\"\n\nSo that human eye catches it. And, nope, I do not think \"else\" is harmful, it is as harmful as \"if\".\n",
"In the example posited of being hard to reason, it can be written explicitly, but the else is still necessary.\nE.g. \nif a < 10: \n # condition stated explicitly \nelif a > 10 and b < 10: \n # condition confusing but at least explicit \nelse: \n # Exactly what is true here? \n # Can be hard to reason out what condition is true\n\nCan be written\nif a < 10: \n # condition stated explicitly \nelif a > 10 and b < 10: \n # condition confusing but at least explicit \nelif a > 10 and b >=10:\n # else condition\nelse: \n # Handle edge case with error?\n\n",
"I think the point with respect to try...except...else is that it is an easy mistake to use it to create inconsistent state rather than fix it. It is not that it should be avoided at all costs, but it can be counter-productive.\nConsider:\ntry:\n file = open('somefile','r')\nexcept IOError:\n logger.error(\"File not found!\")\nelse:\n # Some file operations\n file.close()\n# Some code that no longer explicitly references 'file'\n\nIt would be real nice to say that the above block prevented code from trying to access a file that didn't exist, or a directory for which the user has no permissions, and to say that everything is encapsulated because it is within a try...except...else block. But in reality, a lot of code in the above form really should look like this:\ntry:\n file = open('somefile','r')\nexcept IOError:\n logger.error(\"File not found!\")\n return False\n# Some file operations\nfile.close()\n# Some code that no longer explicitly references 'file'\n\nYou are often fooling yourself by saying that because file is no longer referenced in scope, it's okay to go on coding after the block, but in many cases something will come up where it just isn't okay. Or maybe a variable will later be created within the else block that isn't created in the except block.\nThis is how I would differentiate the if...else from try...except...else. In both cases, one must make the blocks parallel in most cases (variables and state set in one ought to be set in the other) but in the latter, coders often don't, likely because it's impossible or irrelevant. In such cases, it often will make a whole lot more sense to return to the caller than to try and keep working around what you think you will have in the best case scenario.\n"
] |
[
31,
15,
7,
7,
6,
4,
3,
3,
2,
1,
1,
0,
0
] |
[] |
[] |
[
"if_statement",
"python"
] |
stackoverflow_0000865741_if_statement_python.txt
|
Q:
Production ready Python implementations besides CPython?
Except for CPython, which other Python implementations are currently usable for production systems?
The questions
What are the pros and cons of the various Python implementations?
I have been trying to wrap my head around the PyPy project. So, fast-forward 5-10 years in the future what will PyPy have to offer over CPython, Jython, and IronPython? and
Migrating from CPython to Jython
already shed some light on the pros/cons on the topic. I am wondering now, if those more exotic implementations are actually used in systems that have to run reliably. (possible examples? open-source?)
EDIT: I'm asking for code that needs the Python version >= 2.5
A:
CPython
Used in many, many products and production systems
Jython
I am aware of production systems and products (a transactional integration engine) based on Jython. In the latter case the product has been on the market since the early 2000's. Jython is a bit stagnant (although it seems to have picked up a bit lately) but it is mature and stable.
IronPython
This is the new kid on the block, although it does have some track record in products. It (particularly version 1.x) can be viewed as stable and ready for production use, and development is officially funded by Microsoft, who appear to have an interest in dynamic languages on top of the CLR. It is the greenest of the major python implementations, but appears to be reasonably stable.
Stackless Python
This is used extensively in EVE Online, and they seem to view it as production ready. Bear in mind that Stackless Python has been around for something like 10 years.
A:
At least one product, Resolver One, is said to be production-level and is totally based on IronPython.
Resolver One is a program that blends a familiar spreadsheet-like interface with the powerful Python programming language, giving you a tool with which to better analyse and present your data.
A:
I know that Jython is pretty mature and has been around for a long time.
Also, I'd take a look at Stackless python
A:
You can check http://www.portablepython.com/ which is the portable version of CPython. It is also bundled with very common and useful libraries and even an IDE, all portable.
There was Pyrex, which can be found at http://www.cosc.canterbury.ac.nz/greg.ewing/python/Pyrex/. It is not Python, but very close. The Cython (not CPython) is based on Pyrex and can be found at http://www.cython.org/. They are both useful for creating C extensions for Python. Their languages are so Pythonic.
|
Production ready Python implementations besides CPython?
|
Except for CPython, which other Python implementations are currently usable for production systems?
The questions
What are the pros and cons of the various Python implementations?
I have been trying to wrap my head around the PyPy project. So, fast-forward 5-10 years in the future what will PyPy have to offer over CPython, Jython, and IronPython? and
Migrating from CPython to Jython
already shed some light on the pros/cons on the topic. I am wondering now, if those more exotic implementations are actually used in systems that have to run reliably. (possible examples? open-source?)
EDIT: I'm asking for code that needs the Python version >= 2.5
|
[
"CPython\nUsed in many, many products and production systems\nJython\nI am aware of production systems and products (a transactional integration engine) based on Jython. In the latter case the product has been on the market since the early 2000's. Jython is a bit stagnant (although it seems to have picked up a bit lately) but it is mature and stable.\nIronPython\nThis is the new kid on the block, although it does have some track record in products. It (particularly version 1.x) can be viewed as stable and ready for production use, and development is officially funded by Microsoft, who appear to have an interest in dynamic languages on top of the CLR. It is the greenest of the major python implementations, but appears to be reasonably stable.\nStackless Python\nThis is used extensively in EVE Online, and they seem to view it as production ready. Bear in mind that Stackless Python has been around for something like 10 years.\n",
"At least one product, Resolver One, is said to be production-level and is totally based on IronPython.\n\nResolver One is a program that blends a familiar spreadsheet-like interface with the powerful Python programming language, giving you a tool with which to better analyse and present your data.\n\n",
"I know that Jython is pretty mature and has been around for a long time.\nAlso, I'd take a look at Stackless python \n",
"You can check http://www.portablepython.com/ which is the portable version of CPython. It is also bundled with very common and useful libraries and even an IDE, all portable.\nThere was Pyrex, which can be found at http://www.cosc.canterbury.ac.nz/greg.ewing/python/Pyrex/. It is not Python, but very close. The Cython (not CPython) is based on Pyrex and can be found at http://www.cython.org/. They are both useful for creating C extensions for Python. Their languages are so Pythonic.\n"
] |
[
10,
3,
0,
0
] |
[] |
[] |
[
"cpython",
"ironpython",
"jython",
"pypy",
"python"
] |
stackoverflow_0000852049_cpython_ironpython_jython_pypy_python.txt
|
Q:
XPath - How can I query for a parent node satisfying an attribute presence condition?
I need to query a node to determine if it has a parent node that contains a specified attribute. For instance:
<a b="value">
<b/>
</a>
From b as my focus element, I'd like to execute an XPath query:
..[@b]
that would return element a. The returned element must be the parent node of a, and should not contain any of a's siblings.
The lxml.etree library states that this is an invalid XPath expression.
A:
You can't combine the . or .. shorthands with a predicate. Instead, you'll need to use the full parent:: axis. The following should work for you:
parent::*[@b]
This will select the parent node (regardless of its local name), IFF it has a "b" attribute.
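For anyone checking this interactively, a quick sketch with lxml, assuming the two-element document from the question:
from lxml import etree

root = etree.fromstring('<a b="value"><b/></a>')
b = root.find('b')
print b.xpath('parent::*[@b]')   # [<Element a at ...>] -- the parent matches
print b.xpath('parent::*[@c]')   # [] -- empty list when the attribute is absent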
A:
I don't know about the lxml.etree library but ..[@b] is fully valid XPath (Update: see Ben Blank's comment). Identical to parent::a[@b], it will return context at the a element.
|
XPath - How can I query for a parent node satisfying an attribute presence condition?
|
I need to query a node to determine if it has a parent node that contains a specified attribute. For instance:
<a b="value">
<b/>
</a>
From b as my focus element, I'd like to execute an XPath query:
..[@b]
that would return element a. The returned element must be the parent node of a, and should not contain any of a's siblings.
The lxml.etree library states that this is an invalid XPath expression.
|
[
"You can't combine the . or .. shorthands with a predicate. Instead, you'll need to use the full parent:: axis. The following should work for you:\nparent::*[@b]\n\nThis will select the parent node (regardless of its local name), IFF it has a \"b\" attribute.\n",
"I don't know about the lxml.etree library but ..[@b] is fully valid XPath (Update: see Ben Blank's comment). Identical to parent::a[@b], it will return context at the a element.\n"
] |
[
4,
1
] |
[] |
[] |
[
"python",
"xml",
"xpath"
] |
stackoverflow_0000871188_python_xml_xpath.txt
|
Q:
Indentation in a Python GUI
As I write code in Python and suddenly feel like adding a new block in front of the code I have already written... the indentation of the complete code is affected...
It is a very tedious process to move to each line and change the indentation...is there a way to do auto indent or something?
For example:
def somefunction:
    x =5
    return x
If I want to add a control block
For example:
def somefunction:
    if True:
        x =5
        return x
    return 0
this small change of adding a control block took a lot of tab work...
Is there a shortcut or something to do this easily?
A:
I don't know what wacky planets everyone is coming from, but in most editors that don't date back to the stone age, indenting blocks of code typically only requires that a block of text be selected and Tab be pressed. On the flip side, Shift+Tab usually UNdents the block.
This is true for Visual Studio, Notepad2, e, Textmate, Slickedit, #Develop, etc. etc. etc.
If you're not doing large multi-file projects, I strongly recommend Notepad2. It's a very lightweight, free, easy-to-use notepad replacement with just enough code-centric features (line numbers, indentation guides, code highlighting, etc.)
A:
In the Idle editor, you can just select the lines you want to indent and hit Tab.
I should note that this doesn't actually insert any tabs into your source, just spaces.
A:
In IDLE I just use ctrl+] and ctrl+[ on a block of code.
A:
With emacs there's Python mode. In that mode you highlight and do:
ctrl-c >
ctrl-c <
A:
Use VI and never program the same again. :^)
A:
[Funny ;-)] Dude, I told you that you would need one developer less if you had this new keyboard model
Pythonic keyboard http://img22.imageshack.us/img22/7318/pythonkeyboard.jpg
A:
Vim: switch to visual mode, select the block, use > to indent (or < to unindent).
See also: Indent multiple lines quickly in vi
A:
If you are using vim there is a plugin specifically for this: Python_fn.vim
It provides useful python functions (and menu equivalents):
]t -- Jump to beginning of block
]e -- Jump to end of block
]v -- Select (Visual Line Mode) block
]< -- Shift block to left
]> -- Shift block to right
]# -- Comment selection
]u -- Uncomment selection
]c -- Select current/previous class
]d -- Select current/previous function
]<up> -- Jump to previous line with the same/lower indentation
]<down> -- Jump to next line with the same/lower indentation
A:
In TextMate, just highlight the lines you want to indent and use:
⌘ + [
or
⌘ + ]
To move the text in the appropriate direction.
A:
PyDev, which you can find at http://pydev.sourceforge.net/ has a "Code Formatter". It also has autoindent feature. It is a plugin for Eclipse which is freely available for Mac too.
Another option would be http://code.google.com/p/macvim/ if you are familiar or invest time for Vim, which has lots of autoindent features not just for Python.
But, do not forget that, in Python, indentation changes the meaning of the program unlike C family languages. For example for C or C#, a utility program can beautify the code according to the "{" and "}" symbols. But, in Python that would be ambiguous since a program can not format the following:
#Say we wrote the following and expect it to be formatted.
a = 1
for i in range(5):
print i
a = a + i
print a
Do you expect it to be
a = 1
for i in range(5):
print i
a = a + i
print a #Will print 5
or
a = 1
for i in range(5):
print i
a = a + i
print a #Will print 11
which are two different snippets.
A:
In Komodo the Tab and Shift Tab both work as expected to indent and unindent large blocks of code.
A:
In vim, you can enter:
>>
to indent a line. If you enter:
5>>
you indent the 5 lines at and below the cursor. 5<< does the reverse.
|
Indentation in a Python GUI
|
As I write code in Python and suddenly feel like adding a new block in front of the code I have already written... the indentation of the complete code is affected...
It is a very tedious process to move to each line and change the indentation...is there a way to do auto indent or something?
For example:
def somefunction:
x =5
return x
If I want to add a control block
For example:
def somefunction:
if True:
x =5
return x
return 0
this small change of adding a control block took a lot of tab work...
Is there a shortcut or something to do this easily?
|
[
"I don't know what wacky planets everyone is coming from, but in most editors that don't date back to the stone age, indenting blocks of code typically only requires that a block of text be selected and Tab be pressed. On the flip side, Shift+Tab usually UNdents the block.\nThis is true for Visual Studio, Notepad2, e, Textmate, Slickedit, #Develop, etc. etc. etc.\nIf you're not doing large multi-file projects, I strongly recommend Notepad2. Its a very lightweight, free, easy-to-use notepad replacement with just enough code-centric features (line numbers, indentation guides, code highlighting, etc.)\n",
"In the Idle editor, you can just select the lines you want to indent and hit Tab.\nI should note that this doesn't actually insert any tabs into your source, just spaces.\n",
"In IDLE I just use ctrl+] and ctrl+[ on a block of code.\n",
"With emacs there's Python mode. In that mode you highlight and do:\nctrl-c >\nctrl-c <\n\n",
"Use VI and never program the same again. :^)\n",
"[Funny ;-)] Dude, I told you that you would need one developer less if you had this new keyboard model\nPythonic keyboard http://img22.imageshack.us/img22/7318/pythonkeyboard.jpg\n",
"Vim: switch to visual mode, select the block, use > to indent (or < to unindent).\nSee also: Indent multiple lines quickly in vi\n",
"If you are using vim there is a plugin specifically for this: Python_fn.vim \nIt provides useful python functions (and menu equivalents):\n]t -- Jump to beginning of block\n]e -- Jump to end of block\n]v -- Select (Visual Line Mode) block\n]< -- Shift block to left\n]> -- Shift block to right\n]# -- Comment selection\n]u -- Uncomment selection\n]c -- Select current/previous class\n]d -- Select current/previous function\n]<up> -- Jump to previous line with the same/lower indentation\n]<down> -- Jump to next line with the same/lower indentation\n\n",
"In TextMate, just highlight the lines you want to indent and use:\n⌘ + [ \nor \n⌘ + ] \nTo move the text in the appropriate direction.\n",
"PyDev, which you can find at http://pydev.sourceforge.net/ has a \"Code Formatter\". It also has autoindent feature. It is a plugin for Eclipse which is freely available for Mac too.\nAnother option would be http://code.google.com/p/macvim/ if you are familiar or invest time for Vim, which has lots of autoindent features not just for Python.\nBut, do not forget that, in Python, indentation changes the meaning of the program unlike C family languages. For example for C or C#, a utility program can beautify the code according to the \"{\" and \"}\" symbols. But, in Python that would be ambiguous since a program can not format the following:\n#Say we wrote the following and expect it to be formatted.\na = 1\nfor i in range(5):\nprint i\na = a + i\nprint a\n\nDo you expect it to be\na = 1\nfor i in range(5):\n print i\na = a + i\nprint a #Will print 5\n\nor\na = 1\nfor i in range(5):\n print i\n a = a + i\nprint a #Will print 11\n\nwhich are two different snippets.\n",
"In Komodo the Tab and Shift Tab both work as expected to indent and unindent large blocks of code.\n",
"In vim, you can enter:\n>>\nto indent a line. If you enter:\n5>>\nyou indent the 5 lines at and below the cursor. 5<< does the reverse.\n"
] |
[
5,
3,
2,
2,
1,
1,
1,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"indentation",
"python",
"user_interface"
] |
stackoverflow_0000869975_indentation_python_user_interface.txt
|
Q:
Can I segment a document in BeautifulSoup before converting it to text based on my analysis of the document?
I have some html files that I want to convert to text. I have played around with BeautifulSoup and made some progress on understanding how to use the instructions and can submit html and get back text.
However, my files have a lot of text that is formatted using table structures. For example I might have a paragraph of text that resides in a td tag within set of table tags
<table>
<td> here is some really useful information and there might be other markup tags but
this information is really textual in my eyes-I want to preserve it
</td>
</table>
And then there are the 'classic tables' that have data within the body of the table.
I want to be able to apply an algorithm to the table and set some rules that determine whether the table is ripped out before I convert the document to text.
I have figured out how to get the characteristics of my tables- for example to get the number of cols in each table:
numbCols=[]
for table in soup.findAll('table'):
    rows=[]
    for row in table.findAll('tr'):
        columns=0
        for column in row.findAll('td'):
            columns+=1
        rows.append(columns)
    numbCols.append(rows)
so I can operate on numbCols and use the len of each item in the list and the values in each item in the list to analyze the characteristics of my tables and identify the ones I want to keep or discard.
I am not seeing an elegant way to use this information with BeautifulSoup to get the text. I guess what I am trying to get at is: suppose I analyze numbCols and decide that of the ten tables in a particular document I want to exclude tables 2, 4, 6, & 9, so that the part of the html document I keep includes everything but those tables. How can I segment my soup that way?
The solution I have come up with is to first identify the position of each of the open and close table tags using finditer, get the spans, and then zip the spans with numbCols. I can then use this list to snip and join the pieces of my string together. Once this is finished I can then use BeautifulSoup to convert the html to text.
I feel sure that I should be able to do all of this in BeautifulSoup. Any suggestions or links to existing examples would be great. I should mention that my source files can be large and I have thousands to handle.
Didn't have the answer but I am getting closer
A:
Man I love this stuff
Assuming in a naive case that I want to delete all of the tables that have any rows with a column length greater than 3 My answer is
for table in soup.findAll('table'):
    rows=[]
    for row in table.findAll('tr'):
        columns=0
        for column in row.findAll('td'):
            columns+=1
        rows.append(columns)
    if rows and max(rows)>3:
        table.extract()   # BeautifulSoup removes a node with extract(), not delete()
You can do any processing you want at any level in that loop, it is only necessary to identify the test and get the right instance to test.
|
Can I segment a document in BeautifulSoup before converting it to text based on my analysis of the document?
|
I have some html files that I want to convert to text. I have played around with BeautifulSoup and made some progress on understanding how to use the instructions and can submit html and get back text.
However, my files have a lot of text that is formatted using table structures. For example I might have a paragraph of text that resides in a td tag within set of table tags
<table>
<td> here is some really useful information and there might be other markup tags but
this information is really textual in my eyes-I want to preserve it
</td>
</table>
And then there are the 'classic tables' that have data within the body of the table.
I want to be able to apply an algorithm to the table and set some rules that determine whether the table is ripped out before I convert the document to text.
I have figured out how to get the characteristics of my tables- for example to get the number of cols in each table:
numbCols=[]
for table in soup.findAll('table'):
rows=[]
for row in table.findAll('tr'):
columns=0
for column in row.findAll('td'):
columns+=1
rows.append(columns)
numbCols.append(rows)
so I can operate on numbCols and use the len of each item in the list and the values in each item in the list to analyze the characteristics of my tables and identify the ones I want to keep or discard.
I am not seeing an elegant way to use this information with BeautifulSoup to get the text. I guess what I am trying to get at is: suppose I analyze numbCols and decide that of the ten tables in a particular document I want to exclude tables 2, 4, 6, & 9, so that the part of the html document I keep includes everything but those tables. How can I segment my soup that way?
The solution I have come up with is first identify the position of each of the open and close table tags using finditer and getting the spans and then zipping the spans with the numbCols. I can then use this list to snip and join the pieces of my string together. Once this is finished I can then use BeautifulSoup to convert the html to text.
I feel sure that I should be able to do all of this in BeautifulSoup. Any suggestions or links to existing examples would be great. I should mention that my source files can be large and I have thousands to handle.
Didn't have the answer but I am getting closer
|
[
"Man I love this stuff\nAssuming in a naive case that I want to delete all of the tables that have any rows with a column length greater than 3 My answer is \nfor table in soup.findAll('table'):\n rows=[]\n for row in table.findAll('tr'):\n columns=0\n for column in row.findAll('td'):\n columns+=1\n rows.append(columns)\n if max(rows)>3:\n table.delete()\n\nYou can do any processing you want at any level in that loop, it is only necessary to identify the test and get the right instance to test.\n"
] |
[
0
] |
[] |
[] |
[
"beautifulsoup",
"python"
] |
stackoverflow_0000866772_beautifulsoup_python.txt
|
Q:
Simple List of All Java Standard Classes and Methods?
I'm building a very simple Java parser, to look for some specific usage models. This is in no way lex/yacc or any other form of interpreter/compiler for purposes of running the code.
When I encounter a word or a set of two words separated by a dot ("word.word"), I would like to know if that's a standard Java class (and method), e.g. "Integer", or some user defined name. I'm not interested in whether the proper classes were included/imported in the code (i.e. if the code compiles well), and the extreme cases of user defined classes that override the names of standard Java classes also do not interest me. In other words: I'm okay with false negatives, I'm only interested in being "mostly" right.
Is there a place where I could find a simple list of all the names of all Java standard classes and methods, in a form easily saved into a text file or database? (J2SE is okay, but J2EE is better). I'm familiar with http://java.sun.com/j2se/ etc, but it seems I need a terrible amount of manual work to extract all the names from there. Also, the most recent JDK is not necessary, I can live with 1.4 or 1.5.
Clarification: I'm not working in Java but in Python, so I can't use Java-specific commands in my parsing mechanism.
Thanks
A:
What's wrong with the javadoc? The index lists all classes, methods, and static variables. You can probably grep for parenthesis.
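Since the parser itself is in Python, one rough way to turn that index into a flat name list is to scrape the index pages. A sketch only: the 27-page numbered URL pattern is assumed from the Java SE 6 docs linked below, and the regex is a crude guess at the markup, not a documented format:
import re, urllib

names = set()
for i in range(1, 28):   # assumed: the index is split across index-1.html .. index-27.html
    url = 'http://java.sun.com/javase/6/docs/api/index-files/index-%d.html' % i
    html = urllib.urlopen(url).read()
    # crude: keep the text of every link that looks like an identifier
    names.update(re.findall(r'<a href="[^"]*"[^>]*>([A-Za-z_][\w.$]*)</a>', html))

open('java_names.txt', 'w').write('\n'.join(sorted(names)))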
A:
To get all classes and methods you can look at the index on
http://java.sun.com/javase/6/docs/api/index-files/index-1.html
This will be 10's of thousands classes and method which can be overwhelming.
I suggest instead you use auto-complete in your IDE. This will show you all the matching classes/methods appropriate based on context.
e.g. say you have a variable
long time = System.
This will show you all the methods in System which return a long value, such as
long time = System.nanoTime();
Even if you know a lot of the method/classes, this can save you a lot of typing.
A:
If you just want to create a list of all classes in Java and their methods (so that you can populate a database or an XML file), you may want to write an Eclipse-plugin that looks at the entire JavaCore model, and scans all of its classes (e.g., by searching all subtypes of Object). Then enumerate all the methods. You can do that technically to any library by including it in your context.
A:
IBM had a tool for creating XML from JavaDocs, if I am not mistaken:
http://www.ibm.com/developerworks/xml/library/x-tipjdoc/index.html
A:
There's also an option to either parse classlist file from jre/lib folder or open the jsse.jar file, list all classes there and make a list of them in dot-separated form by yourself.
A:
When I encounter a word or a set of two words separated by a dot ("word.word"), I would like to know if that's a standard Java class (and method), e.g. "Integer", or some user defined name.
If thats what you're after, you could do without a (limited) list of Java Classes by using some simple reflection:
http://java.sun.com/developer/technicalArticles/ALT/Reflection/
try {
Class.forName("word.word");
System.out.println("This is a valid class!");
} catch (ClassNotFoundException e) {
System.out.println("This is not a valid class.");
}
Something like this should be enough for your purposes, with he added benefit of not being limited to a subset of classes, and extensible by any libraries on the classpath.
|
Simple List of All Java Standard Classes and Methods?
|
I'm building a very simple Java parser, to look for some specific usage models. This is in no way lex/yacc or any other form of interpreter/compiler for purposes of running the code.
When I encounter a word or a set of two words separated by a dot ("word.word"), I would like to know if that's a standard Java class (and method), e.g. "Integer", or some user defined name. I'm not interested in whether the proper classes were included/imported in the code (i.e. if the code compiles well), and the extreme cases of user defined classes that override the names of standard Java classes also do not interest me. In other words: I'm okay with false negatives, I'm only interested in being "mostly" right.
Is there a place where I could find a simple list of all the names of all Java standard classes and methods, in a form easily saved into a text file or database? (J2SE is okay, but J2EE is better). I'm familiar with http://java.sun.com/j2se/ etc, but it seems I need a terrible amount of manual work to extract all the names from there. Also, the most recent JDK is not necessary, I can live with 1.4 or 1.5.
Clarification: I'm not working in Java but in Python, so I can't use Java-specific commands in my parsing mechanism.
Thanks
|
[
"What's wrong with the javadoc? The index lists all classes, methods, and static variables. You can probably grep for parenthesis.\n",
"To get all classes and methods you can look at the index on\nhttp://java.sun.com/javase/6/docs/api/index-files/index-1.html\nThis will be 10's of thousands classes and method which can be overwhelming.\nI suggest instead you use auto-complete in your IDE. This will show you all the matching classes/methods appropriate based on context. \ne.g. say you have a variable\nlong time = System.\nThis will show you all the methods in System which return a long value, such as\nlong time = System.nanoTime();\nEven if you know a lot of the method/classes, this can save you a lot of typing.\n",
"If you just want to create a list of all classes in Java and their methods (so that you can populate a database or an XML file), you may want to write an Eclipse-plugin that looks at the entire JavaCore model, and scans all of its classes (e.g., by searching all subtypes of Object). Then enumerate all the methods. You can do that technically to any library by including it in your context.\n",
"IBM had a tool for creating XML from JavaDocs, if I am not mistaken:\nhttp://www.ibm.com/developerworks/xml/library/x-tipjdoc/index.html\n",
"There's also an option to either parse classlist file from jre/lib folder or open the jsse.jar file, list all classes there and make a list of them in dot-separated form by yourself.\n",
"\nWhen I encounter a word or a set of two words separated by a dot (\"word.word\"), I would like to know if that's a standard Java class (and method), e.g. \"Integer\", or some user defined name.\n\nIf thats what you're after, you could do without a (limited) list of Java Classes by using some simple reflection:\nhttp://java.sun.com/developer/technicalArticles/ALT/Reflection/\ntry {\n Class.forName(\"word.word\");\n System.out.println(\"This is a valid class!\");\n} catch (ClassNotFoundException e) {\n System.out.println(\"This is not a valid class.\");\n}\n\nSomething like this should be enough for your purposes, with he added benefit of not being limited to a subset of classes, and extensible by any libraries on the classpath.\n"
] |
[
1,
1,
1,
1,
1,
0
] |
[] |
[] |
[
"java",
"parsing",
"python"
] |
stackoverflow_0000871812_java_parsing_python.txt
|
Q:
Google App Engine--Dynamically created templates
I'm trying to build a simple CRUD admin section of my application. Basically, for a given Model, I want to have a template loop through the model's attributes into a simple table (once I do this, I can actually implement the CRUD part). A possible way to accomplish this is to dynamically generate a template with all the necessary template tags specific to that model.
Pseudocode:
def generate_tamplate(model):
    template.write("<table border='1'>")
    template.write("<tr>")
    for attribute in model:
        template.write("<td>%s</td>" % attribute)
    template.write("</tr>")
    template.write("<tr>")
    for attribute in model:
        template.write("<td>{{ %s.%s }}</td>" % model.attribute)
    template.write("</tr>")
    template.write("</table>")
Generating the proper text should not be difficult. I can follow my pseudocode model and do it in Python. Two things I'm wondering:
1) Can I do this instead using Django's templating language? that is, use a template to generate a template
2) Once I generate the text, how can I write that to a file that webapp's template loader can access?
I remember a while back seeing something about loading template from the database. Is this possible with GAE?
THANKS!
A:
I saw this open source project a while back:
http://code.google.com/p/gae-django-dbtemplates/
Using a template to generate a template should be fine. Just render the template to a string. Here some code i use so i can stick some xml into memecache
path = os.path.join(os.path.dirname(__file__), 'line_chart.xml')
xml = template.render(path, template_values)
You can easily do something very similar and stick the result in the datastore.
A:
Yes, instead of doing template.writes, you can generate the next template - since template.render(...) just returns text. You can then store the text returned and put it into the DataStore, then retrieve it later and call .render(Context(...)) on it to return the html you want to generate.
You cannot write the generated template to a file - as AppEngine applications do not have write access to the filesystem, only read access.
If you change your 'generate_tamplate' function to use a template, the pseudocode could look like this:
from google.appengine.ext.webapp import template

def generate_tamplate(model):
    t = template.render(path_to_template1.html, Context({'model':model}))
    DataStoreTemplate(template=t, name=model.name).put()

''' Later, when you want to generate your page for that model '''
def generate_page(model):
    t = DataStoreTemplate.all().filter("name =",model.name).get().template
    htmlresult = t.render(Context({'model':model}))
    return htmlresult
A:
Other option, that in my opinion simplifies writing apps for GAE a lot, is using user other templating language, like Mako, that allows you to embed Python code in the template, thus no fiddling required.
You would pass model data to the template (as simple as template.render(template_file, model=model), and the template would look something like this:
<table border='1'>
    <tr>
    % for attribute in model:
        <td>${attribute}</td>
    % endfor
    </tr>
    <tr>
    % for attribute in model:
        <td>${model.attribute}</td>
    % endfor
    </tr>
</table>
I followed this googled blog entry to get Mako in my app - it was quite simple and works like a charm.
|
Google App Engine--Dynamically created templates
|
I'm trying to build a simple CRUD admin section of my application. Basically, for a given Model, I want to have a template loop through the model's attributes into a simple table (once I do this, I can actually implement the CRUD part). A possible way to accomplish this is to dynamically generate a template with all the necessary template tags specific to that model.
Pseudocode:
def generate_tamplate(model):
template.write("<table border='1'>")
template.write("<tr>")
for attribute in model:
template.write("<td>%s</td>" % attribute)
template.write("</tr>")
template.write("<tr>")
for attribute in model:
template.write("<td>{{ %s.%s }}</td>" % model.attribute)
template.write("</tr>")
template.write("</table>")
Generating the proper text should not be difficult. I can follow my pseudocode model and do it in Python. Two things I'm wondering:
1) Can I do this instead using Django's templating language? that is, use a template to generate a template
2) Once I generate the text, how can I write that to a file that webapp's template loader can access?
I remember a while back seeing something about loading template from the database. Is this possible with GAE?
THANKS!
|
[
"I saw this open source project a while back: \nhttp://code.google.com/p/gae-django-dbtemplates/\nUsing a template to generate a template should be fine. Just render the template to a string. Here some code i use so i can stick some xml into memecache\npath = os.path.join(os.path.dirname(__file__), 'line_chart.xml')\nxml = template.render(path, template_values)\n\nYou can easily do something very similar and stick the result in the datastore.\n",
"Yes, instead of doing template.writes, you can generate the next template - since template.render(...) just returns text. You can then store the text returned and put it into the DataStore, then retrieve it later and call .render(Context(...)) on it to return the html you want to generate. \nYou cannot write the generated template to a file - as AppEngine applications do not have write access to the filesystem, only read access. \nIf you change your 'generate_tamplate' function to use a template, the pseudocode could look like this:\nfrom google.appengine.ext.webapp import template\n\ndef generate_tamplate(model):\n t = template.render(path_to_template1.html, Context({'model':model}))\n DataStoreTemplate(template=t, name=model.name).put()\n\n''' Later, when you want to generate your page for that model '''\ndef generate_page(model):\n t = DataStoreTemplate.all().filter(\"name =\",model.name).get().template\n htmlresult = t.render(Context({'model':model}))\n return htmlresult\n\n",
"Other option, that in my opinion simplifies writing apps for GAE a lot, is using user other templating language, like Mako, that allows you to embed Python code in the template, thus no fiddling required.\nYou would pass model data to the template (as simple as template.render(template_file, model=model), and the template would look something like this:\n<table border='1'>\n <tr>\n % for attribute in model:\n <td>${attribute}</td>\n % endfor\n </tr>\n <tr>\n % for attribute in model:\n <td>${model.attribute}</td>\n % endfor\n </tr>\n</table>\n\nI followed this googled blog entry to get Mako in my app - it was quite simple and works like a charm.\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"django_templates",
"google_app_engine",
"python",
"templates"
] |
stackoverflow_0000744828_django_templates_google_app_engine_python_templates.txt
|
Q:
Creating a new terminal/shell window to simply display text
I want to pipe [edit: real-time text] the output of several subprocesses (sometimes chained, sometimes parallel) to a single terminal/tty window that is not the active python shell (be it an IDE, command-line, or a running script using tkinter). IPython is not an option. I need something that comes with the standard install. Prefer OS-agnostic solution, but needs to work on XP/Vista.
I'll post what I've tried already if you want it, but it’s embarrassing.
A:
A good solution in Unix would be named pipes. I know you asked about Windows, but there might be a similar approach in Windows, or this might be helpful for someone else.
on terminal 1:
mkfifo /tmp/display_data
myapp >> /tmp/display_data
on terminal 2 (bash):
tail -f /tmp/display_data
Edit: changed terminal 2 command to use "tail -f" instead of infinite loop.
A:
You say "pipe" so I assume you're dealing with text output from the subprocesses. A simple solution may be to just write output to files?
e.g. in the subprocess:
Redirect output %TEMP%\output.txt
On exit, copy output.txt to a directory your main process is watching.
In the main process:
Every second, examine directory for new files.
When files found, process and remove them.
You could encode the subprocess name in the output filename so you know how to process it.
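A minimal sketch of the watching side; the directory name and the one-second poll interval are arbitrary choices, not anything prescribed:
import os, time

watch_dir = 'incoming'    # hypothetical drop directory the subprocesses copy into
while True:
    for name in os.listdir(watch_dir):
        path = os.path.join(watch_dir, name)
        print open(path).read()    # display the finished output
        os.remove(path)            # processed, so drop it
    time.sleep(1)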
A:
You could make a producer-consumer system, where lines are inserted over a socket (nothing fancy here).
The consumer would be a multithreaded socket server listening to connections and putting all lines into a Queue. In a separate thread it would get items from the queue and print them on the console. The program can be run from the cmd console or from the eclipse console as an external tool without much trouble.
From your point of view, it should be realtime. As a bonus, you can place producers and consumers on separate boxes. Producers can even form a network.
Some Examples of socket programming with python can be found here. Look here for an tcp echoserver example and here for a tcp "hello world" socket client.
There also is an extension for windows that enables usage of named pipes.
On linux (possibly cygwin?) You could just tail -f named-fifo.
Good luck!
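A bare-bones sketch of such a consumer (the port number is arbitrary; producers would simply connect and write lines):
import socket, threading, Queue

lines = Queue.Queue()

def pump(conn):
    # push every line received on this connection onto the shared queue
    for line in conn.makefile():
        lines.put(line.rstrip())

def serve(port=9999):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(('', port))
    srv.listen(5)
    while True:
        conn, addr = srv.accept()
        threading.Thread(target=pump, args=(conn,)).start()

threading.Thread(target=serve).start()
while True:          # the main thread owns the display window
    print lines.get()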
|
Creating a new terminal/shell window to simply display text
|
I want to pipe [edit: real-time text] the output of several subprocesses (sometimes chained, sometimes parallel) to a single terminal/tty window that is not the active python shell (be it an IDE, command-line, or a running script using tkinter). IPython is not an option. I need something that comes with the standard install. Prefer OS-agnostic solution, but needs to work on XP/Vista.
I'll post what I've tried already if you want it, but it’s embarrassing.
|
[
"A good solution in Unix would be named pipes. I know you asked about Windows, but there might be a similar approach in Windows, or this might be helpful for someone else.\non terminal 1:\nmkfifo /tmp/display_data\nmyapp >> /tmp/display_data\n\non terminal 2 (bash):\ntail -f /tmp/display_data\n\nEdit: changed terminal 2 command to use \"tail -f\" instead of infinite loop.\n",
"You say \"pipe\" so I assume you're dealing with text output from the subprocesses. A simple solution may be to just write output to files?\ne.g. in the subprocess:\n\nRedirect output %TEMP%\\output.txt\nOn exit, copy output.txt to a directory your main process is watching.\n\nIn the main process:\n\nEvery second, examine directory for new files.\nWhen files found, process and remove them.\n\nYou could encode the subprocess name in the output filename so you know how to process it.\n",
"You could make a producer-customer system, where lines are inserted over a socket (nothing fancy here).\nThe customer would be a multithreaded socket server listening to connections and putting all lines into a Queue. In the separate thread it would get items from the queue and print it on the console. The program can be run from the cmd console or from the eclipse console as an external tool without much trouble.\nFrom your point of view, it should be realtime. As a bonus, You can place producers and customers on separate boxes. Producers can even form a network.\nSome Examples of socket programming with python can be found here. Look here for an tcp echoserver example and here for a tcp \"hello world\" socket client.\nThere also is an extension for windows that enables usage of named pipes.\nOn linux (possibly cygwin?) You could just tail -f named-fifo.\nGood luck!\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"python",
"shell"
] |
stackoverflow_0000866737_python_shell.txt
|
Q:
Using exec() with recursive functions
I want to execute some Python code, typed at runtime, so I get the string and call
exec(pp, globals(), locals())
where pp is the string. It works fine, except for recursive calls. For example, this code is OK:
def horse():
    robot.step()
    robot.step()
    robot.turn(-1)
    robot.step()

while True:
    horse()
But this one is not:
def horse():
    robot.step()
    robot.step()
    robot.turn(-1)
    robot.step()
    horse()

horse()

NameError: global name 'horse' is not defined
Is there a way to run recursive code as well?
UPDATE
a = """\
def rec(n):
if n > 10:
return
print n
return rec(n+1)
rec(5)"""
exec(a)
Works if put on the top-level. But if moved inside a function:
def fn1():
    a = """\
def rec(n):
    if n > 10:
        return
    print n
    return rec(n+1)

rec(5)"""
    exec(a)

fn1()
the same error occurs: NameError: global name 'rec' is not defined
A:
This surprised me too at first, and seems to be an odd corner case where exec acts neither quite like a top-level definition nor like a definition within an enclosing function. It looks like what is happening is that the function definition is being executed in the locals() dict you pass in. However, the defined function does not actually have access to this locals dict.
Normally, if you define a function at the toplevel, locals and globals are the same, so functions are visible within because they can see the function in the globals.
When a function is defined within another function's scope, python will notice that it is accessed within the function, and create a closure so that "horse" maps to the binding in the outer scope.
Here, it's a weird halfway case. exec is acting as if the definitions are at top-level, so no closures are created. However, since locals is not the same as globals, the definition doesn't go where the function can access it - it's defined only in the inaccessible outer locals dict.
There are a couple of things you could do:
Use the same dictionary for both locals and globals. ie "exec s in locals(),locals()" (or better, just use your own dict). Providing only a globals() dict has the same effect - ie "exec s in mydict" (see the sketch after this list).
Put the func inside its own function, so that a closure is created. eg
s="""
def go():
    def factorial(x):
        if x==0: return 1
        return x*factorial(x-1)
    print factorial(10)
go()"""
Force the function to go into globals() rather than locals by putting a "global funcname" directive, as suggested by stephan's answer
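A minimal sketch of the first option, sharing one dict, in the same call style the question uses:
d = {}
a = """\
def rec(n):
    if n > 10:
        return
    print n
    return rec(n+1)

rec(5)"""

def fn1():
    exec(a, d, d)    # d serves as both globals and locals, so rec can see itself
fn1()                # prints 5 through 10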
A:
It works for me:
a = """\
def rec(n):
if n > 10:
return
print n
return rec(n+1)
rec(5)"""
exec(a)
5
6
7
8
9
10
All I can say is that there is probably a bug in your code.
Edit
Here you go
def fn1():
    glob = {}
    a = """\
def rec(n):
    if n > 10:
        return
    print n
    return rec(n+1)

rec(5)"""
    exec(a, glob)

fn1()
A:
This works for me (added global rec). rec(5) calls the local rec, but rec(n+1) calls a global rec (which doesn't exist) without it.
def fn1():
    a = """global rec
def rec(n):
    if n > 10:
        return
    print n
    return rec(n+1)

rec(5)"""

    exec(a)
A:
"NameError: global name 'rec' is not defined" means it's looking for rec in the global scope, not the local scope. Looks like it's defining rec in the local scope but then attempting to execute in the global. Try printing locals() and globals() in side the string you're executing.
More info.
|
Using exec() with recursive functions
|
I want to execute some Python code, typed at runtime, so I get the string and call
exec(pp, globals(), locals())
where pp is the string. It works fine, except for recursive calls. For example, this code is OK:
def horse():
robot.step()
robot.step()
robot.turn(-1)
robot.step()
while True:
horse()
But this one is not:
def horse():
robot.step()
robot.step()
robot.turn(-1)
robot.step()
horse()
horse()
NameError: global name 'horse' is not
defined
Is there a way to run recursive code as well?
UPDATE
a = """\
def rec(n):
if n > 10:
return
print n
return rec(n+1)
rec(5)"""
exec(a)
Works if put on the top-level. But if moved inside a function:
def fn1():
a = """\
def rec(n):
if n > 10:
return
print n
return rec(n+1)
rec(5)"""
exec(a)
fn1()
the same error occurs: NameError: global name 'rec' is not defined
|
[
"This surprised me too at first, and seems to be an odd corner case where exec is acting neither quite like a top-level definition, or a definition within an enclosing function. It looks like what is happening is that the function definition is being executed in the locals() dict you pass in. However, the defined function does not actually have access to this locals dict.\nNormally, if you define a function at the toplevel, locals and globals are the same, so functions are visible within because they can see the function in the globals.\nWhen a function is defined within another function's scope, python will notice that it is accessed within the function, and create a closure so that \"horse\" maps to the binding in the outer scope.\nHere, it's a weird halfway case. exec is acting as if the definitions are at top-level, so no closures are created. However, since locals is not the same as globals, the definition doesn't go where the function can access it - its defined only in the inaccessible outer locals dict.\nThere are a couple of things you could do:\n\nUse the same dictionary for both locals and globals. ie \"exec s in locals(),locals()\" (or better, just use your own dict). Providing only a globals() dict has the same effect - ie \"exec s in mydict\"\n#\nPut the func inside its own function, so that a closure is created. eg\ns=\"\"\"\ndef go():\n def factorial(x):\n if x==0: return 1\n return x*factorial(x-1)\n print factorial(10)\ngo()\"\"\"\n\nForce the function to go into globals() rather than locals by putting a \"global funcname\" directive, as suggested by stephan's answer\n\n",
"It works for me:\na = \"\"\"\\\ndef rec(n):\n if n > 10:\n return\n print n\n return rec(n+1)\n\nrec(5)\"\"\"\n\nexec(a)\n5\n6\n7\n8\n9\n10\n\nAll I can say is that there is probably a bug in your code.\nEdit\nHere you go\ndef fn1():\n glob = {}\n a = \"\"\"\\\ndef rec(n):\n if n > 10:\n return\n print n\n return rec(n+1)\n\nrec(5)\"\"\"\n exec(a, glob)\n\nfn1()\n\n",
"This works for me (added global rec). rec(5) calls the local rec, but rec(n+1) calls a global rec (which doesn't exist) without it.\ndef fn1():\n a = \"\"\"global rec\ndef rec(n):\n if n > 10:\n return\n print n\n return rec(n+1)\n\nrec(5)\"\"\"\n\n exec(a)\n\n",
"\"NameError: global name 'rec' is not defined\" means it's looking for rec in the global scope, not the local scope. Looks like it's defining rec in the local scope but then attempting to execute in the global. Try printing locals() and globals() in side the string you're executing.\nMore info.\n"
] |
[
6,
5,
3,
0
] |
[] |
[] |
[
"exec",
"python",
"recursion"
] |
stackoverflow_0000871887_exec_python_recursion.txt
|
Q:
Sorting a list of objects by attribute
I am trying to sort a list of objects in python, however this code will not work:
import datetime
class Day:
    def __init__(self, date, text):
        self.date = date
        self.text = text
    def __cmp__(self, other):
        return cmp(self.date, other.date)
mylist = [Day(datetime.date(2009, 01, 02), "Jan 2"), Day(datetime.date(2009, 01, 01), "Jan 1")]
print mylist
print mylist.sort()
The output of this is:
[<__main__.Day instance at 0x519e0>, <__main__.Day instance at 0x51a08>]
None
Could somebody show me a good way solve this? Why is the sort() function returning None?
A:
mylist.sort() returns nothing, it sorts the list in place. Change it to
mylist.sort()
print mylist
to see the correct result.
See http://docs.python.org/library/stdtypes.html#mutable-sequence-types note 7.
The sort() and reverse() methods
modify the list in place for economy
of space when sorting or reversing a
large list. To remind you that they
operate by side effect, they don’t
return the sorted or reversed list.
A:
See sorted for a function that will return a sorted copy of any iterable.
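For example, with the Day class and mylist from the question (this relies on its __cmp__):
for day in sorted(mylist):    # returns a new sorted list; mylist is unchanged
    print day.text

An explicit key works too, and avoids needing __cmp__ at all:
import operator
for day in sorted(mylist, key=operator.attrgetter('date')):
    print day.text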
|
Sorting a list of objects by attribute
|
I am trying to sort a list of objects in python, however this code will not work:
import datetime
class Day:
def __init__(self, date, text):
self.date = date
self.text = text
def __cmp__(self, other):
return cmp(self.date, other.date)
mylist = [Day(datetime.date(2009, 01, 02), "Jan 2"), Day(datetime.date(2009, 01, 01), "Jan 1")]
print mylist
print mylist.sort()
The output of this is:
[<__main__.Day instance at 0x519e0>, <__main__.Day instance at 0x51a08>]
None
Could somebody show me a good way solve this? Why is the sort() function returning None?
|
[
"mylist.sort() returns nothing, it sorts the list in place. Change it to \nmylist.sort()\nprint mylist\n\nto see the correct result. \nSee http://docs.python.org/library/stdtypes.html#mutable-sequence-types note 7.\n\nThe sort() and reverse() methods\n modify the list in place for economy\n of space when sorting or reversing a\n large list. To remind you that they\n operate by side effect, they don’t\n return the sorted or reversed list.\n\n",
"See sorted for a function that will return a sorted copy of any iterable.\n"
] |
[
5,
2
] |
[] |
[] |
[
"python",
"python_2.x"
] |
stackoverflow_0000872181_python_python_2.x.txt
|
Q:
numpy linear algebra basic help
This is what I need to do-
I have this equation-
Ax = y
Where A is a rational m*n matrix (m<=n), and x and y are vectors of
the right size. I know A and y, I don't know what x is equal to. I
also know that there is no x where Ax equals exactly y.
I want to find the vector x' such that Ax' is as close as possible to
y. Meaning that (Ax' - y) is as close as possible to (0,0,0,...0).
I know that I need to use either the lstsq function:
http://www.scipy.org/doc/numpy_api_docs/numpy.linalg.linalg.html#lstsq
or the svd function:
http://www.scipy.org/doc/numpy_api_docs/numpy.linalg.linalg.html#svd
I don't understand the documentation at all. Can someone please show
me how to use these functions to solve my problem.
Thanks a lot!!!
A:
The updated documentation may be a bit more helpful... looks like you want
numpy.linalg.lstsq(A, y)
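lstsq returns a 4-tuple, so a tiny self-contained sketch (the 2x3 matrix is made up) looks like:
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])    # m=2 <= n=3, as in the question
y = np.array([7., 8.])
x, residuals, rank, sv = np.linalg.lstsq(A, y)
print x                  # the x' minimising ||A*x' - y||
print np.dot(A, x)       # approximately reproduces y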
A:
SVD is for the case of m < n, because you don't really have enough degrees of freedom.
The docs for lstsq don't look very helpful. I believe that's least square fitting, for the case where m > n.
If m < n, you'll want SVD.
A:
The SVD of matrix A gives you orthogonal matrices U and V and diagonal matrix Σ such that
A = U Σ V^T
where
U^T U = I ;
V^T V = I
Hence, if
A x = y
then
U Σ V^T x = y
Σ V^T x = U^T y (left-multiply by U^T)
V^T x = Σ^-1 U^T y (invert the non-zero singular values)
x = V Σ^-1 U^T y (left-multiply by V)
So given the SVD of A you can get x, and this x is the pseudo-inverse solution: when no exact solution exists, it minimises ||A x - y||.
Although for general matrices A B != B A, it is true for a row vector x that x U has the same components as U^T times the column form of x.
For example, consider x = ( x, y ), U = ( a, b ; c, d ):
x U = ( x, y ) ( a, b ; c, d )
= ( xa+yc, xb+yd )
= ( ax+cy, bx+dy )
= ( a, c; b, d ) ( x; y )
= U^T x
It's fairly obvious when you look at the values in x U being the dot products of x and the columns of U, and the values in U^T x being the dot products of x and the rows of U^T, and the relation of rows and columns in transposition
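Tying this to numpy: np.linalg.svd returns V^T directly, so a sketch of the pseudo-inverse route (assuming A has no zero singular values) is:
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])
y = np.array([7., 8.])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
x = np.dot(Vt.T, np.dot(U.T, y) / s)    # x = V * Sigma^-1 * U^T * y
print np.allclose(np.dot(A, x), y)      # True here, since A has full row rank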
|
numpy linear algebra basic help
|
This is what I need to do-
I have this equation-
Ax = y
Where A is a rational m*n matrix (m<=n), and x and y are vectors of
the right size. I know A and y, I don't know what x is equal to. I
also know that there is no x where Ax equals exactly y.
I want to find the vector x' such that Ax' is as close as possible to
y. Meaning that (Ax' - y) is as close as possible to (0,0,0,...0).
I know that I need to use either the lstsq function:
http://www.scipy.org/doc/numpy_api_docs/numpy.linalg.linalg.html#lstsq
or the svd function:
http://www.scipy.org/doc/numpy_api_docs/numpy.linalg.linalg.html#svd
I don't understand the documentation at all. Can someone please show
me how to use these functions to solve my problem.
Thanks a lot!!!
|
[
"The updated documentation may be a bit more helpful... looks like you want\nnumpy.linalg.lstsq(A, y)\n\n",
"SVD is for the case of m < n, because you don't really have enough degrees of freedom.\nThe docs for lstsq don't look very helpful. I believe that's least square fitting, for the case where m > n.\nIf m < n, you'll want SVD. \n",
"The SVD of matrix A gives you orthogonal matrices U and V and diagonal matrix Σ such that\nA = U Σ V T\nwhere\nU UT = I ;\nV VT = I\nHence, if\nx A = y\nthen\nx U Σ V T = y\nx U Σ V T V = y V\nx U Σ = y V\nU T x Σ = y V\nx Σ = U y V\nx = Σ -1 U T y V\nx = V T Σ -1 U T y\nSo given SVD of A you can get x.\n\nAlthough for general matrices A B != B A, it is true for vector x that x U == U T x.\nFor example, consider x = ( x, y ), U = ( a, b ; c, d ):\nx U = ( x, y ) ( a, b ; c, d )\n= ( xa+yc, xb+yd )\n= ( ax+cy, bx+dy )\n= ( a, c; b, d ) ( x; y )\n= U T x\nIt's fairly obvious when you look at the values in x U being the dot products of x and the columns of U, and the values in UTx being the dot products of the x and the rows of UT, and the relation of rows and columns in transposition\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"linear_algebra",
"numpy",
"python",
"scipy",
"svd"
] |
stackoverflow_0000872376_linear_algebra_numpy_python_scipy_svd.txt
|
Q:
What's wrong with this Python code?
I'm very new so just learning, so go easy please!
start = int(input('How much did you start with?:' ))
if start < 0:
print("That's impossible! Try again.")
print(start = int(input('How much did you start with:' )))
if start >= 0:
print(inorout = raw_input('Cool! Now have you put money in or taken it out?: '))
if inorout == in:
print(in = int(raw_input('Well done! How much did you put in?:')))
print(int(start + in))
This always results in syntax error? I'm sure I'm doing something obvious wrong!
Thanks!
A:
You can't assign to variables in expressions in Python, like in C: print (start=int(input('blah'))) isn't correct. Do the assignment first in a separate statement.
The first line mustn't be indented, but that might just be a copy and paste error.
The word in is a reserved word so you can't use it for variable names
A:
Assigning in statements is your problem.
Move the assignments out of print statements
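A minimal corrected sketch of the snippet along those lines (in_ dodges the reserved word in, and raw_input is used as the other answers suggest):
start = int(raw_input('How much did you start with?: '))
while start < 0:
    print("That's impossible! Try again.")
    start = int(raw_input('How much did you start with?: '))

inorout = raw_input('Cool! Now have you put money in or taken it out?: ')
if inorout == 'in':
    in_ = int(raw_input('Well done! How much did you put in?: '))
    print(start + in_)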
A:
Consider prompting for input using a function wrapping a loop.
Don't use input for general user input, use raw_input instead
Wrap your script execution in a main function so it doesn't execute on import
def ask_positive_integer(prompt, warning="Enter a positive integer, please!"):
    while True:
        response = raw_input(prompt)
        try:
            response = int(response)
            if response < 0:
                print(warning)
            else:
                return response
        except ValueError:
            print(warning)

def ask_in_or_out(prompt, warning="In or out, please!"):
    '''
    returns True if 'in' False if 'out'
    '''
    while True:
        response = raw_input(prompt)
        if response.lower() in ('i', 'in'): return True
        if response.lower() in ('o', 'ou', 'out'): return False
        print warning

def main():
    start = ask_positive_integer('How much did you start with?: ')
    in_ = ask_in_or_out('Cool! Now have you put money in or taken it out?: ')
    if in_:
        in_amount = ask_positive_integer('Well done! How much did you put in?: ')
        print(start + in_amount)
    else:
        out_amount = ask_positive_integer('Well done! How much did you take out?: ')
        print(start - out_amount)

if __name__ == '__main__':
    main()
|
What's wrong with this Python code?
|
I'm very new so just learning, so go easy please!
start = int(input('How much did you start with?:' ))
if start < 0:
print("That's impossible! Try again.")
print(start = int(input('How much did you start with:' )))
if start >= 0:
print(inorout = raw_input('Cool! Now have you put money in or taken it out?: '))
if inorout == in:
print(in = int(raw_input('Well done! How much did you put in?:')))
print(int(start + in))
This always results in syntax error? I'm sure I'm doing something obvious wrong!
Thanks!
|
[
"\nYou can't assign to variables in expressions in Python, like in C: print (start=int(input('blah'))) isn't correct. Do the assignment first in a separate statement.\nThe first line musn't be indented, but that might just be a copy and paste error.\nThe word in is a reserved word so you can't use it for variable names\n\n",
"Assigning in statements is your problem.\nMove the assignments out of print statements\n",
"\nConsider prompting for input using a function wrapping a loop.\nDon't use input for general user input, use raw_input instead\nWrap your script execution in a main function so it doesn't execute on import\n\n\ndef ask_positive_integer(prompt, warning=\"Enter a positive integer, please!\"):\n while True:\n response = raw_input(prompt)\n try:\n response = int(response)\n if response < 0:\n print(warning)\n else:\n return response\n except ValueError:\n print(warning)\n\ndef ask_in_or_out(prompt, warning=\"In or out, please!\"):\n '''\n returns True if 'in' False if 'out'\n '''\n while True:\n response = raw_input(prompt)\n if response.lower() in ('i', 'in'): return True\n if response.lower() in ('o', 'ou', 'out'): return False\n print warning\n\ndef main():\n start = ask_positive_integer('How much did you start with?: ')\n in_ = ask_in_or_out('Cool! Now have you put money in or taken it out?: ')\n if in_:\n in_amount = ask_positive_integer('Well done! How much did you put in?: ')\n print(start + in_amount)\n else:\n out_amount = ask_positive_integer('Well done! How much did you take out?: ')\n print(start - out_amount)\n\nif __name__ == '__main__':\n main()\n\n"
] |
[
7,
3,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000872119_python.txt
|
Q:
Most Efficient Way to Find Whether a Large List Contains a Specific String (Python)
I have a file containing roughly all the words in English (~60k words, ~500k characters). I want to test whether a certain word I receive as input is "in English" (i.e. if this exact word is in the list).
What would be the most efficient way to do this in Python?
The trivial solution is to load the file into a list and check whether the word is in that list. The list can be sorted, which I believe will shrink the complexity to O(logn). However I'm not sure about how Python implements searching through lists, and whether there's a performance penalty if such a large list is in memory. Can I "abuse" the fact I can put a cap on the length of words? (e.g. say the longest one is 15 characters long).
Please note I run the application on a machine with lots of memory, so I care less for memory consumption than for speed and CPU utilization.
Thanks
A:
The python Set is what you should try.
A set object is an unordered collection of distinct hashable objects. Common uses include membership testing, removing duplicates from a sequence, and computing mathematical operations such as intersection, union, difference, and symmetric difference.
A:
A Trie structure would suit your purposes. There are undoubtedly Python implementations to be found out there...
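For illustration, here's a minimal sketch of a dict-based trie for membership testing (the add/contains names and the END sentinel are just for this example):
trie = {}
END = object()  # sentinel marking the end of a complete word

def add(word):
    node = trie
    for ch in word:
        node = node.setdefault(ch, {})  # descend, creating nodes as needed
    node[END] = True

def contains(word):
    node = trie
    for ch in word:
        if ch not in node:
            return False
        node = node[ch]
    return END in node

add('foo'); add('food')
print(contains('foo'))  # True
print(contains('fo'))   # False - a prefix only, not a stored word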
A:
Sample Python code:
L = ['foo', 'bar', 'baz'] # Your list
s = set(L) # Converted to Set
print 'foo' in s # True
print 'blah' in s # False
A:
You're basically testing whether a member is in a set or not, right?
If so, and because you said you have lots of memory, why not just load all the words as keys in memcache, and then for every word, just check if it is present in memcache or not.
Or use that data structure that is used by bash to autocomplete command names - this is fast and highly efficient in memory (can't remember the name).
A:
500k characters is not a large list. If the items in your list are unique and you need to do this search repeatedly, use a set, which lowers the lookup complexity to O(1) on average.
A:
Two things:
The Python 'mutable set' type has an 'add' method ( s.add(item) ), so you could go right from reading (a line) from your big file straight into a set without using a list as an intermediate data structure.
Python lets you 'pickle' a data structure, so you could save your big set to a file and save the time of reinitiating the set.
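To make both points concrete, a small sketch -- words.txt is an assumed one-word-per-line file, and words.pkl is just a cache file name chosen for this example:
import pickle

words = set()
with open('words.txt') as f:          # straight from the file into the set
    for line in f:
        words.add(line.strip())

with open('words.pkl', 'wb') as out:  # save the built set for later runs
    pickle.dump(words, out)

with open('words.pkl', 'rb') as f:    # reload instead of rebuilding
    words = pickle.load(f)
print('hello' in words)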
Second, I've been looking for a list of all the single-syllable words in English for my own amusement, but the ones I've found mentioned seem to be proprietary. If it isn't being intrusive, could I ask whether your list of English words can be obtained by others?
A:
Others have given you the in-memory way using set(), and this is generally going to be the fastest way, and should not tax your memory for a 60k word dataset (a few MiBs at most). You should be able to construct your set with:
f=open('words.txt')
s = set(word.strip() for word in f)
However, it does require some time to load the set into memory. If you are checking lots of words, this is no problem - the lookup time will more than make up for it. However if you're only going to be checking one word per command execution (eg. this is a commandline app like "checkenglish [word]" ) the startup time will be longer than it would have taken you just to search through the file line by line.
If this is your situation, or you have a much bigger dataset, using an on-disk format may be better. The simplest way would be using the dbm module. Create such a database from a wordlist with:
import dbm
f=open('wordlist.txt')
db = dbm.open('words.db','c')
for word in f:
db[word] = '1'
f.close()
db.close()
Then your program can check membership with:
db = dbm.open('words.db','r')
if db.has_key(word):
print "%s is english" % word
else:
print "%s is not english" % word
This will be slower than a set lookup, since there will be disk access, but will be faster than searching, have low memory use and no significant initialisation time.
There are also other alternatives, such as using a SQL database (eg sqlite).
A:
If memory consumption isn't an issue and the words won't change, the fastest way to do this is put everything in a hash and search that way. In Python, this is the Set. You'll have constant-time lookup.
A:
Converting the list to a set will only be helpful if you repeatedly run this kind of query against the data, as will sorting the list and doing a binary search. If you're only going to pull data out of the list once, a plain old linear search is your best bet:
if 'foo' in some_list:
do_something()
Otherwise, your best bet is to use either a set as has been mentioned or a binary search. Which one you should choose depends largely on how big the data is and how much memory you can spare. I'm told that really large lists tend to benefit more from hashing, although the amount of memory that's taken up can be prohibitively expensive.
Finally, a third option is that you can import the data into a sqlite database and read directly from it. Sqlite is very fast and it may save you the trouble of loading the whole list from file. Python has a very good built-in sqlite library.
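To sketch the sqlite option (the file, table, and word-list names here are assumptions), a primary key on the word column gives you an indexed lookup:
import sqlite3

conn = sqlite3.connect('words.sqlite')
conn.execute('CREATE TABLE IF NOT EXISTS words (word TEXT PRIMARY KEY)')
with open('words.txt') as f:  # assumed word list, one word per line
    conn.executemany('INSERT OR IGNORE INTO words VALUES (?)',
                     ((line.strip(),) for line in f))
conn.commit()

def is_english(word):
    cur = conn.execute('SELECT 1 FROM words WHERE word = ?', (word,))
    return cur.fetchone() is not None

print(is_english('foo'))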
|
Most Efficient Way to Find Whether a Large List Contains a Specific String (Python)
|
I have a file containing roughly all the words in English (~60k words, ~500k characters). I want to test whether a certain word I receive as input is "in English" (i.e. if this exact word is in the list).
What would be the most efficient way to do this in Python?
The trivial solution is to load the file into a list and check whether the word is in that list. The list can be sorted, which I believe will shrink the complexity to O(logn). However I'm not sure about how Python implements searching through lists, and whether there's a performance penalty if such a large list is in memory. Can I "abuse" the fact I can put a cap on the length of words? (e.g. say the longest one is 15 characters long).
Please note I run the application on a machine with lots of memory, so I care less for memory consumption than for speed and CPU utilization.
Thanks
|
[
"The python Set is what you should try.\n\nA set object is an unordered collection of distinct hashable objects. Common uses include membership testing, removing duplicates from a sequence, and computing mathematical operations such as intersection, union, difference, and symmetric difference. \n\n",
"A Trie structure would suit your purposes. There are undoubtedly Python implementations to be found out there...\n",
"Sample Python code:\nL = ['foo', 'bar', 'baz'] # Your list\ns = set(L) # Converted to Set\n\nprint 'foo' in s # True\nprint 'blah' in s # False\n\n",
"You're basically testing whether a member is in a set or not, right?\nIf so, and because you said you have lots of memory, why not just load all the words as keys in memcache, and then for every word, just check if it is present in memcache or not.\nOr use that data structure that is used by bash to autocomplete command names - this is fast and highly efficient in memory (can't remember the name).\n",
"500k character is not a large list. if items in your list are unique and you need to do this search repeatedly use set which would lower the complexity to O(1) in the best case.\n",
"Two things: \nThe Python 'mutable set' type has an 'add' method ( s.add(item) ), so you could go right from reading (a line) from your big file straight into a set without using a list as an intermediate data structure. \nPython lets you 'pickle' a data structure, so you could save your big set to a file and save the time of reinitiating the set.\nSecond, I've been looking for a list of all the single-syllable words in English for my own amusement, but the ones I've found mentioned seem to be proprietary. If it isn't being intrusive, could I ask whether your list of English words can be obtained by others?\n",
"Others have given you the in-memory way using set(), and this is generally going to be the fastest way, and should not tax your memory for a 60k word dataset (a few MiBs at most). You should be able to construct your set with:\nf=open('words.txt')\ns = set(word.strip() for word in f)\n\nHowever, it does require some time to load the set into memory. If you are checking lots of words, this is no problem - the lookup time will more than make up for it. However if you're only going to be checking one word per command execution (eg. this is a commandline app like \"checkenglish [word]\" ) the startup time will be longer than it would have taken you just to search through the file line by line.\nIf this is your situation, or you have a much bigger dataset, using an on-disk format may be better. The simplest way would be using the dbm module. Create such a database from a wordlist with:\nimport dbm\nf=open('wordlist.txt')\ndb = dbm.open('words.db','c')\nfor word in f:\n db[word] = '1'\nf.close()\ndb.close()\n\nThen your program can check membership with:\ndb = dbm.open('words.db','r')\nif db.has_key(word):\n print \"%s is english\" % word\nelse:\n print \"%s is not english\" % word\n\nThis will be slower than a set lookup, since there will be disk access, but will be faster than searching, have low memory use and no significant initialisation time.\nThere are also other alternatives, such as using a SQL database (eg sqlite).\n",
"If memory consumption isn't an issue and the words won't change, the fastest way to do this is put everything in a hash and search that way. In Python, this is the Set. You'll have constant-time lookup.\n",
"Converting the list to a set will only be helpful if you repeatedly run this kind of query against the data, as will sorting the list and doing a binary search. If you're only going to pull data out of the list once, a plain old linear search is your best bet:\nif 'foo' in some_list:\n do_something()\n\nOtherwise, your best bet is to use either a set as has been mentioned or a binary search. Which one you should choose depends largely on how big the data is and how much memory you can spare. I'm told that really large lists tend to benefit more from hashing, although the amount of memory that's taken up can be prohibitively expensive.\nFinally, a third option is that you can import the data into a sqlite database and read directly from it. Sqlite is very fast and it may save you the trouble of loading the whole list from file. Python has a very good built-in sqlite library.\n"
] |
[
24,
4,
4,
2,
2,
2,
2,
1,
1
] |
[] |
[] |
[
"python",
"string"
] |
stackoverflow_0000872290_python_string.txt
|
Q:
Python program using os.pipe and os.fork() issue
I've recently needed to write a script that performs an os.fork() to split into two processes. The child process becomes a server process and passes data back to the parent process using a pipe created with os.pipe(). The child closes the 'r' end of the pipe and the parent closes the 'w' end of the pipe, as usual. I convert the returns from pipe() into file objects with os.fdopen.
The problem I'm having is this: The process successfully forks, and the child becomes a server. Everything works great and the child dutifully writes data to the open 'w' end of the pipe. Unfortunately the parent end of the pipe does two strange things:
A) It blocks on the read() operation on the 'r' end of the pipe.
B) It fails to read any data that was put on the pipe unless the 'w' end is entirely closed.
I immediately thought that buffering was the problem and added pipe.flush() calls, but these didn't help.
Can anyone shed some light on why the data doesn't appear until the writing end is fully closed? And is there a strategy to make the read() call non blocking?
This is my first Python program that forked or used pipes, so forgive me if I've made a simple mistake.
A:
Are you using read() without specifying a size, or treating the pipe as an iterator (for line in f)? If so, that's probably the source of your problem - read() is defined to read until the end of the file before returning, rather than just read what is available for reading. That will mean it will block until the child calls close().
In the example code linked to, this is OK - the parent is acting in a blocking manner, and just using the child for isolation purposes. If you want to continue, then either use non-blocking IO as in the code you posted (but be prepared to deal with half-complete data), or read in chunks (eg r.read(size) or r.readline()) which will block only until a specific size / line has been read. (you'll still need to call flush on the child)
It looks like treating the pipe as an iterator uses some further buffering as well, so "for line in r:" may not give you what you want if you need each line to be immediately consumed. It may be possible to disable this, but just specifying 0 for the buffer size in fdopen doesn't seem sufficient.
Heres some sample code that should work:
import os, sys, time
r,w=os.pipe()
r,w=os.fdopen(r,'r',0), os.fdopen(w,'w',0)
pid = os.fork()
if pid: # Parent
w.close()
while 1:
data=r.readline()
if not data: break
print "parent read: " + data.strip()
else: # Child
r.close()
for i in range(10):
print >>w, "line %s" % i
w.flush()
time.sleep(1)
A:
Using
fcntl.fcntl(readPipe, fcntl.F_SETFL, os.O_NONBLOCK)
Before invoking the read() solved both problems. The read() call is no longer blocking and the data is appearing after just a flush() on the writing end.
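For reference, a hedged sketch of that approach on a raw pipe descriptor -- a non-blocking read raises EAGAIN when no data is ready, which the caller must handle:
import os, errno, fcntl

r, w = os.pipe()
flags = fcntl.fcntl(r, fcntl.F_GETFL)                 # preserve existing flags
fcntl.fcntl(r, fcntl.F_SETFL, flags | os.O_NONBLOCK)

try:
    data = os.read(r, 4096)      # returns immediately with whatever is there
except OSError as e:
    if e.errno == errno.EAGAIN:  # nothing available yet; poll again later
        data = None
    else:
        raise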
A:
I see you have solved the problem of blocking i/o and buffering.
A note if you decide to try a different approach: subprocess is the equivalent / a replacement for the fork/exec idiom. It seems like that's not what you're doing: you have just a fork (not an exec) and exchanging data between the two processes -- in this case the multiprocessing module (in Python 2.6+) would be a better fit.
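A minimal sketch of that route, replacing the hand-rolled fork/pipe pair entirely (the server function here is illustrative):
from multiprocessing import Process, Pipe

def server(conn):
    for i in range(10):
        conn.send('line %s' % i)  # objects are pickled across the pipe
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=server, args=(child_conn,))
    p.start()
    child_conn.close()                  # parent keeps only its own end
    try:
        while True:
            print(parent_conn.recv())   # blocks per message, not until EOF
    except EOFError:                    # raised once the child closes its end
        pass
    p.join()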
|
Python program using os.pipe and os.fork() issue
|
I've recently needed to write a script that performs an os.fork() to split into two processes. The child process becomes a server process and passes data back to the parent process using a pipe created with os.pipe(). The child closes the 'r' end of the pipe and the parent closes the 'w' end of the pipe, as usual. I convert the returns from pipe() into file objects with os.fdopen.
The problem I'm having is this: The process successfully forks, and the child becomes a server. Everything works great and the child dutifully writes data to the open 'w' end of the pipe. Unfortunately the parent end of the pipe does two strange things:
A) It blocks on the read() operation on the 'r' end of the pipe.
B) It fails to read any data that was put on the pipe unless the 'w' end is entirely closed.
I immediately thought that buffering was the problem and added pipe.flush() calls, but these didn't help.
Can anyone shed some light on why the data doesn't appear until the writing end is fully closed? And is there a strategy to make the read() call non blocking?
This is my first Python program that forked or used pipes, so forgive me if I've made a simple mistake.
|
[
"Are you using read() without specifying a size, or treating the pipe as an iterator (for line in f)? If so, that's probably the source of your problem - read() is defined to read until the end of the file before returning, rather than just read what is available for reading. That will mean it will block until the child calls close().\nIn the example code linked to, this is OK - the parent is acting in a blocking manner, and just using the child for isolation purposes. If you want to continue, then either use non-blocking IO as in the code you posted (but be prepared to deal with half-complete data), or read in chunks (eg r.read(size) or r.readline()) which will block only until a specific size / line has been read. (you'll still need to call flush on the child)\nIt looks like treating the pipe as an iterator is using some further buffer as well, for \"for line in r:\" may not give you what you want if you need each line to be immediately consumed. It may be possible to disable this, but just specifying 0 for the buffer size in fdopen doesn't seem sufficient.\nHeres some sample code that should work:\nimport os, sys, time\n\nr,w=os.pipe()\nr,w=os.fdopen(r,'r',0), os.fdopen(w,'w',0)\n\npid = os.fork()\nif pid: # Parent\n w.close()\n while 1:\n data=r.readline()\n if not data: break\n print \"parent read: \" + data.strip()\nelse: # Child\n r.close()\n for i in range(10):\n print >>w, \"line %s\" % i\n w.flush()\n time.sleep(1)\n\n",
"Using \nfcntl.fcntl(readPipe, fcntl.F_SETFL, os.O_NONBLOCK)\nBefore invoking the read() solved both problems. The read() call is no longer blocking and the data is appearing after just a flush() on the writing end.\n",
"I see you have solved the problem of blocking i/o and buffering.\nA note if you decide to try a different approach: subprocess is the equivalent / a replacement for the fork/exec idiom. It seems like that's not what you're doing: you have just a fork (not an exec) and exchanging data between the two processes -- in this case the multiprocessing module (in Python 2.6+) would be a better fit. \n"
] |
[
13,
6,
5
] |
[
"The \"parent\" vs. \"child\" part of fork in a Python application is silly. It's a legacy from 16-bit unix days. It's an affectation from a day when fork/exec and exec were Important Things to make the most of a tiny little processor.\nBreak your Python code into two separate parts: parent and child.\nThe parent part should use subprocess to run the child part. \nA fork and exec may happen somewhere in there -- but you don't need to care.\n"
] |
[
-10
] |
[
"fork",
"pipe",
"python"
] |
stackoverflow_0000871447_fork_pipe_python.txt
|
Q:
Some help with some Python code
Can anyone tell me why num_chars and num_rows have to be the same?
from ctypes import *
num_chars = 8
num_rows = 8
num_cols = 6
buffer = create_string_buffer (num_chars*num_rows*num_cols+num_chars)
for char in range(num_chars):
for row in range(num_rows):
for col in range(num_cols):
if char == num_chars-1 and col == num_cols-1:
buffer[row*num_rows*num_cols+char*num_cols+col+row] = '|'
buffer[row*num_rows*num_cols+char*num_cols+col+row+1] = '\n'
elif col == num_cols-1:
buffer[row*num_rows*num_cols+char*num_cols+col+row] = '|'
else:
buffer[row*num_rows*num_cols+char*num_cols+col+row] = ('.', '*')[char>row]
print buffer.value
The output
.....|*****|*****|*****|*****|*****|*****|*****|
.....|.....|*****|*****|*****|*****|*****|*****|
.....|.....|.....|*****|*****|*****|*****|*****|
.....|.....|.....|.....|*****|*****|*****|*****|
.....|.....|.....|.....|.....|*****|*****|*****|
.....|.....|.....|.....|.....|.....|*****|*****|
.....|.....|.....|.....|.....|.....|.....|*****|
.....|.....|.....|.....|.....|.....|.....|.....|
And now changing num_chars to 15.
.....|*****|*****|*****|*****|*****|*****|*****|*****|*****|*****|*****|*****|*****|*****|
*****|*****|*****|*****|*****|*****|*****|*****|
*****|*****|*****|*****|*****|*****|*****|*****|
*****|*****|*****|*****|*****|*****|*****|*****|
*****|*****|*****|*****|*****|*****|*****|*****|
*****|*****|*****|*****|*****|*****|*****|*****|
*****|*****|*****|*****|*****|*****|*****|*****|
.....|*****|*****|*****|*****|*****|*****|*****|
A:
You said you are using ctypes because you want mutable char buffer for this. But you can get the output you want from list comprehension
num_chars = 5
num_rows = 8
empty = ['.' * num_chars]
full = ['*' * num_chars]
print '\n'.join(
'|'.join(empty * (i + 1) + (num_rows - i - 1) * full)
for i in xrange(num_rows)
)
.....|*****|*****|*****|*****|*****|*****|*****
.....|.....|*****|*****|*****|*****|*****|*****
.....|.....|.....|*****|*****|*****|*****|*****
.....|.....|.....|.....|*****|*****|*****|*****
.....|.....|.....|.....|.....|*****|*****|*****
.....|.....|.....|.....|.....|.....|*****|*****
.....|.....|.....|.....|.....|.....|.....|*****
.....|.....|.....|.....|.....|.....|.....|.....
EDIT
I'll show you how you can use list comprehensions to draw whatever char bitmap you want to draw. The idea is simple. Build a boolean array with True in the places you want to print the character and False otherwise. And just use the 'or' trick to print the right character. This example will build a chess like board. You can use the same concept to draw any shape you want.
rows = 5
cols = 6
char = '#'
empty = '.'
bitmap = [[ (i + j)%2 == 0 for i in xrange(cols)] for j in xrange(rows)]
print '\n'.join(
'|'.join(bitmap[j][i] * char or empty for i in xrange(cols))
for j in xrange(rows)
)
A:
There we go. I had row*num_rows instead of row*num_chars. I must need a Dr Pepper. And by the way, this wasn't homework. It's for an LCD project.
from ctypes import *

num_chars = 10
num_rows = 8
num_cols = 6
buffer = create_string_buffer (num_chars*num_rows*num_cols+num_chars)
for char in range(num_chars):
for row in range(num_rows):
for col in range(num_cols):
if char == num_chars-1 and col == num_cols-1:
buffer[row*num_chars*num_cols+char*num_cols+col+row] = '|'
buffer[row*num_chars*num_cols+char*num_cols+col+row+1] = '\n'
elif col == num_cols-1:
buffer[row*num_chars*num_cols+char*num_cols+col+row] = '|'
else:
buffer[row*num_chars*num_cols+char*num_cols+col+row] = ('.', '*')[char>row]
print repr(buffer.raw)
print buffer.value
|
Some help with some Python code
|
Can anyone tell me why num_chars and num_rows have to be the same?
from ctypes import *
num_chars = 8
num_rows = 8
num_cols = 6
buffer = create_string_buffer (num_chars*num_rows*num_cols+num_chars)
for char in range(num_chars):
for row in range(num_rows):
for col in range(num_cols):
if char == num_chars-1 and col == num_cols-1:
buffer[row*num_rows*num_cols+char*num_cols+col+row] = '|'
buffer[row*num_rows*num_cols+char*num_cols+col+row+1] = '\n'
elif col == num_cols-1:
buffer[row*num_rows*num_cols+char*num_cols+col+row] = '|'
else:
buffer[row*num_rows*num_cols+char*num_cols+col+row] = ('.', '*')[char>row]
print buffer.value
The output
.....|*****|*****|*****|*****|*****|*****|*****|
.....|.....|*****|*****|*****|*****|*****|*****|
.....|.....|.....|*****|*****|*****|*****|*****|
.....|.....|.....|.....|*****|*****|*****|*****|
.....|.....|.....|.....|.....|*****|*****|*****|
.....|.....|.....|.....|.....|.....|*****|*****|
.....|.....|.....|.....|.....|.....|.....|*****|
.....|.....|.....|.....|.....|.....|.....|.....|
And now changing num_chars to 15.
.....|*****|*****|*****|*****|*****|*****|*****|*****|*****|*****|*****|*****|*****|*****|
*****|*****|*****|*****|*****|*****|*****|*****|
*****|*****|*****|*****|*****|*****|*****|*****|
*****|*****|*****|*****|*****|*****|*****|*****|
*****|*****|*****|*****|*****|*****|*****|*****|
*****|*****|*****|*****|*****|*****|*****|*****|
*****|*****|*****|*****|*****|*****|*****|*****|
.....|*****|*****|*****|*****|*****|*****|*****|
|
[
"You said you are using ctypes because you want mutable char buffer for this. But you can get the output you want from list comprehension\nnum_chars = 5\nnum_rows = 8\nempty = ['.' * num_chars]\nfull = ['*' * num_chars]\nprint '\\n'.join(\n '|'.join(empty * (i + 1) + (num_rows - i - 1) * full)\n for i in xrange(num_rows)\n)\n\n.....|*****|*****|*****|*****|*****|*****|*****\n.....|.....|*****|*****|*****|*****|*****|*****\n.....|.....|.....|*****|*****|*****|*****|*****\n.....|.....|.....|.....|*****|*****|*****|*****\n.....|.....|.....|.....|.....|*****|*****|*****\n.....|.....|.....|.....|.....|.....|*****|*****\n.....|.....|.....|.....|.....|.....|.....|*****\n.....|.....|.....|.....|.....|.....|.....|.....\n\nEDIT\nI'll show you how you can use list comprehensions to draw whatever char bitmap you want to draw. The idea is simple. Build a boolean array with True in the places you want to print the character and False otherwise. And just use the 'or' trick to print the right character. This example will build a chess like board. You can use the same concept to draw any shape you want.\nrows = 5\ncols = 6\nchar = '#'\nempty = '.'\nbitmap = [[ (i + j)%2 == 0 for i in xrange(cols)] for j in xrange(rows)]\nprint '\\n'.join(\n '|'.join(bitmap[j][i] * char or empty for i in xrange(cols))\n for j in xrange(rows)\n)\n\n",
"There we go. I had rownum_rows instead of rownum_chars I must need a Dr Pepper. And by the way, this wasn't homework. It's for an LCD project.\nnum_chars = 10\nnum_rows = 8\nnum_cols = 6\n\nbuffer = create_string_buffer (num_chars*num_rows*num_cols+num_chars)\n\nfor char in range(num_chars):\n for row in range(num_rows):\n for col in range(num_cols):\n if char == num_chars-1 and col == num_cols-1:\n buffer[row*num_chars*num_cols+char*num_cols+col+row] = '|'\n buffer[row*num_chars*num_cols+char*num_cols+col+row+1] = '\\n'\n elif col == num_cols-1:\n buffer[row*num_chars*num_cols+char*num_cols+col+row] = '|'\n else:\n buffer[row*num_chars*num_cols+char*num_cols+col+row] = ('.', '*')[char>row]\n\nprint repr(buffer.raw)\nprint buffer.value\n\n"
] |
[
5,
1
] |
[] |
[] |
[
"ctypes",
"python"
] |
stackoverflow_0000872566_ctypes_python.txt
|
Q:
Python generates an IO error while interleaving open/close/readline/write on the same file
I'm learning Python -- this gives me an IO error:
f = open('money.txt')
while True:
currentmoney = float(f.readline())
print(currentmoney, end='')
if currentmoney >= 0:
howmuch = (float(input('How much did you put in or take out?:')))
now = currentmoney + howmuch
print(now)
str(now)
f.close()
f = open('money.txt', 'w')
f.write(str(now))
f.close()
Thanks!
A:
The while True is going to loop forever unless you break it with break.
The I/O error is probably because when you have run through the loop once the last thing you do is f.close(), which closes the file. When execution continues with the loop in the line currentmoney = float(f.readline()): f will be a closed filehandle that you can't read from.
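One hedged way to restructure it so the handle is never read after being closed: reopen the file on each pass and read it whole.
while True:
    with open('money.txt') as f:
        currentmoney = float(f.read())   # a fresh handle each iteration
    print(currentmoney)
    if currentmoney < 0:
        break                            # give the loop a way to end
    howmuch = float(input('How much did you put in or take out?: '))
    with open('money.txt', 'w') as f:
        f.write(str(currentmoney + howmuch))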
A:
Well, there's a couple of things...
you open(money.txt) outside the while loop but you close it after the first iteration...
(technically you close, reopen & close again)
But when the loop comes round the second time, f will be closed and f.readline() will most likely fail
A:
You close your file only if the IF condition is satisfied; otherwise you attempt to reopen it after the IF block. Depending on the result you want to achieve, you will want to either remove the f.close call, or add an ELSE branch and remove the second f.open call. Anyway, let me warn you that the str(now) in your IF block is useless, as you're not saving the result of that call anywhere.
A:
You'll get an IO Error on your first line if money.txt doesn't exist.
A:
Can I piggyback a question? The following has puzzled me for some time. I always get an IOError from these 'open()' statements, so I've stopped checking for the error. (Don't like to do that!) What's wrong with my code? The 'if IOError:' test shown in comments was originally right after the statement with 'open()'.
if __name__ == '__main__':
#get name of input file and open() infobj
infname = sys.argv[1]
print 'infname is: %s' % (sys.argv[1])
infobj = open( infname, 'rU' )
print 'infobj is: %s' % infobj
# 'if IOError:' always evals to True!?!
# if IOError:
# print 'IOError opening file tmp with mode rU.'
# sys.exit( 1)
#get name of output file and open() outfobj
outfname = sys.argv[2]
print 'outfname is: %s' % (sys.argv[2])
outfobj = open( outfname, 'w' )
print 'outfobj is: %s' % outfobj
# if IOError:
# print 'IOError opening file otmp with mode w.'
# sys.exit( 2)
|
Python generates an IO error while interleaving open/close/readline/write on the same file
|
I'm learning Python -- this gives me an IO error:
f = open('money.txt')
while True:
currentmoney = float(f.readline())
print(currentmoney, end='')
if currentmoney >= 0:
howmuch = (float(input('How much did you put in or take out?:')))
now = currentmoney + howmuch
print(now)
str(now)
f.close()
f = open('money.txt', 'w')
f.write(str(now))
f.close()
Thanks!
|
[
"The while True is going to loop forever unless you break it with break.\nThe I/O error is probably because when you have run through the loop once the last thing you do is f.close(), which closes the file. When execution continues with the loop in the line currentmoney = float(f.readline()): f will be a closed filehandle that you can't read from.\n",
"well theres a couple of things...\nyou open(money.txt) outside the while loop but you close it after the first iteration...\n(technically you close, reopen & close again)\nPut when the loop comes round the second time, f will be closed and f.readLine() will most likely fail\n",
"You close your file only if the IF condition is satisfied, otherwise you attempt to reopen it after the IF block. Depending on the result you want to achieve you will want to either remove the f.close call, or add an ELSE branch and remove the second f.open call. Anyway let me warn you that the str(now) in your IF block is just deprecated as you're not saving the result of that call anywhere.\n",
"You'll get an IO Error on your first line if money.txt doesn't exist.\n",
"Can I piggyback a question? The following has puzzled me for some time. I always get an IOError from these 'open()' statements, so I've stopped checking for the error. (Don't like to do that!) What's wrong with my code? The 'if IOError:' test shown in comments was originally right after the statement with 'open()'.\nif __name__ == '__main__':\n#get name of input file and open() infobj\n infname = sys.argv[1]\n print 'infname is: %s' % (sys.argv[1])\n infobj = open( infname, 'rU' )\n print 'infobj is: %s' % infobj\n# 'if IOError:' always evals to True!?!\n# if IOError:\n# print 'IOError opening file tmp with mode rU.'\n# sys.exit( 1)\n\n#get name of output file and open() outfobj\n outfname = sys.argv[2]\n print 'outfname is: %s' % (sys.argv[2])\n outfobj = open( outfname, 'w' )\n print 'outfobj is: %s' % outfobj\n# if IOError:\n# print 'IOError opening file otmp with mode w.'\n# sys.exit( 2)\n\n"
] |
[
3,
2,
0,
0,
0
] |
[] |
[] |
[
"python",
"python_3.x"
] |
stackoverflow_0000872680_python_python_3.x.txt
|
Q:
Pythonic Way to Initialize (Complex) Static Data Members
I have a class with a complex data member that I want to keep "static". I want to initialize it once, using a function. How Pythonic is something like this:
def generate_data():
... do some analysis and return complex object e.g. list ...
class Coo:
data_member = generate_data()
... rest of class code ...
The function generate_data takes a long while to complete and returns data that remains constant in the scope of a running program. I don't want it to run every time class Coo is instantiated.
Also, to verify, as long as I don't assign anything to data_member in __init__, it will remain "static"? What if a method in Coo appends some value to data_member (assuming it's a list) - will this addition be available to the rest of the instances?
Thanks
A:
You're right on all counts. data_member will be created once, and will be available to all instances of coo. If any instance modifies it, that modification will be visible to all other instances.
Here's an example that demonstrates all this, with its output shown at the end:
def generate_data():
print "Generating"
return [1,2,3]
class coo:
data_member = generate_data()
def modify(self):
self.data_member.append(4)
def display(self):
print self.data_member
x = coo()
y = coo()
y.modify()
x.display()
# Output:
# Generating
# [1, 2, 3, 4]
A:
As others have answered you're right -- I'll add one more thing to be aware of: If an instance modifies the object coo.data_member itself, for example
self.data_member.append('foo')
then the modification is seen by the rest of the instances. However if you do
self.data_member = new_object
then a new instance member is created which overrides the class member and is only visible to that instance, not the others. The difference is not always easy to spot, for example self.data_member += 'foo' vs. self.data_member = self.data_member + 'foo'.
To avoid this you probably should always refer to the object as coo.data_member (not through self).
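A tiny sketch of the difference (the class name is arbitrary):
class Coo(object):
    data_member = [1, 2, 3]

a, b = Coo(), Coo()
a.data_member.append(4)      # mutates the shared class attribute
print(b.data_member)         # [1, 2, 3, 4] - seen by every instance

a.data_member = ['mine']     # creates an instance attribute shadowing it
print(b.data_member)         # [1, 2, 3, 4] - b is unaffected
print(Coo.data_member)       # [1, 2, 3, 4] - unambiguous class-level access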
A:
The statement data_member = generate_data() will be executed only once, when class coo: ... is executed. In majority of cases class statements occur at module level and are executed when module is imported. So data_member = generate_data() will be executed only once when you import module with class coo for the first time.
All instances of coo class will share data_member and can access it by writing coo.data_member. Any changes made to coo.data_member will be immediately visible by any coo instance. An instance can have its own data_member attribute. This attribute can be set by typing self.data_member = ... and will be visible only to that instance. The "static" data_member can still be accessed by typing coo.data_member.
|
Pythonic Way to Initialize (Complex) Static Data Members
|
I have a class with a complex data member that I want to keep "static". I want to initialize it once, using a function. How Pythonic is something like this:
def generate_data():
... do some analysis and return complex object e.g. list ...
class Coo:
data_member = generate_data()
... rest of class code ...
The function generate_data takes a long while to complete and returns data that remains constant in the scope of a running program. I don't want it to run every time class Coo is instantiated.
Also, to verify, as long as I don't assign anything to data_member in __init__, it will remain "static"? What if a method in Coo appends some value to data_member (assuming it's a list) - will this addition be available to the rest of the instances?
Thanks
|
[
"You're right on all counts. data_member will be created once, and will be available to all instances of coo. If any instance modifies it, that modification will be visible to all other instances.\nHere's an example that demonstrates all this, with its output shown at the end:\ndef generate_data():\n print \"Generating\"\n return [1,2,3]\n\nclass coo:\n data_member = generate_data()\n def modify(self):\n self.data_member.append(4)\n\n def display(self):\n print self.data_member\n\nx = coo()\ny = coo()\ny.modify()\nx.display()\n\n# Output:\n# Generating\n# [1, 2, 3, 4]\n\n",
"As others have answered you're right -- I'll add one more thing to be aware of: If an instance modifies the object coo.data_member itself, for example\nself.data_member.append('foo')\n\nthen the modification is seen by the rest of the instances. However if you do \nself.data_member = new_object\n\nthen a new instance member is created which overrides the class member and is only visible to that instance, not the others. The difference is not always easy to spot, for example self.data_member += 'foo' vs. self.data_member = self.data_member + 'foo'.\nTo avoid this you probably should always refer to the object as coo.data_member (not through self).\n",
"The statement data_member = generate_data() will be executed only once, when class coo: ... is executed. In majority of cases class statements occur at module level and are executed when module is imported. So data_member = generate_data() will be executed only once when you import module with class coo for the first time.\nAll instances of coo class will share data_member and can access it by writing coo.data_member. Any changes made to coo.data_member will be immediately visible by any coo instance. An instance can have its own data_member attribute. This attribute can be set by typing self.data_member = ... and will be visible only to that instance. The \"static\" data_member can still be accessed by typing coo.data_member.\n"
] |
[
18,
13,
6
] |
[] |
[] |
[
"class",
"python"
] |
stackoverflow_0000872973_class_python.txt
|
Q:
pycurl: RETURNTRANSFER option doesn't exist
I'm using pycurl to access a JSON web API, but when I try to use the following:
ocurl.setopt(pycurl.URL, gaurl) # host + endpoint
ocurl.setopt(pycurl.RETURNTRANSFER, 1)
ocurl.setopt(pycurl.HTTPHEADER, gaheader) # Send extra headers
ocurl.setopt(pycurl.CUSTOMREQUEST, "POST") # HTTP POST req
ocurl.setopt(pycurl.CONNECTTIMEOUT, 2)
and execute the script, it fails.
File "getdata.py", line 46, in apicall
ocurl.setopt(pycurl.RETURNTRANSFER, 1)
AttributeError: 'module' object has no attribute 'RETURNTRANSFER'
I haven't a clue what's going on, and why RETURNTRANSFER doesn't appear to exist while all the other options do.
A:
The manual shows the usage being something like this:
>>> import pycurl
>>> import StringIO
>>> b = StringIO.StringIO()
>>> conn = pycurl.Curl()
>>> conn.setopt(pycurl.URL, 'http://www.example.org')
>>> conn.setopt(pycurl.WRITEFUNCTION, b.write)
>>> conn.perform()
>>> print b.getvalue()
<HTML>
<HEAD>
<TITLE>Example Web Page</TITLE>
</HEAD>
<body>
<p>You have reached this web page by typing "example.com",
"example.net",
or "example.org" into your web browser.</p>
<p>These domain names are reserved for use in documentation and are not availabl
e
for registration. See <a href="http://www.rfc-editor.org/rfc/rfc2606.txt">RFC
2606</a>, Section 3.</p>
</BODY>
</HTML>
Seems a little roundabout, but I'm not a big fan of PycURL...
A:
CURLOPT_RETURNTRANSFER is not a libcurl option; it is provided only by the PHP/CURL binding
A:
Have you tried executing print dir(pycurl) and seeing if the option exists in the attribute list?
|
pycurl: RETURNTRANSFER option doesn't exist
|
I'm using pycurl to access a JSON web API, but when I try to use the following:
ocurl.setopt(pycurl.URL, gaurl) # host + endpoint
ocurl.setopt(pycurl.RETURNTRANSFER, 1)
ocurl.setopt(pycurl.HTTPHEADER, gaheader) # Send extra headers
ocurl.setopt(pycurl.CUSTOMREQUEST, "POST") # HTTP POST req
ocurl.setopt(pycurl.CONNECTTIMEOUT, 2)
and execute the script, it fails.
File "getdata.py", line 46, in apicall
ocurl.setopt(pycurl.RETURNTRANSFER, 1)
AttributeError: 'module' object has no attribute 'RETURNTRANSFER'
I haven't a clue what's going on, and why RETURNTRANSFER doesn't appear to exist while all the other options do.
|
[
"The manual shows the usage being something like this:\n>>> import pycurl\n>>> import StringIO\n>>> b = StringIO.StringIO()\n>>> conn = pycurl.Curl()\n>>> conn.setopt(pycurl.URL, 'http://www.example.org')\n>>> conn.setopt(pycurl.WRITEFUNCTION, b.write)\n>>> conn.perform()\n>>> print b.getvalue()\n<HTML>\n<HEAD>\n <TITLE>Example Web Page</TITLE>\n</HEAD>\n<body>\n<p>You have reached this web page by typing "example.com",\n"example.net",\n or "example.org" into your web browser.</p>\n<p>These domain names are reserved for use in documentation and are not availabl\ne\n for registration. See <a href=\"http://www.rfc-editor.org/rfc/rfc2606.txt\">RFC\n\n 2606</a>, Section 3.</p>\n</BODY>\n</HTML>\n\nSeems a little roundabout, but I'm not a big fan of PycURL...\n",
"CURLOPT_RETURNTRANSFER is not a libcurl option, it is but provided within the PHP/CURL binding\n",
"Have you tried executing print dir(pycurl) and see if the option exists in the attribute list?\n"
] |
[
7,
5,
0
] |
[] |
[] |
[
"attributeerror",
"curl",
"libcurl",
"pycurl",
"python"
] |
stackoverflow_0000872844_attributeerror_curl_libcurl_pycurl_python.txt
|
Q:
Converting to safe unicode in python
I'm dealing with unknown data and trying to insert into a MySQL database using Python/Django. I'm getting some errors that I don't quite understand and am looking for some help. Here is the error.
Incorrect string value: '\xEF\xBF\xBDs m...'
My guess is that the string is not being properly converted to unicode? Here is my code for unicode conversion.
s = unicode(content, "utf-8", errors="replace")
Without the above unicode conversion, the error I get is
'utf8' codec can't decode byte 0x92 in position 31: unexpected code byte. You passed in 'Fabulous home on one of Decatur\x92s most
Any help is appreciated!
A:
What is the original encoding? I'm assuming "cp1252", from pixelbeat's answer. In that case, you can do
>>> orig # Byte string, encoded in cp1252
'Fabulous home on one of Decatur\x92s most'
>>> uni = orig.decode('cp1252')
>>> uni # Unicode string
u'Fabulous home on one of Decatur\u2019s most'
>>> s = uni.encode('utf8')
>>> s # Correct byte string encoded in utf-8
'Fabulous home on one of Decatur\xe2\x80\x99s most'
A:
0x92 is right single curly quote in windows cp1252 encoding.
\xEF\xBF\xBD is the UTF8 encoding of the unicode replacement character
(which was inserted instead of the erroneous cp1252 character).
So it looks like your database is not accepting the valid UTF8 data?
2 options:
1. Perhaps you should be using unicode(content,"cp1252")
2. If you want to insert UTF-8 into the DB, then you'll need to config it appropriately. I'll leave that answer to others more knowledgeable
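Still, a rough sketch of the client-side half with MySQLdb -- the connection must declare utf8 (the table and column character sets must match too, which is DDL not shown; the table and column names here are made up):
import MySQLdb

conn = MySQLdb.connect(host='localhost', user='me', passwd='secret',
                       db='mydb', charset='utf8', use_unicode=True)
cur = conn.cursor()
cur.execute('INSERT INTO listings (description) VALUES (%s)',
            (u'Fabulous home on one of Decatur\u2019s most',))
conn.commit()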
A:
The "Fabulous..." string doesn't look like utf-8: 0x92 is above 128 and as such should be a continuation of a multi-byte character. However, in that string it appears on its own (apparently representing an apostrophe).
|
Converting to safe unicode in python
|
I'm dealing with unknown data and trying to insert into a MySQL database using Python/Django. I'm getting some errors that I don't quite understand and am looking for some help. Here is the error.
Incorrect string value: '\xEF\xBF\xBDs m...'
My guess is that the string is not being properly converted to unicode? Here is my code for unicode conversion.
s = unicode(content, "utf-8", errors="replace")
Without the above unicode conversion, the error I get is
'utf8' codec can't decode byte 0x92 in position 31: unexpected code byte. You passed in 'Fabulous home on one of Decatur\x92s most
Any help is appreciated!
|
[
"What is the original encoding? I'm assuming \"cp1252\", from pixelbeat's answer. In that case, you can do\n>>> orig # Byte string, encoded in cp1252\n'Fabulous home on one of Decatur\\x92s most' \n\n>>> uni = orig.decode('cp1252')\n>>> uni # Unicode string\nu'Fabulous home on one of Decatur\\u2019s most'\n\n>>> s = uni.encode('utf8') \n>>> s # Correct byte string encoded in utf-8\n'Fabulous home on one of Decatur\\xe2\\x80\\x99s most'\n\n",
"0x92 is right single curly quote in windows cp1252 encoding.\n\\xEF\\xBF\\xBD is the UTF8 encoding of the unicode replacement character\n(which was inserted instead of the erroneous cp1252 character).\nSo it looks like your database is not accepting the valid UTF8 data?\n2 options:\n 1. Perhaps you should be using unicode(content,\"cp1252\")\n 2. If you want to insert UTF-8 into the DB, then you'll need to config it appropriately. I'll leave that answer to others more knowledgeable \n",
"The \"Fabulous...\" string doesn't look like utf-8: 0x92 is above 128 and as such should be a continuation of a multi-byte character. However, in that string it appears on its own (apparently representing an apostrophe).\n"
] |
[
5,
3,
1
] |
[] |
[] |
[
"django",
"python",
"unicode"
] |
stackoverflow_0000873419_django_python_unicode.txt
|
Q:
How to install cogen python coroutine framework on Mac OS X
I did
sudo easy_install cogen
and got :
Searching for cogen
Best match: cogen 0.2.1
Processing cogen-0.2.1-py2.5.egg
cogen 0.2.1 is already the active version in easy-install.pth
Using /Library/Python/2.5/site-packages/cogen-0.2.1-py2.5.egg
Processing dependencies for cogen
Searching for py-kqueue>=2.0
Reading http://pypi.python.org/simple/py-kqueue/
Best match: py-kqueue 2.0.1
Downloading http://pypi.python.org/packages/source/p/py-kqueue/py-kqueue-2.0.1.zip#md5=98d0c0d76c1ff827b3de33ac0073d2e7
Processing py-kqueue-2.0.1.zip
Running py-kqueue-2.0.1/setup.py -q bdist_egg --dist-dir /tmp/easy_install-M8cj_5/py-kqueue-2.0.1/egg-dist-tmp-lDR6ry
kqueuemodule.c: In function ‘kqueue_new_kevent’:
kqueuemodule.c:71: warning: assignment makes pointer from integer without a cast
kqueuemodule.c: In function ‘kqueue_keventType_setattr’:
kqueuemodule.c:217: warning: assignment makes pointer from integer without a cast
kqueuemodule.c: In function ‘kqueue_new_kevent’:
kqueuemodule.c:71: warning: assignment makes pointer from integer without a cast
kqueuemodule.c: In function ‘kqueue_keventType_setattr’:
kqueuemodule.c:217: warning: assignment makes pointer from integer without a cast
No eggs found in /tmp/easy_install-M8cj_5/py-kqueue-2.0.1/egg-dist-tmp-lDR6ry (setup script problem?)
error: Could not find required distribution py-kqueue>=2.0
Would appreciate any pointers on how to get the dependencies installed on Mac OS X.
A:
It seems to be some problem with setuptools -- the dependencies are compiled successfully but not installed. FWIW it works for me (OSX 10.5.6, MacPython 2.5).
I would try reinstalling setuptools, and if that fails downloading and "python setup.py install"ing cogen and py-kqueue manually.
A:
Try downloading 'pip' (http://pypi.python.org/pypi/pip) and use that instead of easy_install. It just worked for me.
|
How to install cogen python coroutine framework on Mac OS X
|
I did
sudo easy_install cogen
and got :
Searching for cogen
Best match: cogen 0.2.1
Processing cogen-0.2.1-py2.5.egg
cogen 0.2.1 is already the active version in easy-install.pth
Using /Library/Python/2.5/site-packages/cogen-0.2.1-py2.5.egg
Processing dependencies for cogen
Searching for py-kqueue>=2.0
Reading http://pypi.python.org/simple/py-kqueue/
Best match: py-kqueue 2.0.1
Downloading http://pypi.python.org/packages/source/p/py-kqueue/py-kqueue-2.0.1.zip#md5=98d0c0d76c1ff827b3de33ac0073d2e7
Processing py-kqueue-2.0.1.zip
Running py-kqueue-2.0.1/setup.py -q bdist_egg --dist-dir /tmp/easy_install-M8cj_5/py-kqueue-2.0.1/egg-dist-tmp-lDR6ry
kqueuemodule.c: In function ‘kqueue_new_kevent’:
kqueuemodule.c:71: warning: assignment makes pointer from integer without a cast
kqueuemodule.c: In function ‘kqueue_keventType_setattr’:
kqueuemodule.c:217: warning: assignment makes pointer from integer without a cast
kqueuemodule.c: In function ‘kqueue_new_kevent’:
kqueuemodule.c:71: warning: assignment makes pointer from integer without a cast
kqueuemodule.c: In function ‘kqueue_keventType_setattr’:
kqueuemodule.c:217: warning: assignment makes pointer from integer without a cast
No eggs found in /tmp/easy_install-M8cj_5/py-kqueue-2.0.1/egg-dist-tmp-lDR6ry (setup script problem?)
error: Could not find required distribution py-kqueue>=2.0
Would appreciate any pointers on how to get the dependencies installed on Mac OS X.
|
[
"It seems to be some problem with setuptools -- the dependencies are compiled succesfully but not installed. FWIW it works for me (OSX 10.5.6, MacPython 2.5). \nI would try reinstalling setuptools, and if that fails downloading and \"python setup.py install\"ing cogen and py-kqueue manually. \n",
"Try downloading 'pip' (http://pypi.python.org/pypi/pip) and use that instead of easy_install. It just worked for me.\n"
] |
[
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000873577_python.txt
|
Q:
port python code to javascript
indices[i:] = indices[i+1:] + indices[i:i+1]
Hope someone helps.
A:
I'm fairly new to Python but if I understand the code correctly, it rebuilds the list from a given offset onward as every item after the offset followed by the item at the offset -- in other words, it moves the item at position i to the end.
Running it seems to confirm this:
>>> indices = ['one','two','three','four','five','six']
>>> i = 2
>>> indices[i:] = indices[i+1:] + indices[i:i+1]
>>> indices
['one', 'two', 'four', 'five', 'six', 'three']
In Javascript this can be written:
indices = indices.concat( indices.splice( i, 1 ) );
The same entire sequence would go:
>>> var indices = ['one','two','three','four','five','six'];
>>> var i = 2;
>>> indices = indices.concat( indices.splice( i, 1 ) );
>>> indices
["one", "two", "four", "five", "six", "three"]
This works because splice is destructive to the array but returns removed elements, which may then be handed to concat.
A:
You will want to look at Array.slice()
var temp=indices.slice(i+1).concat(indices.slice(i, i+1));
for (var j=0; j<temp.length; j++){
    indices[i+j]=temp[j];
}
|
port python code to javascript
|
indices[i:] = indices[i+1:] + indices[i:i+1]
Hope someone helps.
|
[
"I'm fairly new to Python but if I understand the code correctly, it reconstructs a list from a given offset into every item following offset+1 and the item at the offset.\nRunning it seems to confirm this:\n>>> indices = ['one','two','three','four','five','six']\n>>> i = 2\n>>> indices[i:] = indices[i+1:] + indices[i:i+1]\n>>> indices\n['one', 'two', 'four', 'five', 'six', 'three']\n\nIn Javascript can be written:\nindices = indices.concat( indices.splice( i, 1 ) );\n\nSame entire sequence would go:\n>>> var indices = ['one','two','three','four','five','six'];\n>>> var i = 2;\n>>> indices = indices.concat( indices.splice( i, 1 ) );\n>>> indices\n[\"one\", \"two\", \"four\", \"five\", \"six\", \"three\"]\n\nThis works because splice is destructive to the array but returns removed elements, which may then be handed to concat. \n",
"You will want to look at Array.slice()\nvar temp=indices.slice(i+1).concat(indices.slice(i, i+1));\nvar arr=[];\nfor (var j=0; j<temp.length; j++){\n arr[j+i]=temp[i];\n}\n\n"
] |
[
6,
1
] |
[] |
[] |
[
"javascript",
"porting",
"python"
] |
stackoverflow_0000872366_javascript_porting_python.txt
|
Q:
How would one make Python objects persistent in a web-app?
I'm writing a reasonably complex web application. The Python backend runs an algorithm whose state depends on data stored in several interrelated database tables which does not change often, plus user specific data which does change often. The algorithm's per-user state undergoes many small changes as a user works with the application. This algorithm is used often during each user's work to make certain important decisions.
For performance reasons, re-initializing the state on every request from the (semi-normalized) database data quickly becomes non-feasible. It would be highly preferable, for example, to cache the state's Python object in some way so that it can simply be used and/or updated whenever necessary. However, since this is a web application, there several processes serving requests, so using a global variable is out of the question.
I've tried serializing the relevant object (via pickle) and saving the serialized data to the DB, and am now experimenting with caching the serialized data via memcached. However, this still has the significant overhead of serializing and deserializing the object often.
I've looked at shared memory solutions but the only relevant thing I've found is POSH. However POSH doesn't seem to be widely used and I don't feel easy integrating such an experimental component into my application.
I need some advice! This is my first shot at developing a web application, so I'm hoping this is a common enough issue that there are well-known solutions to such problems. At this point solutions which assume the Python back-end is running on a single server would be sufficient, but extra points for solutions which scale to multiple servers as well :)
Notes:
I have this application working, currently live and with active users. I started out without doing any premature optimization, and then optimized as needed. I've done the measuring and testing to make sure the above-mentioned issue is the actual bottleneck. I'm pretty sure I could squeeze more performance out of the current setup, but I wanted to ask if there's a better way.
The setup itself is still a work in progress; assume that the system's architecture can be whatever suites your solution.
A:
Be cautious of premature optimization.
Addition: The "Python backend runs an algorithm whose state..." is the session in the web framework. That's it. Let the Django framework maintain session state in cache. Period.
"The algorithm's per-user state undergoes many small changes as a user works with the application." Most web frameworks offer a cached session object. Often it is very high performance. See Django's session documentation for this.
Advice. [Revised]
It appears you have something that works. Leverage it to learn your framework, learn the tools, and learn what knobs you can turn without breaking a sweat. Specifically, use session state.
Second, fiddle with caching, session management, and things that are easy to adjust, and see if you have enough speed. Find out whether MySQL socket or named pipe is faster by trying them out. These are the no-programming optimizations.
Third, measure performance to find your actual bottleneck. Be prepared to provide (and defend) measurements that are fine-grained enough to be useful and stable enough to provide a meaningful comparison of alternatives.
For example, show the performance difference between persistent sessions and cached sessions.
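As a sketch of the cached-session suggestion -- SESSION_ENGINE below is Django's real cache backend; build_state and apply are hypothetical stand-ins for the algorithm:
# settings.py
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

# views.py
from django.http import HttpResponse

def decide(request):
    state = request.session.get('algo_state')
    if state is None:
        state = build_state(request.user)  # hypothetical expensive init
    state.apply(request.POST)              # hypothetical small change
    request.session['algo_state'] = state  # reassign so Django saves it
    return HttpResponse('ok')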
A:
I think that the multiprocessing framework has what might be applicable here - namely the shared ctypes module.
Multiprocessing is fairly new to Python, so it might have some oddities. I am not quite sure whether the solution works with processes not spawned via multiprocessing.
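A minimal sketch of the shared-ctypes idea -- as noted, it applies to processes spawned via multiprocessing itself:
from multiprocessing import Process, Value, Array

def worker(counter, scores):
    with counter.get_lock():   # synchronized update of the shared int
        counter.value += 1
    scores[0] = 0.99           # fixed-size double array in shared memory

if __name__ == '__main__':
    counter = Value('i', 0)          # 'i' = C int
    scores = Array('d', [0.0] * 4)   # 'd' = C double
    p = Process(target=worker, args=(counter, scores))
    p.start(); p.join()
    print(counter.value)  # 1
    print(scores[:])      # [0.99, 0.0, 0.0, 0.0]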
A:
I think you can give ZODB a shot.
"A major feature of ZODB is transparency. You do not need to write any code to explicitly read or write your objects to or from a database. You just put your persistent objects into a container that works just like a Python dictionary. Everything inside this dictionary is saved in the database. This dictionary is said to be the "root" of the database. It's like a magic bag; any Python object that you put inside it becomes persistent."
Initially it was an integral part of Zope, but lately a standalone package is also available.
It has the following limitation:
"Actually there are a few restrictions on what you can store in the ZODB. You can store any objects that can be "pickled" into a standard, cross-platform serial format. Objects like lists, dictionaries, and numbers can be pickled. Objects like files, sockets, and Python code objects, cannot be stored in the database because they cannot be pickled."
I have read it but haven't given it a shot myself though.
Another possibility could be an in-memory sqlite db; that may speed up the process a bit, being an in-memory db, but you would still have to do the serialization stuff and all.
Note: an in-memory db is expensive on resources.
Here is a link: http://www.zope.org/Documentation/Articles/ZODB1
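A hedged sketch of the standalone-package usage (the storage file name is arbitrary; attribute changes on Persistent subclasses are tracked for you):
from ZODB import DB
from ZODB.FileStorage import FileStorage
import persistent, transaction

class State(persistent.Persistent):
    def __init__(self):
        self.counter = 0

db = DB(FileStorage('app.fs'))
conn = db.open()
root = conn.root()
if 'state' not in root:
    root['state'] = State()
root['state'].counter += 1  # mutation is detected via Persistent
transaction.commit()        # nothing is written until this
db.close()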
A:
First of all, your approach is not a common web development practice. Even when multi-threading is used, web applications are designed to run in multi-processing environments, for both scalability and easier deployment.
If you just need to initialize a large object and do not need to change it later, you can do it easily with a global variable that is initialized while your WSGI application is being created, or when the module containing the object is loaded, etc.; multi-processing will do fine for you.
If you need to change the object and access it from every thread, you need to be sure your object is thread-safe; use locks to ensure that, and use a single server context, i.e. one process. Any multi-threading Python server will serve you well, and FCGI is a good choice for this kind of design.
But if multiple threads are accessing and changing your object, the locks may have a really bad effect on your performance gain, which is likely to make all the benefits go away.
A:
This is Durus, a persistent object system for applications written in the Python
programming language. Durus offers an easy way to use and maintain a consistent
collection of object instances used by one or more processes. Access and change of a
persistent instances is managed through a cached Connection instance which includes
commit() and abort() methods so that changes are transactional.
http://www.mems-exchange.org/software/durus/
I've used it before in some research code, where I wanted to persist the results of certain computations. I eventually switched to pytables as it met my needs better.
A:
Another option is to review the requirement for state; it sounds like, if the serialisation is the bottleneck, then the object is very large. Do you really need an object that large?
I know in Stack Overflow podcast 27 the reddit guys discuss what they use for state, so that may be useful to listen to.
|
How would one make Python objects persistent in a web-app?
|
I'm writing a reasonably complex web application. The Python backend runs an algorithm whose state depends on data stored in several interrelated database tables which does not change often, plus user specific data which does change often. The algorithm's per-user state undergoes many small changes as a user works with the application. This algorithm is used often during each user's work to make certain important decisions.
For performance reasons, re-initializing the state on every request from the (semi-normalized) database data quickly becomes non-feasible. It would be highly preferable, for example, to cache the state's Python object in some way so that it can simply be used and/or updated whenever necessary. However, since this is a web application, there several processes serving requests, so using a global variable is out of the question.
I've tried serializing the relevant object (via pickle) and saving the serialized data to the DB, and am now experimenting with caching the serialized data via memcached. However, this still has the significant overhead of serializing and deserializing the object often.
I've looked at shared memory solutions but the only relevant thing I've found is POSH. However POSH doesn't seem to be widely used and I don't feel easy integrating such an experimental component into my application.
I need some advice! This is my first shot at developing a web application, so I'm hoping this is a common enough issue that there are well-known solutions to such problems. At this point solutions which assume the Python back-end is running on a single server would be sufficient, but extra points for solutions which scale to multiple servers as well :)
Notes:
I have this application working, currently live and with active users. I started out without doing any premature optimization, and then optimized as needed. I've done the measuring and testing to make sure the above-mentioned issue is the actual bottleneck. I'm pretty sure I could squeeze more performance out of the current setup, but I wanted to ask if there's a better way.
The setup itself is still a work in progress; assume that the system's architecture can be whatever suites your solution.
|
[
"Be cautious of premature optimization.\nAddition: The \"Python backend runs an algorithm whose state...\" is the session in the web framework. That's it. Let the Django framework maintain session state in cache. Period. \n\"The algorithm's per-user state undergoes many small changes as a user works with the application.\" Most web frameworks offer a cached session object. Often it is very high performance. See Django's session documentation for this. \nAdvice. [Revised]\nIt appears you have something that works. Leverage to learn your framework, learn the tools, and learn what knobs you can turn without breaking a sweat. Specifically, using session state.\nSecond, fiddle with caching, session management, and things that are easy to adjust, and see if you have enough speed. Find out whether MySQL socket or named pipe is faster by trying them out. These are the no-programming optimizations.\nThird, measure performance to find your actual bottleneck. Be prepared to provide (and defend) the measurements as fine-grained enough to be useful and stable enough to providing meaningful comparison of alternatives. \nFor example, show the performance difference between persistent sessions and cached sessions.\n",
"I think that the multiprocessing framework has what might be applicable here - namely the shared ctypes module. \nMultiprocessing is fairly new to Python, so it might have some oddities. I am not quite sure whether the solution works with processes not spawned via multiprocessing.\n",
"I think you can give ZODB a shot.\n\"A major feature of ZODB is transparency. You do not need to write any code to explicitly read or write your objects to or from a database. You just put your persistent objects into a container that works just like a Python dictionary. Everything inside this dictionary is saved in the database. This dictionary is said to be the \"root\" of the database. It's like a magic bag; any Python object that you put inside it becomes persistent.\"\nInitailly it was a integral part of Zope, but lately a standalone package is also available.\nIt has the following limitation:\n\"Actually there are a few restrictions on what you can store in the ZODB. You can store any objects that can be \"pickled\" into a standard, cross-platform serial format. Objects like lists, dictionaries, and numbers can be pickled. Objects like files, sockets, and Python code objects, cannot be stored in the database because they cannot be pickled.\"\nI have read it but haven't given it a shot myself though.\nOther possible thing could be a in-memory sqlite db, that may speed up the process a bit - being an in-memory db, but still you would have to do the serialization stuff and all.\nNote: In memory db is expensive on resources.\nHere is a link: http://www.zope.org/Documentation/Articles/ZODB1\n",
"First of all your approach is not a common web development practice. Even multi threading is being used, web applications are designed to be able to run multi-processing environments, for both scalability and easier deployment .\nIf you need to just initialize a large object, and do not need to change later, you can do it easily by using a global variable that is initialized while your WSGI application is being created, or the module contains the object is being loaded etc, multi processing will do fine for you.\nIf you need to change the object and access it from every thread, you need to be sure your object is thread safe, use locks to ensure that. And use a single server context, a process. Any multi threading python server will serve you well, also FCGI is a good choice for this kind of design.\nBut, if multiple threads are accessing and changing your object the locks may have a really bad effect on your performance gain, which is likely to make all the benefits go away.\n",
"\nThis is Durus, a persistent object system for applications written in the Python\n programming language. Durus offers an easy way to use and maintain a consistent \n collection of object instances used by one or more processes. Access and change of a \n persistent instances is managed through a cached Connection instance which includes \n commit() and abort() methods so that changes are transactional. \n\nhttp://www.mems-exchange.org/software/durus/\nI've used it before in some research code, where I wanted to persist the results of certain computations. I eventually switched to pytables as it met my needs better.\n",
"Another option is to review the requirement for state, it sounds like if the serialisation is the bottle neck then the object is very large. Do you really need an object that large?\nI know in the Stackoverflow podcast 27 the reddit guys discuss what they use for state, so that maybe useful to listen to. \n"
] |
[
8,
4,
2,
2,
2,
1
] |
[] |
[] |
[
"concurrency",
"persistence",
"python",
"web_applications"
] |
stackoverflow_0000330367_concurrency_persistence_python_web_applications.txt
|
Q:
mod_php vs mod_python
Why is mod_python OOP but mod_php is not?
Example: we go to www.example.com/dir1/dir2.
If you use mod_python, Apache opens www/dir1.py and calls the dir2 method,
but if you use the PHP module, Apache opens www/dir1/dir2/index.php.
A:
Let's talk about mod_python vs. mod_php.
Since the Python language is NOT specifically designed for serving web pages, mod_python must do some additional work.
Since the PHP language IS specifically designed to serve web pages, mod_php simply starts a named PHP module.
In the case of mod_python (different from mod_fastcgi or mod_wsgi), the designer of mod_python decided that the best way to invoke Python is to call a method of an object defined in a module. Hopefully, that method of that object will write the headers and web page content to stdout.
In the case of mod_wsgi, the designer decided that the best way to invoke Python is to call a function (or callable object) in a module. Hopefully that function will use the supplied object to create the headers and return a list of strings with the web page content.
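For comparison, the whole mod_wsgi contract fits in a few lines; a minimal sketch:

# mod_wsgi looks for a callable named "application" in the target
# script and calls it once per request.
def application(environ, start_response):
    body = "Hello from WSGI"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]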
In the case of mod_php, they just invoke the module because PHP modules are designed to serve web pages. The program runs and the result of that run is a page of content; usually HTML.
The reason mod_php works one way, mod_python works another and mod_wsgi works a third way is because they're different. Written by different people with different definitions of the way to produce a web page.
The PHP page can be object-oriented. A mod_wsgi function may be a callable object or it may be a simple function. Those are design choices.
The mod_python requirement for a class is a design choice made by the designer of mod_python.
The reason "why" is never useful. The reason "why" is because someone decided to design it that way.
A:
Because mod_python abstracts the URL into an "RPC-like" mechanism. You can achieve the same in PHP. I always did it by hand, but I am pretty sure there are prepackaged PEAR modules for this.
Note that the mod_python behavior is not necessarily "the best". It all comes down to the type of design you want to give your application, and how it behaves toward clients. As far as I can see, the mod_python approach assumes that for every URL you have a function, like mapping the module structure into a URL tree. This is technically not a particularly nice approach, because there is a tight coupling between the technology you are using (mod_python) and the URL layout of your application.
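As a sketch of that mapping under mod_python's publisher handler (assuming the publisher is enabled in the Apache configuration), the URL from the question would be served by a plain function in www/dir1.py:

# www/dir1.py -- /dir1/dir2 resolves to this function under the
# publisher handler; the returned string becomes the response body.
def dir2(req):
    req.content_type = "text/plain"
    return "Hello from dir1.dir2"

Note how the URL layout is welded to the module layout, which is exactly the tight coupling described above.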
As for the reason why, there's no reason. Some people like one food, and some others like another. Design is, in some cases, a matter of taste and artistic choice, not only technical constraints.
A:
I think you have some misconceptions about how HTTP works. Nothing in the HTTP standard requires you to have a certain file as a resource. It is just the way mod_php works: a given path is translated to a PHP file on the server's disk, which in turn is run by the PHP interpreter.
The mod_python module, on the other hand, is much more generic: it allows you to map any resource to a call to some Python object. It just happens that some configurations are available out of the box, to make it easier to start with. In most cases the dispatch of the URL is managed by your framework, and how that works is up to the framework implementor.
Because of the generic nature of the mod_python module you can also access some Apache features which are not available in the mod_php module; for instance, you may write your own authentication handler, which may apply not only to your Python webapp but also to other apps in your Apache setup as well.
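For instance, a custom Basic-auth handler under mod_python looks roughly like this (a sketch following the mod_python authenhandler convention; the hard-coded credentials are placeholders):

from mod_python import apache

def authenhandler(req):
    # mod_python fills in req.user once get_basic_auth_pw() is called.
    password = req.get_basic_auth_pw()
    user = req.user
    if user == "admin" and password == "secret":  # placeholder check
        return apache.OK
    return apache.HTTP_UNAUTHORIZED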
A:
In PHP you can program your web pages as top-to-bottom scripts, with procedural programming and function calls, or with OOP. This is the main reason why PHP was first created, and how it evolved. mod_php is just a module for web servers to use PHP as a preprocessor: it simply passes the HTTP information and the PHP script to the PHP interpreter.
The PHP way of web page creation is "do what you want": write a top-to-bottom script; define functions in different files, include those and call the functions; or write your app in OOP. You can also use one of the many full-featured frameworks available today to make sure your application's design and structure meet current best practices and design patterns.
I'm new to Python and am not familiar with web programming in Python, but as far as I know, Python was not created to make web programming easier. It was intended to be a general-purpose programming language, so although it might be possible to write simple top-to-bottom scripts in Python and run them as web page responses (I'm not sure if such a thing is possible), it is not the Pythonic way, and so I think the developers of mod_python wanted web programming in Python to be done in a Pythonic way.
|
mod_php vs mod_python
|
Why is mod_python OOP but mod_php is not?
Example: we go to www.example.com/dir1/dir2.
If you use mod_python, Apache opens www/dir1.py and calls the dir2 method,
but if you use the PHP module, Apache opens www/dir1/dir2/index.php.
|
[
"Let's talk about mod_python vs. mod_php.\nSince the Python language is NOT specifically designed for serving web pages, mod_python must do some additional work. \nSince the PHP language IS specifically designed to serve web pages, mod_php simply starts a named PHP module.\nIn the case of mod_python (different from mod_fastcgi or mod_wsgi), the designer of mod_python decided that the best way to invoke Python is to call a method of an object defined in a module. Hopefully, that method of that object will write the headers and web page content to stdout. \nIn the case of mod_wsgi, the designer decided that the best way to invoke Python is to call a function (or callable object) in a module. Hopefully that function will use the supplied object to create header and return a list strings with the web page content.\nIn the case of mod_php, they just invoke the module because PHP modules are designed to serve web pages. The program runs and the result of that run is a page of content; usually HTML.\nThe reason mod_php works one way, mod_python works another and mod_wsgi works a third way is because they're different. Written by different people with different definitions of the way to produce a web page. \nThe php page can be object-oriented. A mod_wsgi function may be a callable object or it may be a simplef unction. Those are design choices.\nThe mod_python requirement for a class is a design choice made by the designer of mod_python.\nThe reason \"why\" is never useful. The reason \"why\" is because someone decided to design it that way.\n",
"Because mod_python is abstracting the URL into a \"RPC-like\" mechanism. You can achieve the same in PHP. I always did it by hand, but I am pretty sure there are prepackaged pear modules for this.\nNote the mod_python behavior is not forcibly \"the best\". It all amounts to the type of design you want to give to your application, and how it behaves to the clients. As far as I see, the mod_python approach assumes that for every URL you have a function, like mapping the module structure into a URL tree. This is technically not a particularly nice approach, because there's a tight correlation between the technology you are using (mod_python) and the URL layout of your application.\nOn the reason why, theres' no reason. Some people like one food, and some other like another. Design is, in some cases, a matter of taste and artistic choices, not only technical restrains. \n",
"I think you have some misconceptions about how HTTP works. Nothing in the http standard requires you to have a certain file as a resource. It is just the way how mod_php works, that for a given path, this path is translated to a php file on the disk of the server, which in turn is interpreted by the compiler.\nThe mod_python module on the other hand is much more generic, it allows you to map any resource to a call to some python object. It just happens that some configurations are available out of the box, to make it easier to start with. In most cases the dispatch of the url is managed by your framework, and how that works is up to the framework implementor.\nBecause of the generic nature of the mod_python module you are also able to access some apache features which are not available in the mod_php module, for instance you may write your own authentication handler, which my not only apply to your python webapp, but also to other apps in your apache as well.\n",
"in PHP you can program your web pages the top-to-bottom scripts, procedural programming and function calls, OOP. this is the main reason why PHP was first created, and how it evolved. mod_php is just a module for web servers to utilize PHP as a preprocessor. so it just passes HTTP information and the PHP script to PHP interpreter.\nthe PHP way of web page creation is do what you want; write a top-to-bottom script, define functions in different files and include those and call functions, or write your app in OOP, you can also use many full-featured frameworks today to make sure your application design and structure meets today best practices and design patterns.\nI'm new to Python, and am not familiar with web programming with python. but as much as I know, python was not created to make web programming easier. it was intended to be a general purpose programming language, so although it might be possible to write simple top-to-bottom scripts in python and run them as web page responses (I'm not sure if such thing is possible), it is not the pythonic way, and so I think developers of the mod_python wanted web programming in python to be in a pythonic way.\n"
] |
[
8,
4,
4,
0
] |
[
"Perhaps I misunderstand your question, but both Python and PHP support both procedural and object-oriented programming. (Though one could argue that Python's support for OO is the stronger of the two.)\n",
"See Class and Objects in PHP 5\n"
] |
[
-1,
-1
] |
[
"mod_python",
"php",
"python"
] |
stackoverflow_0000872695_mod_python_php_python.txt
|