Dataset schema (one record per Stack Overflow question):
  content             string, 85 to 101k characters (flattened title + question + answers)
  title               string, 0 to 150 characters
  question            string, 15 to 48k characters
  answers             list
  answers_scores      list
  non_answers         list
  non_answers_scores  list
  tags                list
  name                string, 35 to 137 characters
Q: Tips on upgrading to python 3.0? So with the final releases of Python 3.0 (and now 3.1), a lot of people are facing the worry of how to upgrade without losing half their codebase due to backwards incompatibility. What are people's best tips for avoiding the many pitfalls that will almost-inevitably result from switching to the next-generation of python? Probably a good place to start is "use 2to3 to convert your python 2.x code to 3.x" :-) A: First, this question is very similar to How are you planning on handling the migration to Python 3?. Check the answers there. There is also a section in the Python Wiki about porting applications to Python 3.x The Release Notes for python 3.0 contains a section about porting. I'm quoting the tips there: (Prerequisite:) Start with excellent test coverage. Port to Python 2.6. This should be no more work than the average por from Python 2.x to Python 2.(x+1). Make sure all your tests pass. (Still using 2.6:) Turn on the -3 command line switch. This enables warnings about features that will be removed (or change) in 3.0. Run your test suite again, and fix code that you get warnings about until there are no warnings left, and all your tests still pass. Run the 2to3 source-to-source translator over your source code tree. (See 2to3 - Automated Python 2 to 3 code translation for more on this tool.) Run the result of the translation under Python 3.0. Manually fix up any remaining issues, fixing problems until all tests pass again. It is not recommended to try to write source code that runs unchanged under both Python 2.6 and 3.0; you’d have to use a very contorted coding style, e.g. avoiding print statements, metaclasses, and much more. If you are maintaining a library that needs to support both Python 2.6 and Python 3.0, the best approach is to modify step 3 above by editing the 2.6 version of the source code and running the 2to3 translator again, rather than editing the 3.0 version of the source code. A: I write a free book about this. You can read it here: http://python3porting.com/ In short: Make sure all your third party libraries are available for Python 3. Prepare your code by removing common ambiguities: Use // if you really want integer division. Make sure you flag binary files with the 'b' flag when you open them, to clearly indicate if you mean the data to be binary or not. The higher your test coverage is, the better. Make sure it runs without warnings under "Python 2.7 -3". Now run 2to3. Fix any bugs. That's it, more or less. A: Without a really compelling reason to upgrade, I would stick with what works. I looked at upgrading the scripts I use daily and it was too much work for no benefit that I could see. "If it ain't broke, don't fix it!"
Tips on upgrading to python 3.0?
So with the final releases of Python 3.0 (and now 3.1), a lot of people are facing the worry of how to upgrade without losing half their codebase due to backwards incompatibility. What are people's best tips for avoiding the many pitfalls that will almost-inevitably result from switching to the next-generation of python? Probably a good place to start is "use 2to3 to convert your python 2.x code to 3.x" :-)
[ "First, this question is very similar to How are you planning on handling the migration to Python 3?. Check the answers there.\nThere is also a section in the Python Wiki about porting applications to Python 3.x\nThe Release Notes for python 3.0 contains a section about porting. I'm quoting the tips there:\n\n\n(Prerequisite:) Start with excellent test coverage.\nPort to Python 2.6. This should be no more work than the average por\n from Python 2.x to Python 2.(x+1).\n Make sure all your tests pass.\n(Still using 2.6:) Turn on the -3 command line switch. This enables warnings about features that will be\n removed (or change) in 3.0. Run your\n test suite again, and fix code that\n you get warnings about until there are\n no warnings left, and all your tests\n still pass.\nRun the 2to3 source-to-source translator over your source code tree.\n (See 2to3 - Automated Python 2 to 3\n code translation for more on this\n tool.) Run the result of the\n translation under Python 3.0. Manually\n fix up any remaining issues, fixing\n problems until all tests pass again.\n\nIt is not recommended to try to write\n source code that runs unchanged under\n both Python 2.6 and 3.0; you’d have to\n use a very contorted coding style,\n e.g. avoiding print statements,\n metaclasses, and much more. If you are\n maintaining a library that needs to\n support both Python 2.6 and Python\n 3.0, the best approach is to modify step 3 above by editing the 2.6\n version of the source code and running\n the 2to3 translator again, rather than\n editing the 3.0 version of the source\n code.\n\n", "I write a free book about this. You can read it here:\nhttp://python3porting.com/\nIn short:\n\nMake sure all your third party libraries are available for Python 3.\nPrepare your code by removing common ambiguities:\n\n\nUse // if you really want integer division.\nMake sure you flag binary files with the 'b' flag when you open them, to clearly\nindicate if you mean the data to be binary or not.\n\nThe higher your test coverage is, the better.\nMake sure it runs without warnings under \"Python 2.7 -3\".\nNow run 2to3.\nFix any bugs.\n\nThat's it, more or less.\n", "Without a really compelling reason to upgrade, I would stick with what works. I looked at upgrading the scripts I use daily and it was too much work for no benefit that I could see. \n\"If it ain't broke, don't fix it!\"\n" ]
[ 3, 3, 2 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0001072028_python_python_3.x.txt
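A minimal sketch of the pre-porting cleanups both answers recommend (true vs. integer division, explicit binary mode, print as a function); the file name and numbers are hypothetical, and code written this way should run warning-free under python2.6 -3 and pass through 2to3 largely unchanged:

    from __future__ import division, print_function

    ratio = 7 / 2      # true division, the 3.0 behaviour
    buckets = 7 // 2   # use // where integer division is really meant

    with open('blob.dat', 'rb') as f:   # flag binary files explicitly
        data = f.read()
    print('ratio', ratio, 'buckets', buckets)   # print as a function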
Q: see if two files have the same content in python Possible Duplicates: Finding duplicate files and removing them. In Python, is there a concise way of comparing whether the contents of two text files are the same? What is the easiest way to see if two files are the same content-wise in Python. One thing I can do is md5 each file and compare. Is there a better way? A: Yes, I think hashing the file would be the best way if you have to compare several files and store hashes for later comparison. As hash can clash, a byte-by-byte comparison may be done depending on the use case. Generally byte-by-byte comparison would be sufficient and efficient, which filecmp module already does + other things too. See http://docs.python.org/library/filecmp.html e.g. >>> import filecmp >>> filecmp.cmp('file1.txt', 'file1.txt') True >>> filecmp.cmp('file1.txt', 'file2.txt') False Speed consideration: Usually if only two files have to be compared, hashing them and comparing them would be slower instead of simple byte-by-byte comparison if done efficiently. e.g. code below tries to time hash vs byte-by-byte Disclaimer: this is not the best way of timing or comparing two algo. and there is need for improvements but it does give rough idea. If you think it should be improved do tell me I will change it. import random import string import hashlib import time def getRandText(N): return "".join([random.choice(string.printable) for i in xrange(N)]) N=1000000 randText1 = getRandText(N) randText2 = getRandText(N) def cmpHash(text1, text2): hash1 = hashlib.md5() hash1.update(text1) hash1 = hash1.hexdigest() hash2 = hashlib.md5() hash2.update(text2) hash2 = hash2.hexdigest() return hash1 == hash2 def cmpByteByByte(text1, text2): return text1 == text2 for cmpFunc in (cmpHash, cmpByteByByte): st = time.time() for i in range(10): cmpFunc(randText1, randText2) print cmpFunc.func_name,time.time()-st and the output is cmpHash 0.234999895096 cmpByteByByte 0.0 A: I'm not sure if you want to find duplicate files or just compare two single files. If the latter, the above approach (filecmp) is better, if the former, the following approach is better. There are lots of duplicate files detection questions here. Assuming they are not very small and that performance is important, you can Compare file sizes first, discarding all which doesn't match If file sizes match, compare using the biggest hash you can handle, hashing chunks of files to avoid reading the whole big file Here's is an answer with Python implementations (I prefer the one by nosklo, BTW)
see if two files have the same content in python
Possible Duplicates: Finding duplicate files and removing them. In Python, is there a concise way of comparing whether the contents of two text files are the same? What is the easiest way to see if two files are the same content-wise in Python? One thing I can do is md5 each file and compare. Is there a better way?
[ "Yes, I think hashing the file would be the best way if you have to compare several files and store hashes for later comparison. As hash can clash, a byte-by-byte comparison may be done depending on the use case.\nGenerally byte-by-byte comparison would be sufficient and efficient, which filecmp module already does + other things too.\nSee http://docs.python.org/library/filecmp.html\ne.g.\n>>> import filecmp\n>>> filecmp.cmp('file1.txt', 'file1.txt')\nTrue\n>>> filecmp.cmp('file1.txt', 'file2.txt')\nFalse\n\nSpeed consideration:\nUsually if only two files have to be compared, hashing them and comparing them would be slower instead of simple byte-by-byte comparison if done efficiently. e.g. code below tries to time hash vs byte-by-byte\nDisclaimer: this is not the best way of timing or comparing two algo. and there is need for improvements but it does give rough idea. If you think it should be improved do tell me I will change it.\nimport random\nimport string\nimport hashlib\nimport time\n\ndef getRandText(N):\n return \"\".join([random.choice(string.printable) for i in xrange(N)])\n\nN=1000000\nrandText1 = getRandText(N)\nrandText2 = getRandText(N)\n\ndef cmpHash(text1, text2):\n hash1 = hashlib.md5()\n hash1.update(text1)\n hash1 = hash1.hexdigest()\n\n hash2 = hashlib.md5()\n hash2.update(text2)\n hash2 = hash2.hexdigest()\n\n return hash1 == hash2\n\ndef cmpByteByByte(text1, text2):\n return text1 == text2\n\nfor cmpFunc in (cmpHash, cmpByteByByte):\n st = time.time()\n for i in range(10):\n cmpFunc(randText1, randText2)\n print cmpFunc.func_name,time.time()-st\n\nand the output is\ncmpHash 0.234999895096\ncmpByteByByte 0.0\n\n", "I'm not sure if you want to find duplicate files or just compare two single files. If the latter, the above approach (filecmp) is better, if the former, the following approach is better.\nThere are lots of duplicate files detection questions here. Assuming they are not very small and that performance is important, you can\n\nCompare file sizes first, discarding all which doesn't match\nIf file sizes match, compare using the biggest hash you can handle, hashing chunks of files to avoid reading the whole big file\n\nHere's is an answer with Python implementations (I prefer the one by nosklo, BTW)\n" ]
[ 168, 6 ]
[]
[]
[ "file", "python" ]
stackoverflow_0001072569_file_python.txt
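When only two files are compared once, hashing adds no value; a self-contained sketch of the size-check-then-chunked-comparison approach the second answer outlines (paths and chunk size are illustrative):

    import os

    def same_contents(path1, path2, chunk=8192):
        # A mismatched size settles it without reading either file
        if os.path.getsize(path1) != os.path.getsize(path2):
            return False
        f1, f2 = open(path1, 'rb'), open(path2, 'rb')
        try:
            while True:
                b1, b2 = f1.read(chunk), f2.read(chunk)
                if b1 != b2:
                    return False
                if not b1:        # both files exhausted at the same point
                    return True
        finally:
            f1.close()
            f2.close()

filecmp.cmp(path1, path2, shallow=False) does essentially this internally.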
Q: Playing MMS streams within Python I'm writing a XM desktop application (I plan on releasing the source on github when I'm finished if anyone is interested) Anyway, the one part I know very little about is how to play media within Python (I'm using PyQt for the frontend). Basically, I have a mms:// url that I need to play. I was wondering if there is a library that could accomplish this or something, really I just need someone to point me in the right direction. I know its possible, because SMplayer (Python implementation of MPlayer that uses Qt) works with MMS, I may have to take a peak at they're source if worse comes to worse. A: You can have a look at PyMedia PyGame wxPython Here is a code snippet of doing a similar thing with wxPython. All of these can play media files.
Playing MMS streams within Python
I'm writing an XM desktop application (I plan on releasing the source on GitHub when I'm finished, if anyone is interested). Anyway, the one part I know very little about is how to play media within Python (I'm using PyQt for the frontend). Basically, I have an mms:// URL that I need to play. I was wondering if there is a library that could accomplish this or something; really I just need someone to point me in the right direction. I know it's possible, because SMPlayer (Python implementation of MPlayer that uses Qt) works with MMS, so I may have to take a peek at their source if worse comes to worst.
[ "You can have a look at \n\nPyMedia\nPyGame\nwxPython\n\nHere is a code snippet of doing a similar thing with wxPython.\nAll of these can play media files.\n" ]
[ 2 ]
[]
[]
[ "audio_streaming", "mms", "pyqt", "python", "streaming" ]
stackoverflow_0001072652_audio_streaming_mms_pyqt_python_streaming.txt
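An untested sketch of the wxPython route the answer points to; wx.media wraps the platform's native backend, so whether mms:// actually plays depends on the installed codecs, and the URL here is a placeholder:

    import wx
    import wx.media

    class Player(wx.Frame):
        def __init__(self, url):
            wx.Frame.__init__(self, None, title='MMS player')
            self.mc = wx.media.MediaCtrl(self)
            self.Bind(wx.media.EVT_MEDIA_LOADED, self.on_loaded)
            self.mc.LoadURI(url)   # loading is asynchronous; wait for the event

        def on_loaded(self, event):
            self.mc.Play()

    app = wx.App()
    Player('mms://example.com/stream').Show()
    app.MainLoop()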
Q: Printing python modulus operator as it is over command line I want to print modulus operator as it is over the command line: E.g this is how the output should look like: 1%2 2%4 or 30% 40% I am using the print statement like this: print 'computing %s % %s' % (num1, num2) Its throwing the default error: TypeError: not all arguments converted during string formatting For now I am using: print 'computing 1'+'%'+'2' which prints: computing 1%2 But tell me how to get this done using the first approach(:print 'computing %s % %s' % (num1,num2)) A: Escape the % sign with another % sign, like this: print 'computing %s %% %s' % (num1, num2) A: print 'computing %s %% %s' % (num1, num2)
Printing python modulus operator as it is over command line
I want to print the modulus operator as-is on the command line. E.g. this is how the output should look: 1%2 2%4 or 30% 40% I am using the print statement like this: print 'computing %s % %s' % (num1, num2) It's throwing the default error: TypeError: not all arguments converted during string formatting For now I am using: print 'computing 1'+'%'+'2' which prints: computing 1%2 But tell me how to get this done using the first approach (print 'computing %s % %s' % (num1,num2))
[ "Escape the % sign with another % sign, like this:\nprint 'computing %s %% %s' % (num1, num2)\n\n", "print 'computing %s %% %s' % (num1, num2)\n\n" ]
[ 9, 1 ]
[]
[]
[ "modulo", "operators", "python" ]
stackoverflow_0001072951_modulo_operators_python.txt
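The escaping both answers give works because %% is the format-spec for a literal percent sign; a tiny demo, including the str.format alternative available from Python 2.6, which needs no escaping for %:

    num1, num2 = 1, 2
    print 'computing %s %% %s' % (num1, num2)        # computing 1 % 2
    print 'computing {0} % {1}'.format(num1, num2)   # computing 1 % 2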
Q: date is added in background while adding time in datastore GAE 3.Why does "jan 1st 1970" gets added in the startime field in datastore when I am doing the below statements? (hour,min) = self.request.get('starttime').split(":") #if either of them is null or empty string then int will throw exception if hour and min : datastoremodel.starttime = datetime.time(int(hour), int(min)) Although when I retrieve it only time comes through? I wonder what date is doing in datastore? Any clues? A: google app engine doc says class TimeProperty(verbose_name=None, auto_now=False, auto_now_add=False, ...) A time property, without a date. Takes a Python standard library datetime.time value. See DateTimeProperty for more information. Value type: datetime.time. This is converted to a datetime.datetime internally. so to convert time to a date , start of epoch time "jan 1st 1970" is added
date is added in background while adding time in datastore GAE
Why does "jan 1st 1970" get added in the starttime field in the datastore when I am doing the below statements? (hour,min) = self.request.get('starttime').split(":") #if either of them is null or empty string then int will throw exception if hour and min : datastoremodel.starttime = datetime.time(int(hour), int(min)) Although when I retrieve it, only the time comes through. I wonder what the date is doing in the datastore? Any clues?
[ "google app engine doc says\nclass TimeProperty(verbose_name=None, auto_now=False, auto_now_add=False, ...)\nA time property, without a date. Takes a Python standard library datetime.time value. See DateTimeProperty for more information.\nValue type: datetime.time. This is converted to a datetime.datetime internally.\n\nso to convert time to a date , start of epoch time \"jan 1st 1970\" is added\n" ]
[ 2 ]
[]
[]
[ "datetime", "google_app_engine", "python" ]
stackoverflow_0001072877_datetime_google_app_engine_python.txt
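A small standard-library sketch of the conversion the answer describes: the internal representation, per the quoted docs, amounts to grafting the time onto the epoch date.

    import datetime

    t = datetime.time(13, 30)   # what TimeProperty is given
    stored = datetime.datetime.combine(datetime.date(1970, 1, 1), t)
    print stored          # 1970-01-01 13:30:00 -- what shows up in the datastore
    print stored.time()   # 13:30:00 -- what a query hands back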
Q: Calculate score in a pyramid score system I am trying to calculate gamescores for a bunch over users and I haven't really got it yet. It is a pyramid game where you can invite people, and the people you invite is placed beneth you in the relations tree. So if i invite X and X invites Y i get kickback from both of them. Let's say 10%^steps... So from X i get 10% of his score and 1% from Y, and X get 10% from Y. So to calculate this i was thinking that each "player" had a function that calculated his total score. This function had to be recursive and "know" how far in the tree it was so that it would kick back the right values. def get_score(player): if children: score = player.points for child in children: score += child.points*math.pow(.1, get_ancestors(child)) score += get_score(child) return score else: return player.points But this doesnt work proper, it gives what i believe is the right values in some levels but not in others. So think my function is broken. Anybody got an idea on how to solve this? A: I doubt these two lines score += child.points*math.pow(.1, get_ancestors(child)) score += get_score(child) this is a simple recursive structure so i think something like below will suffice score += get_score(child)*.1 and recursive beauty will take care of itself you also do not need 'if children:' check so does it help def get_score(player): score = player.points for child in children: score += get_score(child)*.1 return score A: This can have very different implementations, depending on the way the score must be calculating : Do you need to propagate the result of each gain, in real time ? In that case , you start from the bottom of the pyramid and give the feedback until the top. Can you afford the result to be calculated at the end of the game for everybody ? In that cas you can just set a method on each player and only call the one at the top. E.G for the second option You used a functional approach. While this is valid, I am more into OO so I'll go this way : class Player(object) : def __init__(self) : self.score = 0; self.children = [] def updateScore(self) : self.score = self.score + sum(((children.score * 10 / 100) for children in self.children)) class Pyramid(object) : def __init__(self) : self.top_child = Player() def updateScore(self, player = None) : if player == None : player = self.top_child for child in player.children : self.updateScore(child) child.updateScore() You may want to use itertools to make it less CPU and memory intensive.
Calculate score in a pyramid score system
I am trying to calculate game scores for a bunch of users and I haven't really got it yet. It is a pyramid game where you can invite people, and the people you invite are placed beneath you in the relations tree. So if I invite X and X invites Y, I get kickback from both of them. Let's say 10%^steps... So from X I get 10% of his score and 1% from Y, and X gets 10% from Y. So to calculate this I was thinking that each "player" had a function that calculated his total score. This function had to be recursive and "know" how far in the tree it was so that it would kick back the right values. def get_score(player): if children: score = player.points for child in children: score += child.points*math.pow(.1, get_ancestors(child)) score += get_score(child) return score else: return player.points But this doesn't work properly; it gives what I believe are the right values in some levels but not in others. So I think my function is broken. Anybody got an idea on how to solve this?
[ "I doubt these two lines\nscore += child.points*math.pow(.1, get_ancestors(child))\nscore += get_score(child)\n\nthis is a simple recursive structure so i think something like below will suffice\nscore += get_score(child)*.1\n\nand recursive beauty will take care of itself\nyou also do not need 'if children:' check\nso does it help\ndef get_score(player):\n score = player.points\n for child in children:\n score += get_score(child)*.1\n return score\n\n", "This can have very different implementations, depending on the way the score must be calculating :\n\nDo you need to propagate the result of each gain, in real time ? In that case , you start from the bottom of the pyramid and give the feedback until the top.\nCan you afford the result to be calculated at the end of the game for everybody ? In that cas you can just set a method on each player and only call the one at the top.\n\nE.G for the second option\nYou used a functional approach. While this is valid, I am more into OO so I'll go this way :\nclass Player(object) :\n\n def __init__(self) :\n self.score = 0;\n self.children = []\n\n def updateScore(self) :\n self.score = self.score + sum(((children.score * 10 / 100) for children in self.children))\n\n\nclass Pyramid(object) :\n\n def __init__(self) :\n self.top_child = Player()\n\n def updateScore(self, player = None) :\n\n if player == None :\n player = self.top_child\n\n for child in player.children :\n self.updateScore(child)\n child.updateScore()\n\nYou may want to use itertools to make it less CPU and memory intensive. \n" ]
[ 2, 0 ]
[]
[]
[ "python", "recursion" ]
stackoverflow_0001072977_python_recursion.txt
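The first answer's fix, made self-contained and runnable; Player here is a hypothetical stand-in for whatever model the question uses, and the loop spells out player.children rather than the bare children of the original:

    class Player(object):
        def __init__(self, points):
            self.points = points
            self.children = []

    def get_score(player):
        score = player.points
        for child in player.children:
            score += get_score(child) * .1   # each level down contributes 10% of its subtotal
        return score

    me, x, y = Player(100), Player(50), Player(30)
    me.children.append(x)   # I invited X
    x.children.append(y)    # X invited Y
    print get_score(me)     # 100 + .1*(50 + .1*30) = 105.3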
Q: Updating one aspect of a Pygame surface I have an application written in python that's basically an etch-a-sketch, you move pixels around with WASD and arrow keys and it leaves a trail. However, I want to add a counter for the amount of pixels on the screen. How do I have the counter update without updating the entire surface and pwning the pixel drawings? A: Use Surface.blit(source, dest, area=None, special_flags = 0): return Rect dest can be a pair of coordinates representing the upper left corner of the source. You probably want to erase the your old counter value, before you blit the new one. For this you can capture the background before you blit your counter value for the first time. Then blit that image everytime before you update the counter value. In addition you should make the background of surface that your blitting transparent. Assuming you have black font on white background, you could use: source.set_colorkey((255,255,255))
Updating one aspect of a Pygame surface
I have an application written in python that's basically an etch-a-sketch, you move pixels around with WASD and arrow keys and it leaves a trail. However, I want to add a counter for the amount of pixels on the screen. How do I have the counter update without updating the entire surface and pwning the pixel drawings?
[ "Use Surface.blit(source, dest, area=None, special_flags = 0): return Rect\ndest can be a pair of coordinates representing the upper left corner of the source.\nYou probably want to erase the your old counter value, before you blit the new one. For this you can capture the background before you blit your counter value for the first time. Then blit that image everytime before you update the counter value. \nIn addition you should make the background of surface that your blitting transparent. Assuming you have black font on white background, you could use:\nsource.set_colorkey((255,255,255))\n\n" ]
[ 1 ]
[]
[]
[ "pygame", "python" ]
stackoverflow_0001072639_pygame_python.txt
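A hedged pygame sketch of the save-background-then-blit cycle the answer describes; the counter position, font size and colours are arbitrary choices:

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((640, 480))
    screen.fill((255, 255, 255))
    font = pygame.font.Font(None, 24)
    counter_rect = pygame.Rect(10, 10, 120, 24)
    # Capture the pixels under the counter once, before the first draw
    saved = screen.subsurface(counter_rect).copy()

    def draw_counter(count):
        screen.blit(saved, counter_rect)   # restore the drawing underneath
        label = font.render('pixels: %d' % count, True, (0, 0, 0))
        screen.blit(label, counter_rect)
        pygame.display.update(counter_rect)   # refresh only this rectangle

    draw_counter(0)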
Q: Form Initialization with ToscaWidgets Question: How do I prefill a CheckBoxTable from ToscaWidgets with values. Background: I've looked everywhere and I can't seem to figure out how to initialize a particular form field with ToscaWidgets. Most form fields seem to respond just fine to initialization, like if I create a form with a single TextField in it when I render the form in the template and pass in fieldValue=x where fieldValue is the name of the TextField and x is some string the TextField will be filled with x. My problem is with all multiple select field, in particular CheckBoxTable. No matter what I pass in it will not initialize the multiple select. Here is an example of what I'm talking about, it's a user edit page with a CheckBoxTable for groups so you can select several or no groups out of a list of several groups fetched from the databse: What I have: My widget is: from tw import forms class UserForm(forms.TableForm): show_errors = True submit_text = "Create User" clientOptions = [(-1, "Select a Client")] groupOptions = [(-1, "Select a Group")] fields = [forms.TextField('name', label_text='User Name', validator=String(not_empty=True), size=40), forms.Spacer(), forms.SingleSelectField('clientID', label_text='Client Name', validator=Int(min=0), options=clientOptions), forms.Spacer(), forms.CheckBoxTable('groups', lable_text='Groups', validator=Set(), options=groupOptions, num_cols=3), forms.Spacer(), forms.PasswordField('password', label_text="Password", validator=String(not_empty=True, min=6), size=40), forms.PasswordField('passwordAgain', label_text="Repeat Password", validator=String(not_empty=True, min=6), size=40), forms.HiddenField('id')] editUserForm = UserForm("createUserForm", action='alterUser', submit_text="Edit User") In my controller I have: result = model.DBSession.query(model.User).filter_by(id=kw['id']).first() tmpl_context.form = editUserForm clientOptions=model.DBSession.query(model.Client.id, model.Client.name) groupOptions=model.DBSession.query(model.Group.id, model.Group.name) formChildArgs = dict(clientID=dict(options=clientOptions), groups=dict(options=groupOptions)) userAttributes=dict(id=result.id, name=result.name, groups=[g.id for g in result.groups], clientID=result.clientID, password=result.password, passwordAgain=result.password) return dict(verb="Edit", modelName = "User", modelAttributes=userAttributes, formChildArgs=formChildArgs, page='editUser') and in my template (Mako) I have: ${tmpl_context.form(modelAttributes, child_args=formChildArgs) | n} What I've tried: In my userAttributs dictionary I've tried: groups=[g.id for g in result.groups] groups=[g.name for g in result.groups] groups=[(g.id, g.name) for g in result.groups] groups=[[g.id, g.name) for g in result.groups] groups=result.groups What I get: The result of all of this code is a User edit form with data pre-filled with the user data except for the CheckBoxTable. The CheckBoxTable has all of the groups in my database displaying and empty, what I need for for them to be displaying but have the groups the user is apart of checked. I thought the code in the model attributes would do this, since that's what it does for every other field, but there must be some fundamental thing I'm missing about CheckBoxTable instantiation. Specs: I'm using Turbogears 2 with ToscaWidgets 0.9.7 forms and Mako for templating. A: set them via the value param. 
import tw.forms f = tw.forms.TableForm(fields=[tw.forms.CheckBoxTable("name",options=(("foo"),("bar")))]) f(value={"name":{"foo":True,"bar":False}}) >>> u'<form xmlns="http://www.w3.org/1999/xhtml" action="" method="post" class="tableform">\n <table border="0" cellspacing="0" cellpadding="2">\n<tr id="name.container" class="even" title="">\n <td class="labelcol">\n <label id="name.label" for="name" class="fieldlabel">Name</label>\n </td>\n <td class="fieldcol">\n <table id="name" class="checkboxtable">\n <tbody>\n <tr>\n <td>\n <input id="name_0" value="foo" name="name" type="checkbox" checked="checked" />\n <label for="name_0">foo</label>\n </td>\n </tr><tr>\n <td>\n <input id="name_1" value="bar" name="name" type="checkbox" />\n <label for="name_1">bar</label>\n </td>\n </tr>\n </tbody>\n</table>\n </td>\n </tr><tr id="submit.container" class="odd" title="">\n <td class="labelcol">\n </td>\n <td class="fieldcol">\n <input type="submit" class="submitbutton" value="Submit" />\n </td>\n </tr>\n </table>\n</form>'
Form Initialization with ToscaWidgets
Question: How do I prefill a CheckBoxTable from ToscaWidgets with values. Background: I've looked everywhere and I can't seem to figure out how to initialize a particular form field with ToscaWidgets. Most form fields seem to respond just fine to initialization, like if I create a form with a single TextField in it when I render the form in the template and pass in fieldValue=x where fieldValue is the name of the TextField and x is some string the TextField will be filled with x. My problem is with all multiple select field, in particular CheckBoxTable. No matter what I pass in it will not initialize the multiple select. Here is an example of what I'm talking about, it's a user edit page with a CheckBoxTable for groups so you can select several or no groups out of a list of several groups fetched from the databse: What I have: My widget is: from tw import forms class UserForm(forms.TableForm): show_errors = True submit_text = "Create User" clientOptions = [(-1, "Select a Client")] groupOptions = [(-1, "Select a Group")] fields = [forms.TextField('name', label_text='User Name', validator=String(not_empty=True), size=40), forms.Spacer(), forms.SingleSelectField('clientID', label_text='Client Name', validator=Int(min=0), options=clientOptions), forms.Spacer(), forms.CheckBoxTable('groups', lable_text='Groups', validator=Set(), options=groupOptions, num_cols=3), forms.Spacer(), forms.PasswordField('password', label_text="Password", validator=String(not_empty=True, min=6), size=40), forms.PasswordField('passwordAgain', label_text="Repeat Password", validator=String(not_empty=True, min=6), size=40), forms.HiddenField('id')] editUserForm = UserForm("createUserForm", action='alterUser', submit_text="Edit User") In my controller I have: result = model.DBSession.query(model.User).filter_by(id=kw['id']).first() tmpl_context.form = editUserForm clientOptions=model.DBSession.query(model.Client.id, model.Client.name) groupOptions=model.DBSession.query(model.Group.id, model.Group.name) formChildArgs = dict(clientID=dict(options=clientOptions), groups=dict(options=groupOptions)) userAttributes=dict(id=result.id, name=result.name, groups=[g.id for g in result.groups], clientID=result.clientID, password=result.password, passwordAgain=result.password) return dict(verb="Edit", modelName = "User", modelAttributes=userAttributes, formChildArgs=formChildArgs, page='editUser') and in my template (Mako) I have: ${tmpl_context.form(modelAttributes, child_args=formChildArgs) | n} What I've tried: In my userAttributs dictionary I've tried: groups=[g.id for g in result.groups] groups=[g.name for g in result.groups] groups=[(g.id, g.name) for g in result.groups] groups=[[g.id, g.name) for g in result.groups] groups=result.groups What I get: The result of all of this code is a User edit form with data pre-filled with the user data except for the CheckBoxTable. The CheckBoxTable has all of the groups in my database displaying and empty, what I need for for them to be displaying but have the groups the user is apart of checked. I thought the code in the model attributes would do this, since that's what it does for every other field, but there must be some fundamental thing I'm missing about CheckBoxTable instantiation. Specs: I'm using Turbogears 2 with ToscaWidgets 0.9.7 forms and Mako for templating.
[ "set them via the value param. \nimport tw.forms\nf = tw.forms.TableForm(fields=[tw.forms.CheckBoxTable(\"name\",options=((\"foo\"),(\"bar\")))]) \nf(value={\"name\":{\"foo\":True,\"bar\":False}})\n>>> u'<form xmlns=\"http://www.w3.org/1999/xhtml\" action=\"\" method=\"post\" class=\"tableform\">\\n <table border=\"0\" cellspacing=\"0\" cellpadding=\"2\">\\n<tr id=\"name.container\" class=\"even\" title=\"\">\\n <td class=\"labelcol\">\\n <label id=\"name.label\" for=\"name\" class=\"fieldlabel\">Name</label>\\n </td>\\n <td class=\"fieldcol\">\\n <table id=\"name\" class=\"checkboxtable\">\\n <tbody>\\n <tr>\\n <td>\\n\n <input id=\"name_0\" value=\"foo\" name=\"name\" type=\"checkbox\" checked=\"checked\" />\\n <label for=\"name_0\">foo</label>\\n </td>\\n </tr><tr>\\n <td>\\n <input id=\"name_1\" value=\"bar\" name=\"name\" type=\"checkbox\" />\\n <label for=\"name_1\">bar</label>\\n </td>\\n </tr>\\n\n</tbody>\\n</table>\\n </td>\\n </tr><tr id=\"submit.container\" class=\"odd\" title=\"\">\\n <td class=\"labelcol\">\\n </td>\\n\n <td class=\"fieldcol\">\\n <input type=\"submit\" class=\"submitbutton\" value=\"Submit\" />\\n </td>\\n </tr>\\n </table>\\n</form>'\n\n" ]
[ 1 ]
[]
[]
[ "forms", "mako", "python", "toscawidgets", "turbogears" ]
stackoverflow_0001071277_forms_mako_python_toscawidgets_turbogears.txt
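A purely hypothetical, untested adaptation of the answer's value= mapping to the question's edit form, assuming CheckBoxTable accepts the option-to-boolean dict shape the answer demonstrates; all model names come from the question:

    all_groups = model.DBSession.query(model.Group).all()
    member_ids = set(g.id for g in result.groups)
    userAttributes['groups'] = dict((g.id, g.id in member_ids) for g in all_groups)
    # then render as before: tmpl_context.form(userAttributes, child_args=formChildArgs)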
Q: How to reduce color palette with PIL I'm not sure how I would go about reducing the color palette of a PIL Image. I would like to reduce an image's palette to the 5 prominent colors found in that image. My overall goal is to do some basic color sampling. A: That's easy, just use the undocumented colors argument: result = image.convert('P', palette=Image.ADAPTIVE, colors=5) I'm using Image.ADAPTIVE to avoid dithering A: I assume you want to do something more sophisticated than posterize. "Sampling" as you say, will take some finesse, as the 5 most common colors in the image are likely to be similar to one another. Maybe take a look at the 5 most separated peaks in a histogram. A: The short answer is to use the Image.quantize method. For more info, see: How do I convert any image to a 4-color paletted image using the Python Imaging Library ?
How to reduce color palette with PIL
I'm not sure how I would go about reducing the color palette of a PIL Image. I would like to reduce an image's palette to the 5 prominent colors found in that image. My overall goal is to do some basic color sampling.
[ "That's easy, just use the undocumented colors argument:\nresult = image.convert('P', palette=Image.ADAPTIVE, colors=5)\n\nI'm using Image.ADAPTIVE to avoid dithering\n", "I assume you want to do something more sophisticated than posterize. \"Sampling\" as you say, will take some finesse, as the 5 most common colors in the image are likely to be similar to one another. Maybe take a look at the 5 most separated peaks in a histogram.\n", "The short answer is to use the Image.quantize method. For more info, see: How do I convert any image to a 4-color paletted image using the Python Imaging Library ?\n" ]
[ 39, 5, 3 ]
[]
[]
[ "colors", "image", "palette", "python", "python_imaging_library" ]
stackoverflow_0001065945_colors_image_palette_python_python_imaging_library.txt
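Building on the accepted answer, a short sketch that also reads the five colours back out; the file name is a placeholder, and getcolors() returns (count, colour) pairs once the palettized image is converted back to RGB:

    import Image   # pre-2010 PIL spelling, as used in the answers

    image = Image.open('photo.jpg')
    reduced = image.convert('P', palette=Image.ADAPTIVE, colors=5)
    for count, rgb in sorted(reduced.convert('RGB').getcolors(), reverse=True):
        print count, rgb   # the 5 prominent colours, most frequent first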
Q: Pyinstaller traceback Traceback (most recent call last): File "<string>", line 137, in <module> File C:\Python26\buildSVG_Resizer\out1.pyz/encodings", line 100, in search_function TypeError: importHook() got an unexpected keyword argument 'level' The imports in my .py file are: import xml.etree.ElementTree as ET import os, stat import tkFileDialog My script parses SVG's (xml) in a directory and then replaces values if they are out of range. This script runs fine through the console. I can post the whole script if that will help. Thanks for anything. A: importHook in iu.py (top level of pyinstaller) does accept a level= named argument, so the message is quite perplexing and suggests a bad installation. What output do you get from cd'ing to pyinstaller's top directory and doing: svn log -r HEAD ? Should currently be r685 | giovannibajo | 2009-06-30 05:19:59 -0700 (Tue, 30 Jun 2009) | 3 lines Preliminar support for creating a bundle on Mac OSX. Yet to be integrated into Makespec.py. If you get something older, svn up to make sure you have the current version, and try packaging your project again (from the start, since, if you've used some defective version, there might have been incorrect intermediate files generated).
Pyinstaller traceback
Traceback (most recent call last): File "<string>", line 137, in <module> File C:\Python26\buildSVG_Resizer\out1.pyz/encodings", line 100, in search_function TypeError: importHook() got an unexpected keyword argument 'level' The imports in my .py file are: import xml.etree.ElementTree as ET import os, stat import tkFileDialog My script parses SVG's (xml) in a directory and then replaces values if they are out of range. This script runs fine through the console. I can post the whole script if that will help. Thanks for anything.
[ "importHook in iu.py (top level of pyinstaller) does accept a level= named argument, so the message is quite perplexing and suggests a bad installation.\nWhat output do you get from cd'ing to pyinstaller's top directory and doing:\nsvn log -r HEAD\n\n? Should currently be \nr685 | giovannibajo | 2009-06-30 05:19:59 -0700 (Tue, 30 Jun 2009) | 3 lines\n\nPreliminar support for creating a bundle on Mac OSX.\nYet to be integrated into Makespec.py.\n\nIf you get something older, svn up to make sure you have the current version, and try packaging your project again (from the start, since, if you've used some defective version, there might have been incorrect intermediate files generated).\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0001074144_python.txt

Q: Learning Python for a .NET developer I have been doing active development in C# for several years now. I primarily build enterprise application and in house frameworks on the .NET stack. I've never had the need to use any other mainstream high level languages besides C# for my tasks, since .NET is the standard platform we use. There are some legacy Python applications that I have been asked to support going forward, I have no exposure to python and dynamic languages in general(although I've done a fair bit of JavaScript). I was hoping to get some guidance/advise to aid in how to go about learning a language like python for the statically typed mind. EDIT: Using IronPython is not an option! A: Foord and Muirhead's IronPython in Action is an amazingly good book, perfectly suitable for teaching Python to .NET folks as well as teaching .NET to Python folks. I may be biased, as I was a tech reviewer and Foord is a friend, but I've had other cases in the past where a friend wrote a book and I tech reviewed it -- and ended up deciding the book was just wrong and publicly saying so (way to lose friends, but, I just can't tell a lie, not where Python is concerned at least!-) Edit: If you're forbidden from moving to IronPython (which would probably support your legacy apps just fine, btw), there are better answers: Mark Pilgrim's Dive into Python is often considered the best Python intro for the experienced developer, and my own Python in a Nutshell has been praised as the fastest way onboard for superstar developers. I am of course biased in favor of these -- Mark is a colleague, and my wife was a key tech editor for his book (and my own as well), and obviously I'm biased in favor of my own book too;-). But then, I do tend to be biased towards a lot of the best Python books, as I've either had a hand in their editing, or am friends with their authors, or both;-). A: Hardest thing I was confronted to in using python coming from Java was to properly wrap my head around the Duck Typing thing... At first I thought it was just plain horrible and just dressed the hairs on the back on my neck. Next is the scope by convention, but that one is pretty easy. And the importance of white spaces gave me a few bumps. However once you ease yourself in the language's concision and speed of development you learn to appreciate it a lot more. After a while I thought it was the best thing that ever happened to me !! :-) here are a few things that helped me a lot : First I started with this book and got the basics of the language and for everyday use the Python Quick Reference Card was very helpful. Also the console will be your best ally to try quick things and solidify your learning. For IDEs, coming from the eclipse world PyDev was a natural choice for me, but there were many more to choose from. But if you are more familiar to the Visual Studio environment the Python Tools for Visual Studio is pretty darn good too. Good luck, Hopefully you'll find Python as much fun as I did. A: There is a big initial hurdle of getting comfortable with dynamic typing. The first step is when you look at Python-code and realize that variables aren't defined anywhere, you just create them out of thin air, which feels like jumping over a cliff. There is a brief moment before your hang glider catches the air properly. And then it's going to take time before you trust your newfound dynamic wings, and you probably only can get their by doing aerobatics with them. 
Learn how python handles references, have fun with monkey-patching methods, duck type various animals. Try to learn some ugly tricks. And although you can't use IronPython for this, there is no reason you can't use it to learn Python. A: You're going to experience quite a bit of culture-shock going from C# to the wild duck-typed outback of Python. Lack of types and intellisense can be pretty daunting. Good thing that you're experienced in JavaScript. Also know that indent-sensitive block rules of Python can be very confusing for the inexperience (usually you either love it or hate it :-) Apart from that the biggest challenge moving from one language to another is usually the framework. Getting to know all the classes and functions Just Takes Time unfortunately. A: For an experienced developer learning Python, Dive Into Python is a very good book. Wesley Chun's Core Python Programming book takes a more "ground up" approach, which may be a little slow for an experienced developer. But it allowed for very easy comparisons of the basic syntax and operators compared to other languages. Wesley's writing style is very easy to read, and his example projects are non-trivial enough to actually be interesting. The Python Cookbook is an excellent reference for learning to program in a 'Pythonic' way. This book contains hundreds (?) of examples of how to solve common everyday problems with Python. In general, the "Cookbook" series will expose you to the idioms of the language faster than any other book. Whenever I need to learn a new programming language, I start using it for all the 'daily maintenance' tasks that come up - all the little things that I would normally solve with a shell script or with common unix tools - I start to use the new language to solve those problems. Since you have .NET experience, IronPython is probably a good way to leverage that knowledge while learning Python. Even if you only install IronPython in a personal sandbox...and use it for all your daily busy work coding tasks - that can be a great way to learn the syntax and idioms of Python. A: I would recommend using IronPython to help you learn. It is an implementation of Python on the .NET framework. So you can use/learn Python with access to the .NET class library. A good place to start is by downloading IronPython and looking at IronPython in Action, which is a very good book looking at Python on the .NET framework. EDIT: Since IronPython is not an option, disregard this answer. Thanks though. A: The book Pro IronPython is worth reading too if you have time. A: I would recommend just to read a book about it. A book for beginners. It'll contain many stuff you already know but you won't miss anything regarding using a dynamic language. I can point you to Dive into Python, which seems to be very friendly, or The Python Tutorial which seems to be very to the point (that's how I learned).
Learning Python for a .NET developer
I have been doing active development in C# for several years now. I primarily build enterprise applications and in-house frameworks on the .NET stack. I've never had the need to use any other mainstream high-level languages besides C# for my tasks, since .NET is the standard platform we use. There are some legacy Python applications that I have been asked to support going forward; I have no exposure to Python and dynamic languages in general (although I've done a fair bit of JavaScript). I was hoping to get some guidance/advice on how to go about learning a language like Python with a statically typed mind. EDIT: Using IronPython is not an option!
[ "Foord and Muirhead's IronPython in Action is an amazingly good book, perfectly suitable for teaching Python to .NET folks as well as teaching .NET to Python folks. I may be biased, as I was a tech reviewer and Foord is a friend, but I've had other cases in the past where a friend wrote a book and I tech reviewed it -- and ended up deciding the book was just wrong and publicly saying so (way to lose friends, but, I just can't tell a lie, not where Python is concerned at least!-)\nEdit: If you're forbidden from moving to IronPython (which would probably support your legacy apps just fine, btw), there are better answers: Mark Pilgrim's Dive into Python is often considered the best Python intro for the experienced developer, and my own Python in a Nutshell has been praised as the fastest way onboard for superstar developers. I am of course biased in favor of these -- Mark is a colleague, and my wife was a key tech editor for his book (and my own as well), and obviously I'm biased in favor of my own book too;-). But then, I do tend to be biased towards a lot of the best Python books, as I've either had a hand in their editing, or am friends with their authors, or both;-).\n", "Hardest thing I was confronted to in using python coming from Java was to properly wrap my head around the Duck Typing thing... At first I thought it was just plain horrible and just dressed the hairs on the back on my neck.\nNext is the scope by convention, but that one is pretty easy. And the importance of white spaces gave me a few bumps.\nHowever once you ease yourself in the language's concision and speed of development you learn to appreciate it a lot more. After a while I thought it was the best thing that ever happened to me !! :-)\nhere are a few things that helped me a lot :\nFirst I started with this book and got the basics of the language and for everyday use the Python Quick Reference Card was very helpful. Also the console will be your best ally to try quick things and solidify your learning. \nFor IDEs, coming from the eclipse world PyDev was a natural choice for me, but there were many more to choose from. But if you are more familiar to the Visual Studio environment the Python Tools for Visual Studio is pretty darn good too.\nGood luck, Hopefully you'll find Python as much fun as I did.\n", "There is a big initial hurdle of getting comfortable with dynamic typing. The first step is when you look at Python-code and realize that variables aren't defined anywhere, you just create them out of thin air, which feels like jumping over a cliff. There is a brief moment before your hang glider catches the air properly. \nAnd then it's going to take time before you trust your newfound dynamic wings, and you probably only can get their by doing aerobatics with them. Learn how python handles references, have fun with monkey-patching methods, duck type various animals. Try to learn some ugly tricks.\nAnd although you can't use IronPython for this, there is no reason you can't use it to learn Python.\n", "You're going to experience quite a bit of culture-shock going from C# to the wild duck-typed outback of Python. Lack of types and intellisense can be pretty daunting. Good thing that you're experienced in JavaScript. Also know that indent-sensitive block rules of Python can be very confusing for the inexperience (usually you either love it or hate it :-)\nApart from that the biggest challenge moving from one language to another is usually the framework. 
Getting to know all the classes and functions Just Takes Time unfortunately.\n", "For an experienced developer learning Python, Dive Into Python is a very good book. \nWesley Chun's Core Python Programming book takes a more \"ground up\" approach, which may be a little slow for an experienced developer. But it allowed for very easy comparisons of the basic syntax and operators compared to other languages. Wesley's writing style is very easy to read, and his example projects are non-trivial enough to actually be interesting. \nThe Python Cookbook is an excellent reference for learning to program in a 'Pythonic' way. This book contains hundreds (?) of examples of how to solve common everyday problems with Python. In general, the \"Cookbook\" series will expose you to the idioms of the language faster than any other book. \nWhenever I need to learn a new programming language, I start using it for all the 'daily maintenance' tasks that come up - all the little things that I would normally solve with a shell script or with common unix tools - I start to use the new language to solve those problems. Since you have .NET experience, IronPython is probably a good way to leverage that knowledge while learning Python. Even if you only install IronPython in a personal sandbox...and use it for all your daily busy work coding tasks - that can be a great way to learn the syntax and idioms of Python. \n", "I would recommend using IronPython to help you learn. It is an implementation of Python on the .NET framework. So you can use/learn Python with access to the .NET class library.\nA good place to start is by downloading IronPython and looking at IronPython in Action, which is a very good book looking at Python on the .NET framework.\nEDIT: Since IronPython is not an option, disregard this answer. Thanks though.\n", "The book Pro IronPython is worth reading too if you have time.\n", "I would recommend just to read a book about it. A book for beginners. It'll contain many stuff you already know but you won't miss anything regarding using a dynamic language. I can point you to Dive into Python, which seems to be very friendly, or The Python Tutorial which seems to be very to the point (that's how I learned).\n" ]
[ 22, 8, 4, 3, 3, 2, 2, 2 ]
[]
[]
[ ".net", "c#", "dynamic_languages", "python" ]
stackoverflow_0001072530_.net_c#_dynamic_languages_python.txt
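A tiny illustration of the duck typing and monkey-patching the answers keep mentioning, for readers coming from static typing; the names are invented:

    class Duck(object):
        def speak(self):
            return 'quack'

    class Person(object):
        def speak(self):
            return 'hello'

    def converse(thing):
        print thing.speak()   # no interface declaration: anything with .speak() works

    converse(Duck())
    converse(Person())

    Duck.speak = lambda self: 'QUACK!'   # monkey-patch: rebind the method at runtime
    converse(Duck())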
Q: Does anyone know of source code for a web based study group? I'm looking for source code for a web based study group. I'd prefer something in Python or C#. I have searched google but I'm finding mostly existing study groups on particular topics and not software to host an online study group. Can anyone help out? Edit: Ah, I was unfamiliar with the buzzwords "Learning Management System" or "Virtual Learning Environment". Moodle is indeed the type of thing I was looking for, even if it is written in the horrible php "language". Thanks. A: I am not quite sure what you mean by "host an online study group". If it is about people collaborate to learn something, I think moodle is what you are looking for. Here is the wikipedia lemma for moodle: Moodle is a free and open source e-learning software platform, also known as a Course Management System, Learning Management System, or Virtual Learning Environment. It has a significant user base with 49,256 registered sites with 28,177,443 users in 2,571,855 courses (as of February, 2009). Here is how the moodle people describe it themselves: Moodle is an Open Source Course Management System (CMS), also known as a Learning Management System (LMS) or a Virtual Learning Environment (VLE). It has become very popular among educators around the world as a tool for creating online dynamic web sites for their students. It is not written in Python or C#, but PHP and released under GPL. You can install it on your webserver or use free moodle hosting like e-socrates. A: There seems to be work on a django based course MS - http://peach3.nl/trac . However, the source is not available yet :( I am interested in developping such an app (free open-source) in django (which I am learning), so if anybody else wants to help email me (a web designer would help a lot). Rado
Does anyone know of source code for a web based study group?
I'm looking for source code for a web based study group. I'd prefer something in Python or C#. I have searched google but I'm finding mostly existing study groups on particular topics and not software to host an online study group. Can anyone help out? Edit: Ah, I was unfamiliar with the buzzwords "Learning Management System" or "Virtual Learning Environment". Moodle is indeed the type of thing I was looking for, even if it is written in the horrible php "language". Thanks.
[ "I am not quite sure what you mean by \"host an online study group\".\nIf it is about people collaborate to learn something, I think moodle is what you are looking for.\nHere is the wikipedia lemma for moodle:\n\nMoodle is a free and open source\n e-learning software platform, also\n known as a Course Management System,\n Learning Management System, or Virtual\n Learning Environment. It has a\n significant user base with 49,256\n registered sites with 28,177,443 users\n in 2,571,855 courses (as of February,\n 2009).\n\nHere is how the moodle people describe it themselves:\n\nMoodle is an Open Source Course\n Management System (CMS), also known as\n a Learning Management System (LMS) or\n a Virtual Learning Environment (VLE).\n It has become very popular among\n educators around the world as a tool\n for creating online dynamic web sites\n for their students.\n\nIt is not written in Python or C#, but PHP and released under GPL.\nYou can install it on your webserver or use free moodle hosting like e-socrates.\n", "There seems to be work on a django based course MS - http://peach3.nl/trac . However, the source is not available yet :(\nI am interested in developping such an app (free open-source) in django (which I am learning), so if anybody else wants to help email me (a web designer would help a lot).\nRado\n" ]
[ 4, 0 ]
[]
[]
[ "c#", "python" ]
stackoverflow_0001071433_c#_python.txt
Q: How do I find the memory address of a Python / Django model object? An ordinary object, I can use o.__repr__() to see something like '<__main__.A object at 0x9d78fec>' But, say, a Django User just returns <User:bob> How can I see the actual address of one of these, or compare whether two such model-objects are actually the same object or not? A: id() will return the identity of the object (generally implemented as the address), which is guaranteed unique for two objects which exist at the same point in time. However the obvious way to check whether two objects are identical is to use the operator explicitely designed for this: is ie. if obj1 is obj2: # Objects are identical. A: You can get the id of any object: a = object() print hex(id(a)) Although for CPython, this is the address of the object, this is not guaranteed I believe (and may be different on other implementations like iron python). Same id means same object, though. A: Use the id() function for that. Or equivalently just compare "obj1 is obj2".
How do I find the memory address of a Python / Django model object?
An ordinary object, I can use o.__repr__() to see something like '<__main__.A object at 0x9d78fec>' But, say, a Django User just returns <User:bob> How can I see the actual address of one of these, or compare whether two such model-objects are actually the same object or not?
[ "id() will return the identity of the object (generally implemented as the address), which is guaranteed unique for two objects which exist at the same point in time. However the obvious way to check whether two objects are identical is to use the operator explicitely designed for this: is\nie.\n if obj1 is obj2: \n # Objects are identical.\n\n", "You can get the id of any object:\na = object()\nprint hex(id(a))\n\nAlthough for CPython, this is the address of the object, this is not guaranteed I believe (and may be different on other implementations like iron python). Same id means same object, though.\n", "Use the id() function for that. Or equivalently just compare \"obj1 is obj2\".\n" ]
[ 7, 2, 1 ]
[]
[]
[ "django", "django_models", "python" ]
stackoverflow_0001075106_django_django_models_python.txt
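A short demo combining both answers; the pk value is hypothetical, and while CPython's id() happens to be the address, other implementations only guarantee uniqueness:

    from django.contrib.auth.models import User

    a = User.objects.get(pk=1)
    b = User.objects.get(pk=1)     # a second query builds a second instance
    print hex(id(a)), hex(id(b))   # two different addresses
    print a is b                   # False: same row, distinct objects
    print a == b                   # True: Django models compare by primary key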
Q: Generating an image thumbnail that is <10KB and did not lose proportions Notice how for every image that google indexes it has a small thumbnail. These thumbnails are: Less than 10KB in size. The proportions of the width / height are the same as the ones in the original image. I would like to write a function (in python) that would take an image and create a thumbnail with these to properties. EDIT: So there are 3 answers right now and all of them are half right. I need a function that not only resizes the image proportionally, but also makes sure the file size is smaller than 10KB. How do I do that? A: In this short post, Mike Fletcher and the effbot show (and discuss detailed variation) an excellent approach for the task you want to do. Edit: as for the 10K requirement, it's hard (to say the least;-) to predict how well an image will compress, depending on the image's format, since today's compression algorithms are so subtle. If you want your thumbnail to be just about as large (in pixels) as feasible while respecting a <10K requirement, you may have to use a "trial and error" approach making successively more refined guesses about the scaling factor, until you reach an acceptable result. For example, here's a "binary search" approach to getting the correct size (there may well be better strategies!), with ample print statements &c to explain what's going on...: import Image import cStringIO import math import os import stat # try no more than 10 times, then give up MAX_TRIES = 10 def getThumbnail(filename, max_bytes=(10*1024)): '''Get a thumbnail image of filename, <max_bytes''' original_size = os.stat(filename)[stat.ST_SIZE] print "Original file size: %.1f KB" % (original_size/1024.) image = Image.open(filename) image.load() print "Original image size: %dx%d pixels" % image.size min_bytes = int(0.9 * max_bytes) largest_side = max(image.size) smallest_side = 16 for attempt in range(MAX_TRIES): try_side = (largest_side + smallest_side) / 2 print "Attempt #%d of %d" % (attempt+1, MAX_TRIES) print "Side must be within [%d:%d], now trying %d" % ( smallest_side, largest_side, try_side) thumb = image.copy() thumb.thumbnail((try_side,try_side), Image.ANTIALIAS) afile = cStringIO.StringIO() thumb.save(afile, "PNG") resulting_size = len(afile.getvalue()) afile.close() print "Reduced file size: %.1f KB" % (resulting_size/1024.) print "Reduced image size: %dx%d pixels" % thumb.size if min_bytes <= resulting_size <= max_bytes: print "Success!" return thumb elif resulting_size > max_bytes: print "Too large (>%d), reducing more" % max_bytes largest_side = try_side else: print "Too small (<%d), reducing less" % min_bytes smallest_side = try_side print "too many attempts, returning what I've got!" return thumb def main(): thumb = getThumbnail("pyth.png") print "Reduced image size: %dx%d pixels" % thumb.size print "Saving to thumb.png" thumb.save("thumb.png") thumb_size = os.stat("thumb.png")[stat.ST_SIZE] print "Reduced file size: %.1f KB" % (thumb_size/1024.) print "Done, bye!" if __name__ == '__main__': main() A: Did you read the PIL documentation? There is an image.thumbnail method. A: Use PIL, see sample code here to resize keeping aspect ratio How do I resize an image using PIL and maintain its aspect ratio? see how to do similar thing on a dir Resize images in directory So above links describe how to resize images using PIL, now coming to your question of max size of 10KB, that can be achieved easily e.g. 
Suppose the size required is 100x100 and we use JPEG compression: 100% JPEG quality takes about 9 bits per pixel (see http://en.wikipedia.org/wiki/JPEG), which means the size of a 100x100 image would be 100x100x9/(1024x8) = 11KB, so at quality=100 you are almost achieving your goal. If you still want only 10KB you can set quality=90, and in general you can pass quality as a param to the resize function and reduce quality by 10% if the image size is above 10KB, but I think that is not needed; at 90% quality all JPEG images would be < 10KB. Also note that even without compression the image size is just 30KB for an RGB image, and if you reduce the size to 60x60 pixels the image size would be only 10KB without any compression, i.e. you can use BMP images, and if you want a smaller size but lossless compression you can choose PNG. So in conclusion your target of 10KB is very easy to achieve.
Generating an image thumbnail that is <10KB and does not lose proportions
Notice how for every image that Google indexes it has a small thumbnail. These thumbnails are: Less than 10KB in size. The proportions of the width / height are the same as the ones in the original image. I would like to write a function (in python) that would take an image and create a thumbnail with these two properties. EDIT: So there are 3 answers right now and all of them are half right. I need a function that not only resizes the image proportionally, but also makes sure the file size is smaller than 10KB. How do I do that?
[ "In this short post, Mike Fletcher and the effbot show (and discuss detailed variation) an excellent approach for the task you want to do.\nEdit: as for the 10K requirement, it's hard (to say the least;-) to predict how well an image will compress, depending on the image's format, since today's compression algorithms are so subtle. If you want your thumbnail to be just about as large (in pixels) as feasible while respecting a <10K requirement, you may have to use a \"trial and error\" approach making successively more refined guesses about the scaling factor, until you reach an acceptable result.\nFor example, here's a \"binary search\" approach to getting the correct size (there may well be better strategies!), with ample print statements &c to explain what's going on...:\nimport Image\nimport cStringIO\nimport math\nimport os\nimport stat\n\n# try no more than 10 times, then give up\nMAX_TRIES = 10\n\ndef getThumbnail(filename, max_bytes=(10*1024)):\n '''Get a thumbnail image of filename, <max_bytes'''\n original_size = os.stat(filename)[stat.ST_SIZE]\n print \"Original file size: %.1f KB\" % (original_size/1024.)\n image = Image.open(filename)\n image.load()\n print \"Original image size: %dx%d pixels\" % image.size\n min_bytes = int(0.9 * max_bytes)\n largest_side = max(image.size)\n smallest_side = 16\n for attempt in range(MAX_TRIES):\n try_side = (largest_side + smallest_side) / 2\n print \"Attempt #%d of %d\" % (attempt+1, MAX_TRIES)\n print \"Side must be within [%d:%d], now trying %d\" % (\n smallest_side, largest_side, try_side)\n thumb = image.copy()\n thumb.thumbnail((try_side,try_side), Image.ANTIALIAS)\n afile = cStringIO.StringIO()\n thumb.save(afile, \"PNG\")\n resulting_size = len(afile.getvalue())\n afile.close()\n print \"Reduced file size: %.1f KB\" % (resulting_size/1024.)\n print \"Reduced image size: %dx%d pixels\" % thumb.size\n if min_bytes <= resulting_size <= max_bytes:\n print \"Success!\"\n return thumb\n elif resulting_size > max_bytes:\n print \"Too large (>%d), reducing more\" % max_bytes\n largest_side = try_side\n else:\n print \"Too small (<%d), reducing less\" % min_bytes\n smallest_side = try_side\n print \"too many attempts, returning what I've got!\"\n return thumb\n\ndef main():\n thumb = getThumbnail(\"pyth.png\")\n print \"Reduced image size: %dx%d pixels\" % thumb.size\n print \"Saving to thumb.png\"\n thumb.save(\"thumb.png\")\n thumb_size = os.stat(\"thumb.png\")[stat.ST_SIZE]\n print \"Reduced file size: %.1f KB\" % (thumb_size/1024.)\n print \"Done, bye!\"\n\nif __name__ == '__main__':\n main()\n\n", "Did you read the PIL documentation? 
There is an image.thumbnail method.\n", "Use PIL; see sample code here to resize keeping the aspect ratio:\nHow do I resize an image using PIL and maintain its aspect ratio?\nand see how to do a similar thing on a directory:\nResize images in directory\nSo the above links describe how to resize images using PIL; now coming to your question of a max size of 10KB, that can be achieved easily, e.g.\nSuppose the size required is 100x100 and we use JPEG compression: 100% JPEG quality takes about 9 bits per pixel (see http://en.wikipedia.org/wiki/JPEG), which means the size of a 100x100 image would be 100x100x9/(1024x8) = 11KB, so at quality=100 you are almost achieving your goal. If you still want only 10KB you can set quality=90, and in general you can pass quality as a param to the resize function and reduce quality by 10% if the image size is above 10KB, but I think that is not needed; at 90% quality all JPEG images would be < 10KB.\nAlso note that even without compression the image size is just 30KB for an RGB image, and if you reduce the size to 60x60 pixels the image size would be only 10KB without any compression, i.e. you can use BMP images, and if you want a smaller size but lossless compression you can choose PNG.\nSo in conclusion your target of 10KB is very easy to achieve.\n" ]
[ 3, 1, 1 ]
[]
[]
[ "image_processing", "python", "python_imaging_library" ]
stackoverflow_0001075037_image_processing_python_python_imaging_library.txt
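The quality-walking idea from the last answer above combines naturally with PIL's thumbnail method. The following is a minimal sketch, not the accepted answer's code; the 100x100 box, the helper name, and the 90-down-to-20 quality ladder are illustrative assumptions.

    import Image       # PIL, as used in the answers above
    import cStringIO

    def jpeg_thumb_under_limit(filename, box=(100, 100), max_bytes=10*1024):
        # Shrink proportionally first, then lower JPEG quality until it fits.
        image = Image.open(filename)
        image.thumbnail(box, Image.ANTIALIAS)   # keeps width/height proportions
        data = None
        for quality in range(90, 10, -10):
            afile = cStringIO.StringIO()
            image.convert('RGB').save(afile, 'JPEG', quality=quality)
            data = afile.getvalue()
            if len(data) <= max_bytes:
                break
        return data   # last attempt; may still exceed the limit for large boxes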
Q: Splitting a large XML file in Python I'm looking to split a huge XML file into smaller bits. I'd like to scan through the file looking for a specific tag, then grab all info between the opening and closing tags, then save that into a file, then continue on through the rest of the file. My issue is trying to find a clean way to note the start and end of the tags, so that I can grab the text inside as I scan through the file with "for line in f". I'd rather not use sentinel variables. Is there a pythonic way to get this done? The file is too big to read into memory. A: There are two common ways to handle XML data. One is called DOM, which stands for Document Object Model. This style of XML parsing is probably what you have seen when looking at documentation, because it reads the entire XML into memory to create the object model. The second is called SAX, which is a streaming method. The parser starts reading the XML and sends signals to your code about certain events, e.g. when a new start tag is found. So SAX is clearly what you need for your situation. Sax parsers can be found in the python library under xml.sax and xml.parsers.expat. A: You might consider using the ElementTree iterparse function for this situation. A: I have had success with the cElementTree.iterparse method in order to do a similar task. I had a large xml doc with repeated 'entries' with tag 'resFrame' and I wanted to filter out entries for a specific id. Here is the code that I used for it: source document had this structure <snapDoc> <bucket>....</bucket> <bucket>....</bucket> <bucket>....</bucket> ... <resFrame><id>234234</id>.....</resFrame> <frame><id>344234</id>.....</frame> <resFrame>...</resFrame> <frame>...</frame> </snapDoc> I used the following script to create a smaller doc that had the same structure, bucket entries and only resFrame entries with a specific id. #!/usr/bin/env python2.6 import xml.etree.cElementTree as cElementTree start = '''<?xml version="1.0" encoding="UTF-8"?> <snapDoc>''' def main(): print start context = cElementTree.iterparse('snap.xml', events=("start", "end")) context = iter(context) event, root = context.next() # get the root element of the XML doc for event, elem in context: if event == "end": if elem.tag == 'bucket': # i want to write out all <bucket> entries elem.tail = None print cElementTree.tostring( elem ) if elem.tag == 'resFrame': if elem.find("id").text == ":4:39644:482:-1:1": # i only want to write out resFrame entries with this id elem.tail = None print cElementTree.tostring( elem ) if elem.tag in ['bucket', 'frame', 'resFrame']: root.clear() # when done parsing a section clear the tree to save memory print "</snapDoc>" main() A: How serendipitous! Will Larson just made a good post about Handling Very Large CSV and XML File in Python. The main takeaways seem to be to use the xml.sax module, as Van mentioned, and to make some macro-functions to abstract away the details of the low-level SAX API. A: This is an old, but very good article from Uche Ogbuji's also very good Python & XML column. It covers your exact question and uses the standard lib's sax module like the other answer has suggested. Decomposition, Process, Recomposition
Splitting a large XML file in Python
I'm looking to split a huge XML file into smaller bits. I'd like to scan through the file looking for a specific tag, then grab all info between the opening and closing tags, then save that into a file, then continue on through the rest of the file. My issue is trying to find a clean way to note the start and end of the tags, so that I can grab the text inside as I scan through the file with "for line in f". I'd rather not use sentinel variables. Is there a pythonic way to get this done? The file is too big to read into memory.
[ "There are two common ways to handle XML data.\nOne is called DOM, which stands for Document Object Model. This style of XML parsing is probably what you have seen when looking at documentation, because it reads the entire XML into memory to create the object model.\nThe second is called SAX, which is a streaming method. The parser starts reading the XML and sends signals to your code about certain events, e.g. when a new start tag is found.\nSo SAX is clearly what you need for your situation. Sax parsers can be found in the python library under xml.sax and xml.parsers.expat.\n", "You might consider using the ElementTree iterparse function for this situation.\n", "I have had success with the cElementTree.iterparse method in order to do a similar task.\nI had a large xml doc with repeated 'entries' with tag 'resFrame' and I wanted to filter out entries for a specific id. Here is the code that I used for it:\nsource document had this structure\n<snapDoc>\n <bucket>....</bucket>\n <bucket>....</bucket>\n <bucket>....</bucket>\n ...\n <resFrame><id>234234</id>.....</resFrame>\n <frame><id>344234</id>.....</frame>\n <resFrame>...</resFrame>\n <frame>...</frame>\n</snapDoc>\n\nI used the following script to create a smaller doc that had the same structure, bucket entries and only resFrame entries with a specific id.\n#!/usr/bin/env python2.6\n\nimport xml.etree.cElementTree as cElementTree\nstart = '''<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<snapDoc>'''\n\ndef main():\n print start\n context = cElementTree.iterparse('snap.xml', events=(\"start\", \"end\"))\n context = iter(context)\n event, root = context.next() # get the root element of the XML doc\n\n for event, elem in context:\n if event == \"end\":\n if elem.tag == 'bucket': # i want to write out all <bucket> entries\n elem.tail = None \n print cElementTree.tostring( elem )\n if elem.tag == 'resFrame':\n if elem.find(\"id\").text == \":4:39644:482:-1:1\": # i only want to write out resFrame entries with this id\n elem.tail = None\n print cElementTree.tostring( elem )\n if elem.tag in ['bucket', 'frame', 'resFrame']:\n root.clear() # when done parsing a section clear the tree to safe memory\n print \"</snapDoc>\"\n\nmain()\n\n", "How serendipitous! Will Larson just made a good post about Handling Very Large CSV and XML File in Python.\nThe main takeaways seem to be to use the xml.sax module, as Van mentioned, and to make some macro-functions to abstract away the details of the low-level SAX API.\n", "This is an old, but very good article from Uche Ogbuji's also very good Python & XMl column. It covers your exact question and uses the standard lib's sax module like the other answer has suggested. Decomposition, Process, Recomposition\n" ]
[ 9, 6, 6, 2, 0 ]
[]
[]
[ "python", "xml" ]
stackoverflow_0000476949_python_xml.txt
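The first answer above recommends SAX but shows no code. A minimal sketch of a streaming handler follows; the tag name 'record' and the file name 'huge.xml' are placeholders, not from the question.

    import xml.sax

    class TagGrabber(xml.sax.ContentHandler):
        # Accumulates the text of every <record> element as the file streams by.
        def __init__(self, tag):
            xml.sax.ContentHandler.__init__(self)
            self.tag = tag
            self.inside = False
            self.chunks = []
            self.found = []
        def startElement(self, name, attrs):
            if name == self.tag:
                self.inside = True
                self.chunks = []
        def characters(self, content):
            if self.inside:
                self.chunks.append(content)
        def endElement(self, name):
            if name == self.tag:
                self.inside = False
                # write to a file here instead of keeping everything in memory
                self.found.append(''.join(self.chunks))

    handler = TagGrabber('record')
    xml.sax.parse('huge.xml', handler)   # streams; never loads the whole file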
Q: Does python have hooks into EXT3 We have several cron jobs that ftp proxy logs to a centralized server. These files can be rather large and take some time to transfer. Part of the requirement of this project is to provide a logging mechanism in which we log the success or failure of these transfers. This is simple enough. My question is, is there a way to check if a file is currently being written to? My first solution was to just check the file size twice within a given timeframe and compare the two. But a co-worker said that there may be a way to hook into the EXT3 file system via python and check the attributes to see if the file is currently being appended to. My Google-Fu came up empty. Is there a module for EXT3 or something else that would allow me to check the state of a file? The server is running Fedora Core 9 with EXT3 file system. A: no need for ext3-specific hooks; just check lsof, or more exactly, /proc/<pid>/fd/* and /proc/<pid>/fdinfo/* (that's where lsof gets its info, AFAICT). There you can check if the file is open, if it's writeable, and the 'cursor' position. That's not the whole picture; but any more is done in process space by stdlib on the writing process, as most writes are buffered and the kernel only sees bigger chunks of data, so any 'ext3-aware' monitor wouldn't get that either. A: There are no ext3 hooks to check what you'd want directly. I suppose you could dig through the source code of the fuser Linux command, replicate the part that finds which process owns a file, and watch that resource. When no one has the file open any longer, it's done transferring. Another approach: Your cron jobs should signal that they're finished. We have our cron jobs that transport files just write an empty filename.finished after it's transferred the filename. Another approach is to transfer them to a temporary filename, e.g. filename.part, and then rename it to filename. Renaming is atomic. In both cases you check repeatedly for the presence of filename or filename.finished
Does python have hooks into EXT3
We have several cron jobs that ftp proxy logs to a centralized server. These files can be rather large and take some time to transfer. Part of the requirement of this project is to provide a logging mechanism in which we log the success or failure of these transfers. This is simple enough. My question is, is there a way to check if a file is currently being written to? My first solution was to just check the file size twice within a given timeframe and compare the two. But a co-worker said that there may be a way to hook into the EXT3 file system via python and check the attributes to see if the file is currently being appended to. My Google-Fu came up empty. Is there a module for EXT3 or something else that would allow me to check the state of a file? The server is running Fedora Core 9 with EXT3 file system.
[ "no need for ext3-specific hooks; just check lsof, or more exactly, /proc/<pid>/fd/* and /proc/<pid>/fdinfo/* (that's where lsof gets it's info, AFAICT). There you can check if the file is open, if it's writeable, and the 'cursor' position.\nThat's not the whole picture; but any more is done in processpace by stdlib on the writing process, as most writes are buffered and the kernel only sees bigger chunks of data, so any 'ext3-aware' monitor wouldn't get that either.\n", "There's no ext3 hooks to check what you'd want directly.\n\nI suppose you could dig through the source code of Fuser linux command, replicate the part that finds which process owns a file, and watch that resource. When noone longer has the file opened, it's done transferring.\n\nAnother approach:\n\nYour cron jobs should tell that they're finished.\n\nWe have our cron jobs that transport files just write an empty filename.finished after it's transferred the filename. Another approach is to transfer them to a temporary filename, e.g. filename.part and then rename it to filename Renaming is atomic. In both cases you check repeatedly until the presence of filename or filename.finished\n" ]
[ 7, 1 ]
[]
[]
[ "ext3", "linux", "python" ]
stackoverflow_0001075391_ext3_linux_python.txt
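A small sketch of the /proc/<pid>/fd idea from the first answer above. It assumes a Linux box and enough permissions to read other processes' fd directories (typically root); the function name is illustrative.

    import os

    def is_open_by_any_process(path):
        # True if any process has a file descriptor pointing at path.
        target = os.path.realpath(path)
        for pid in filter(str.isdigit, os.listdir('/proc')):
            fd_dir = '/proc/%s/fd' % pid
            try:
                for fd in os.listdir(fd_dir):
                    if os.path.realpath(os.path.join(fd_dir, fd)) == target:
                        return True
            except OSError:
                continue   # process exited, or we lack permission
        return False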
Q: Simplifying Data with a for loop (Python) I was trying to simplify the code: header = [] header.append(header1) header.append(header2) header.append(header3) header.append(header4) header.append(header5) header.append(header6) where: header1 = str(input.headerOut1) header2 = str(input.headerOut2) header3 = str(input.headerOut3) header4 = str(input.headerOut4) header5 = str(input.headerOut5) header6 = str(input.headerOut6) I had wanted to use a for loop, like: headerList = [] for i in range(6): headerList.append(header+i) however, python won't recognize that header+i represents the string header1. Is there any way to simplify this code or get the for loop to work? Thank you so much!
Simplifying Data with a for loop (Python)
I was trying to simplify the code: header = [] header.append(header1) header.append(header2) header.append(header3) header.append(header4) header.append(header5) header.append(header6) where: header1 = str(input.headerOut1) header2 = str(input.headerOut2) header3 = str(input.headerOut3) header4 = str(input.headerOut4) header5 = str(input.headerOut5) header6 = str(input.headerOut6) I had wanted to use a for loop, like: headerList = [] for i in range(6): headerList.append(header+i) however, python won't recognize that header+i represents the string header1. Is there any way to simplify this code or get the for loop to work? Thank you so much!
[ "You should really structure your data as a list or a dictionary, like this:\ninput.headerOut[1]\ninput.headerOut[2]\n# etc.\n\nwhich would make this a lot easier, and more Pythonic. But you can do what you want using getattr:\nheaderList = []\nfor i in range(1, 7):\n header = str(getattr(input, 'headerOut%d' % i))\n headerList.append(header)\n\n", "header = [str(getattr(input, \"headerOut%d\" % x)) for x in range(1,7)]\n\n", "Put the headers in an array and loop through it.\n", "You can use locals to get the the local scope as a dict:\nheaderList = []\nfor i in xrange(1, 7):\n headerList.append(locals()['header%s' % (i,)])\n\nIf possible, though, you should just use the input variable directly, as some of the other answers suggested.\n" ]
[ 9, 5, 2, 2 ]
[]
[]
[ "for_loop", "loops", "python", "simplify", "string" ]
stackoverflow_0001075631_for_loop_loops_python_simplify_string.txt
Q: Using the AND and NOT Operator in Python Here is my custom class that I have that represents a triangle. I'm trying to write code that checks to see if self.a, self.b, and self.c are greater than 0, which would mean that I have Angle, Angle, Angle. Below you will see the code that checks for A and B, however when I use just self.a != 0 then it works fine. I believe I'm not using & correctly. Any ideas? Here is how I am calling it: print myTri.detType() class Triangle: # Angle A To Angle C Connects Side F # Angle C to Angle B Connects Side D # Angle B to Angle A Connects Side E def __init__(self, a, b, c, d, e, f): self.a = a self.b = b self.c = c self.d = d self.e = e self.f = f def detType(self): #Triangle Type AAA if self.a != 0 & self.b != 0: return self.a #If self.a > 10: #return AAA #Triangle Type AAS #elif self.a = 0: #return AAS #Triangle Type ASA #Triangle Type SAS #Triangle Type SSS #else: #return unknown A: You should write : if (self.a != 0) and (self.b != 0) : "&" is the bit wise operator and does not suit for boolean operations. The equivalent of "&&" is "and" in Python. A shorter way to check what you want is to use the "in" operator : if 0 not in (self.a, self.b) : You can check if anything is part of a an iterable with "in", it works for : Tuples. I.E : "foo" in ("foo", 1, c, etc) will return true Lists. I.E : "foo" in ["foo", 1, c, etc] will return true Strings. I.E : "a" in "ago" will return true Dict. I.E : "foo" in {"foo" : "bar"} will return true As an answer to the comments : Yes, using "in" is slower since you are creating an Tuple object, but really performances are not an issue here, plus readability matters a lot in Python. For the triangle check, it's easier to read : 0 not in (self.a, self.b, self.c) Than (self.a != 0) and (self.b != 0) and (self.c != 0) It's easier to refactor too. Of course, in this example, it really is not that important, it's very simple snippet. But this style leads to a Pythonic code, which leads to a happier programmer (and losing weight, improving sex life, etc.) on big programs. A: Use the keyword and, not & because & is a bit operator. Be careful with this... just so you know, in Java and C++, the & operator is ALSO a bit operator. The correct way to do a boolean comparison in those languages is &&. Similarly | is a bit operator, and || is a boolean operator. In Python and and or are used for boolean comparisons. A: It's called and and or in Python.
Using the AND and NOT Operator in Python
Here is my custom class that I have that represents a triangle. I'm trying to write code that checks to see if self.a, self.b, and self.c are greater than 0, which would mean that I have Angle, Angle, Angle. Below you will see the code that checks for A and B, however when I use just self.a != 0 then it works fine. I believe I'm not using & correctly. Any ideas? Here is how I am calling it: print myTri.detType() class Triangle: # Angle A To Angle C Connects Side F # Angle C to Angle B Connects Side D # Angle B to Angle A Connects Side E def __init__(self, a, b, c, d, e, f): self.a = a self.b = b self.c = c self.d = d self.e = e self.f = f def detType(self): #Triangle Type AAA if self.a != 0 & self.b != 0: return self.a #If self.a > 10: #return AAA #Triangle Type AAS #elif self.a = 0: #return AAS #Triangle Type ASA #Triangle Type SAS #Triangle Type SSS #else: #return unknown
[ "You should write :\nif (self.a != 0) and (self.b != 0) :\n\n\"&\" is the bit wise operator and does not suit for boolean operations. The equivalent of \"&&\" is \"and\" in Python.\nA shorter way to check what you want is to use the \"in\" operator :\nif 0 not in (self.a, self.b) :\n\nYou can check if anything is part of a an iterable with \"in\", it works for :\n\nTuples. I.E : \"foo\" in (\"foo\", 1, c, etc) will return true\nLists. I.E : \"foo\" in [\"foo\", 1, c, etc] will return true\nStrings. I.E : \"a\" in \"ago\" will return true\nDict. I.E : \"foo\" in {\"foo\" : \"bar\"} will return true\n\nAs an answer to the comments :\nYes, using \"in\" is slower since you are creating an Tuple object, but really performances are not an issue here, plus readability matters a lot in Python.\nFor the triangle check, it's easier to read :\n0 not in (self.a, self.b, self.c)\n\nThan \n(self.a != 0) and (self.b != 0) and (self.c != 0) \n\nIt's easier to refactor too.\nOf course, in this example, it really is not that important, it's very simple snippet. But this style leads to a Pythonic code, which leads to a happier programmer (and losing weight, improving sex life, etc.) on big programs.\n", "Use the keyword and, not & because & is a bit operator.\nBe careful with this... just so you know, in Java and C++, the & operator is ALSO a bit operator. The correct way to do a boolean comparison in those languages is &&. Similarly | is a bit operator, and || is a boolean operator. In Python and and or are used for boolean comparisons.\n", "It's called and and or in Python.\n" ]
[ 130, 23, 13 ]
[]
[]
[ "operators", "python" ]
stackoverflow_0001075652_operators_python.txt
Q: Writing a website in Python I'm pretty proficient in PHP, but want to try something new. I'm also know a bit of Python, enough to do the basics of the basics, but haven't used in a web design type situation. I've just written this, which works: #!/usr/bin/python def main(): print "Content-type: text/html" print print "<html><head>" print "<title>Hello World from Python</title>" print "</head><body>" print "Hello World!" print "</body></html>" if __name__ == "__main__": main() Thing is, that this seems pretty cumbersome. Without using something huge like django, what's the best way to write scripts that can process get and post? A: Your question was about basic CGI scripting, looking at your example, but it seems like everyone has chosen to answer it with "use my favorite framework". Let's try a different approach. If you're looking for a direct replacement for what you wrote above (ie. CGI scripting), then you're probably looking for the cgi module. It's a part of the Python standard library. Complimentary functionality is available in urllib and urllib2. You might also be interested in BaseHTTPServer and SimpleHTTPServer, also part of the standard library. Getting into more interesting territory, wsgiref gives you the basics of a WSGI interface, at which point you probably want to start thinking about more "frameworky" (is that a word?) things like web.py, Django, Pylons, CherryPy, etc, as others have mentioned. A: As far as full frameworks go I believe Django is relatively small. If you really want lightweight, though, check out web.py, CherryPy, Pylons and web2py. I think the crowd favorite from the above is Pylons, but I am a Django man so I can't say much else. For more on lightweight Python frameworks, check out this question. A: There are a couple of web frameworks available in python, that will relieve you from most of the work Django Pylons (and the new TurboGears, based on it). Web2py CherryPy (and the old TurboGears, based on it) I do not feel Django as "big" as you say; however, I think that Pylons and CherryPy may be a better answer to your question. CherryPy seems simpler,. but seems also a bit "passé", while Pylons is under active development. For Pylons, there is also an interesting Pylons book, available online. A: In Python, the way to produce a website is to use a framework. Most of the popular (and actively maintained/supported) frameworks have already been mentioned. In general, I do not view Djano or Turbogears as "huge", I view them as "complete." Each will let you build a database backed, dynamic website. The preference for one over the other is more about style than it is about features. Zope on the other hand, does feel "big". Zope is also "enterprise" class in terms of the features that are included out of the box. One very nice thing is that you can use the ZODB (Zope Object Database) without using the rest of Zope. It would certainly help if we knew what kinds of websites you were interested in developing, as that might help to narrow the suggestions. A: In web2py the previous code would be in controller default.py: def main(): return dict(message="Hello World") in view default/main.html: <html><head> <title>Hello World from Python</title> </head><body> {{=message}} </body></html> nothing else, no installation, no configuration, you can edit the two files above directly on the web via the admin interface. web2py is based on wsgi but works also with cgi, mod_python, mod_proxy and fastcgi if mod_wsgi is not available. 
A: I really love django and it doesn't seem to me that is huge. It is very powerful but not huge. If you want to start playing with http and python, the simplest thing is the BaseHttpServer provided in the standard library. see http://docs.python.org/library/basehttpserver.html for details A: I agree with Paolo - Django is pretty small and the way to go - but if you are not down with that I would add to TurboGears to the list A: If you are looking for a framework take a look at this list: Python Web Frameworks If you need small script(-s) or one time job script, might be plain CGI module is going to be enough - CGI Scripts and cgi module itself. I would recommend you to stick to some framework if you want to create something more then static pages and simple forms. Django is I believe most popular and most supported. A: What is "huge" is a matter of taste, but Django is a "full stack" framework, that includes everything from an ORM, to templates to well, loads of things. So it's not small (although smaller than Grok and Zope3, other full-stack python web frameworks). But there are also plenty of really small and minimalistic web frameworks, that do nothing than provide a framework for the web part. Many have been mentioned above. To the list I must add BFG and Bobo. Both utterly minimal, but still useful and flexible. http://bfg.repoze.org/ http://bobo.digicool.com/
Writing a website in Python
I'm pretty proficient in PHP, but want to try something new. I also know a bit of Python, enough to do the basics of the basics, but I haven't used it in a web design type situation. I've just written this, which works: #!/usr/bin/python def main(): print "Content-type: text/html" print print "<html><head>" print "<title>Hello World from Python</title>" print "</head><body>" print "Hello World!" print "</body></html>" if __name__ == "__main__": main() Thing is, this seems pretty cumbersome. Without using something huge like django, what's the best way to write scripts that can process GET and POST?
[ "Your question was about basic CGI scripting, looking at your example, but it seems like everyone has chosen to answer it with \"use my favorite framework\". Let's try a different approach.\nIf you're looking for a direct replacement for what you wrote above (ie. CGI scripting), then you're probably looking for the cgi module. It's a part of the Python standard library. Complimentary functionality is available in urllib and urllib2. You might also be interested in BaseHTTPServer and SimpleHTTPServer, also part of the standard library.\nGetting into more interesting territory, wsgiref gives you the basics of a WSGI interface, at which point you probably want to start thinking about more \"frameworky\" (is that a word?) things like web.py, Django, Pylons, CherryPy, etc, as others have mentioned.\n", "As far as full frameworks go I believe Django is relatively small.\nIf you really want lightweight, though, check out web.py, CherryPy, Pylons and web2py.\nI think the crowd favorite from the above is Pylons, but I am a Django man so I can't say much else.\nFor more on lightweight Python frameworks, check out this question.\n", "There are a couple of web frameworks available in python, that will relieve you from most of the work\n\nDjango\nPylons (and the new TurboGears, based on it).\nWeb2py\nCherryPy (and the old TurboGears, based on it)\n\nI do not feel Django as \"big\" as you say; however, I think that Pylons and CherryPy may be a better answer to your question. CherryPy seems simpler,. but seems also a bit \"passé\", while Pylons is under active development.\nFor Pylons, there is also an interesting Pylons book, available online.\n", "In Python, the way to produce a website is to use a framework. Most of the popular (and actively maintained/supported) frameworks have already been mentioned. \nIn general, I do not view Djano or Turbogears as \"huge\", I view them as \"complete.\" Each will let you build a database backed, dynamic website. The preference for one over the other is more about style than it is about features. \nZope on the other hand, does feel \"big\". Zope is also \"enterprise\" class in terms of the features that are included out of the box. One very nice thing is that you can use the ZODB (Zope Object Database) without using the rest of Zope. \nIt would certainly help if we knew what kinds of websites you were interested in developing, as that might help to narrow the suggestions. \n", "In web2py the previous code would be\nin controller default.py:\ndef main():\n return dict(message=\"Hello World\")\n\nin view default/main.html:\n<html><head>\n<title>Hello World from Python</title>\n</head><body>\n{{=message}}\n</body></html>\n\nnothing else, no installation, no configuration, you can edit the two files above directly on the web via the admin interface. web2py is based on wsgi but works also with cgi, mod_python, mod_proxy and fastcgi if mod_wsgi is not available.\n", "I really love django and it doesn't seem to me that is huge. It is very powerful but not huge.\nIf you want to start playing with http and python, the simplest thing is the BaseHttpServer provided in the standard library. 
see http://docs.python.org/library/basehttpserver.html for details\n", "I agree with Paolo - Django is pretty small and the way to go - but if you are not down with that I would add TurboGears to the list\n", "If you are looking for a framework take a look at this list: Python Web Frameworks\nIf you need a small script or a one-time job script, the plain CGI module might be enough - CGI Scripts and the cgi module itself.\nI would recommend you stick to some framework if you want to create something more than static pages and simple forms. Django is, I believe, the most popular and best supported.\n", "What is \"huge\" is a matter of taste, but Django is a \"full stack\" framework, that includes everything from an ORM, to templates to well, loads of things. So it's not small (although smaller than Grok and Zope3, other full-stack python web frameworks).\nBut there are also plenty of really small and minimalistic web frameworks, that do nothing more than provide a framework for the web part. Many have been mentioned above. To the list I must add BFG and Bobo. Both utterly minimal, but still useful and flexible.\nhttp://bfg.repoze.org/\nhttp://bobo.digicool.com/\n" ]
[ 27, 12, 8, 2, 2, 0, 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001070999_python.txt
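The accepted answer above points at the cgi module without code. A minimal sketch of GET/POST handling with it follows; the 'name' parameter is a made-up example, not from the question.

    #!/usr/bin/python
    import cgi

    def main():
        form = cgi.FieldStorage()            # parses both GET and POST data
        name = form.getfirst('name', 'World')
        print "Content-type: text/html"
        print
        print "<html><body>Hello %s!</body></html>" % cgi.escape(name)

    if __name__ == "__main__":
        main()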
Q: How should I learn to use the Windows API with Python? I have very little experience building software for Windows, and zero experience using the Windows API, but I'm reasonably familiar with Python. How should I go about learning to use the Windows API with Python? A: Honestly, no. The Windows API is an 800 pound monster covered with hair. Charlie Petzold's 15 pound book was the canonical reference once upon a time. That said, the Python for Windows folks have some good material. Microsoft has the whole API online, including some sample code and such. And the Wikipedia article is a good overview. A: About 4 years ago I set out to truly understand the Windows API. I was coding in C# at the time, but I felt like the framework was abstracting me too much from the API (which it was). So I switched to Delphi (C++ or C would have also been good choices). In my opinion, it is important that you start working in a language that creates native code and talks directly to the Windows API and makes you care about buffers, pointers, structures, and real constructs that Windows uses directly. C# is a great language, but not the best choice for learning the Windows API. Next, buy Mark Russinovich's book "Windows Internals" Amazon link. This is the 5th edition. The 6th edition is coming out April 2012 and adds info about Server 2008 R2 and Windows 7. And now, for the most important (and best) resource for learning Win32 API: Mark Russinovich's Windows Operating Systems Internals Curriculum which is offered for free. It is designed to be used by an instructor to teach students. I went through it and it is awesome. Full of examples, history, and detailed explanations. In my opinion, this is an ideal way to learn the Windows API. Mark Russinovich is a Microsoft Technical Fellow (there are only 14 at MS including the creator of C#). He used to own Winternals until he sold it to MS, he has a PhD in Computer Engineering from Carnegie Mellon, he has been a frequent presenter at Microsoft conferences (even before he worked for them), and he is crazy smart. His presentations are one of the primary reasons I attend Microsoft TechEd every year. A: I strongly recommend theForger's Win32 API Tutorial. Its a C tutorial, but he pretty much holds your hand and shows you the basics. Its also pretty short, which is nice in a tutorial. A: Avoid tutorials (written by kids, for kids, newbie level) Read the Petzold, Richter, Pietrek, Russinovich and Adv. Win32 api newsgroup news://comp.os.ms-windows.programmer.win32 A: Learning Win32 API is 5% of initial understanding of concepts and the patterns used and 95% of RTFM. For those initial 5% the Petzold book is really good but I suspect that there ought to be some online tutorials which you can find in google as good as I can to find. Really, once you get the hang of it it's really straight forward do to be able to do anything. Once you get there it would probably be time to move on to something better like QT and never touch Win32 API ever again. Nowadays no one really uses bare Win32 API for anything more than trivial due to the sheer overhead it involves and the extreme lack of comfort. A: Since you've asked about Python, why do you need the Win32 API ? That's used for writing small, fast C/C++ programs. If your tool is Python, just download wxPython which runs wonderfully on Windows and produces sleek native GUIs with 1% the code and the effort. 
A: " > free resource to learn how to use the windows API (preferably with python) You may refer Python Programming on Win32 by Mark Hammond and Andy Robinson along with pywin32. If you are not interested to use pywin32, you can use ctypes — A foreign function library for Python and the Forger's Win32 API Programming Tutorial. Refer Example Code : Shared Memory with Mutex (pywin32 and ctypes) A: As Charlie says : "this Api is an 800 pound monster covered with hair". Consider using the express version (free) of visual studio for vb or c# (http://www.microsoft.com/express/) along with the msdn library (http://msdn.microsoft.com/en-us/library/default.aspx). this will give you as much exposure to the api as you want. A: Umm...a lot of people have put the cart before the horse on this one. The question I have for you is: why do you want to learn Win32? If you want to learn it so you can build Windows user interfaces, perhaps consider wxPython instead. If you only plan on calling into non-visual Win32 APIs then the Petzold book may not be the best. There are tools like SWIG that make calling libraries such as Win32 easier from languages like Python. A: Once upon a time I read over some Win32 API tutorials at www.relisoft.com They are an anti-MFC and pro-Win32 API shop and have a manifesto of sorts explaining practical reasons for why. They also have a general C++ tutorial. 99% of the time I like their programming style for what it's worth. A: All you need is completely free on MSDN.COM. Win32 is easily programed using C/C++, C#, and Visual Basic. I recommend C/C++. YOu can download the Visual Studio Express editions here. All the documentation (not an abbreviated form) is on the web here. Note that Win32 is often loosely used to mean "all the programming interface for Windows". More cannonically, it is the base native set of APIs for user mode applications. There a similar set of APIs for drivers and kernel components. You can learn about that here. Microsoft has many other windows programming interfaces as well: The Common Language Run time and .NET frame work is a manged layer on top of windows. There are many other API families as well such as DX9 and DX10 (good link to game programming here). A: You have to start with these two books. Petzold book: Great for learning messages and message pumps, GDI and User32 stuff. Richter book: The windows base services, viz. Processes, memory, threads and dlls
How should I learn to use the Windows API with Python?
I have very little experience building software for Windows, and zero experience using the Windows API, but I'm reasonably familiar with Python. How should I go about learning to use the Windows API with Python?
[ "Honestly, no. The Windows API is an 800 pound monster covered with hair. Charlie Petzold's 15 pound book was the canonical reference once upon a time.\nThat said, the Python for Windows folks have some good material. Microsoft has the whole API online, including some sample code and such. And the Wikipedia article is a good overview.\n", "About 4 years ago I set out to truly understand the Windows API. I was coding in C# at the time, but I felt like the framework was abstracting me too much from the API (which it was). So I switched to Delphi (C++ or C would have also been good choices). \nIn my opinion, it is important that you start working in a language that creates native code and talks directly to the Windows API and makes you care about buffers, pointers, structures, and real constructs that Windows uses directly. C# is a great language, but not the best choice for learning the Windows API.\nNext, buy Mark Russinovich's book \"Windows Internals\" Amazon link. This is the 5th edition. The 6th edition is coming out April 2012 and adds info about Server 2008 R2 and Windows 7. \nAnd now, for the most important (and best) resource for learning Win32 API:\nMark Russinovich's Windows Operating Systems Internals Curriculum which is offered for free. \nIt is designed to be used by an instructor to teach students. I went through it and it is awesome. Full of examples, history, and detailed explanations. In my opinion, this is an ideal way to learn the Windows API.\nMark Russinovich is a Microsoft Technical Fellow (there are only 14 at MS including the creator of C#). He used to own Winternals until he sold it to MS, he has a PhD in Computer Engineering from Carnegie Mellon, he has been a frequent presenter at Microsoft conferences (even before he worked for them), and he is crazy smart. His presentations are one of the primary reasons I attend Microsoft TechEd every year.\n", "I strongly recommend theForger's Win32 API Tutorial. Its a C tutorial, but he pretty much holds your hand and shows you the basics. Its also pretty short, which is nice in a tutorial.\n", "Avoid tutorials (written by kids, for kids, newbie level)\nRead the Petzold, Richter, Pietrek, Russinovich and Adv. Win32 api newsgroup \nnews://comp.os.ms-windows.programmer.win32\n", "Learning Win32 API is 5% of initial understanding of concepts and the patterns used and 95% of RTFM. \nFor those initial 5% the Petzold book is really good but I suspect that there ought to be some online tutorials which you can find in google as good as I can to find.\nReally, once you get the hang of it it's really straight forward do to be able to do anything. Once you get there it would probably be time to move on to something better like QT and never touch Win32 API ever again. Nowadays no one really uses bare Win32 API for anything more than trivial due to the sheer overhead it involves and the extreme lack of comfort.\n", "Since you've asked about Python, why do you need the Win32 API ? That's used for writing small, fast C/C++ programs. 
If your tool is Python, just download wxPython which runs wonderfully on Windows and produces sleek native GUIs with 1% the code and the effort.\n", "\" > free resource to learn how to use the windows API (preferably with python)\n\nYou may refer to Python Programming on Win32 by Mark Hammond and Andy Robinson along with pywin32.\nIf you are not interested in using pywin32, you can use ctypes — A foreign function library for Python and the Forger's Win32 API Programming Tutorial.\nRefer to Example Code : Shared Memory with Mutex (pywin32 and ctypes)\n\n", "As Charlie says: \"this API is an 800 pound monster covered with hair\". \nConsider using the express version (free) of visual studio for vb or c# (http://www.microsoft.com/express/) along with the msdn library (http://msdn.microsoft.com/en-us/library/default.aspx). this will give you as much exposure to the api as you want.\n", "Umm...a lot of people have put the cart before the horse on this one. The question I have for you is: why do you want to learn Win32? \nIf you want to learn it so you can build Windows user interfaces, perhaps consider wxPython instead. If you only plan on calling into non-visual Win32 APIs then the Petzold book may not be the best. There are tools like SWIG that make calling libraries such as Win32 easier from languages like Python.\n", "Once upon a time I read over some Win32 API tutorials at www.relisoft.com\nThey are an anti-MFC and pro-Win32 API shop and have a manifesto of sorts explaining practical reasons for why.\nThey also have a general C++ tutorial. 99% of the time I like their programming style for what it's worth.\n", "All you need is completely free on MSDN.COM. Win32 is easily programmed using C/C++, C#, and Visual Basic. I recommend C/C++. You can download the Visual Studio Express editions here.\nAll the documentation (not an abbreviated form) is on the web here. \nNote that Win32 is often loosely used to mean \"all the programming interface for Windows\". More canonically, it is the base native set of APIs for user mode applications. There is a similar set of APIs for drivers and kernel components. You can learn about that here.\nMicrosoft has many other windows programming interfaces as well: The Common Language Runtime and .NET Framework is a managed layer on top of windows. There are many other API families as well such as DX9 and DX10 (good link to game programming here). \n", "You have to start with these two books.\nPetzold book: Great for learning messages and message pumps, GDI and User32 stuff.\nRichter book: The windows base services, viz. Processes, memory, threads and dlls\n" ]
[ 42, 25, 7, 7, 3, 2, 2, 1, 1, 0, 0, 0 ]
[]
[]
[ "python", "winapi" ]
stackoverflow_0000342729_python_winapi.txt
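One answer above mentions ctypes for calling the Windows API from Python but shows no code. A minimal Windows-only sketch, using two well-known calls; the message text and caption are arbitrary examples.

    import ctypes

    user32 = ctypes.windll.user32
    kernel32 = ctypes.windll.kernel32

    # MessageBoxA(hWnd, lpText, lpCaption, uType) -- a classic first Win32 call
    user32.MessageBoxA(None, "Hello from the Windows API", "Demo", 0)

    # GetTickCount(): milliseconds since the system started
    print kernel32.GetTickCount()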
Q: I want to parse a PAC file to get some Proxy information... just not in Explorer Follow on from this question: I am working on a Python 2.4 app which will run on Windows XP. It needs to be able to download various resources from HTTP and it's got to work in all of our office locations which use "PAC" files to automatically select http proxies. Thanks to somebody who responded to my previous question I managed to find a technique to execute Javascript from within Python, it's really easy: js = win32com.client.Dispatch('MSScriptControl.ScriptControl') js.Language = 'JavaScript' js.AddCode('function foo(a,b) {return a;}' ) result = js.Run( "foo", "hello" ) But here comes the problem: The PAC file references a number of functions such as shExpMatch and isPlainHostName - these are presumably provided for free by Microsoft Internet Explorer. If I simply run the PAC file in Windows Scripting using the recipe above it will fail because these functions are missing. So what I need is a way to set up the environment exactly the same way that IE does. The obvious way is to somehow import the functions in the same way that IE does. I found that Firefox contains a single JS file which includes these functions, I suppose I could try to run Firefox's JS on Microsoft's scripting-host, but that sounds like a risky idea. What I really want is to make the javascript environment 100% Microsoft standard without anything that can make my life harder. Any suggestions? PS. You can see an example of a PAC file on Wikipedia. Unfortunately I cannot publish ours... that would violate company security. A: Are you able to download the PAC file from the remote host? I am asking because usually urllib in python uses static information for the proxy, retrieved from the registry. However, if you are able to get that file, then I think you should be able to get another file as well - and then your idea of using the FF version could kick in.
I want to parse a PAC file to get some Proxy information... just not in Explorer
Follow on from this question: I am working on a Python 2.4 app which will run on Windows XP. It needs to be able to download various resources from HTTP and it's got to work in all of our office locations which use "PAC" files to automatically select http proxies. Thanks to somebody who responded to my previous question I managed to find a technique to execute Javascript from within Python, it's really easy: js = win32com.client.Dispatch('MSScriptControl.ScriptControl') js.Language = 'JavaScript' js.AddCode('function foo(a,b) {return a;}' ) result = js.Run( "foo", "hello" ) But here comes the problem: The PAC file references a number of functions such as shExpMatch and isPlainHostName - these are presumably provided for free by Microsoft Internet Explorer. If I simply run the PAC file in Windows Scripting using the recipe above it will fail because these functions are missing. So what I need is a way to set up the environment exactly the same way that IE does. The obvious way is to somehow import the functions in the same way that IE does. I found that Firefox contains a single JS file which includes these functions, I suppose I could try to run Firefox's JS on Microsoft's scripting-host, but that sounds like a risky idea. What I really want is to make the javascript environment 100% Microsoft standard without anything that can make my life harder. Any suggestions? PS. You can see an example of a PAC file on Wikipedia. Unfortunately I cannot publish ours... that would violate company security.
[ "Are you able to download the PAC file from the remote host? I am asking because usually urllib in python uses static information for the proxy, retrieved from the registry.\nHowever, if you are able to get that file, then I think you could be able to get also another file - and then your idea of using FF version could kick in.\n" ]
[ 1 ]
[]
[]
[ "internet_explorer", "javascript", "python", "windows" ]
stackoverflow_0001075899_internet_explorer_javascript_python_windows.txt
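One way to attack the question's missing-helpers problem is to register simplified reimplementations of the PAC helper functions with the same ScriptControl before loading the PAC source. The shims below are assumptions — cut-down versions, not IE's exact code — and pac_source stands for the text of the downloaded PAC file; FindProxyForURL is the standard PAC entry point.

    import win32com.client

    js = win32com.client.Dispatch('MSScriptControl.ScriptControl')
    js.Language = 'JavaScript'
    helpers = r'''
    function isPlainHostName(host) { return host.indexOf(".") == -1; }
    function dnsDomainIs(host, domain) {
        return host.length >= domain.length &&
               host.substring(host.length - domain.length) == domain;
    }
    function shExpMatch(url, pat) {   // shell-style glob match
        pat = pat.replace(/\./g, "\\.").replace(/\*/g, ".*").replace(/\?/g, ".");
        return new RegExp("^" + pat + "$").test(url);
    }
    '''
    js.AddCode(helpers)              # register the shims first
    js.AddCode(pac_source)           # then the PAC file itself
    proxy = js.Run('FindProxyForURL', 'http://example.com/', 'example.com')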
Q: IPv6 decoder for pcapy/impacket I use the pcapy/impacket library to decode network packets in Python. It has an IP decoder which knows about the syntax of IPv4 packets but apparently no IPv6 decoder. Does anyone have one? In private correspondence, the Impacket maintainers say it may be better to start with Scapy. A: Scapy, recommended by the Impacket maintainers, has no IPv6 decoding at this time. But there is an unofficial extension to do so. With this extension, it works: for packet in traffic: if packet.type == ETH_P_IPV6 or packet.type == ETH_P_IP: ip = packet.payload if (ip.version == 4 and ip.proto == UDP_PROTO) or \ (ip.version == 6 and ip.nh == UDP_PROTO): if ip.dport == DNS_PORT and ip.dst == ns: all_queries = all_queries + 1 but it is awfully slow for large traces. So, I may have to try Impacket nevertheless or even go back to C. A: You may want to look into dpkt, yet another packet parsing/building library. It was written by the author of pypcap, a different libpcap wrapper, but it shouldn't be too difficult to get it working with pcapy to see if it's faster for your purposes than Scapy.
IPv6 decoder for pcapy/impacket
I use the pcapy/impacket library to decode network packets in Python. It has an IP decoder which knows about the syntax of IPv4 packets but apparently no IPv6 decoder. Does anyone have one? In private correspondence, the Impacket maintainers say it may be better to start with Scapy.
[ "Scapy, recommended by the Impacket maintainers, has no IPv6 decoding at this time. But there is an unofficial extension to do so.\nWith this extension, it works:\nfor packet in traffic:\n if packet.type == ETH_P_IPV6 or packet.type == ETH_P_IP:\n ip = packet.payload\n if (ip.version == 4 and ip.proto == UDP_PROTO) or \\\n (ip.version == 6 and ip.nh == UDP_PROTO):\n if ip.dport == DNS_PORT and ip.dst == ns:\n all_queries = all_queries + 1\n\nbut it is awfully slow for large traces. So, I may have to try Impacket nevertheless or even go back to C.\n", "You may want to look into dpkt, yet another packet parsing/building library. It was written by the author of pypcap, a different libpcap wrapper, but it shouldn't be too difficult to get it working with pcapy to see if it's faster for your purposes than Scapy.\n" ]
[ 2, 1 ]
[ "I have never used pcapy before, but I do have used libpcap in C projects. As the pcapy page states it is not statically linked to libcap, so you can upgrade to a newer one with IPv6 support.\nAccording to libpcap changelog, version 1.0 released on October 27, 2008, has default IPv6 support (it is supposed to have IPv6 from much longer but it is now default compiled with that option), so you should be able to capture IPv6 traffic with this version. Latest pcapy release is from March 27, 2007, so at most it should include a 0.9.8 version of libcap released on September 10, 2007.\nI don't know if that would be enough for you to be able to capture IPv6 traffic since pcapy API would probably requiere some changes to support it, and that's on pcapy developer's roof.\nUpdate: Apparently pylibpcap, a python wrapper to libpcap, has newer releases than pcapy, so newer libpcap features should be better supported.\nMore information about PCAP (libpcap) in general here.\n", "You can use a really useful one-file library from google from \nhttp://code.google.com/p/ipaddr-py/\nthat supports IPv4, IPv6, ip validation, netmask and prefix managements, etc. It's well coded and documented.\nGood luck\nEmilio\n" ]
[ -1, -1 ]
[ "impacket", "ipv6", "packet_capture", "pcapy", "python" ]
stackoverflow_0000369764_impacket_ipv6_packet_capture_pcapy_python.txt
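A minimal sketch of the dpkt suggestion from the second answer above, counting IPv6 UDP packets in a capture file. Module names follow dpkt's layout (dpkt.pcap, dpkt.ip6, dpkt.udp); the file name and function name are placeholders.

    import dpkt

    def count_ipv6_udp(pcap_path):
        counter = 0
        f = open(pcap_path, 'rb')
        try:
            for ts, buf in dpkt.pcap.Reader(f):
                eth = dpkt.ethernet.Ethernet(buf)   # decodes nested payloads too
                ip = eth.data
                if isinstance(ip, dpkt.ip6.IP6) and isinstance(ip.data, dpkt.udp.UDP):
                    counter += 1
        finally:
            f.close()
        return counter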
Q: One liner to replicate lines coming from a file (Python) I have a regular list comprehension to load all lines of a file in a list f = open('file') try: self._raw = [L.rstrip('\n') for L in f] finally: f.close() Now I'd like to insert each line into the list 'n' times on the fly. How do I do that inside the list comprehension? Thanks A: self._raw = [L.rstrip('\n') for L in f for _ in xrange(n)]
One liner to replicate lines coming from a file (Python)
I have a regular list comprehension to load all lines of a file in a list f = open('file') try: self._raw = [L.rstrip('\n') for L in f] finally: f.close() Now I'd like to insert each line into the list 'n' times on the fly. How do I do that inside the list comprehension? Thanks
[ "self._raw = [L.rstrip('\\n') for L in f for _ in xrange(n)]\n\n" ]
[ 6 ]
[]
[]
[ "list_comprehension", "python" ]
stackoverflow_0001076872_list_comprehension_python.txt
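An alternative to the accepted one-liner above, using itertools instead of a double loop; it assumes Python 2.6+ for the with statement and chain.from_iterable, and n is the repeat count from the question.

    from itertools import chain, repeat

    with open('file') as f:
        raw = list(chain.from_iterable(
            repeat(line.rstrip('\n'), n) for line in f))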
Q: How do I detect whether sys.stdout is attached to terminal or not? Is there a way to detect whether sys.stdout is attached to a console terminal or not? For example, I want to be able to detect if foo.py is run via: $ python foo.py # user types this on console OR $ python foo.py > output.txt # redirection $ python foo.py | grep .... # pipe The reason I ask this question is that I want to make sure that my progressbar display happens only in the former case (real console). A: This can be detected using isatty: if sys.stdout.isatty(): # You're running in a real terminal else: # You're being piped or redirected To demonstrate this in a shell: python -c "import sys; print(sys.stdout.isatty())" should write True python -c "import sys; print(sys.stdout.isatty())" | cat should write False
How do I detect whether sys.stdout is attached to terminal or not?
Is there a way to detect whether sys.stdout is attached to a console terminal or not? For example, I want to be able to detect if foo.py is run via: $ python foo.py # user types this on console OR $ python foo.py > output.txt # redirection $ python foo.py | grep .... # pipe The reason I ask this question is that I want to make sure that my progressbar display happens only in the former case (real console).
[ "This can be detected using isatty:\nif sys.stdout.isatty():\n # You're running in a real terminal\nelse:\n # You're being piped or redirected\n\nTo demonstrate this in a shell:\n\npython -c \"import sys; print(sys.stdout.isatty())\" should write True\npython -c \"import sys; print(sys.stdout.isatty())\" | cat should write False\n\n" ]
[ 262 ]
[]
[]
[ "console", "python", "terminal" ]
stackoverflow_0001077113_console_python_terminal.txt
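Applying the isatty() answer above to the question's motivating case — a progress bar that only animates on a real console. A minimal sketch; the display format is arbitrary.

    import sys

    def show_progress(done, total):
        if sys.stdout.isatty():                      # real terminal: animate in place
            sys.stdout.write('\r%d/%d' % (done, total))
            sys.stdout.flush()
        # when piped or redirected, stay silent so the output stays clean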
Q: Python Unix time doesn't work in Javascript In Python, using calendar.timegm(), I get a 10 digit result for a unix timestamp. When I put this into Javascript's setTime() function, it comes up with a date in 1970. It evidently needs a unix timestamp that is 13 digits long. How can this happen? Are they both counting from the same date? How can I use the same unix timestamp between these two languages? In Python: In [60]: parseddate.utctimetuple() Out[60]: (2009, 7, 17, 1, 21, 0, 4, 198, 0) In [61]: calendar.timegm(parseddate.utctimetuple()) Out[61]: 1247793660 In Firebug: >>> var d = new Date(); d.setTime(1247793660); d.toUTCString() "Thu, 15 Jan 1970 10:36:55 GMT" A: timegm is based on Unix's gmtime() method, which returns seconds since Jan 1, 1970. Javascript's setTime() method is milliseconds since that date. You'll need to multiply your seconds by 1000 to convert to the format expected by Javascript. A: Here are a couple of python methods I use to convert to and from javascript/datetime. def to_datetime(js_timestamp): return datetime.datetime.fromtimestamp(js_timestamp/1000) def js_timestamp_from_datetime(dt): return 1000 * time.mktime(dt.timetuple()) In javascript you would do: var dt = new Date(); dt.setTime(js_timestamp); A: Are you possibly mixing up seconds-since-1970 with milliseconds-since-1970? A: The JavaScript Date constructor works with milliseconds; you should multiply the Python unix time by 1000. var unixTimestampSeg = 1247793660; var date = new Date(unixTimestampSeg*1000);
Python Unix time doesn't work in Javascript
In Python, using calendar.timegm(), I get a 10 digit result for a unix timestamp. When I put this into Javascript's setTime() function, it comes up with a date in 1970. It evidently needs a unix timestamp that is 13 digits long. How can this happen? Are they both counting from the same date? How can I use the same unix timestamp between these two languages? In Python: In [60]: parseddate.utctimetuple() Out[60]: (2009, 7, 17, 1, 21, 0, 4, 198, 0) In [61]: calendar.timegm(parseddate.utctimetuple()) Out[61]: 1247793660 In Firebug: >>> var d = new Date(); d.setTime(1247793660); d.toUTCString() "Thu, 15 Jan 1970 10:36:55 GMT"
[ "timegm is based on Unix's gmtime() method, which return seconds since Jan 1, 1970.\nJavascripts setTime() method is milliseconds since that date. You'll need to multiply your seconds times 1000 to convert to the format expected by Javascript.\n", "Here are a couple of python methods I use to convert to and from javascript/datetime.\ndef to_datetime(js_timestamp):\n return datetime.datetime.fromtimestamp(js_timestamp/1000)\n\ndef js_timestamp_from_datetime(dt):\n return 1000 * time.mktime(dt.timetuple())\n\nIn javascript you would do:\nvar dt = new Date();\ndt.setTime(js_timestamp);\n\n", "Are you possibly mixing up seconds-since-1970 with milliseconds-since-1970?\n", "JavaScript Date constructor works with milliseconds, you should multiply the Python unix time by 1000.\nvar unixTimestampSeg = 1247793660;\nvar date = new Date(unixTimestampSeg*1000);\n\n" ]
[ 11, 9, 2, 1 ]
[]
[]
[ "datetime", "javascript", "python", "unix_timestamp", "utc" ]
stackoverflow_0001077393_datetime_javascript_python_unix_timestamp_utc.txt
Q: Making a variable non-inheritable in python Is there a way to make a variable non-inheritable in python? Like in the following example: B is a subclass of A, but I want it to have its own SIZE value. Could I get an Error to be raised (on __init__ or on getsize()) if B doesn't override SIZE? class A: SIZE = 5 def getsize(self): return self.SIZE class B(A): pass Edit: ... while inheriting the getsize() method...? A: Use a double-underscore prefix: (Double-underscore solution deleted after Emma's clarification) OK, you can do it like this: class A: SIZE = 5 def __init__(self): if self.__class__ != A: del self.SIZE def getsize(self): return self.SIZE class B(A): pass a = A() print a.getsize() # Prints 5 b = B() print b.getsize() # AttributeError: B instance has no attribute 'SIZE' A: If you want to make absolutely sure that subclasses of A override SIZE, you could use a metaclass for A that will raise an error when a subclass does not override it (note that A is a new-style class here): class ClassWithSize(type): def __init__(cls, name, bases, attrs): if 'SIZE' not in attrs: raise NotImplementedError('The "%s" class does not implement a "SIZE" attribute' % name) super(ClassWithSize, cls).__init__(name, bases, attrs) class A(object): __metaclass__ = ClassWithSize SIZE = 5 def getsize(self): return self.SIZE class B(A): SIZE = 6 class C(A): pass When you put the above in a module and attempt to import it, an exception will be raised when the import reaches the C class implementation. A: If metaclasses scare you (and I sympathize with that attitude!-), a descriptor could work -- you don't even have to make your custom descriptor (though that's easy enough), a plain good old property could work fine too: class A(object): @property def SIZE(self): if type(self) is not A: raise AttributeError("Class %s MUST explicitly define SIZE!" % type(self).__name__) def getsize(self): return self.SIZE Of course, this way you'll get the error only when an instance of a subclass of A which doesn't override SIZE actually tries to use self.SIZE (the metaclass approach has the advantage of giving the error earlier, when an errant subclass of A is created). A: The only approach that I can add is to use hasattr(self.__class__, 'SIZE') in the implementation of getsize() and toss an exception if the attribute is not found. Something like: class A: SIZE = 5 def getsize(self): klass = self.__class__ if hasattr(klass, 'SIZE') and 'SIZE' in klass.__dict__: return self.SIZE raise NotImplementedError('SIZE is not defined in ' + klass.__name__) There is some magic still missing since the derived class could define a method named SIZE and getsize wouldn't detect it. You can probably do some type(klass.SIZE) magic to filter this out if you want to. A: You can always just override it like this: class B(A): SIZE = 6 A: It sounds like what you want is a private variable. In which case this is what you need to do: class A: __SIZE = 5 def getsize(self): return self.__SIZE def setsize(self,newsize): self.__SIZE=newsize class B(A): pass A: Another approach might be to get classes A and B to inherit from a third class instead of one from the other: class X: def getsize(self): return self.SIZE class A(X): SIZE = 5 class B(X): pass a = A() print a.getsize() # Prints 5 b = B() print b.getsize() # AttributeError: B instance has no attribute 'SIZE' A: Another common idiom is to use NotImplemented. Think of it as the middle ground between metaclass enforcement and mere documentation. 
class A: SIZE = NotImplemented Now if a subclass forgets to override SIZE, the runtime errors will be immediate and obvious.
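On Python 3.6+ the same class-creation-time check can be written without an explicit metaclass via __init_subclass__; this sketch is a modern equivalent, not part of the original answers:

class A:
    SIZE = 5

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # fail when the subclass is defined, like the metaclass approach
        if 'SIZE' not in cls.__dict__:
            raise NotImplementedError(
                'class %s must define its own SIZE' % cls.__name__)

    def getsize(self):
        return self.SIZE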
Making a variable non-inheritable in python
Is there a way to make a variable non-inheritable in python? Like in the following example: B is a subclass of A, but I want it to have its own SIZE value. Could I get an Error to be raised (on __init__ or on getsize()) if B doesn't override SIZE? class A: SIZE = 5 def getsize(self): return self.SIZE class B(A): pass Edit: ... while inheriting the getsize() method...?
[ "Use a double-underscore prefix:\n(Double-underscore solution deleted after Emma's clarification)\nOK, you can do it like this:\nclass A:\n SIZE = 5\n def __init__(self):\n if self.__class__ != A:\n del self.SIZE\n\n def getsize(self):\n return self.SIZE\n\nclass B(A):\n pass\n\na = A()\nprint a.getsize()\n# Prints 5\n\nb = B()\nprint b.getsize()\n# AttributeError: B instance has no attribute 'SIZE'\n\n", "If you want to make absolutely sure that subclasses of A override SIZE, you could use a metaclass for A that will raise an error when a subclass does not override it (note that A is a new-style class here):\nclass ClassWithSize(type):\n def __init__(cls, name, bases, attrs):\n if 'SIZE' not in attrs:\n raise NotImplementedError('The \"%s\" class does not implement a \"SIZE\" attribute' % name)\n super(ClassWithSize, cls).__init__(name, bases, attrs)\n\nclass A(object):\n __metaclass__ = ClassWithSize\n\n SIZE = 5\n def getsize(self):\n return self.SIZE\n\nclass B(A):\n SIZE = 6\n\nclass C(A):\n pass\n\nWhen you put the above in a module and attempt to import it, an exception will be raised when the import reaches the C class implementation.\n", "If metaclasses scare you (and I sympathize with that attitude!-), a descriptor could work -- you don't even have to make your custom descriptor (though that's easy enough), a plain good old property could work fine too:\nclass A(object):\n\n @property\n def SIZE(self):\n if type(self) is not A:\n raise AttributeError(\"Class %s MUST explicitly define SIZE!\" % \n type(self).__name__)\n\n def getsize(self):\n return self.SIZE\n\nOf course, this way you'll get the error only when an instance of a subclass of A which doesn't override SIZE actually tries to use self.SIZE (the metaclass approach has the advantage of giving the error earlier, when an errant subclass of A is created).\n", "The only approach that I can add is to use hasattr(self.__class__, 'SIZE') in the implementation of getsize() and toss an exception if the attribute is not found. Something like:\nclass A:\n SIZE = 5\n def getsize(self):\n klass = self.__class__\n if hasattr(klass, 'SIZE') and 'SIZE' in klass.__dict__:\n return self.SIZE\n raise NotImplementedError('SIZE is not defined in ' + klass.__name__)\n\nThere is some magic still missing since the derived class could define a method named SIZE and getsize wouldn't detect it. You can probably do some type(klass.SIZE) magic to filter this out if you want to.\n", "You can always just override it like this:\nclass B(A): \n SIZE = 6\n\n", "It sounds like what you want is a private variable. In which case this is what you need to do:\nclass A:\n __SIZE = 5\n def getsize(self): \n return self.__SIZE\n\n def setsize(self,newsize):\n self.__SIZE=newsize\n\nclass B(A): \n pass\n\n", "Another approach might be to get classes A and B to inherit from a third class instead of one from the other:\nclass X:\n def getsize(self):\n return self.SIZE\nclass A(X):\n SIZE = 5\n\nclass B(X): pass\n\na = A()\nprint a.getsize()\n# Prints 5\n\nb = B()\nprint b.getsize()\n# AttributeError: B instance has no attribute 'SIZE'\n\n", "Another common idiom is to use NotImplemented. Think of it as the middle ground between metaclass enforcement and mere documentation.\nclass A:\n SIZE = NotImplemented\n\nNow if a subclass forgets to override SIZE, the runtime errors will be immediate and obvious.\n" ]
[ 8, 5, 2, 1, 0, 0, 0, 0 ]
[]
[]
[ "inheritance", "python" ]
stackoverflow_0001076718_inheritance_python.txt
Q: Share objects with file handle attribute between processes I have a question about shared resource with file handle between processes. Here is my test code: from multiprocessing import Process,Lock,freeze_support,Queue import tempfile #from cStringIO import StringIO class File(): def __init__(self): self.temp = tempfile.TemporaryFile() #print self.temp def read(self): print "reading!!!" s = "huanghao is a good boy !!" print >> self.temp,s self.temp.seek(0,0) f_content = self.temp.read() print f_content class MyProcess(Process): def __init__(self,queue,*args,**kwargs): Process.__init__(self,*args,**kwargs) self.queue = queue def run(self): print "ready to get the file object" self.queue.get().read() print "file object got" file.read() if __name__ == "__main__": freeze_support() queue = Queue() file = File() queue.put(file) print "file just put" p = MyProcess(queue) p.start() Then I get a KeyError like below: file just put ready to get the file object Process MyProcess-1: Traceback (most recent call last): File "D:\Python26\lib\multiprocessing\process.py", line 231, in _bootstrap self.run() File "E:\tmp\mpt.py", line 35, in run self.queue.get().read() File "D:\Python26\lib\multiprocessing\queues.py", line 91, in get res = self._recv() File "D:\Python26\lib\tempfile.py", line 375, in __getattr__ file = self.__dict__['file'] KeyError: 'file' I think when I put the File() object into queue , the object got serialized, and file handle can not be serialized, so, i got the KeyError: Anyone have any idea about that? if I want to share objects with file handle attribute, what should I do? A: I have to object (at length, won't just fit in a commentl;-) to @Mark's repeated assertion that file handles just can't be "passed around between running processes" -- this is simply not true in real, modern operating systems, such as, oh, say, Unix (free BSD variants, MacOSX, and Linux, included -- hmmm, I wonder what OS's are left out of this list...?-) -- sendmsg of course can do it (on a "Unix socket", by using the SCM_RIGHTS flag). Now the poor, valuable multiprocessing is fully right to not exploit this feature (even assuming there might be black magic to implement it on Windows too) -- most developers would no doubt misuse it anyway (having multiple processes access the same open file concurrently and running into race conditions). The only proper way to use it is for a process which has exclusive rights to open certain files to pass the opened file handles to another process which runs with reduced privileges -- and then never use that handle itself again. No way to enforce that in the multiprocessing module, anyway. Back to @Andy's original question, unless he's going to work on Linux only (AND with local processes only, too) and willing to play dirty tricks with the /proc filesystem, he's going to have to define his application-level needs more sharply and serialize file objects accordingly. Most files have a path (or can be made to have one: path-less files are pretty rare, actually non-existent on Windows I believe) and thus can be serialized via it -- many others are small enough to serialize by sending their content over -- etc, etc.
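A hedged sketch of the path-based workaround the answer describes: put a file name on the queue instead of a handle and reopen it in the child. NamedTemporaryFile(delete=False) stands in for the question's path-less TemporaryFile, since only named files can be reopened:

import tempfile
from multiprocessing import Process, Queue

def worker(queue):
    path = queue.get()      # a plain string crosses the process boundary fine
    f = open(path)          # reopen inside the child
    try:
        print f.read()
    finally:
        f.close()

if __name__ == '__main__':
    tmp = tempfile.NamedTemporaryFile(delete=False)
    tmp.write('huanghao is a good boy !!')
    tmp.close()             # close first so Windows allows reopening
    queue = Queue()
    queue.put(tmp.name)
    p = Process(target=worker, args=(queue,))
    p.start()
    p.join()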
Share objects with file handle attribute between processes
I have a question about shared resource with file handle between processes. Here is my test code: from multiprocessing import Process,Lock,freeze_support,Queue import tempfile #from cStringIO import StringIO class File(): def __init__(self): self.temp = tempfile.TemporaryFile() #print self.temp def read(self): print "reading!!!" s = "huanghao is a good boy !!" print >> self.temp,s self.temp.seek(0,0) f_content = self.temp.read() print f_content class MyProcess(Process): def __init__(self,queue,*args,**kwargs): Process.__init__(self,*args,**kwargs) self.queue = queue def run(self): print "ready to get the file object" self.queue.get().read() print "file object got" file.read() if __name__ == "__main__": freeze_support() queue = Queue() file = File() queue.put(file) print "file just put" p = MyProcess(queue) p.start() Then I get a KeyError like below: file just put ready to get the file object Process MyProcess-1: Traceback (most recent call last): File "D:\Python26\lib\multiprocessing\process.py", line 231, in _bootstrap self.run() File "E:\tmp\mpt.py", line 35, in run self.queue.get().read() File "D:\Python26\lib\multiprocessing\queues.py", line 91, in get res = self._recv() File "D:\Python26\lib\tempfile.py", line 375, in __getattr__ file = self.__dict__['file'] KeyError: 'file' I think when I put the File() object into queue , the object got serialized, and file handle can not be serialized, so, i got the KeyError: Anyone have any idea about that? if I want to share objects with file handle attribute, what should I do?
[ "I have to object (at length, won't just fit in a commentl;-) to @Mark's repeated assertion that file handles just can't be \"passed around between running processes\" -- this is simply not true in real, modern operating systems, such as, oh, say, Unix (free BSD variants, MacOSX, and Linux, included -- hmmm, I wonder what OS's are left out of this list...?-) -- sendmsg of course can do it (on a \"Unix socket\", by using the SCM_RIGHTS flag).\nNow the poor, valuable multiprocessing is fully right to not exploit this feature (even assuming there might be black magic to implement it on Windows too) -- most developers would no doubt misuse it anyway (having multiple processes access the same open file concurrently and running into race conditions). The only proper way to use it is for a process which has exclusive rights to open certain files to pass the opened file handles to another process which runs with reduced privileges -- and then never use that handle itself again. No way to enforce that in the multiprocessing module, anyway.\nBack to @Andy's original question, unless he's going to work on Linux only (AND with local processes only, too) and willing to play dirty tricks with the /proc filesystem, he's going to have to define his application-level needs more sharply and serialize file objects accordingly. Most files have a path (or can be made to have one: path-less files are pretty rare, actually non-existent on Windows I believe) and thus can be serialized via it -- many others are small enough to serialize by sending their content over -- etc, etc.\n" ]
[ 7 ]
[]
[]
[ "multiprocessing", "python" ]
stackoverflow_0001075443_multiprocessing_python.txt
Q: Python Unicode UnicodeEncodeError I am having issues with trying to convert an UTF-8 string to unicode. I get the error. UnicodeEncodeError: 'ascii' codec can't encode characters in position 73-75: ordinal not in range(128) I tried wrapping this in a try/except block but then google was giving me a system administrator error which was one line. Can someone suggest how to catch this error and continue. Cheers, John. -- FULL ERROR -- Traceback (most recent call last): File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/webapp/__init__.py", line 501, in __call__ handler.get(*groups) File "/Users/johnb/Sites/hurl/hurl.py", line 153, in get self.redirect(url.long_url) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/webapp/__init__.py", line 371, in redirect self.response.headers['Location'] = str(absolute_url) UnicodeEncodeError: 'ascii' codec can't encode characters in position 73-75: ordinal not in range(128) A: The correct solution is to do the following: self.response.headers['Location'] = urllib.quote(absolute_url.encode("utf-8")) A: The location header you are trying to set needs to be an Url, and an Url needs to be in Ascii. Since your Url is not an Ascii string you get the error. Just catching the error won't help since the Location header won't work with an invalid Url. When you create absolute_url you need to make sure it is encoded properly, best by using urllib.quote and the strings encode() method. You can try this: self.response.headers['Location'] = urllib.quote(absolute_url.encode('utf-8')) A: Please edit that mess so that it's legible. Hint: use the "code block" (101010 thingy button). You say that you are "trying to convert an UTF-8 string to unicode" but str(absolute_url) is a strange way of going about it. Are you sure that absolute_url is UTF-8? Try print type(absolute_url) print repr(absolute_url) If it is UTF-8, you need absolute_url.decode('utf8')
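A short demonstration of the failure and the accepted fix, assuming absolute_url is a unicode object containing non-ASCII characters (safe=':/' is added here so the scheme separator survives quoting):

import urllib

absolute_url = u'http://example.com/caf\xe9'   # hypothetical redirect target

# str(absolute_url) would implicitly use the ascii codec -> UnicodeEncodeError

location = urllib.quote(absolute_url.encode('utf-8'), safe=':/')
print location    # http://example.com/caf%C3%A9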
Python Unicode UnicodeEncodeError
I am having issues with trying to convert an UTF-8 string to unicode. I get the error. UnicodeEncodeError: 'ascii' codec can't encode characters in position 73-75: ordinal not in range(128) I tried wrapping this in a try/except block but then google was giving me a system administrator error which was one line. Can someone suggest how to catch this error and continue. Cheers, John. -- FULL ERROR -- Traceback (most recent call last): File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/webapp/__init__.py", line 501, in __call__ handler.get(*groups) File "/Users/johnb/Sites/hurl/hurl.py", line 153, in get self.redirect(url.long_url) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/webapp/__init__.py", line 371, in redirect self.response.headers['Location'] = str(absolute_url) UnicodeEncodeError: 'ascii' codec can't encode characters in position 73-75: ordinal not in range(128)
[ "The correct solution is to do the following:\nself.response.headers['Location'] = urllib.quote(absolute_url.encode(\"utf-8\"))\n\n", "The location header you are trying to set needs to be an Url, and an Url needs to be in Ascii. Since your Url is not an Ascii string you get the error. Just catching the error won't help since the Location header won't work with an invalid Url.\nWhen you create absolute_url you need to make sure it is encoded properly, best by using urllib.quote and the strings encode() method. You can try this:\nself.response.headers['Location'] = urllib.quote(absolute_url.encode('utf-8'))\n\n", "Please edit that mess so that it's legible. Hint: use the \"code block\" (101010 thingy button).\nYou say that you are \"trying to convert an UTF-8 string to unicode\" but str(absolute_url) is a strange way of going about it. Are you sure that absolute_url is UTF-8? Try \nprint type(absolute_url)\nprint repr(absolute_url)\n\nIf it is UTF-8, you need absolute_url.decode('utf8')\n" ]
[ 8, 4, 1 ]
[ "Try this:\nself.response.headers['Location'] = absolute_url.decode(\"utf-8\")\nor\nself.response.headers['Location'] = unicode(absolute_url, \"utf-8\")\n\n" ]
[ -1 ]
[ "google_app_engine", "python", "unicode", "utf_8" ]
stackoverflow_0001077564_google_app_engine_python_unicode_utf_8.txt
Q: Windows Application Programming & wxPython Developing a project of mine I realize I have a need for some level of persistence across sessions, for example when a user executes the application, changes some preferences and then closes the app. The next time the user executes the app, be it after a reboot or 15 minutes, I would like to be able to retain the preferences that had been changed. My question relates to this persistence. Whether programming an application using the win32 API or the MFC Framework .. or using the newer tools for higher level languages such as wxPython or wxRuby, how does one maintain the type of persistence I refer to? Is it done as a temporary file written to the disk? Is it saved into some registry setting? Is there some other layer it is stored in that I am unaware of? A: I would advise doing it in two steps. First step is to save your prefs. as string, for that you can a) Use any xml lib or output xml by hand to output string and read similarly from string b) Just use pickle module to dump your prefs object as a string c) Somehow generate a string from prefs which you can read back as prefs e.g. use yaml, config , JSON etc actually JSON is a good option when simplejson makes it so easy. Once your methods to convert to and from string are ready, you just need to store it somewhere where it is persisted and you can read back next time, for that you can a) Use wx.Config which saves to the registry in windows and to other places depending on platform so you don't have to worry where it saves, you can just read back values in platform independent way. But if you wish you can just use wx.Config for directly saving reading prefs. b) Directly save prefs. string to a file in a folder assigned by OS to your app e.g. app data folder in windows. Benefit of saving to a string and then using wx.Config to save it, is that you can easily change where data is saved in future e.g. in future if there is a need to upload prefs. you can just upload prefs. string. A: There are different methods to do this that have evolved over the years. These methods include (but are not limited to): Registry entries. INI files. XML Files Simple binary/text files Databases Nowadays, most people do this kind of thing with XML files residing in the user specific AppData folders. It is your choice how you do it. For example, for simple things, databases can be overkill and for huge persisted objects, registry would not be appropriate. You have to see what you are doing and do it accordingly. Here is a very good discussion on this topic
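A rough sketch of the two-step recipe from the first answer -- serialize the preferences to a JSON string, then let wx.Config pick a platform-appropriate store (the registry on Windows). The app and key names are placeholders:

import json   # simplejson on pre-2.6 Pythons
import wx

app = wx.App(False)   # wx.Config expects an application object around
prefs = {'window_size': [800, 600], 'recent_files': []}

config = wx.Config('MyApp')
config.Write('prefs', json.dumps(prefs))   # persist the string
config.Flush()

# next session: read the string back and rebuild the dict
prefs = json.loads(config.Read('prefs', '{}'))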
Windows Application Programming & wxPython
Developing a project of mine I realize I have a need for some level of persistence across sessions, for example when a user executes the application, changes some preferences and then closes the app. The next time the user executes the app, be it after a reboot or 15 minutes, I would like to be able to retain the preferences that had been changed. My question relates to this persistence. Whether programming an application using the win32 API or the MFC Framework .. or using the newer tools for higher level languages such as wxPython or wxRuby, how does one maintain the type of persistence I refer to? Is it done as a temporary file written to the disk? Is it saved into some registry setting? Is there some other layer it is stored in that I am unaware of?
[ "I would advice to do it in two steps.\n\nFirst step is to save your prefs. as\nstring, for that you can\na)\nUse any xml lib or output xml by\nhand to output string and read\nsimilarly from string\nb) Just use pickle module to dump your prefs object as a string\nc) Somehow generate a string from prefs which you can read back as prefs e.g. use yaml, config , JSON etc actually JSON is a good option when simplejson makes it so easy.\nOnce you have your methods to convert to and from string are ready, you just need to store it somewhere where it is persisted and you can read back next time, for that you can\na) Use wx.Config which save to registry in windows and to other places depending on platform so you don't have to worry where it saves, you can just read back values in platform independent way. But if you wish you can just use wx.Config for directly saving reading prefs.\nb) Directly save prefs. string to a file in a folder assigned by OS to your app e.g. app data folder in windows.\n\nBenefit of saving to a string and than using wx.Config to save it, is that you can easily change where data is saved in future e.g. in future if there is a need to upload prefs. you can just upload prefs. string.\n", "There are different methods to do this that have evolved over the years.\nThese methods include (but not limited to):\n\nRegistry entries.\nINI files.\nXML Files\nSimple binary/text files\nDatabases\n\nNowadays, most people do this kind of thing with XML files residing in the user specific AppData folders. It is your choice how you do it. For example, for simple things, databases can be overkill and for huge persisted objects, registry would not be appropriate. You have to see what you are doing and do it accordingly.\nHere is a very good discussion on this topic\n" ]
[ 3, 1 ]
[]
[]
[ "python", "windows", "wxpython" ]
stackoverflow_0001077649_python_windows_wxpython.txt
Q: Tkinter button bind and parent deatroy This is my code : print '1' from Tkinter import * print '2' class myApp: print '3' def __init__(self,parent): print '4' ## self.myparent = parent line1 print '11' self.myContainer1 = Frame(parent) print '12' self.myContainer1.pack() print '13' self.button1 = Button(self.myContainer1,text="OK",background="green") print '14' self.button1.pack(side=LEFT) print '15' self.button1.bind("<Button-1>",self.button1Click) print '16' self.button2 = Button(self.myContainer1,text="Cancel",background="cyan") print '17' self.button2.pack(side=RIGHT) print '18' self.button2.bind("<Button-1>",self.button2Click) print '19' def button1Click(self,event): print '5' if self.button1['background'] == 'green': print '20' self.button1['background'] ='tan' print '21' else: print '22' self.button1['background'] = 'yellow' print '23' def button2Click(self,event): print '6' ## self.myparent.destroy() self.parent.destroy() print '7' root = Tk() print '8' myapp = myApp(root) print '9' root.mainloop() print '10' Error is : >>> ================================ RESTART ================================ >>> 1 2 3 7 8 4 11 12 13 14 15 16 17 18 19 9 5 20 21 5 22 23 6 Exception in Tkinter callback Traceback (most recent call last): File "C:\Python26\lib\lib-tk\Tkinter.py", line 1403, in __call__ return self.func(*args) File "C:\Documents and Settings\he00044.HALEDGEWOOD\Desktop\TkinterExamples\buttonBind.py", line 56, in button2Click self.parent.destroy() AttributeError: myApp instance has no attribute 'parent' 10 >>> This is when i comment line1 It may be becoz myapp is not finding parent. But the concept is not clear. Can anybody explain the concept in detail.... A: Why ever did you comment out those two lines mentioning self.myparent and create a new one mentioning a mysterious, never-initialized self.parent?! That's the start of your problem, of course -- looks like absurd, deliberate sabotage of code. A: Assign the incoming parameter parent to self.parent? def __init__(self,parent): self.parent = parent A: Your question is not tkinter related, it's rather about object oriented design. Class myApp has __init__ method (constructor, the method that executes when object of that class is created), as well as two methods for button clicks. In button2Click method, you attempt to read attribute self.parent (translating as myapp.parent), but this property is not defined. When you uncomment line 1, you won't get the error. This is because you are creating attribute myapp.parent, and you are assigning the Tk root widget to this attribute. This is necessary as all widgets you create have to receive their parent widget. A: The other answers so far are great. This may also help: Fredrik Lundh's intro to Tkinter. 
Added some comments to your code that, along with the other answers, should help get you moving again: import Tkinter class MyApp: def __init__(self, parent): # constructor self.parent = parent # the parent here is 'root' self.myContainer1 = Tkinter.Frame(self.parent) # create Frame w/ root as parent self.myContainer1.pack() # make Frame (myContainer1) visible self.button1 = Tkinter.Button(self.myContainer1, text="OK", background="green") # add button as child of Frame self.button1.pack(side=Tkinter.LEFT) # place button1 in Frame self.button1.bind("<Button-1>",self.button1Click) # bind left mouse button to button1Click method self.button2 = Tkinter.Button(self.myContainer1, text="Cancel", background="cyan") self.button2.pack(side=Tkinter.RIGHT) self.button2.bind("<Button-1>", self.button2Click) def button1Click(self, event): if self.button1['background'] == 'green': self.button1['background'] ='tan' else: self.button1['background'] = 'yellow' def button2Click(self, event): self.parent.destroy() # the parent here is 'root', so you're ending the event loop root = Tkinter.Tk() # create root widget (a window) myapp = MyApp(root) # create instance of MyApp with root as the parent root.mainloop() # create event loop (ends when window is closed)
Tkinter button bind and parent destroy
This is my code : print '1' from Tkinter import * print '2' class myApp: print '3' def __init__(self,parent): print '4' ## self.myparent = parent line1 print '11' self.myContainer1 = Frame(parent) print '12' self.myContainer1.pack() print '13' self.button1 = Button(self.myContainer1,text="OK",background="green") print '14' self.button1.pack(side=LEFT) print '15' self.button1.bind("<Button-1>",self.button1Click) print '16' self.button2 = Button(self.myContainer1,text="Cancel",background="cyan") print '17' self.button2.pack(side=RIGHT) print '18' self.button2.bind("<Button-1>",self.button2Click) print '19' def button1Click(self,event): print '5' if self.button1['background'] == 'green': print '20' self.button1['background'] ='tan' print '21' else: print '22' self.button1['background'] = 'yellow' print '23' def button2Click(self,event): print '6' ## self.myparent.destroy() self.parent.destroy() print '7' root = Tk() print '8' myapp = myApp(root) print '9' root.mainloop() print '10' Error is : >>> ================================ RESTART ================================ >>> 1 2 3 7 8 4 11 12 13 14 15 16 17 18 19 9 5 20 21 5 22 23 6 Exception in Tkinter callback Traceback (most recent call last): File "C:\Python26\lib\lib-tk\Tkinter.py", line 1403, in __call__ return self.func(*args) File "C:\Documents and Settings\he00044.HALEDGEWOOD\Desktop\TkinterExamples\buttonBind.py", line 56, in button2Click self.parent.destroy() AttributeError: myApp instance has no attribute 'parent' 10 >>> This is when i comment line1 It may be becoz myapp is not finding parent. But the concept is not clear. Can anybody explain the concept in detail....
[ "Why ever did you comment out those two lines mentioning self.myparent and create a new one mentioning a mysterious, never-initialized self.parent?! That's the start of your problem, of course -- looks like absurd, deliberate sabotage of code.\n", "Assign the incoming parameter parent to self.parent?\ndef __init__(self,parent):\n self.parent = parent\n\n", "Your question is not tkinter related, it's rather about object oriented design.\nClass myApp has __init__ method (constructor, the method that executes when object of that class is created), as well as two methods for button clicks. In button2Click method, you attempt to read attribute self.parent (translating as myapp.parent), but this property is not defined.\nWhen you uncomment line 1, you won't get the error. This is because you are creating attribute myapp.parent, and you are assigning the Tk root widget to this attribute. This is necessary as all widgets you create have to receive their parent widget.\n", "The other answers so far are great. \nThis may also help: Fredrik Lundh's intro to Tkinter.\nAdded some comments to your code that, along with the other answers, should help get you moving again: \nimport Tkinter\n\nclass MyApp:\n def __init__(self, parent): # constructor\n self.parent = parent # the parent here is 'root'\n self.myContainer1 = Tkinter.Frame(self.parent) # create Frame w/ root as parent\n self.myContainer1.pack() # make Frame (myContainer1) visible\n self.button1 = Tkinter.Button(self.myContainer1, \n text=\"OK\", background=\"green\") # add button as child of Frame\n self.button1.pack(side=Tkinter.LEFT) # place button1 in Frame\n self.button1.bind(\"<Button-1>\",self.button1Click) # bind left mouse button to button1Click method\n self.button2 = Tkinter.Button(self.myContainer1, text=\"Cancel\", \n background=\"cyan\")\n self.button2.pack(side=Tkinter.RIGHT)\n self.button2.bind(\"<Button-1>\", self.button2Click)\n\n def button1Click(self, event):\n if self.button1['background'] == 'green':\n self.button1['background'] ='tan'\n else:\n self.button1['background'] = 'yellow'\n\n def button2Click(self, event):\n self.parent.destroy() # the parent here is 'root', so you're ending the event loop\n\nroot = Tkinter.Tk() # create root widget (a window)\nmyapp = MyApp(root) # create instance of MyApp with root as the parent\nroot.mainloop() # create event loop (ends when window is closed)\n\n" ]
[ 2, 0, 0, 0 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0001077932_python_tkinter.txt
Q: How can I deal with No module named edit.editor? I am trying to follow the WingIDE tutorial on creating scripts in the IDE. The following example script always throws an error: import wingapi def test_script(test_str): app = wingapi.gApplication v = "Product info is: " + str(app.GetProductInfo()) v += "\nAnd you typed: %s" % test_str wingapi.gApplication.ShowMessageDialog("Test Message", v) Traceback (most recent call last): File "C:\Wing-pi\Scripts\test.py", line 1, in import wingapi File "C:\Program Files\Development\Wing IDE 3.1\bin\wingapi.py", line 18, in import edit.editor ImportError: No module named edit.editor Process terminated with an exit code of 1 I am launching the script in the Wing IDE as suggested by someone, but I keep getting the same result. A: Answer is based on email from Stephan Deibel from the Wingware company that develops Wing IDE. Scripts are not launched in Wing's debugger. If you're editing them within Wing, they get reloaded as soon as you save and you should be able to use Command By Name in the edit menu to type test-script, which will execute the above script. This is described in more detail on the page you found the example: You cannot run the script in debug mode unless you have the Wing sources. You can launch the script fine from within the Wing IDE.
How can I deal with No module named edit.editor?
I am trying to follow the WingIDE tutorial on creating scripts in the IDE. The following example script always throws an error: import wingapi def test_script(test_str): app = wingapi.gApplication v = "Product info is: " + str(app.GetProductInfo()) v += "\nAnd you typed: %s" % test_str wingapi.gApplication.ShowMessageDialog("Test Message", v) Traceback (most recent call last): File "C:\Wing-pi\Scripts\test.py", line 1, in import wingapi File "C:\Program Files\Development\Wing IDE 3.1\bin\wingapi.py", line 18, in import edit.editor ImportError: No module named edit.editor Process terminated with an exit code of 1 I am launching the script in the Wing IDE as suggested by someone, but I keep getting the same result.
[ "Answer is based on email from Stephan Deibel from the Wingware company that develops Wind IDE.\n\nScripts are not launched in Wing's debugger. If you're editing them within Wing, they get reloaded as soon as you save and you should be able to use Command By Name in the edit menu to type test-script, which will execute the above script. This is described in more detail on the page you found the example:\n\nYou cannot run the script in debug mode unless you have the Wing sources. You can launch the script fine from within the Wing IDE.\n" ]
[ 1 ]
[]
[]
[ "python", "wing_ide" ]
stackoverflow_0001072736_python_wing_ide.txt
Q: Restarting a Django application running on Apache + mod_python I'm running a Django app on Apache + mod_python. When I make some changes to the code, sometimes they have effect immediately, other times they don't, until I restart Apache. However I don't really want to do that since it's a production server running other stuff too. Is there some other way to force that? Just to make it clear, since I see some people get it wrong, I'm talking about a production environment. For development I'm using Django's development server, of course. A: If possible, you should switch to mod_wsgi. This is now the recommended way to serve Django anyway, and is much more efficient in terms of memory and server resources. In mod_wsgi, each site has a .wsgi file associated with it. To restart a site, just touch the relevant file, and only that code will be reloaded. A: As others have suggested, use mod_wsgi instead. To get the ability for automatic reloading, through touching the WSGI script file, or through a monitor that looks for code changes, you must be using daemon mode on UNIX. A sleight of hand can be used to achieve the same on Windows when using embedded mode. All the details can be found in: http://code.google.com/p/modwsgi/wiki/ReloadingSourceCode A: You can reduce number of connections to 1 by setting "MaxRequestsPerChild 1" in your httpd.conf file. But do it only on test server, not production. or If you don't want to kill existing connections and still restart apache you can restart it "gracefully" by performing "apache2ctl gracefully" - all existing connections will be allowed to complete.
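For the mod_wsgi route, a minimal sketch of the .wsgi script involved (the paths and settings module are placeholders, following the Django/mod_wsgi conventions of this era); in daemon mode, touching this file reloads only this site:

# /var/www/mysite/django.wsgi -- hypothetical location
import os, sys

sys.path.append('/var/www/mysite')
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'

import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()

# after a code change: touch /var/www/mysite/django.wsgi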
Restarting a Django application running on Apache + mod_python
I'm running a Django app on Apache + mod_python. When I make some changes to the code, sometimes they have effect immediately, other times they don't, until I restart Apache. However I don't really want to do that since it's a production server running other stuff too. Is there some other way to force that? Just to make it clear, since I see some people get it wrong, I'm talking about a production environment. For development I'm using Django's development server, of course.
[ "If possible, you should switch to mod_wsgi. This is now the recommended way to serve Django anyway, and is much more efficient in terms of memory and server resources.\nIn mod_wsgi, each site has a .wsgi file associated with it. To restart a site, just touch the relevant file, and only that code will be reloaded.\n", "As others have suggested, use mod_wsgi instead. To get the ability for automatic reloading, through touching the WSGI script file, or through a monitor that looks for code changes, you must be using daemon mode on UNIX. A slight of hand can be used to achieve same on Windows when using embedded mode. All the details can be found in:\nhttp://code.google.com/p/modwsgi/wiki/ReloadingSourceCode\n", "You can reduce number of connections to 1 by setting \"MaxRequestsPerChild 1\" in your httpd.conf file. But do it only on test server, not production.\nor \nIf you don't want to kill existing connections and still restart apache you can restart it \"gracefully\" by performing \"apache2ctl gracefully\" - all existing connections will be allowed to complete.\n" ]
[ 15, 4, 1 ]
[ "Use a test server included in Django. (like ./manage.py runserver 0.0.0.0:8080) It will do most things you would need during development. The only drawback is that it cannot handle simultaneous requests with multi-threading.\nI've heard that there is a trick that setting Apache's max instances to 1 so that every code change is reflected immediately--but because you said you're running other services, so this may not be your case.\n" ]
[ -1 ]
[ "django", "mod_python", "python" ]
stackoverflow_0001078166_django_mod_python_python.txt
Q: I'd like to call the Windows C++ function WinHttpGetProxyForUrl from Python - can this be done? Microsoft provides a method as part of WinHTTP which allows a user to determine which Proxy ought to be used for any given URL. It's called WinHttpGetProxyForUrl. Unfortunately I'm programming in python so I cannot directly access this function - I can use Win32COM to call any Microsoft service with a COM interface. So is there any way to get access to this function from Python? As an additional problem I'm not able to add anything other than Python to the project. That means however convenient it is impossible to add C# or C++ fixes. I'm running Python2.4.4 with Win32 extensions on Windows XP. Update 0: This is what I have so far: import win32inet import pprint hinternet = win32inet.InternetOpen("foo 1.0", 0, "", "", 0) # Does not work!!! proxy = win32inet.WinHttpGetProxyForUrl( hinternet, u"http://www.foo.com", 0 ) Obviously the last line is wrong, however I cannot see any docs or examples on the right way to do it! Update 1: I'm going to re-ask this as a new question since it's now really about win32com. A: You can use ctypes to call function in WinHttp.dll, it is the DLL which contains 'WinHttpGetProxyForUrl. ' Though to call it you will need a HINTERNET session variable, so here I am showing you the first step, it shows how you can use ctypes to call into DLL,it produces a HINTERNET which you have to pass to WinHttpGetProxyForUrl, that I will leave for you as exercise, if you feel difficulty POST the code I will try to fix it. Read more about ctypes @ http://docs.python.org/library/ctypes.html import ctypes winHttp = ctypes.windll.LoadLibrary("Winhttp.dll") WINHTTP_ACCESS_TYPE_DEFAULT_PROXY=0 WINHTTP_NO_PROXY_NAME=WINHTTP_NO_PROXY_BYPASS=0 WINHTTP_FLAG_ASYNC=0x10000000 # http://msdn.microsoft.com/en-us/library/aa384098(VS.85).aspx HINTERNET = winHttp.WinHttpOpen("PyWin32", WINHTTP_ACCESS_TYPE_DEFAULT_PROXY, WINHTTP_NO_PROXY_NAME, WINHTTP_NO_PROXY_BYPASS, WINHTTP_FLAG_ASYNC) print HINTERNET A: This page at ActiveState: WINHTTP_AUTOPROXY_OPTIONS Object implies that WinHttpGetProxyForUrl is available in the win32inet module of the Win32 extensions. SourceForge is currently broken so I can't download it to verify whether it is or not. Edit after "Update 0" in the question: You need to pass a WINHTTP_AUTOPROXY_OPTIONS and a WINHTTP_PROXY_INFO as documented here on MSDN: WinHttpGetProxyForUrl Function.
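Building on the ctypes answer in this thread, a heavily hedged sketch of the actual WinHttpGetProxyForUrl call; the structure layouts and flag values are transcribed from MSDN and should be double-checked, and HINTERNET is the handle returned by the answer's WinHttpOpen call:

import ctypes
from ctypes import wintypes

class WINHTTP_AUTOPROXY_OPTIONS(ctypes.Structure):
    _fields_ = [("dwFlags", wintypes.DWORD),
                ("dwAutoDetectFlags", wintypes.DWORD),
                ("lpszAutoConfigUrl", ctypes.c_wchar_p),
                ("lpvReserved", ctypes.c_void_p),
                ("dwReserved", wintypes.DWORD),
                ("fAutoLogonIfChallenged", wintypes.BOOL)]

class WINHTTP_PROXY_INFO(ctypes.Structure):
    _fields_ = [("dwAccessType", wintypes.DWORD),
                ("lpszProxy", ctypes.c_wchar_p),
                ("lpszProxyBypass", ctypes.c_wchar_p)]

WINHTTP_AUTOPROXY_AUTO_DETECT = 0x00000001
WINHTTP_AUTO_DETECT_TYPE_DHCP = 0x00000001
WINHTTP_AUTO_DETECT_TYPE_DNS_A = 0x00000002

options = WINHTTP_AUTOPROXY_OPTIONS()
options.dwFlags = WINHTTP_AUTOPROXY_AUTO_DETECT
options.dwAutoDetectFlags = (WINHTTP_AUTO_DETECT_TYPE_DHCP |
                             WINHTTP_AUTO_DETECT_TYPE_DNS_A)
options.fAutoLogonIfChallenged = True

info = WINHTTP_PROXY_INFO()
winHttp = ctypes.windll.LoadLibrary("Winhttp.dll")
if winHttp.WinHttpGetProxyForUrl(HINTERNET, u"http://www.example.com",
                                 ctypes.byref(options), ctypes.byref(info)):
    print info.lpszProxy   # caller should release this buffer with GlobalFree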
I'd like to call the Windows C++ function WinHttpGetProxyForUrl from Python - can this be done?
Microsoft provides a method as part of WinHTTP which allows a user to determine which Proxy ought to be used for any given URL. It's called WinHttpGetProxyForUrl. Unfortunately I'm programming in python so I cannot directly access this function - I can use Win32COM to call any Microsoft service with a COM interface. So is there any way to get access to this function from Python? As an additional problem I'm not able to add anything other than Python to the project. That means however convenient it is impossible to add C# or C++ fixes. I'm running Python2.4.4 with Win32 extensions on Windows XP. Update 0: This is what I have so far: import win32inet import pprint hinternet = win32inet.InternetOpen("foo 1.0", 0, "", "", 0) # Does not work!!! proxy = win32inet.WinHttpGetProxyForUrl( hinternet, u"http://www.foo.com", 0 ) Obviously the last line is wrong, however I cannot see any docs or examples on the right way to do it! Update 1: I'm going to re-ask this as a new question since it's now really about win32com.
[ "You can use ctypes to call function in WinHttp.dll, it is the DLL which contains 'WinHttpGetProxyForUrl. '\nThough to call it you will need a HINTERNET session variable, so here I am showing you the first step, it shows how you can use ctypes to call into DLL,it produces a HINTERNET which you have to pass to WinHttpGetProxyForUrl, that I will leave for you as exercise, if you feel difficulty POST the code I will try to fix it.\nRead more about ctypes @ http://docs.python.org/library/ctypes.html\nimport ctypes\n\nwinHttp = ctypes.windll.LoadLibrary(\"Winhttp.dll\")\n\nWINHTTP_ACCESS_TYPE_DEFAULT_PROXY=0\nWINHTTP_NO_PROXY_NAME=WINHTTP_NO_PROXY_BYPASS=0\nWINHTTP_FLAG_ASYNC=0x10000000\n# http://msdn.microsoft.com/en-us/library/aa384098(VS.85).aspx\nHINTERNET = winHttp.WinHttpOpen(\"PyWin32\", WINHTTP_ACCESS_TYPE_DEFAULT_PROXY, WINHTTP_NO_PROXY_NAME, WINHTTP_NO_PROXY_BYPASS, WINHTTP_FLAG_ASYNC)\n\nprint HINTERNET\n\n", "This page at ActiveState: WINHTTP_AUTOPROXY_OPTIONS Object implies that WinHttpGetProxyForUrl is available in the win32inet module of the Win32 extensions. SourceForge is currently broken so I can't download it to verify whether it is or not.\nEdit after \"Update 0\" in the question:\nYou need to pass a WINHTTP_AUTOPROXY_OPTIONS and a WINHTTP_PROXY_INFO as documented here on MSDN: WinHttpGetProxyForUrl Function.\n" ]
[ 1, 1 ]
[]
[]
[ "c++", "com", "python", "windows" ]
stackoverflow_0001078939_c++_com_python_windows.txt
Q: CherryPy interferes with Twisted shutting down on Windows I've got an application that runs Twisted by starting the reactor with reactor.run() in my main thread after starting some other threads, including the CherryPy web server. Here's a program that shuts down cleanly when Ctrl+C is pressed on Linux but not on Windows: from threading import Thread from signal import signal, SIGINT import cherrypy from twisted.internet import reactor from twisted.web.client import getPage def stop(signum, frame): cherrypy.engine.exit() reactor.callFromThread(reactor.stop) signal(SIGINT, stop) class Root: @cherrypy.expose def index(self): reactor.callFromThread(kickoff) return "Hello World!" cherrypy.server.socket_host = "0.0.0.0" Thread(target=cherrypy.quickstart, args=[Root()]).start() def print_page(html): print(html) def kickoff(): getPage("http://acpstats/account/login").addCallback(print_page) reactor.run() I believe that CherryPy is the culprit here, because here's a different program that I wrote without CherryPy that does shutdown cleanly on both Linux and Windows when Ctrl+C is pressed: from time import sleep from threading import Thread from signal import signal, SIGINT from twisted.internet import reactor from twisted.web.client import getPage keep_going = True def stop(signum, frame): global keep_going keep_going = False reactor.callFromThread(reactor.stop) signal(SIGINT, stop) def print_page(html): print(html) def kickoff(): getPage("http://acpstats/account/login").addCallback(print_page) def periodic_downloader(): while keep_going: reactor.callFromThread(kickoff) sleep(5) Thread(target=periodic_downloader).start() reactor.run() Does anyone have any idea what the problem is? Here's my conundrum: On Linux everything works On Windows, I can call functions from signal handlers using reactor.callFromThread when CherryPy is not running When CherryPy is running, no function that I call using reactor.callFromThread from a signal handler will ever execute (I've verified that the signal handler itself does get called) What can I do about this? How can I shut down Twisted on Windows from a signal handler while running CherryPy? Is this a bug, or have I simply missed some important part of the documentation for either of these two projects? A: CherryPy handles signals by default when you call quickstart. In your case, you should probably just unroll quickstart, which is only a few lines, and pick and choose. Here's basically what quickstart does in trunk: if config: cherrypy.config.update(config) tree.mount(root, script_name, config) if hasattr(engine, "signal_handler"): engine.signal_handler.subscribe() if hasattr(engine, "console_control_handler"): engine.console_control_handler.subscribe() engine.start() engine.block() In your case, you don't need the signal handlers, so you can omit those. You also don't need to call engine.block if you're not starting CherryPy from the main thread. Engine.block() is just a way to make the main thread not terminate immediately, but instead wait around for process termination (this is so autoreload works reliably; some platforms have issues calling execv from any thread but the main thread). If you remove the block() call, you don't even need the Thread() around quickstart. So, replace your line: Thread(target=cherrypy.quickstart, args=[Root()]).start() with: cherrypy.tree.mount(Root()) cherrypy.engine.start()
CherryPy interferes with Twisted shutting down on Windows
I've got an application that runs Twisted by starting the reactor with reactor.run() in my main thread after starting some other threads, including the CherryPy web server. Here's a program that shuts down cleanly when Ctrl+C is pressed on Linux but not on Windows: from threading import Thread from signal import signal, SIGINT import cherrypy from twisted.internet import reactor from twisted.web.client import getPage def stop(signum, frame): cherrypy.engine.exit() reactor.callFromThread(reactor.stop) signal(SIGINT, stop) class Root: @cherrypy.expose def index(self): reactor.callFromThread(kickoff) return "Hello World!" cherrypy.server.socket_host = "0.0.0.0" Thread(target=cherrypy.quickstart, args=[Root()]).start() def print_page(html): print(html) def kickoff(): getPage("http://acpstats/account/login").addCallback(print_page) reactor.run() I believe that CherryPy is the culprit here, because here's a different program that I wrote without CherryPy that does shutdown cleanly on both Linux and Windows when Ctrl+C is pressed: from time import sleep from threading import Thread from signal import signal, SIGINT from twisted.internet import reactor from twisted.web.client import getPage keep_going = True def stop(signum, frame): global keep_going keep_going = False reactor.callFromThread(reactor.stop) signal(SIGINT, stop) def print_page(html): print(html) def kickoff(): getPage("http://acpstats/account/login").addCallback(print_page) def periodic_downloader(): while keep_going: reactor.callFromThread(kickoff) sleep(5) Thread(target=periodic_downloader).start() reactor.run() Does anyone have any idea what the problem is? Here's my conundrum: On Linux everything works On Windows, I can call functions from signal handlers using reactor.callFromThread when CherryPy is not running When CherryPy is running, no function that I call using reactor.callFromThread from a signal handler will ever execute (I've verified that the signal handler itself does get called) What can I do about this? How can I shut down Twisted on Windows from a signal handler while running CherryPy? Is this a bug, or have I simply missed some important part of the documentation for either of these two projects?
[ "CherryPy handles signals by default when you call quickstart. In your case, you should probably just unroll quickstart, which is only a few lines, and pick and choose. Here's basically what quickstart does in trunk:\nif config:\n cherrypy.config.update(config)\n\ntree.mount(root, script_name, config)\n\nif hasattr(engine, \"signal_handler\"):\n engine.signal_handler.subscribe()\nif hasattr(engine, \"console_control_handler\"):\n engine.console_control_handler.subscribe()\n\nengine.start()\nengine.block()\n\nIn your case, you don't need the signal handlers, so you can omit those. You also don't need to call engine.block if you're not starting CherryPy from the main thread. Engine.block() is just a way to make the main thread not terminate immediately, but instead wait around for process termination (this is so autoreload works reliably; some platforms have issues calling execv from any thread but the main thread).\nIf you remove the block() call, you don't even need the Thread() around quickstart. So, replace your line:\nThread(target=cherrypy.quickstart, args=[Root()]).start()\n\nwith:\ncherrypy.tree.mount(Root())\ncherrypy.engine.start()\n\n" ]
[ 14 ]
[]
[]
[ "cherrypy", "linux", "python", "twisted", "windows" ]
stackoverflow_0001075351_cherrypy_linux_python_twisted_windows.txt
Q: Handling authentication and proxy servers with httplib2 I'm attempting to test interactions with a Nexus server that requires authentication for the operations I intend to use, but I also need to handle an internal proxy server. Based on my (limited) understanding I can add multiple handlers to the opener. However I'm still getting a 401 response. I've checked the username and password are valid. I'm not certain if cookies are required to do this and if so how they'd be included. Any suggestions? baseUrl = 'server:8070/nexus-webapp-1.3.3/service/local' params = {"[key]":"[value]"} data = urllib.urlencode(params) # create a password manager password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm() # Add the username and password as supplied password_mgr.add_password(None, baseUrl, username, password) handler = urllib2.HTTPBasicAuthHandler(password_mgr) proxy_support = urllib2.ProxyHandler({}) # create "opener" (OpenerDirector instance) opener = urllib2.build_opener(proxy_support, handler) urllib2.install_opener(opener) txheaders = {'User-agent' : 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'} req = Request(protocol+url, data, txheaders) handle = urlopen(req) This is the resulting URLError's headers field: >HTTPMessage: Server: Apache-Coyote/1.1 Set-Cookie: JSESSIONID=B4BD05C9582F7B27495CBB675A339724; Path=/nexus-webapp-1.3.3 WWW-Authenticate: NxBASIC realm="Sonatype Nexus Repository Manager API" Content-Type: text/html;charset=utf-8 Content-Length: 954 Date: Fri, 03 Jul 2009 17:38:42 GMT Connection: close Update It seems Nexus implement a custom version of Restlet's AuthenticationHelper. Thanks to Alex I knew what to look for. A: Can you show the full headers of the 401 response you're getting? Maybe it's not a basic auth request, maybe it's the proxy wanting its own authentication -- it's hard to guess without seeing said headers! Edit: thanks for showing the headers (I reformatted them as "code" else they were unreadable). As I suspected, it doesn't want "Basic", it wants some other (Nexus proprietary...?) "NxBASIC" authentication protocol -- I've never heard about it (I don't know anything about Nexus) and I imagine neither has the basic authentication handler you're using (even if NxBASIC somehow accepted plain Basic authentication, the handler, knowing only that it's a different protocol, would not offer such authentication). So, first you need to research exactly what that NxBASIC thing is, and for that I suspect a SO question with the right tags might help. Then, depending on what you learn, comes the interesting issue of defining a handler for it...!-(
Handling authentication and proxy servers with httplib2
I'm attempting to test interactions with a Nexus server that requires authentication for the operations I intend to use, but I also need to handle an internal proxy server. Based on my (limited) understanding I can add multiple handlers to the opener. However I'm still getting a 401 response. I've checked the username and password are valid. I'm not certain if cookies are required to do this and if so how they'd be included. Any suggestions? baseUrl = 'server:8070/nexus-webapp-1.3.3/service/local' params = {"[key]":"[value]"} data = urllib.urlencode(params) # create a password manager password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm() # Add the username and password as supplied password_mgr.add_password(None, baseUrl, username, password) handler = urllib2.HTTPBasicAuthHandler(password_mgr) proxy_support = urllib2.ProxyHandler({}) # create "opener" (OpenerDirector instance) opener = urllib2.build_opener(proxy_support, handler) urllib2.install_opener(opener) txheaders = {'User-agent' : 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'} req = Request(protocol+url, data, txheaders) handle = urlopen(req) This is the resulting URLError's headers field: >HTTPMessage: Server: Apache-Coyote/1.1 Set-Cookie: JSESSIONID=B4BD05C9582F7B27495CBB675A339724; Path=/nexus-webapp-1.3.3 WWW-Authenticate: NxBASIC realm="Sonatype Nexus Repository Manager API" Content-Type: text/html;charset=utf-8 Content-Length: 954 Date: Fri, 03 Jul 2009 17:38:42 GMT Connection: close Update It seems Nexus implement a custom version of Restlet's AuthenticationHelper. Thanks to Alex I knew what to look for.
[ "Can you show the full headers of the 401 response you're getting? Maybe it's not a basic auth request, maybe it's the proxy wanting its own authentication -- it's hard to guess without seeing said headers!\nEdit: thanks for showing the headers (I reformatted them as \"code\" else they were unreadable).\nAs I suspected, it doesn't want \"Basic\", it wants some other (Nexus proprietary...?) \"NxBASIC\" authentication protocol -- I've never heard about it (I don't know anything about Nexus) and I imagine neither has the basic authentication handler you're using (even if NxBASIC somehow accepted plain Basic authentication, the handler, knowing only that it's a different protocol, would not offer such authentication).\nSo, first you need to research exactly what that NxBASIC thing is, and for that I suspect a SO question with the right tags might help. Then, depending on what you learn, comes the interesting issue of defining a handler for it...!-(\n" ]
[ 3 ]
[]
[]
[ "httplib2", "nexus", "python" ]
stackoverflow_0001080179_httplib2_nexus_python.txt
Q: How to emulate language complement operator in .hgignore? I have a Python regular expression that matches a set of filenames. How can I change it so that I can use it in Mercurial's .hgignore file to ignore files that do not match the expression? Full story: I have a big source tree with *.ml files scattered everywhere. I want to put them into a new repository. There are other, less important files which are too heavy to be included in the repository. I'm trying to find the corresponding expression for the .hgignore file. 1st observation: Python doesn't have a regular language complement operator (AFAIK it can complement only a set of characters). (BTW, why?) 2nd observation: The following regex in Python: re.compile("^.*(?<!\.ml)$") works as expected: abcabc - match abc.ml - no match x/abcabc - match x/abc.ml - no match However, when I put exactly the same expression in the .hgignore file, I get this: $ hg st --all ? abc.ml I .hgignore I abcabc I x/xabc I x/xabc.ml According to the .hgignore manpage, Mercurial uses just normal Python regular expressions. How is it that I get different results, then? How is it possible that Mercurial found a match for the x/xabc.ml? Does anybody know a less ugly way around the lack of a regular language complement operator? A: The regexs are applied to each subdirectory component in turn as well as the file name, not the entire relative path at once. So if I have a/b/c/d in my repo, each regex will be applied to a, a/b, a/b/c as well as a/b/c/d. If any component matches, the file will be ignored. (You can tell that this is the behaviour by trying ^bar$ with bar/foo - you'll see that bar/foo is ignored.) ^.*(?<!\.ml)$ ignores x/xabc.ml because the pattern matches x (i.e. the subdirectory.) This means that there is no regex that will help you, because your patterns are bound to match the first subdirectory component. A: Through some testing, found two solutions that appear to work. The first roots to a subdirectory, and apparently this is significant. The second is brittle, because it only allows one suffix to be used. I'm running these tests on Windows XP (customized to work a bit more unixy) with Mercurial 1.2.1. (Comments added with # message by me.) $ hg --version Mercurial Distributed SCM (version 1.2.1) $ cat .hgignore syntax: regexp ^x/.+(?<!\.ml)$ # rooted to x/ subdir #^.+[^.][^m][^l]$ $ hg status --all ? .hgignore # not affected by x/ regex ? abc.ml # not affected by x/ regex ? abcabc # not affected by x/ regex ? x\saveme.ml # versioned, is *.ml I x\abcabc # ignored, is not *.ml I x\ignoreme.txt # ignored, is not *.ml And the second: $ cat .hgignore syntax: regexp #^x/.+(?<!\.ml)$ ^.+[^.][^m][^l]$ # brittle, can only use one suffix $ hg status --all ? abc.ml # versioned, is *.ml ? x\saveme.ml # versioned, is *.ml I .hgignore # ignored, is not *.ml I abcabc # ignored, is not *.ml I x\abcabc # ignored, is not *.ml I x\ignoreme.txt # ignored, is not *.ml The second one has fully expected behavior as I understand the OP. The first only has expected behavior in the subdirectory, but is more flexible. A: The problem appears specifically to be that matches in subdirectories are different to the root. Note the following: $ hg --version Mercurial Distributed SCM (version 1.1.2) It's an older version, but it behaves in the same way. My project has the following files: $ find . -name 'abc*' -print ./x/abcabc ./x/abc.ml ./abcabc ./abc.ml Here's my .hgignore: $ cat .hgignore ^.*(?<!\.ml)$ Now, when I run stat: $ hg stat ? abc.ml So, hg has failed to pick up x/abc.ml.
But is this really a problem with the regular expression? Perhaps not: $ python Python 2.6.2 (release26-maint, Apr 19 2009, 01:56:41) [GCC 4.3.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import mercurial.ignore >>> import os >>> root = os.getcwd() >>> ignorefunc = mercurial.ignore.ignore(root, ['.hgignore'], lambda msg: None) >>> >>> ignorefunc("abc.ml") # No match - this is correct >>> ignorefunc("abcabc") # Match - this is correct, we want to ignore this <_sre.SRE_Match object at 0xb7c765d0> >>> ignorefunc("abcabc").span() (0, 6) >>> ignorefunc("x/abcabc").span() # Match - this is correct, we want to ignore this (0, 8) >>> ignorefunc("x/abc.ml") # No match - this is correct! >>> Notice that ignorefunc treated abcabc and x/abcabc the same (matched - i.e. ignore) whereas abc.ml and x/abc.ml are also treated the same (no match - i.e. don't ignore). So, perhaps the logic error is elsewhere in Mercurial, or perhaps I'm looking at the wrong bit of Mercurial (though I'd be surprised if that were the case). Unless I've missed something, maybe a bug (rather than an RFE which Martin Geisler pointed to) needs to be filed against Mercurial.
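An aside on the complement question itself, separate from Mercurial's per-component matching (which, as explained above, no single pattern can get around): in Python's re, the usual way to emulate complementing a whole regular language is a negative lookahead anchored at the start, rather than a lookbehind at the end. A small sketch:
import re
pat = re.compile(r'^(?!.*\.ml$)')  # matches names that do NOT end in .ml
print bool(pat.match('abcabc'))    # True
print bool(pat.match('abc.ml'))    # False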
How to emulate language complement operator in .hgignore?
I have a Python regular expression that matches a set of filenames. How can I change it so that I can use it in Mercurial's .hgignore file to ignore files that do not match the expression? Full story: I have a big source tree with *.ml files scattered everywhere. I want to put them into a new repository. There are other, less important files which are too heavy to be included in the repository. I'm trying to find the corresponding expression for the .hgignore file. 1st observation: Python doesn't have a regular language complement operator (AFAIK it can complement only a set of characters). (BTW, why?) 2nd observation: The following regex in Python: re.compile("^.*(?<!\.ml)$") works as expected: abcabc - match abc.ml - no match x/abcabc - match x/abc.ml - no match However, when I put exactly the same expression in the .hgignore file, I get this: $ hg st --all ? abc.ml I .hgignore I abcabc I x/xabc I x/xabc.ml According to the .hgignore manpage, Mercurial uses just normal Python regular expressions. How is it that I get different results, then? How is it possible that Mercurial found a match for the x/xabc.ml? Does anybody know a less ugly way around the lack of a regular language complement operator?
[ "The regexs are applied to each subdirectory component in turn as well as the file name, not the entire relative path at once. So if I have a/b/c/d in my repo, each regex will be applied to a, a/b, a/b/c as well as a/b/c/d. If any component matches, the file will be ignored. (You can tell that this is the behaviour by trying ^bar$ with bar/foo - you'll see that bar/foo is ignored.)\n^.*(?<!\\.ml)$ ignores x/xabc.ml because the pattern matches x (i.e. the subdirectory.)\nThis means that there is no regex that will help you, because your patterns are bound to match the first subdirectory component.\n", "Through some testing, found two solutions that appear to work. The first roots to a subdirectory, and apparently this is significant. The second is brittle, because it only allows one suffix to be used. I'm running these tests on Windows XP (customized to work a bit more unixy) with Mercurial 1.2.1.\n(Comments added with # message by me.)\n\n$ hg --version\nMercurial Distributed SCM (version 1.2.1)\n\n$ cat .hgignore\nsyntax: regexp\n^x/.+(?<!\\.ml)$ # rooted to x/ subdir\n#^.+[^.][^m][^l]$\n\n$ hg status --all\n? .hgignore # not affected by x/ regex\n? abc.ml # not affected by x/ regex\n? abcabc # not affected by x/ regex\n? x\\saveme.ml # versioned, is *.ml\nI x\\abcabc # ignored, is not *.ml\nI x\\ignoreme.txt # ignored, is not *.ml\n\nAnd the second:\n\n$ cat .hgignore\nsyntax: regexp\n#^x/.+(?<!\\.ml)$\n^.+[^.][^m][^l]$ # brittle, can only use one suffix\n\n$ hg status --all\n? abc.ml # versioned, is *.ml\n? x\\saveme.ml # versioned, is *.ml\nI .hgignore # ignored, is not *.ml\nI abcabc # ignored, is not *.ml\nI x\\abcabc # ignored, is not *.ml\nI x\\ignoreme.txt # ignored, is not *.ml\n\nThe second one has fully expected behavior as I understand the OP. The first only has expected behavior in the subdirectory, but is more flexible.\n", "The problem appears specifically to be that matches in subdirectories are different to the root. Note the following:\n$ hg --version\nMercurial Distributed SCM (version 1.1.2)\n\nIt's an older version, but it behaves in the same way. My project has the following files:\n$ find . -name 'abc*' -print\n./x/abcabc\n./x/abc.ml\n./abcabc\n./abc.ml\n\nHere's my .hgignore:\n$ cat .hgignore\n^.*(?<!\\.ml)$\n\nNow, when I run stat:\n$ hg stat\n? abc.ml\n\nSo, hg has failed to pick up x/abc.ml. But is this really a problem with the regular expression? Perhaps not:\n$ python\nPython 2.6.2 (release26-maint, Apr 19 2009, 01:56:41) \n[GCC 4.3.3] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import mercurial.ignore\n>>> import os\n>>> root = os.getcwd()\n>>> ignorefunc = mercurial.ignore.ignore(root, ['.hgignore'], lambda msg: None)\n>>> \n>>> ignorefunc(\"abc.ml\") # No match - this is correct\n>>> ignorefunc(\"abcabc\") # Match - this is correct, we want to ignore this\n<_sre.SRE_Match object at 0xb7c765d0>\n>>> ignorefunc(\"abcabc\").span() \n(0, 6)\n>>> ignorefunc(\"x/abcabc\").span() # Match - this is correct, we want to ignore this\n(0, 8)\n>>> ignorefunc(\"x/abc.ml\") # No match - this is correct!\n>>> \n\nNotice that ignorefunc treated abcabc and x/abcabc the same (matched - i.e. ignore) whereas abc.ml and x/abc.ml are also treated the same (no match - i.e. don't ignore).\nSo, perhaps the logic error is elsewhere in Mercurial, or perhaps I'm looking at the wrong bit of Mercurial (though I'd be surprised if that were the case). 
Unless I've missed something, maybe a bug (rather than an RFE which Martin Geisler pointed to) needs to be filed against Mercurial.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "mercurial", "python", "regex" ]
stackoverflow_0001079342_mercurial_python_regex.txt
Q: Cleaning an image to only black I have an image. I would like to go over that image, pixel by pixel, and any pixel that is not black should be turned to white. How do I do this? (Python). Thanks! A: The most efficient way is to use the point function def only_black(band): if band > 0: return 255 return 0 result = im.convert('L').point(only_black) This is what the PIL documentation has to say about this: When converting to a bilevel image (mode "1"), the source image is first converted to black and white. Resulting values larger than 127 are then set to white, and the image is dithered. To use other thresholds, use the point method. A: You should use the point function, which exists specifically for this reason. converter= ( (0,) + 255*(255,) ).__getitem__ def black_or_white(img): return img.convert('L').point(converter) A: You might want to check out the following library: http://effbot.org/imagingbook/image.htm Especially: im.getpixel(xy) => value or tuple and im.putpixel(xy, colour)
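For comparison, the same rule written as an explicit pixel loop -- far slower than point(), but it states the condition literally. A sketch assuming an RGB image; the filenames are placeholders:
import Image  # classic PIL import
im = Image.open('input.png').convert('RGB')
width, height = im.size
for x in range(width):
    for y in range(height):
        if im.getpixel((x, y)) != (0, 0, 0):      # any pixel that is not black...
            im.putpixel((x, y), (255, 255, 255))  # ...becomes white
im.save('output.png')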
Cleaning an image to only black
I have an image. I would like to go over that image, pixel by pixel, and any pixel that is not black should be turned to white. How do I do this? (Python). Thanks!
[ "The most efficient way is to use the point function\ndef only_black(band):\n if band > 0:\n return 255\n return 0\nresult = im.convert('L').point(only_black)\n\nThis is what the PIL documentation has to say about this:\n\nWhen converting to a bilevel image\n (mode \"1\"), the source image is first\n converted to black and white.\n Resulting values larger than 127 are\n then set to white, and the image is\n dithered. To use other thresholds, use\n the point method.\n\n", "You should use the point function, which exists specifically for this reason.\nconverter= ( (0,) + 255*(255,) ).__getitem__\ndef black_or_white(img):\n return img.convert('L').point(converter)\n\n", "You might want to check out the following library:\nhttp://effbot.org/imagingbook/image.htm\nEspecially:\nim.getpixel(xy) => value or tuple\n\nand \nim.putpixel(xy, colour)\n\n" ]
[ 6, 3, 1 ]
[]
[]
[ "image_processing", "python", "python_imaging_library" ]
stackoverflow_0001080219_image_processing_python_python_imaging_library.txt
Q: Random list with rules I'm trying to create a list of tasks that I've read from some text files and put them into lists. I want to create a master list of what I'm going to do through the day however I've got a few rules for this. One list has separate daily tasks that don't depend on the order they are completed. I call this list 'daily'. I've got another list of tasks for my projects, but these do depend on the order completed. This list is called 'projects'. I have a third list of things that must be done at the end of the day. I call it 'endofday'. So here are the basic rules. A list of randomized tasks where daily tasks can be performed in any order, where project tasks may be randomly inserted into the main list at any position but must stay in their original order relative to each other, and end of day tasks appended to the main list. I understand how to get a random number from random.randint(), appending to lists, reading files and all that......but the logic is giving me a case of 'hurty brain'. Anyone want to take a crack at this? EDIT: Ok I solved it on my own, but at least asking the question got me to picture it in my head. Here's what I did. random.shuffle(daily) while projects: daily.insert(random.randint(0,len(daily)), projects.pop(0)) random.shuffle(endofday) daily.extend(endofday) for x in daily: print x Thanks for the answers, I'll give ya guys some kudos anyways! EDIT AGAIN: Crap I just realized that's not the right answer lol LAST EDIT I SWEAR: position = [] random.shuffle(daily) for x in range(len(projects)): position.append(random.randint(0,len(daily)+x)) position.sort() while projects: daily.insert(position.pop(0), projects.pop(0)) random.shuffle(endofday) daily.extend(endofday) for x in daily: print x I LIED: I just thought about what happens when position has duplicate values and lo and behold my first test returned 1,3,2,4 for my projects. I'm going to suck it up and use the answerer's solution lol OR NOT: position = [] random.shuffle(daily) for x in range(len(projects)): while 1: pos = random.randint(0,len(daily)+x) if pos not in position: break position.append(pos) position.sort() while projects: daily.insert(position.pop(0), projects.pop(0)) random.shuffle(endofday) daily.extend(endofday) for x in daily: print x A: First, copy and shuffle daily to initialize master: master = list(daily) random.shuffle(master) then (the interesting part!-) the alteration of master (to insert projects randomly but without order changes), and finally random.shuffle(endofday); master.extend(endofday). As I said the alteration part is the interesting one -- what about: def random_mix(seq_a, seq_b): iters = [iter(seq_a), iter(seq_b)] while True: it = random.choice(iters) try: yield it.next() except StopIteration: iters.remove(it) it = iters[0] for x in it: yield x Now, the mixing step becomes just master = list(random_mix(master, projects)) Performance is not ideal (lots of random numbers generated here, we could do with fewer, for example), but fine if we're talking about a few dozens or hundreds of items for example. This insertion randomness is not ideal -- for that, the choice between the two sequences should not be equiprobable, but rather with probability proportional to their lengths. 
If that's important to you, let me know with a comment and I'll edit to fix the issue, but I wanted first to offer a simpler and more understandable version!-) Edit: thanks for the accept, let me complete the answer anyway with a different way of "random mixing preserving order" which does use the right probabilities -- it's only slightly more complicated because it cannot just call random.choice;-). def random_mix_rp(seq_a, seq_b): iters = [iter(seq_a), iter(seq_b)] lens = [len(seq_a), len(seq_b)] while True: r = random.randrange(sum(lens)) itindex = r < lens[0] it = iters[itindex] lens[itindex] -= 1 try: yield it.next() except StopIteration: iters.remove(it) it = iters[0] for x in it: yield x Of course other optimization opportunities arise here -- since we're tracking the lengths anyway, we could rely on a length having gone down to zero rather than on try/except to detect that one sequence is finished and we should just exhaust the other one, etc etc. But, I wanted to show the version closest to my original one. Here's one exploiting this idea to optimize and simplify: def random_mix_rp1(seq_a, seq_b): iters = [iter(seq_a), iter(seq_b)] lens = [len(seq_a), len(seq_b)] while all(lens): r = random.randrange(sum(lens)) itindex = r < lens[0] it = iters[itindex] lens[itindex] -= 1 yield it.next() for it in iters: for x in it: yield x A: Use random.shuffle to shuffle a list random.shuffle(["x", "y", "z"]) A: How to fetch a random element in a list using python: >>> import random >>> li = ["a", "b", "c"] >>> len = (len(li))-1 >>> ran = random.randint(0, len) >>> ran = li[ran] >>> ran 'b' But it seems you're more curious about how to design this. If so, the python tag should probably not be there. If not, the question is probably to broad to get you any good answers code-wise. A: Combine all 3 lists into a DAG Perform all possible topological sorts, store each sort in a list. Choose one from the list at random A: In order for the elements of the "project" list to stay in order, you could do the following: Say you have 4 project tasks: "a,b,c,d". Then you know there are five spots where other, randomly chosen elements can be inserted (before and after each element, including the beginning and the end), while the ordering naturally stays the same. Next, you can add five times a special element (e.g. "-:-") to the daily list. When you now shuffle the daily list, these special items, corresponding to "a,b,c,d" from above, are randomly placed. Now you simply have to insert the elements of the "projects" list sequentially for each special element "-:-". And you keep the ordering, yet have a completely random list regarding the tasks from the daily list.
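Another way to get an order-preserving merge where every interleaving is equally likely: choose which slots of the merged list the project tasks will occupy, using random.sample so the chosen slots are distinct. A sketch, assuming daily has already been shuffled; shuffle endofday and extend the result afterwards, as in the question:
import random
n = len(daily) + len(projects)
slots = set(random.sample(range(n), len(projects)))  # distinct positions, uniformly chosen
di, pi = iter(daily), iter(projects)
merged = [pi.next() if i in slots else di.next() for i in range(n)]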
Random list with rules
I'm trying to create a list of tasks that I've read from some text files and put them into lists. I want to create a master list of what I'm going to do through the day however I've got a few rules for this. One list has separate daily tasks that don't depend on the order they are completed. I call this list 'daily'. I've got another list of tasks for my projects, but these do depend on the order completed. This list is called 'projects'. I have a third list of things that must be done at the end of the day. I call it 'endofday'. So here are the basic rules. A list of randomized tasks where daily tasks can be performed in any order, where project tasks may be randomly inserted into the main list at any position but must stay in their original order relative to each other, and end of day tasks appended to the main list. I understand how to get a random number from random.randint(), appending to lists, reading files and all that......but the logic is giving me a case of 'hurty brain'. Anyone want to take a crack at this? EDIT: Ok I solved it on my own, but at least asking the question got me to picture it in my head. Here's what I did. random.shuffle(daily) while projects: daily.insert(random.randint(0,len(daily)), projects.pop(0)) random.shuffle(endofday) daily.extend(endofday) for x in daily: print x Thanks for the answers, I'll give ya guys some kudos anyways! EDIT AGAIN: Crap I just realized that's not the right answer lol LAST EDIT I SWEAR: position = [] random.shuffle(daily) for x in range(len(projects)): position.append(random.randint(0,len(daily)+x)) position.sort() while projects: daily.insert(position.pop(0), projects.pop(0)) random.shuffle(endofday) daily.extend(endofday) for x in daily: print x I LIED: I just thought about what happens when position has duplicate values and lo and behold my first test returned 1,3,2,4 for my projects. I'm going to suck it up and use the answerer's solution lol OR NOT: position = [] random.shuffle(daily) for x in range(len(projects)): while 1: pos = random.randint(0,len(daily)+x) if pos not in position: break position.append(pos) position.sort() while projects: daily.insert(position.pop(0), projects.pop(0)) random.shuffle(endofday) daily.extend(endofday) for x in daily: print x
[ "First, copy and shuffle daily to initialize master:\nmaster = list(daily)\nrandom.shuffle(master)\n\nthen (the interesting part!-) the alteration of master (to insert projects randomly but without order changes), and finally random.shuffle(endofday); master.extend(endofday).\nAs I said the alteration part is the interesting one -- what about:\ndef random_mix(seq_a, seq_b):\n iters = [iter(seq_a), iter(seq_b)]\n while True:\n it = random.choice(iters)\n try: yield it.next()\n except StopIteration:\n iters.remove(it)\n it = iters[0]\n for x in it: yield x\n\nNow, the mixing step becomes just master = list(random_mix(master, projects))\nPerformance is not ideal (lots of random numbers generated here, we could do with fewer, for example), but fine if we're talking about a few dozens or hundreds of items for example.\nThis insertion randomness is not ideal -- for that, the choice between the two sequences should not be equiprobable, but rather with probability proportional to their lengths. If that's important to you, let me know with a comment and I'll edit to fix the issue, but I wanted first to offer a simpler and more understandable version!-)\nEdit: thanks for the accept, let me complete the answer anyway with a different way of \"random mixing preserving order\" which does use the right probabilities -- it's only slightly more complicated because it cannot just call random.choice;-).\ndef random_mix_rp(seq_a, seq_b):\n iters = [iter(seq_a), iter(seq_b)]\n lens = [len(seq_a), len(seq_b)]\n while True:\n r = random.randrange(sum(lens))\n itindex = r < lens[0]\n it = iters[itindex]\n lens[itindex] -= 1\n\n try: yield it.next()\n except StopIteration:\n iters.remove(it)\n it = iters[0]\n for x in it: yield x\n\nOf course other optimization opportunities arise here -- since we're tracking the lengths anyway, we could rely on a length having gone down to zero rather than on try/except to detect that one sequence is finished and we should just exhaust the other one, etc etc. But, I wanted to show the version closest to my original one. Here's one exploiting this idea to optimize and simplify:\ndef random_mix_rp1(seq_a, seq_b):\n iters = [iter(seq_a), iter(seq_b)]\n lens = [len(seq_a), len(seq_b)]\n while all(lens):\n r = random.randrange(sum(lens))\n itindex = r < lens[0]\n it = iters[itindex]\n lens[itindex] -= 1\n yield it.next()\n for it in iters:\n for x in it: yield x\n\n", "Use random.shuffle to shuffle a list\nrandom.shuffle([\"x\", \"y\", \"z\"])\n", "How to fetch a random element in a list using python:\n>>> import random\n>>> li = [\"a\", \"b\", \"c\"]\n>>> len = (len(li))-1\n>>> ran = random.randint(0, len)\n>>> ran = li[ran]\n>>> ran\n'b'\n\nBut it seems you're more curious about how to design this. If so, the python tag should probably not be there. If not, the question is probably to broad to get you any good answers code-wise.\n", "\nCombine all 3 lists into a DAG\nPerform all possible topological sorts, store each sort in a list.\nChoose one from the list at random\n\n", "In order for the elements of the \"project\" list to stay in order, you could do the following:\nSay you have 4 project tasks: \"a,b,c,d\". Then you know there are five spots where other, randomly chosen elements can be inserted (before and after each element, including the beginning and the end), while the ordering naturally stays the same.\nNext, you can add five times a special element (e.g. \"-:-\") to the daily list. 
When you now shuffle the daily list, these special items, corresponding to \"a,b,c,d\" from above, are randomly placed. Now you simply have to insert the elements of the \"projects\" list sequentially for each special element \"-:-\". And you keep the ordering, yet have a completely random list regarding the tasks from the daily list.\n" ]
[ 4, 1, 1, 1, 1 ]
[]
[]
[ "python", "random" ]
stackoverflow_0001080393_python_random.txt
Q: Preventing invoking C types from Python What's the correct way to prevent invoking (creating an instance of) a C type from Python? I've considered providing a tp_init that raises an exception, but as I understand it that would still allow __new__ to be called directly on the type. A C function returns instances of this type -- that's the only way instances of this type are intended to be created. Edit: My intention is that users of my type will get an exception if they accidentally use it wrongly. The C code is such that calling a function on an object incorrectly created from Python would crash. I realise this is unusual: all of my C extension types so far have worked nicely when instantiated from Python. My question is whether there is a usual way to provide this restriction. A: Simple: leave the tp_new slot of the type empty. >>> Foo() Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: cannot create 'foo.Foo' instances >>> Foo.__new__(Foo) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: object.__new__(foo.Foo) is not safe, use foo.Foo.__new__() If you inherit from a type other than the base object type, you will have to set tp_new to NULL after calling PyType_Ready(). A: Don't prevent them from doing it. "We're all consenting adults here." Nobody is going to do it unless they have a reason, and if they have such a reason then you shouldn't stop them just because you didn't anticipate every possible use of your type. A: "The type is a return type of another C function - that's the only way instances of this type are intended to be created" -- that's rather confusing. I think you mean "A C function returns instances of this type -- that's the only way etc etc". In your documentation, warn the caller clearly against invoking the type. Don't export the type from your C extension. You can't do much about somebody who introspects the returned instances but so what? It's their data/machine/job at risk, not yours. [Update (I hate the UI for comments!)] James: "type ...just only created from C": again you are confusing the type and its instances. The type is created statically in C. Your C code contains the type and also a factory function that users are intended to call to obtain instances of the type. For some reason that you don't explain, if users obtain an instance by calling the type directly, subsequent instance.method() calls will crash (I presume that's what you mean by "calling functions on the object". Call me crazy, but isn't that a bug that you should fix? Re "don't export": try "don't expose". In your C code, you will have something like this where you list out all the APIs that your module is providing, both types and functions: static struct PyMethodDef public_functions[] = { {"EvilType", (PyCFunction) py_EvilType, ......}, /* omit above line and punters can't call it directly from Python */ {"make_evil", (PyCFunction) py_make_evil, ......}, ......, }; module = Py_InitModule4("mymodule", public_functions, module_doc, ... A: There is a fantastically bulletproof way. Let people create the object, and have Python crash. That should stop them doing it pretty efficiently. ;) Also you can underscore the class name, to indicate that it should be internal. (At least, I assume you can create underscored classnames from C too, I haven't actually ever done it.)
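For reference, the first answer's suggestion looks roughly like this on the C side -- a sketch only, with placeholder names, following the advice to null out tp_new after PyType_Ready() when inheriting from another type:
if (PyType_Ready(&FooType) < 0)
    return;
FooType.tp_new = NULL;  /* Python-level Foo(...) now raises TypeError */
Py_INCREF(&FooType);    /* PyModule_AddObject steals a reference */
PyModule_AddObject(module, "Foo", (PyObject *)&FooType);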
Preventing invoking C types from Python
What's the correct way to prevent invoking (creating an instance of) a C type from Python? I've considered providing a tp_init that raises an exception, but as I understand it that would still allow __new__ to be called directly on the type. A C function returns instances of this type -- that's the only way instances of this type are intended to be created. Edit: My intention is that users of my type will get an exception if they accidentally use it wrongly. The C code is such that calling a function on an object incorrectly created from Python would crash. I realise this is unusual: all of my C extension types so far have worked nicely when instantiated from Python. My question is whether there is a usual way to provide this restriction.
[ "Simple: leave the tp_new slot of the type empty.\n>>> Foo()\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: cannot create 'foo.Foo' instances\n>>> Foo.__new__(Foo)\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: object.__new__(foo.Foo) is not safe, use foo.Foo.__new__()\n\nIf you inherit from a type other than the base object type, you will have to set tp_new to NULL after calling PyType_Ready().\n", "Don't prevent them from doing it. \"We're all consenting adults here.\"\nNobody is going to do it unless they have a reason, and if they have such a reason then you shouldn't stop them just because you didn't anticipate every possible use of your type.\n", "\"The type is a return type of another C function - that's the only way instances of this type are intended to be created\" -- that's rather confusing. I think you mean \"A C function returns instances of this type -- that's the only way etc etc\".\nIn your documentation, warn the caller clearly against invoking the type. Don't export the type from your C extension. You can't do much about somebody who introspects the returned instances but so what? It's their data/machine/job at risk, not yours.\n[Update (I hate the UI for comments!)]\nJames: \"type ...just only created from C\": again you are confusing the type and its instances. The type is created statically in C. Your C code contains the type and also a factory function that users are intended to call to obtain instances of the type. For some reason that you don't explain, if users obtain an instance by calling the type directly, subsequent instance.method() calls will crash (I presume that's what you mean by \"calling functions on the object\". Call me crazy, but isn't that a bug that you should fix?\nRe \"don't export\": try \"don't expose\".\nIn your C code, you will have something like this where you list out all the APIs that your module is providing, both types and functions:\nstatic struct PyMethodDef public_functions[] = {\n {\"EvilType\", (PyCFunction) py_EvilType, ......},\n /* omit above line and punters can't call it directly from Python */\n {\"make_evil\", (PyCFunction) py_make_evil, ......},\n ......,\n };\n\nmodule = Py_InitModule4(\"mymodule\", public_functions, module_doc, ...\n\n", "There is a fantastically bulletproof way. Let people create the object, and have Python crash. That should stop them doing it pretty efficiently. ;)\nAlso you can underscore the class name, to indicate that it should be internal. (At least, I assume you can create underscored classnames from C too, I haven't actually ever done it.)\n" ]
[ 3, 1, 0, 0 ]
[]
[]
[ "cpython", "python" ]
stackoverflow_0001079690_cpython_python.txt
Q: How to reload a Python module that was imported in another file? I am trying to learn how Python reloads modules, but have hit a roadblock. Let's say I have: dir1\file1.py: from dir2.file2 import ClassOne myObject = ClassOne() dir1\dir2\file2.py: class ClassOne(): def reload_module(): reload(file2) The reload call fails to find module "file2". My question is, how do I do this properly, without having to keep everything in one file? A related question: When the reload does work, will myObject use the new code? thank you A: def reload_module(): import file2 reload(file2) However, this will not per se change the type of objects you've instantiated from classes held in the previous version of file2. The Python Cookbook 2nd edition has a recipe on how to accomplish such feats, and it's far too long and complex in both code and discussion to reproduce here (I believe you can read it on google book search, or failing that the original "raw" version [before all the enhancements we did to it], at least, should still be on the activestate cookbook online site).
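Spelled out for the layout in the question: the module must be importable under its dotted name before reload() can see it, and names bound with from ... import keep pointing at the old objects until they are rebound. A sketch (Python 2, where reload() is a builtin):
import dir2.file2
reload(dir2.file2)               # re-executes the module's code
from dir2.file2 import ClassOne  # rebind to the fresh class object
# myObject, created before the reload, still uses the old ClassOne.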
How to reload a Python module that was imported in another file?
I am trying to learn how Python reloads modules, but have hit a roadblock. Let's say I have: dir1\file1.py: from dir2.file2 import ClassOne myObject = ClassOne() dir1\dir2\file2.py: class ClassOne(): def reload_module(): reload(file2) The reload call fails to find module "file2". My question is, how do I do this properly, without having to keep everything in one file? A related question: When the reload does work, will myObject use the new code? thank you
[ " def reload_module():\n import file2\n reload(file2)\n\nHowever, this will not per se change the type of objects you've instantiated from classes held in the previous version of file2. The Python Cookbook 2nd edition has a recipe on how to accomplish such feats, and it's far too long and complex in both code and discussion to reproduce here (I believe you can read it on google book search, or failing that the original \"raw\" version [before all the enhancements we did to it], at least, should still be on the activestate cookbook online site).\n" ]
[ 3 ]
[]
[]
[ "import", "module", "python", "reload" ]
stackoverflow_0001080521_import_module_python_reload.txt
Q: handler not working in gae python I have two handlers in webapp.WSGIApplication for two forms in a Django template; one of the handlers works on dopost but the other one goes to a blank page. Why is this so? A: Some error or typo somewhere in your code or configuration, it's impossible to say much more without seeing those files of course.
handler not working in gae python
I have two handlers in webapp.WSGIApplication for two forms in a Django template; one of the handlers works on dopost but the other one goes to a blank page. Why is this so?
[ "Some error or typo somewhere in your code or configuration, it's impossible to say much more without seeing those files of course.\n" ]
[ 1 ]
[]
[]
[ "django", "google_app_engine", "python" ]
stackoverflow_0001080618_django_google_app_engine_python.txt
Q: Accessing Python Objects in a Core Dump Is there any way to discover the Python value of a PyObject* from a core file in gdb? A: It's lots of work, but of course it can be done, especially if you have all the symbols. Look at the header files for the specific version of Python (and compilation options in use to build it): they define PyObject as a struct which includes, first and foremost, a pointer to a type. Lots of macros are used, so you may want to run the compile of that Python from sources again, with exactly the same flags but in addition a -E to stop after preprocessing, so you can refer to the specific C code that made the bits you're seeing in the core dump. A type object has, among many other things, a string (array of char) that's its name, and from it you can infer what exactly objects of that type contain -- be it content directly, or maybe some content (such as a length, i.e. number of items) and a pointer to the actual data. I've done such super-advanced post-mortem debugging a couple of times (starting with VERY precise knowledge of the Python versions involved and all the prepared preprocessed sources &c) and each time it took me a day or two (were I still a freelance and charging by the hour, if I had to bid on such a task I'd say at least 20 hours -- at my not-cheap hourly rates!-). IOW, it's worth it only if it's really truly the only way out of some very costly pickle. On the plus side, it WILL teach you more about Python's internals than you ever thought was there, even after memorizing every line of the sources. Good luck, you'll need some!!!
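As a concrete starting point for the struct-walking described above: with a core file you cannot call functions such as _PyObject_Dump (there is no live process), so you read the structs directly in gdb. The address below is a placeholder, and the casts assume CPython 2.x struct layouts:
(gdb) set $op = (PyObject *) 0x12345678
(gdb) print $op->ob_type->tp_name              # e.g. "int", "str", "dict"
(gdb) print ((PyIntObject *) $op)->ob_ival     # if tp_name said "int"
(gdb) print ((PyStringObject *) $op)->ob_sval  # if tp_name said "str"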
Accessing Python Objects in a Core Dump
Is there any way to discover the Python value of a PyObject* from a core file in gdb?
[ "It's lots of work, but of course it can be done, especially if you have all the symbols. Look at the header files for the specific version of Python (and compilation options in use to build it): they define PyObject as a struct which includes, first and foremost, a pointer to a type. Lots of macros are used, so you may want to run the compile of that Python from sources again, with exactly the same flags but in addition a -E to stop after preprocessing, so you can refer to the specific C code that made the bits you're seeing in the core dump.\nA type object has, among many other things, a string (array of char) that's its name, and from it you can infer what exactly objects of that type contain -- be it content directly, or maybe some content (such as a length, i.e. number of items) and a pointer to the actual data.\nI've done such super-advanced post-mortem debugging a couple of times (starting with VERY precise knowledge of the Python versions involved and all the prepared preprocessed sources &c) and each time it took me a day or two (were I still a freelance and charging by the hour, if I had to bid on such a task I'd say at least 20 hours -- at my not-cheap hourly rates!-).\nIOW, it's worth it only if it's really truly the only way out of some very costly pickle. On the plus side, it WILL teach you more about Python's internals than you ever thought was there, even after memorizing every line of the sources. Good luck, you'll need some!!!\n" ]
[ 4 ]
[]
[]
[ "postmortem_debugging", "python", "python_c_api" ]
stackoverflow_0001080832_postmortem_debugging_python_python_c_api.txt
Q: How to direct tkinter to look elsewhere for Tcl/Tk library (to dodge broken library without reinstalling) I've written a Python script that uses Tkinter. I want to deploy that script on a handful of computers that are on Mac OS 10.4.11. But that build of Mac OS X seems to have a broken Tcl/Tk install. Even loading the package gives me: Traceback (most recent call last): File "<stdin>", line 1, in ? ImportError: dlopen(/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/lib-dynload/_tkinter.so 2): Symbol not found: _tclStubsPtr Referenced from: /System/Library/Frameworks/Tk.framework/Versions/8.4/Tk Expected in: /System/Library/Frameworks/Tcl.framework/Versions/8.4/Tcl Reinstalling Tcl/Tk isn't an option since we're in an office and we'd have to get IT to come to each computer, which would deter people from using the script. Is there any easy way to direct Tkinter to look elsewhere for the Tcl/Tk framework? I've downloaded a stand-alone version of Tcl/Tk Aqua, but I don't know how to control which framework Tkinter uses... Thanks for the help. Adam A: You can change where your system looks for dynamic/shared libraries by altering DYLD_LIBRARY_PATH in your environment before launching Python. You can do this in Terminal like so: $ DYLD_LIBRARY_PATH=<insert path here>:$DYLD_LIBRARY_PATH python ... or create a wrapper: #!/bin/sh export DYLD_LIBRARY_PATH=<insert path here>:$DYLD_LIBRARY_PATH exec python "$@" The documentation for DYLD_LIBRARY_PATH can be found on the dyld man page. Do not set this in your .bashrc or any other profile- or system-wide setting, as it has the potential to cause some nasty problems.
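One wrinkle worth knowing: Tcl and Tk ship as frameworks on Mac OS X, and dyld consults DYLD_FRAMEWORK_PATH (rather than DYLD_LIBRARY_PATH) when resolving framework references -- see the same dyld man page. Assuming the stand-alone Tcl/Tk Aqua put its frameworks under /Library/Frameworks (the usual location, but verify on your machines), a wrapper in the same style would be:
#!/bin/sh
export DYLD_FRAMEWORK_PATH=/Library/Frameworks:$DYLD_FRAMEWORK_PATH
exec python "$@"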
How to direct tkinter to look elsewhere for Tcl/Tk library (to dodge broken library without reinstalling)
I've written a Python script that uses Tkinter. I want to deploy that script on a handful of computers that are on Mac OS 10.4.11. But that build of Mac OS X seems to have a broken Tcl/Tk install. Even loading the package gives me: Traceback (most recent call last): File "<stdin>", line 1, in ? ImportError: dlopen(/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/lib-dynload/_tkinter.so 2): Symbol not found: _tclStubsPtr Referenced from: /System/Library/Frameworks/Tk.framework/Versions/8.4/Tk Expected in: /System/Library/Frameworks/Tcl.framework/Versions/8.4/Tcl Reinstalling Tcl/Tk isn't an option since we're in an office and we'd have to get IT to come to each computer, which would deter people from using the script. Is there any easy way to direct Tkinter to look elsewhere for the Tcl/Tk framework? I've downloaded a stand-alone version of Tcl/Tk Aqua, but I don't know how to control which framework Tkinter uses... Thanks for the help. Adam
[ "You can change where your system looks for dynamic/shared libraries by altering DYLD_LIBRARY_PATH in your environment before launching Python. You can do this in Terminal like so:\n$ DYLD_LIBRARY_PATH=<insert path here>:$DYLD_LIBRARY_PATH python\n\n... or create a wrapper:\n#!/bin/sh\nexport DYLD_LIBRARY_PATH=<insert path here>:$DYLD_LIBRARY_PATH\nexec python \"$@\"\n\nThe documentation for DYLD_LIBRARY_PATH can be found on the dyld man page.\nDo not set this in your .bashrc or any other profile- or system-wide setting, as it has the potential to cause some nasty problems.\n" ]
[ 1 ]
[]
[]
[ "macos", "python", "tkinter" ]
stackoverflow_0001060745_macos_python_tkinter.txt
Q: How Do I Remove Text From Generated Django Form? So earlier I asked a question about removing the label that Django forms have by default. That worked out great, and I removed the label. However, the text that is generated by the form is still there! I would very much like to remove the text. Here is what I mean: <p>Text: <textarea rows="10" cols="40" name="text"></textarea></p> I would like to remove the Text: part of this, as I do not want it. Again, it is generated with the form I create via: {{ form.as_p }} Here is the class I use for my form: class CommentForm(forms.Form): comment = forms.CharField(widget=forms.Textarea()) EDIT: So far, I've looked at all of the documentation about the label tag and what stuff Forms generate. Apparently, this is possible to remove, it just does not tell me how. Also, I can remove the colon by adding: label_suffix=None I have now also tried label, label_tag, label_prefix, and prefix as parameters, both in the form constructor and the CharField constructor. Nothing works; this is not enough. Anyone know how to fix this one? EDIT2: I have changed around how the form is done: class CommentForm(forms.Form): comment = forms.Textarea() It's only that now. This means the Textarea is the problem. What parameter can I pass in the textarea or to the form that will remove the aforementioned problem? A: The answer: class CommentForm(forms.Form): comment = forms.CharField(widget=forms.Textarea(), label='') Also, no auto_id in the constructor when creating the object, it should be left as: comment = CommentForm() A: Have you tried: class CommentForm(forms.Form): comment = forms.CharField(widget=forms.Textarea(), label=None) ? A: Try: class CommentForm(forms.Form): comment = forms.CharField(widget=forms.Textarea(), help_text="")
How Do I Remove Text From Generated Django Form?
So earlier I asked a question about removing the label that Django forms have by default. That worked out great, and I removed the label. However, the text that is generated by the form is still there! I would very much like to remove the text. Here is what I mean: <p>Text: <textarea rows="10" cols="40" name="text"></textarea></p> I would like to remove the Text: part of this, as I do not want it. Again, it is generated with the form I create via: {{ form.as_p }} Here is the class I use for my form: class CommentForm(forms.Form): comment = forms.CharField(widget=forms.Textarea()) EDIT: So far, I've looked at all of the documentation about the label tag and what stuff Forms generate. Apparently, this is possible to remove, it just does not tell me how. Also, I can remove the colon by adding: label_suffix=None I have now also tried label, label_tag, label_prefix, and prefix as parameters, both in the form constructor and the CharField constructor. Nothing works; this is not enough. Anyone know how to fix this one? EDIT2: I have changed around how the form is done: class CommentForm(forms.Form): comment = forms.Textarea() It's only that now. This means the Textarea is the problem. What parameter can I pass in the textarea or to the form that will remove the aforementioned problem?
[ "The answer:\nclass CommentForm(forms.Form):\n comment = forms.CharField(widget=forms.Textarea(), label='')\n\nAlso, no auto_id in the constructor when creating the object, it should be left as:\ncomment = new CommentForm()\n\n", "Have you tried:\nclass CommentForm(forms.Form):\n comment = forms.CharField(widget=forms.Textarea(), label=None)\n\n?\n", "Try:\nclass CommentForm(forms.Form):\n comment = forms.CharField(widget=forms.Textarea(), help_text=\"\")\n\n" ]
[ 4, 1, 0 ]
[]
[]
[ "django", "forms", "python", "textarea" ]
stackoverflow_0001080828_django_forms_python_textarea.txt
Q: Decompile .swf file to get images in python I would like to decompile a .swf file and get all the images from it, in python. Are there any libraries that do this? A: The SWFTools distribution has a command line program, SWFExtract, that can do this. You could call that from python to do what you want: http://www.swftools.org/ http://www.swftools.org/swfextract.html A: I don't think there are any libraries available for python, but maybe you can have an offline process to decompile swf using Sothink Flash Decompiler. Also, I did not come across any decompiler so far that is 100% accurate. A: There are no public swf decompiler libraries for Python. Use a 3rd party program and the Python subprocess module to execute it.
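Gluing the two suggestions together, a sketch of driving SWFExtract from Python via the subprocess module. The flags shown (-j to select a JPEG by id, -o for the output file) are assumptions from memory of swfextract's interface -- check swfextract --help on your install before relying on them:
import subprocess
subprocess.call(['swfextract', 'movie.swf'])  # no flags: list the embedded objects and their ids
subprocess.call(['swfextract', '-j', '1', '-o', 'image1.jpg', 'movie.swf'])  # extract JPEG id 1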
Decompile .swf file to get images in python
I would like to decompile a .swf file and get all the images from it, in python. Are there any libraries that do this?
[ "The SWFTools distribution has a command line program, SWFExtract, that can do this. You could call that from python to do what you want:\nhttp://www.swftools.org/\nhttp://www.swftools.org/swfextract.html\n", "I don't think there are any libraries available for python, but maybe you can have an offline process to decompile swf using sothink flash decompiler Also I did not come across any decompiler so far that is 100% accurate.\n", "There are no public swf decompiler libraries for Python.\nUse a 3rd party program and the Python subprocess module to execute it.\n" ]
[ 6, 2, 2 ]
[]
[]
[ "flash", "python" ]
stackoverflow_0001081183_flash_python.txt
Q: Changes to Python since Dive into Python I've been teaching myself Python by working through Dive Into Python by Mark Pilgrim. I thoroughly recommend it, as do other Stack Overflow users. However, the last update to Dive Into Python was five years ago. I look forward to reading the new Dive into Python 3 when I make the switch to 3.x, but for now, using Django means I'll stick to 2.x. I'm interested to know what new features of Python I'm missing out on, if I've used Dive Into Python as my primary resource for learning the language. A couple of examples that I've come across are itertools ElementTree Is there anything else I'm missing out on? edit: As Bastien points out in his answer, I could just read the What's New in Python pages, but sometimes it's fun to discover a useful tip on Stack Overflow rather than struggle through the complete, comprehensive answer in the official documentation. A: Check out What's New in Python. It has all the versions in the 2.x series. Per Alex's comments, you'll want to look at all Python 2.x for x > 2. Highlights for day-to-day coding: Enumeration: Instead of doing: for i in xrange(len(sequence)): val = sequence[i] pass You can now more succinctly write: for i, val in enumerate(iterable): pass This is important because it works for non-getitemable iterables (you would otherwise have to use an incrementing index counter alongside value iteration). Logging: a sane alternative to print-based debugging, standardized in a Log4j-style library module. Booleans: True and False, added for clarity: return True clearer intention than return 1. Generators: An expressive form of lazy evaluation evens = (i for i in xrange(limit) if i % 2 == 0) Extended slices: Builtins support strides in slices. assert [1, 2, 3, 4][::2] == [1, 3] Sets: For O(1) lookup semantics, you no longer have to do: pseudo_set = {'foo': None, 'bar': None} assert 'foo' in pseudo_set You can now do: set_ = set(['foo', 'bar']) assert 'foo' in set_ Reverse iteration: reversed(sequence) is more readable than sequence[::-1]. Subprocess: Unifies all the ways you might want to invoke a subprocess -- capturing outputs, feeding input, blocking or non-blocking. Conditional expressions: There's an issue with the idiom: a and b or c Namely, when b is falsy. b if a else c resolves that issue. Context management: Resource acquisition/release simplified via the with statement. with open(filename) as file: print file.read() # File is closed outside the `with` block. Better string formatting: Too much to describe -- see Python documentation under str.format(). A: Mark (author of the book) had some comments on this. I've shamelessly copied the related paragraph here: """If you choose Python 2, I can only recommend "Dive Into Python" chapters 2-7, 13-15, and 17. The rest of the book is horribly out of date.""" A: Here's a couple of examples of the sort of answer I was thinking of: Conditional Expressions Instead of the and-or trick, 2.5 offers a new way to write conditional expressions. #and-or trick: x = condition and 'true_value' or 'false_value' #new in 2.5: x = 'true_value' if condition else 'false_value' Testing for keys in dictionaries has_key() is deprecated in favor of key in d. >>>d={'key':'value','key2':'value2'} >>>'key' in d True A: A few "minor" features were added in 2.4 and 2.5 and are pervasive in new 2.x Python code: decorators (2.4) and unified try/except/finally clauses (2.5).
Before 2.5 you could not do: try: do_something() except FunkyException: handle_exception() finally: clean_up() Both are essentially syntactic sugar, though, since you could do the same, just with a bit more code. A: import antigravity See the documentation A: I suggest that you read the “What's New in Python 2.x?” documents. Some things that you may have missed: New-style classes (allows standard types subtyping, properties, ...). The with keyword, which helps allocating and releasing resources.
Changes to Python since Dive into Python
I've been teaching myself Python by working through Dive Into Python by Mark Pilgrim. I thoroughly recommend it, as do other Stack Overflow users. However, the last update to Dive Into Python was five years ago. I look forward to reading the new Dive into Python 3 when I make the switch to 3.x, but for now, using Django means I'll stick to 2.x. I'm interested to know what new features of Python I'm missing out on, if I've used Dive Into Python as my primary resource for learning the language. A couple of examples that I've come across are itertools ElementTree Is there anything else I'm missing out on? edit: As Bastien points out in his answer, I could just read the What's New in Python pages, but sometimes it's fun to discover a useful tip on Stack Overflow rather than struggle through the complete, comprehensive answer in the official documentation.
[ "Check out What's New in Python. It has all the versions in the 2.x series. Per Alex's comments, you'll want to look at all Python 2.x for x > 2.\nHighlights for day-to-day coding:\nEnumeration: Instead of doing:\nfor i in xrange(len(sequence)):\n val = sequence[i]\n pass\n\nYou can now more succinctly write:\nfor i, val in enumerate(iterable):\n pass\n\nThis is important because it works for non-getitemable iterables (you would otherwise have to use an incrementing index counter alongside value iteration).\nLogging: a sane alternative to print-based debugging, standardized in a Log4j-style library module.\nBooleans: True and False, added for clarity: return True clearer intention than return 1.\nGenerators: An expressive form of lazy evaluation\nevens = (i for i in xrange(limit) if i % 2 == 0)\n\nExtended slices: Builtins support strides in slices.\nassert [1, 2, 3, 4][::2] == [1, 3]\n\nSets: For O(1) lookup semantics, you no longer have to do:\npseudo_set = {'foo': None, 'bar': None}\nassert 'foo' in pseudo_set\n\nYou can now do:\nset_ = set(['foo', 'bar'])\nassert 'foo' in set_\n\nReverse iteration: reversed(sequence) is more readable than sequence[::-1].\nSubprocess: Unifies all the ways you might want to invoke a subprocess -- capturing outputs, feeding input, blocking or non-blocking.\nConditional expressions: There's an issue with the idiom:\na and b or c\n\nNamely, when b is falsy. b if a else c resolves that issue.\nContext management: Resource acquisition/release simplified via the with statement.\nwith open(filename) as file:\n print file.read()\n# File is closed outside the `with` block.\n\nBetter string formatting: Too much to describe -- see Python documentation under str.format().\n", "Mark(author of the book) had some comments on this. I've shamelessly copied the related paragraph here:\n\"\"\"If you choose Python 2, I can only recommend \"Dive Into Python\" chapters 2-7, 13-15, and 17. The rest of the book is horribly out of date.\"\"\"\n", "Here's a couple of examples of the sort of answer I was thinking of:\nConditional Expressions\nInstead of the and-or trick, 2.5 offers a new way to write conditional expressions.\n#and-or trick:\nx = condition and 'true_value' or 'false_value'\n\n#new in 2.5:\nx = 'true_value' if condition else 'false_value'\n\nTesting for keys in dictionaries\nhas_key() is deprecated in favor of key in d.\n>>>d={'key':'value','key2':'value2'}\n>>>'key1' in d\n True\n\n", "A few \"minor\" features were added in 2.4 and are pervasive in new 2.x python code: decorator (2.4) and try/except/finally clauses. Before you could not do:\ntry:\n do_something()\nexcept FunkyException:\n handle_exception():\nfinally:\n clean_up()\n\nBoth are essentially syntactic sugar, though, since you could do the same, just with a bit more code.\n", "import antigravity\n\nSee the documentation\n", "I suggest that you read the “what's in Python 2.x?” documents.\nSome things that may have missed:\n\nNew-style classes (allows standard types subtyping, properties, ...).\nThe with keyword, which helps allocating and releasing resources.\n\n" ]
[ 9, 6, 3, 2, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001080734_python.txt
Q: How can I report the API of a class programmatically? My goal is to parse a class and return a data-structure (object, dictionary, etc) that is descriptive of the methods and the related parameters contained within the class. Bonus points for types and returns... Requirements: Must be Python For example, the below class: class Foo: def bar(hello=None): return hello def baz(world=None): return baz Would be parsed to return result = {class:"Foo", methods: [{name: "bar", params:["hello"]}, {name: "baz", params:["world"]}]} So that's just an example of what I'm thinking... I'm really flexible on the data-structure. Any ideas/examples on how to achieve this? A: You probably want to check out Python's inspect module. It will get you most of the way there: >>> class Foo: ... def bar(hello=None): ... return hello ... def baz(world=None): ... return baz ... >>> import inspect >>> members = inspect.getmembers(Foo) >>> print members [('__doc__', None), ('__module__', '__main__'), ('bar', <unbound method Foo.bar> ), ('baz', <unbound method Foo.baz>)] >>> inspect.getargspec(members[2][1]) (['hello'], None, None, (None,)) >>> inspect.getargspec(members[3][1]) (['world'], None, None, (None,)) This isn't in the syntax you wanted, but that part should be fairly straight forward as you read the docs.
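To go from that inspect output to the kind of structure asked for, a small sketch (keys are strings, unlike the pseudo-JSON above, and type/return information is not available without annotations in 2.x):
import inspect

def describe(cls):
    methods = []
    for name, member in inspect.getmembers(cls, inspect.ismethod):
        argspec = inspect.getargspec(member)  # (args, varargs, varkw, defaults)
        methods.append({'name': name, 'params': argspec[0]})
    return {'class': cls.__name__, 'methods': methods}

# describe(Foo) -> {'class': 'Foo', 'methods': [{'name': 'bar', 'params': ['hello']},
#                                               {'name': 'baz', 'params': ['world']}]}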
How can I report the API of a class programmatically?
My goal is to parse a class and return a data-structure (object, dictionary, etc) that is descriptive of the methods and the related parameters contained within the class. Bonus points for types and returns... Requirements: Must be Python For example, the below class: class Foo: def bar(hello=None): return hello def baz(world=None): return baz Would be parsed to return result = {class:"Foo", methods: [{name: "bar", params:["hello"]}, {name: "baz", params:["world"]}]} So that's just an example of what I'm thinking... I'm really flexible on the data-structure. Any ideas/examples on how to achieve this?
[ "You probably want to check out Python's inspect module. It will get you most of the way there:\n>>> class Foo:\n... def bar(hello=None):\n... return hello\n... def baz(world=None):\n... return baz\n...\n>>> import inspect\n>>> members = inspect.getmembers(Foo)\n>>> print members\n[('__doc__', None), ('__module__', '__main__'), ('bar', <unbound method Foo.bar>\n), ('baz', <unbound method Foo.baz>)]\n>>> inspect.getargspec(members[2][1])\n(['hello'], None, None, (None,))\n>>> inspect.getargspec(members[3][1])\n(['world'], None, None, (None,))\n\nThis isn't in the syntax you wanted, but that part should be fairly straight forward as you read the docs.\n" ]
[ 8 ]
[]
[]
[ "api", "python" ]
stackoverflow_0001081392_api_python.txt
Q: Yahoo Chat in Python I'm wondering what the best way to build an interface with Yahoo Chat is. I haven't found anything that looks incredibly easy to do yet. One thought is to build it all from scratch; the other thought would be to grab the code from open source software. I could use something like zinc, however, this may be more complex than it needs to be. Another option would be to find a library that supports it, however, I haven't seen one. What are your thoughts on how to proceed and what would be the best way? I'm not necessarily looking for the fastest way as this is a bit of a learning project for me. A: Python-purple is a python API for accessing libpurple, the Pidgin backend. It will give you access to all the IM networks which Pidgin supports, including Y!Messenger, MSN Messenger, Jabber/GTalk/XMPP, and more...
Yahoo Chat in Python
I'm wondering what the best way to build an interface with Yahoo Chat is. I haven't found anything that looks incredibly easy to do yet. One thought is to build it all from scratch; the other thought would be to grab the code from open source software. I could use something like zinc, however, this may be more complex than it needs to be. Another option would be to find a library that supports it, however, I haven't seen one. What are your thoughts on how to proceed and what would be the best way? I'm not necessarily looking for the fastest way as this is a bit of a learning project for me.
[ "Python-purple is a python API for accessing libpurple, the Pidgin backend. It will give you access to all the IM networks which Pidgin supports, including Y!Messenger, MSN Messenger, Jabber/GTalk/XMPP, and more...\n" ]
[ 4 ]
[]
[]
[ "networking", "python" ]
stackoverflow_0001082078_networking_python.txt
Q: 2D vector projection in Python The code below projects the blue vector, AC, onto the red vector, AB; the resulting projected vector, AD, is drawn as purple. This is intended as my own implementation of this Wolfram demonstration. Something is wrong, however, and I can't really figure out what. It should be either that the projection formula itself is wrong or that I'm mixing up local and world coordinates. Any help is appreciated. This code is trimmed but can still be executed without problems, assuming you have pygame: import pygame from pygame.locals import * def vadd(a,b): return (a[0]+b[0],a[1]+b[1]) def vsub(a,b): return (a[0]-b[0],a[1]-b[1]) def project(a, b): """ project a onto b formula: b(dot(a,b)/(|b|^2)) """ abdot = (a[0]*b[0])+(a[1]*b[1]) blensq = (b[0]*b[0])+(b[1]*b[1]) temp = float(abdot)/float(blensq) c = (b[0]*temp,b[1]*temp) print a,b,abdot,blensq,temp,c return c pygame.init() screen = pygame.display.set_mode((150, 150)) running = True A = (75.0,75.0) B = (100.0,50.0) C = (90,70) AB = vsub(B,A) AC = vsub(C,A) D = project(AC,AB) AD = vsub(D,A) while running: for event in pygame.event.get(): if event.type == QUIT or (event.type == KEYDOWN and event.key == K_ESCAPE): running = False pygame.draw.line(screen, (255,0,0), A, B) pygame.draw.line(screen, (0,0,255), A, C) pygame.draw.line(screen, (255,0,255), A, D) pygame.display.flip() A: Shouldn't this D = project(AC,AB) AD = vsub(D,A) be AD = project(AC,AB) D = vadd(A,AD) Unfortunately, I can't test it, but that's the only thing that looks wrong to me.
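A sketch of the fix the answer proposes, reusing the question's own helpers:
AD = project(AC, AB)  # projected vector, still relative to A
D = vadd(A, AD)       # translate back into screen coordinates
pygame.draw.line(screen, (255, 0, 255), A, D)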
2D vector projection in Python
The code below projects the blue vector, AC, onto the red vector, AB; the resulting projected vector, AD, is drawn as purple. This is intended as my own implementation of this Wolfram demonstration. Something is wrong, however, and I can't really figure out what. It should be either that the projection formula itself is wrong or that I'm mixing up local and world coordinates. Any help is appreciated. This code is trimmed but can still be executed without problems, assuming you have pygame: import pygame from pygame.locals import * def vadd(a,b): return (a[0]+b[0],a[1]+b[1]) def vsub(a,b): return (a[0]-b[0],a[1]-b[1]) def project(a, b): """ project a onto b formula: b(dot(a,b)/(|b|^2)) """ abdot = (a[0]*b[0])+(a[1]*b[1]) blensq = (b[0]*b[0])+(b[1]*b[1]) temp = float(abdot)/float(blensq) c = (b[0]*temp,b[1]*temp) print a,b,abdot,blensq,temp,c return c pygame.init() screen = pygame.display.set_mode((150, 150)) running = True A = (75.0,75.0) B = (100.0,50.0) C = (90,70) AB = vsub(B,A) AC = vsub(C,A) D = project(AC,AB) AD = vsub(D,A) while running: for event in pygame.event.get(): if event.type == QUIT or (event.type == KEYDOWN and event.key == K_ESCAPE): running = False pygame.draw.line(screen, (255,0,0), A, B) pygame.draw.line(screen, (0,0,255), A, C) pygame.draw.line(screen, (255,0,255), A, D) pygame.display.flip()
[ "Shouldn't this\nD = project(AC,AB)\nAD = vsub(D,A)\n\nbe\nAD = project(AC,AB)\nD = vadd(A,AD)\n\nUnfortunately, I can't test it, but that's the only thing that looks wrong to me.\n" ]
[ 6 ]
[]
[]
[ "2d", "math", "projection", "python" ]
stackoverflow_0001082187_2d_math_projection_python.txt
Q: Automatically pressing a "submit" button using python The bus company I use runs an awful website (Hebrew,English) which makes a simple "From A to B timetable today" query a nightmare. I suspect they are trying to encourage the usage of the costly SMS query system. I'm trying to harvest the entire timetable from the site, by submitting the query for every possible point to every possible point, which would sum to about 10k queries. The query result appears in a popup window. I'm quite new to web programming, but familiar with the basic aspects of python. What's the most elegant way to parse the page, select a value from a drop-down menu, and press "submit" using a script? How do I give the program the contents of the new pop-up as input? Thanks! A: Twill is a simple scripting language for Web browsing. It happens to sport a python api. twill is essentially a thin shell around the mechanize package. All twill commands are implemented in the commands.py file, and pyparsing does the work of parsing the input and converting it into Python commands (see parse.py). Interactive shell work and readline support is implemented via the cmd module (from the standard Python library). An example of "pressing" submit from the above linked doc: from twill.commands import go, showforms, formclear, fv, submit go('http://issola.caltech.edu/~t/qwsgi/qwsgi-demo.cgi/') go('./widgets') showforms() formclear('1') fv("1", "name", "test") fv("1", "password", "testpass") fv("1", "confirm", "yes") showforms() submit('0') A: I would suggest you use mechanize. Here's a code snippet from their page that shows how to submit a form : import re from mechanize import Browser br = Browser() br.open("http://www.example.com/") # follow second link with element text matching regular expression response1 = br.follow_link(text_regex=r"cheese\s*shop", nr=1) assert br.viewing_html() print br.title() print response1.geturl() print response1.info() # headers print response1.read() # body response1.close() # (shown for clarity; in fact Browser does this for you) br.select_form(name="order") # Browser passes through unknown attributes (including methods) # to the selected HTMLForm (from ClientForm). br["cheeses"] = ["mozzarella", "caerphilly"] # (the method here is __setitem__) response2 = br.submit() # submit current form # print currently selected form (don't call .submit() on this, use br.submit()) print br.form A: You very rarely want to actually "press the submit button", rather than making GET or POST requests to the handler resource directly. Look at the HTML where the form is, and see what parameters its submitting to what URL, and if it is GET or POST method. You can form these requests with urllib(2) easily enough.
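To illustrate the urllib(2) suggestion in the last answer, a hedged sketch; the URL and field names are invented and would have to be read from the form's HTML:
import urllib
import urllib2

params = urllib.urlencode({'from_stop': '10', 'to_stop': '42', 'day': 'today'})
response = urllib2.urlopen('http://example.com/timetable', params)  # passing data makes this a POST
print response.read()  # the timetable HTML, ready for parsing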
Automatically pressing a "submit" button using python
The bus company I use runs an awful website (Hebrew,English) which makes a simple "From A to B timetable today" query a nightmare. I suspect they are trying to encourage the usage of the costly SMS query system. I'm trying to harvest the entire timetable from the site, by submitting the query for every possible point to every possible point, which would sum to about 10k queries. The query result appears in a popup window. I'm quite new to web programming, but familiar with the basic aspects of python. What's the most elegant way to parse the page, select a value from a drop-down menu, and press "submit" using a script? How do I give the program the contents of the new pop-up as input? Thanks!
[ "Twill is a simple scripting language for Web browsing. It happens to sport a python api.\n\ntwill is essentially a thin shell around the mechanize package. All twill commands are implemented in the commands.py file, and pyparsing does the work of parsing the input and converting it into Python commands (see parse.py). Interactive shell work and readline support is implemented via the cmd module (from the standard Python library).\n\nAn example of \"pressing\" submit from the above linked doc:\nfrom twill.commands import go, showforms, formclear, fv, submit\n\ngo('http://issola.caltech.edu/~t/qwsgi/qwsgi-demo.cgi/')\ngo('./widgets')\nshowforms()\n\nformclear('1')\nfv(\"1\", \"name\", \"test\")\nfv(\"1\", \"password\", \"testpass\")\nfv(\"1\", \"confirm\", \"yes\")\nshowforms()\n\nsubmit('0')\n\n", "I would suggest you use mechanize. Here's a code snippet from their page that shows how to submit a form :\n\nimport re\nfrom mechanize import Browser\n\nbr = Browser()\nbr.open(\"http://www.example.com/\")\n# follow second link with element text matching regular expression\nresponse1 = br.follow_link(text_regex=r\"cheese\\s*shop\", nr=1)\nassert br.viewing_html()\nprint br.title()\nprint response1.geturl()\nprint response1.info() # headers\nprint response1.read() # body\nresponse1.close() # (shown for clarity; in fact Browser does this for you)\n\nbr.select_form(name=\"order\")\n# Browser passes through unknown attributes (including methods)\n# to the selected HTMLForm (from ClientForm).\nbr[\"cheeses\"] = [\"mozzarella\", \"caerphilly\"] # (the method here is __setitem__)\nresponse2 = br.submit() # submit current form\n\n# print currently selected form (don't call .submit() on this, use br.submit())\nprint br.form\n\n\n", "You very rarely want to actually \"press the submit button\", rather than making GET or POST requests to the handler resource directly. Look at the HTML where the form is, and see what parameters its submitting to what URL, and if it is GET or POST method. You can form these requests with urllib(2) easily enough.\n" ]
[ 11, 10, 7 ]
[]
[]
[ "data_harvest", "form_submit", "python", "scripting" ]
stackoverflow_0001082361_data_harvest_form_submit_python_scripting.txt
Q: Sort a list of strings based on regular expression match I have a text file that looks a bit like: random text random text, can be anything blabla %A blabla random text random text, can be anything blabla %D blabla random text random text, can be anything blabla blabla %F random text random text, can be anything blabla blabla random text random text, %C can be anything blabla blabla When I readlines() it in, it becomes a list of sentences. Now I want this list to be sorted by the letter after the %. So basically, when the sort is applied to the above, it should look like: random text random text, can be anything blabla %A blabla random text random text, %C can be anything blabla blabla random text random text, can be anything blabla %D blabla random text random text, can be anything blabla blabla %F random text random text, can be anything blabla blabla Is there a good way to do this, or will I have to break each string into tuples, and then move the letters to a specific column, and then sort using key=operator.itemgetter(col)? Thank you A: In [1]: def grp(pat, txt): ...: r = re.search(pat, txt) ...: return r.group(0) if r else '&' In [2]: y Out[2]: ['random text random text, can be anything blabla %A blabla', 'random text random text, can be anything blabla %D blabla', 'random text random text, can be anything blabla blabla %F', 'random text random text, can be anything blabla blabla', 'random text random text, %C can be anything blabla blabla'] In [3]: y.sort(key=lambda l: grp("%\w", l)) In [4]: y Out[4]: ['random text random text, can be anything blabla %A blabla', 'random text random text, %C can be anything blabla blabla', 'random text random text, can be anything blabla %D blabla', 'random text random text, can be anything blabla blabla %F', 'random text random text, can be anything blabla blabla'] A: what about this? hope this helps. def k(line): v = line.partition("%")[2] v = v[0] if v else 'z' # here z stands for the max value return v print ''.join(sorted(open('data.txt', 'rb'), key = k)) A: You could use a custom key function to compare the strings. Using the lambda syntax you can write that inline, like so: strings.sort(key=lambda str: re.sub(".*%", "", str)); The re.sub(".*%", "", str) call effectively removes anything before the first percent sign so if the string has a percent sign it'll compare what comes after it, otherwise it'll compare the entire string. Pedantically speaking, this doesn't just use the letter following the percent sign, it also uses everything after. If you want to use the letter and only the letter try this slightly longer line: strings.sort(key=lambda str: re.sub(".*%(.).*", "\\1", str)); A: Here is a quick-and-dirty approach. Without knowing more about the requirements of your sort, I can't know if this satisfies your need. Assume that your list is held in 'listoflines': listoflines.sort( key=lambda x: x[x.find('%'):] ) Note that this will sort all lines without a '%' character by their final character.
Sort a list of strings based on regular expression match
I have a text file that looks a bit like: random text random text, can be anything blabla %A blabla random text random text, can be anything blabla %D blabla random text random text, can be anything blabla blabla %F random text random text, can be anything blabla blabla random text random text, %C can be anything blabla blabla When I readlines() it in, it becomes a list of sentences. Now I want this list to be sorted by the letter after the %. So basically, when the sort is applied to the above, it should look like: random text random text, can be anything blabla %A blabla random text random text, %C can be anything blabla blabla random text random text, can be anything blabla %D blabla random text random text, can be anything blabla blabla %F random text random text, can be anything blabla blabla Is there a good way to do this, or will I have to break each string into tuples, and then move the letters to a specific column, and then sort using key=operator.itemgetter(col)? Thank you
[ "In [1]: def grp(pat, txt): \n ...: r = re.search(pat, txt)\n ...: return r.group(0) if r else '&'\n\nIn [2]: y\nOut[2]: \n['random text random text, can be anything blabla %A blabla',\n 'random text random text, can be anything blabla %D blabla',\n 'random text random text, can be anything blabla blabla %F',\n 'random text random text, can be anything blabla blabla',\n 'random text random text, %C can be anything blabla blabla']\n\nIn [3]: y.sort(key=lambda l: grp(\"%\\w\", l))\n\nIn [4]: y\nOut[4]: \n['random text random text, can be anything blabla %A blabla',\n 'random text random text, %C can be anything blabla blabla',\n 'random text random text, can be anything blabla %D blabla',\n 'random text random text, can be anything blabla blabla %F',\n 'random text random text, can be anything blabla blabla']\n\n", "what about this? hope this helps.\ndef k(line):\n v = line.partition(\"%\")[2]\n v = v[0] if v else 'z' # here z stands for the max value\n return v\nprint ''.join(sorted(open('data.txt', 'rb'), key = k))\n\n", "You could use a custom key function to compare the strings. Using the lambda syntax you can write that inline, like so:\nstrings.sort(key=lambda str: re.sub(\".*%\", \"\", str));\n\nThe re.sub(\".*%\", \"\", str) call effectively removes anything before the first percent sign so if the string has a percent sign it'll compare what comes after it, otherwise it'll compare the entire string.\nPedantically speaking, this doesn't just use the letter following the percent sign, it also uses everything after. If you want to use the letter and only the letter try this slightly longer line:\nstrings.sort(key=lambda str: re.sub(\".*%(.).*\", \"\\\\1\", str));\n\n", "Here is a quick-and-dirty approach. Without knowing more about the requirements of your sort, I can't know if this satisfies your need. \nAssume that your list is held in 'listoflines':\nlistoflines.sort( key=lambda x: x[x.find('%'):] )\n\nNote that this will sort all lines without a '%' character by their final character.\n" ]
[ 9, 4, 1, 1 ]
[]
[]
[ "python", "sorting" ]
stackoverflow_0001082413_python_sorting.txt
Q: USB - sync vs async vs semi-async Updates: I wrote an asynchronous C version and it works as it should. Turns out the speed issue was due to Python's GIL. There's a method to fine-tune its behavior. sys.setcheckinterval(interval) Setting interval to zero (default is 100) fixes the slow speed issue. Now all that's left is to figure out what's causing the other issue (not all pixels are filled). This one doesn't make any sense. usbmon shows all the communications are going through. libusb's debug messaging shows nothing out of the ordinary. I guess I need to take usbmon's output and compare sync vs async. The data that usbmon shows seems to look correct at a glance (The first byte should be 0x96 or 0x95). As said below in the original question, S. Lott, it's for a USB LCD controller. There are three different versions of drv_send, which is the outgoing endpoint method. I've explained the differences below. Maybe it'll help if I outline the asynchronous USB operations. Note that synchronous USB operations work the same way, it's just that it's done synchronously. We can view asynchronous I/O as a 5-step process: Allocation: allocate a libusb_transfer (This is self.transfer) Filling: populate the libusb_transfer instance with information about the transfer you wish to perform (libusb_fill_bulk_transfer) Submission: ask libusb to submit the transfer (libusb_submit_transfer) Completion handling: examine transfer results in the libusb_transfer structure (libusb_handle_events and libusb_handle_events_timeout) Deallocation: clean up resources (Not shown below) Original question: I have three different versions. One's entirely synchronous, one's semi-asynchronous, and the last is fully asynchronous. The difference is that the synchronous version fully populates the LCD display I'm controlling with the expected pixels, and it's really fast. The semi-asynchronous version only populates a portion of the display, but it's still very fast. The asynchronous version is really slow and only fills a portion of the display. I'm baffled why the pixels aren't fully populated, and why the asynchronous version is really slow. Any clues? Here's the fully synchronous version: def drv_send(self, data): if not self.Connected(): return self.drv_locked = True buffer = '' for c in data: buffer = buffer + chr(c) length = len(buffer) out_buffer = cast(buffer, POINTER(c_ubyte)) libusb_fill_bulk_transfer(self.transfer, self.handle, LIBUSB_ENDPOINT_OUT + 1, out_buffer, length, self.cb_send_transfer, None, 0) lib.libusb_submit_transfer(self.transfer) while self.drv_locked: r = lib.libusb_handle_events(None) if r < 0: if r == LIBUSB_ERROR_INTERRUPTED: continue lib.libusb_cancel_transfer(transfer) while self.drv_locked: if lib.libusb_handle_events(None) < 0: break self.count += 1 Here's the semi-asynchronous version: def drv_send(self, data): if not self.Connected(): return def f(d): self.drv_locked = True buffer = '' for c in data: buffer = buffer + chr(c) length = len(buffer) out_buffer = cast(buffer, POINTER(c_ubyte)) libusb_fill_bulk_transfer(self.transfer, self.handle, LIBUSB_ENDPOINT_OUT + 1, out_buffer, length, self.cb_send_transfer, None, 0) lib.libusb_submit_transfer(self.transfer) while self.drv_locked: r = lib.libusb_handle_events(None) if r < 0: if r == LIBUSB_ERROR_INTERRUPTED: continue lib.libusb_cancel_transfer(transfer) while self.drv_locked: if lib.libusb_handle_events(None) < 0: break self.count += 1 self.command_queue.put(Command(f, data)) Here's the fully asynchronous version.
device_poll is in a thread by itself. def device_poll(self): while self.Connected(): tv = TIMEVAL(1, 0) r = lib.libusb_handle_events_timeout(None, byref(tv)) if r < 0: break def drv_send(self, data): if not self.Connected(): return def f(d): self.drv_locked = True buffer = '' for c in data: buffer = buffer + chr(c) length = len(buffer) out_buffer = cast(buffer, POINTER(c_ubyte)) libusb_fill_bulk_transfer(self.transfer, self.handle, LIBUSB_ENDPOINT_OUT + 1, out_buffer, length, self.cb_send_transfer, None, 0) lib.libusb_submit_transfer(self.transfer) self.count += 1 self.command_queue.put(Command(f, data)) And here's where the queue is emptied. It's the callback for a gobject timeout. def command_worker(self): if self.drv_locked: # or time.time() - self.command_time < self.command_rate: return True try: tmp = self.command_queue.get_nowait() except Queue.Empty: return True tmp.func(*tmp.args) self.command_time = time.time() return True Here's the transfer's callback. It just changes the locked state back to false, indicating the operation's finished. def cb_send_transfer(self, transfer): if transfer[0].status.value != LIBUSB_TRANSFER_COMPLETED: error("%s: transfer status %d" % (self.name, transfer.status)) print "cb_send_transfer", self.count self.drv_locked = False A: Ok, I don't know if I understand you correctly. You have some device with an LCD, and you have some firmware on it to handle USB requests. On the PC side you are using PyUSB, which wraps libusb. A couple of suggestions if you are experiencing speed problems: try to limit the data you are transferring. Do not transfer the whole raw data; maybe send only the pixels that changed. Second, have you measured the speed of transfers by using some USB analyzer software? If you don't have money for a hardware USB analyzer, maybe try a software version. I have never used that kind of analyzer, but I think the data provided by them is not very reliable. Thirdly, see what the device is really doing; maybe that is the bottleneck of your data transfers. I don't have much time today to answer your question exactly, so I will get back to this later. I have been watching this thread for some time, and there is dead silence around it, so I tried to spare some time and look deeper. Still not much time today; maybe later. Unfortunately I am no Python expert, but I know some stuff about C, C++, Windows and most of all USB. I think this may be an LCD device problem (what are you using?), because if the transfers work fine and the data was received by the device, that points to a device problem. I looked at your code a little; could you do some testing, sending only 1 byte, 8 bytes, and an endpoint-size byte-length transfer, and see how it looks in usbmon? Endpoint size is the size of the hardware buffer used by the PICO LCD USB controller. I am not sure what it is for yours, but I am guessing that when you send an endpoint-size message, the next message should be 0 bytes long. Maybe there is the problem. Regarding the test, I assume you have seen the data which you programmed to send. A second thing could be that the data gets overwritten, or not received fast enough. By overwritten I mean the LCD could not see the end of the data, and mixed one transfer with another. I am not sure what usbmon is capable of showing, but according to the USB standard, after an endpoint-size packet there should be a 0-length packet sent, showing that it is the end of the transfer.
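As an aside, the GIL tweak mentioned in the update above is a one-liner (Python 2.x only):
import sys
sys.setcheckinterval(0)  # non-positive: check for pending thread switches on every bytecode instruction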
USB - sync vs async vs semi-async
Updates: I wrote an asynchronous C version and it works as it should. Turns out the speed issue was due to Python's GIL. There's a method to fine-tune its behavior. sys.setcheckinterval(interval) Setting interval to zero (default is 100) fixes the slow speed issue. Now all that's left is to figure out what's causing the other issue (not all pixels are filled). This one doesn't make any sense. usbmon shows all the communications are going through. libusb's debug messaging shows nothing out of the ordinary. I guess I need to take usbmon's output and compare sync vs async. The data that usbmon shows seems to look correct at a glance (The first byte should be 0x96 or 0x95). As said below in the original question, S. Lott, it's for a USB LCD controller. There are three different versions of drv_send, which is the outgoing endpoint method. I've explained the differences below. Maybe it'll help if I outline the asynchronous USB operations. Note that synchronous USB operations work the same way, it's just that it's done synchronously. We can view asynchronous I/O as a 5-step process: Allocation: allocate a libusb_transfer (This is self.transfer) Filling: populate the libusb_transfer instance with information about the transfer you wish to perform (libusb_fill_bulk_transfer) Submission: ask libusb to submit the transfer (libusb_submit_transfer) Completion handling: examine transfer results in the libusb_transfer structure (libusb_handle_events and libusb_handle_events_timeout) Deallocation: clean up resources (Not shown below) Original question: I have three different versions. One's entirely synchronous, one's semi-asynchronous, and the last is fully asynchronous. The difference is that the synchronous version fully populates the LCD display I'm controlling with the expected pixels, and it's really fast. The semi-asynchronous version only populates a portion of the display, but it's still very fast. The asynchronous version is really slow and only fills a portion of the display. I'm baffled why the pixels aren't fully populated, and why the asynchronous version is really slow. Any clues? Here's the fully synchronous version: def drv_send(self, data): if not self.Connected(): return self.drv_locked = True buffer = '' for c in data: buffer = buffer + chr(c) length = len(buffer) out_buffer = cast(buffer, POINTER(c_ubyte)) libusb_fill_bulk_transfer(self.transfer, self.handle, LIBUSB_ENDPOINT_OUT + 1, out_buffer, length, self.cb_send_transfer, None, 0) lib.libusb_submit_transfer(self.transfer) while self.drv_locked: r = lib.libusb_handle_events(None) if r < 0: if r == LIBUSB_ERROR_INTERRUPTED: continue lib.libusb_cancel_transfer(transfer) while self.drv_locked: if lib.libusb_handle_events(None) < 0: break self.count += 1 Here's the semi-asynchronous version: def drv_send(self, data): if not self.Connected(): return def f(d): self.drv_locked = True buffer = '' for c in data: buffer = buffer + chr(c) length = len(buffer) out_buffer = cast(buffer, POINTER(c_ubyte)) libusb_fill_bulk_transfer(self.transfer, self.handle, LIBUSB_ENDPOINT_OUT + 1, out_buffer, length, self.cb_send_transfer, None, 0) lib.libusb_submit_transfer(self.transfer) while self.drv_locked: r = lib.libusb_handle_events(None) if r < 0: if r == LIBUSB_ERROR_INTERRUPTED: continue lib.libusb_cancel_transfer(transfer) while self.drv_locked: if lib.libusb_handle_events(None) < 0: break self.count += 1 self.command_queue.put(Command(f, data)) Here's the fully asynchronous version. device_poll is in a thread by itself.
def device_poll(self): while self.Connected(): tv = TIMEVAL(1, 0) r = lib.libusb_handle_events_timeout(None, byref(tv)) if r < 0: break def drv_send(self, data): if not self.Connected(): return def f(d): self.drv_locked = True buffer = '' for c in data: buffer = buffer + chr(c) length = len(buffer) out_buffer = cast(buffer, POINTER(c_ubyte)) libusb_fill_bulk_transfer(self.transfer, self.handle, LIBUSB_ENDPOINT_OUT + 1, out_buffer, length, self.cb_send_transfer, None, 0) lib.libusb_submit_transfer(self.transfer) self.count += 1 self.command_queue.put(Command(f, data)) And here's where the queue is emptied. It's the callback for a gobject timeout. def command_worker(self): if self.drv_locked: # or time.time() - self.command_time < self.command_rate: return True try: tmp = self.command_queue.get_nowait() except Queue.Empty: return True tmp.func(*tmp.args) self.command_time = time.time() return True Here's the transfer's callback. It just changes the locked state back to false, indicating the operation's finished. def cb_send_transfer(self, transfer): if transfer[0].status.value != LIBUSB_TRANSFER_COMPLETED: error("%s: transfer status %d" % (self.name, transfer.status)) print "cb_send_transfer", self.count self.drv_locked = False
[ "Ok, I don't know if I understand you correctly. You have some device with an LCD, and you have some firmware on it to handle USB requests. On the PC side you are using PyUSB, which wraps libusb. \nA couple of suggestions if you are experiencing speed problems: try to limit the data you are transferring. Do not transfer the whole raw data; maybe send only the pixels that changed. \nSecond, have you measured the speed of transfers by using some USB analyzer software? If you don't have money for a hardware USB analyzer, maybe try a software version. I have never used that kind of analyzer, but I think the data provided by them is not very reliable. \nThirdly, see what the device is really doing; maybe that is the bottleneck of your data transfers.\nI don't have much time today to answer your question exactly, so I will get back to this later.\nI have been watching this thread for some time, and there is dead silence around it, so I tried to spare some time and look deeper. Still not much time today; maybe later. Unfortunately I am no Python expert, but I know some stuff about C, C++, Windows and most of all USB. I think this may be an LCD device problem (what are you using?), because if the transfers work fine and the data was received by the device, that points to a device problem. \nI looked at your code a little; could you do some testing, sending only 1 byte, 8 bytes, and an endpoint-size byte-length transfer, and see how it looks in usbmon? \nEndpoint size is the size of the hardware buffer used by the PICO LCD USB controller. I am not sure what it is for yours, but I am guessing that when you send an endpoint-size message, the next message should be 0 bytes long. Maybe there is the problem. \nRegarding the test, I assume you have seen the data which you programmed to send. \nA second thing could be that the data gets overwritten, or not received fast enough. By overwritten I mean the LCD could not see the end of the data, and mixed one transfer with another. \nI am not sure what usbmon is capable of showing, but according to the USB standard, after an endpoint-size packet there should be a 0-length packet sent, showing that it is the end of the transfer. \n" ]
[ 1 ]
[]
[]
[ "ctypes", "libusb", "python", "usb" ]
stackoverflow_0001060305_ctypes_libusb_python_usb.txt
Q: How do you do something after you render the view? (Django) I want to do something after I have rendered the view using return render_to_response() Are signals the only way to do this? Do I need to write a custom signal or does request_finished give me enough information? Basically I need to know what page was rendered, and then do an action in response to that. Thanks. UPDATE FROM COMMENTS: I don't want to hold up the rendering of the page, so I want to render the page first and then do the action. A: You spawn a separate thread and have it do the action. t = threading.Thread(target=do_my_action, args=[my_argument]) # We want the program to wait on this thread before shutting down. t.setDaemon(False) t.start() This will cause 'do_my_action(my_argument)' to be executed in a second thread which will keep working even after you send your Django response and terminate the initial thread. For example it could send an email without delaying the response. A: If you have a long-running process, you have two simple choices. Spawn a subprocess prior to sending the response page. Create a "background service daemon" and pass work requests to it. This is all outside Django. You use subprocess or some other IPC method to communicate with the other process. A: A common way to do this is to use message queues. You place a message on the queue, and worker threads (or processes, etc.) consume the queue and do the work after your view has completed. Google App Engine has the task queue api http://code.google.com/appengine/docs/python/taskqueue/, amazon has the Simple Queue Service http://aws.amazon.com/sqs/. A quick search didn't turn up any django pluggables that look like accepted standards. A quick and dirty way to emulate the functionality is to place the 'message' in a database table, and have a cron job periodically check the table to perform the work. A: Django's HttpResponse object accepts an iterator in its constructor: http://docs.djangoproject.com/en/dev/ref/request-response/#passing-iterators So you could do something like: def myiter(): yield "my content" enqueue_some_task() return def myview(request): return HttpResponse(myiter()) The normal use of an iterator is to send large data without reading it all into memory. For example, read chunks from a file and yield appropriately. I've never used it in this way, but it seems like it should work. A: My favourite solution: A separate process that handles background tasks, typically things like indexing and sending notification mails etc. And then, during the view rendering, you send an event to the event handling system (I don't know if Django has one built in, but you always need one anyway so you should have one) and the even system then puts a message in a message queue (which is trivial to write unless you have multiple machines or multiple background processes) that does the task in question.
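A rough sketch of the "message in a database table" idea from the queue answer; the model and field names are invented for illustration:
from django.db import models
from django.shortcuts import render_to_response

class PendingTask(models.Model):  # hypothetical pending-work table
    page_name = models.CharField(max_length=100)
    created = models.DateTimeField(auto_now_add=True)

def my_view(request):
    PendingTask.objects.create(page_name='home')  # enqueue; a cron job drains this later
    return render_to_response('home.html')        # respond without waiting on the work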
How do you do something after you render the view? (Django)
I want to do something after I have rendered the view using return render_to_response() Are signals the only way to do this? Do I need to write a custom signal or does request_finished give me enough information? Basically I need to know what page was rendered, and then do an action in response to that. Thanks. UPDATE FROM COMMENTS: I don't want to hold up the rendering of the page, so I want to render the page first and then do the action.
[ "You spawn a separate thread and have it do the action.\nt = threading.Thread(target=do_my_action, args=[my_argument])\n# We want the program to wait on this thread before shutting down.\nt.setDaemon(False)\nt.start()\n\nThis will cause 'do_my_action(my_argument)' to be executed in a second thread which will keep working even after you send your Django response and terminate the initial thread. For example it could send an email without delaying the response.\n", "If you have a long-running process, you have two simple choices.\n\nSpawn a subprocess prior to sending the response page.\nCreate a \"background service daemon\" and pass work requests to it.\n\nThis is all outside Django. You use subprocess or some other IPC method to communicate with the other process.\n", "A common way to do this is to use message queues. You place a message on the queue, and worker threads (or processes, etc.) consume the queue and do the work after your view has completed.\nGoogle App Engine has the task queue api http://code.google.com/appengine/docs/python/taskqueue/, amazon has the Simple Queue Service http://aws.amazon.com/sqs/. \nA quick search didn't turn up any django pluggables that look like accepted standards. \nA quick and dirty way to emulate the functionality is to place the 'message' in a database table, and have a cron job periodically check the table to perform the work.\n", "Django's HttpResponse object accepts an iterator in its constructor:\nhttp://docs.djangoproject.com/en/dev/ref/request-response/#passing-iterators\nSo you could do something like:\ndef myiter():\n yield \"my content\"\n enqueue_some_task()\n return\n\ndef myview(request):\n return HttpResponse(myiter())\n\nThe normal use of an iterator is to send large data without reading it all into memory. For example, read chunks from a file and yield appropriately. I've never used it in this way, but it seems like it should work.\n", "My favourite solution: A separate process that handles background tasks, typically things like indexing and sending notification mails etc. And then, during the view rendering, you send an event to the event handling system (I don't know if Django has one built in, but you always need one anyway so you should have one) and the even system then puts a message in a message queue (which is trivial to write unless you have multiple machines or multiple background processes) that does the task in question.\n" ]
[ 12, 5, 5, 5, 2 ]
[ "In render to response, you pass the html page that you want displayed. That other page needs to send a post (via Javascript or something) that triggers the correct function in your views, then that view calls the correct next page to be shown.\n", "Perhaps I do not understand your question. But why not something simple like:\ntry:\n return render_to_response()\nfinally:\n do_what_needs_to_be_done()\n\n" ]
[ -1, -2 ]
[ "django", "python" ]
stackoverflow_0001081340_django_python.txt
Q: What is the Pythonic way to write this loop? for jr in json_reports: jr['time_created'] = str(jr['time_created']) A: Looks to me that you're already there A: That would be the pythonic way to write the loop if you need to assign it to the same list. If you just want to pull out strings of all time_created indices in each element of json_reports, you can use a list comprehension: strings = [str(i['time_created']) for i in json_reports]
What is the Pythonic way to write this loop?
for jr in json_reports: jr['time_created'] = str(jr['time_created'])
[ "Looks to me that you're already there\n", "That would be the pythonic way to write the loop if you need to assign it to the same list.\nIf you just want to pull out strings of all time_created indices in each element of json_reports, you can use a list comprehension:\nstrings = [str(i['time_created']) for i in json_reports]\n\n" ]
[ 10, 5 ]
[]
[]
[ "python" ]
stackoverflow_0001083115_python.txt
Q: Send an xmpp message using a python library How can I send an XMPP message using one of the following Python libraries: wokkel, xmpppy, or jabber.py ? I think I am aware of the pseudo-code, but so far have not been able to get one running correctly. This is what I have tried so far: Call some API and pass the servername and port number to connect to that server. Call some API and pass the username, password to construct a JID object. Authenticate with that JID. Construct a Message object and call some API and pass that message obj in the argument. Call some send API. It seems easy enough in concept, but the devil is somewhere in the details. Please show a sample snippet if that's possible. A: This is the simplest possible xmpp client. It will send a 'hello :)' message. I'm using xmpppy in the example. And connecting to gtalk server. I think the example is self-explanatory: import xmpp username = 'username' passwd = 'password' to='[email protected]' msg='hello :)' client = xmpp.Client('gmail.com') client.connect(server=('talk.google.com',5223)) client.auth(username, passwd, 'botty') client.sendInitPresence() message = xmpp.Message(to, msg) message.setAttr('type', 'chat') client.send(message) A: xmpppy has a number of examples listed on its main page (under "examples"), the most basic of which sends a single test message. They make the examples progressively more interesting -- they introduce the callback-oriented API via a chat bot program.
Send an xmpp message using a python library
How can I send an XMPP message using one of the following Python libraries: wokkel, xmpppy, or jabber.py ? I think I am aware of the pseudo-code, but so far have not been able to get one running correctly. This is what I have tried so far: Call some API and pass the servername and port number to connect to that server. Call some API and pass the username, password to construct a JID object. Authenticate with that JID. Construct a Message object and call some API and pass that message obj in the argument. Call some send API. It seems easy enough in concept, but the devil is somewhere in the details. Please show a sample snippet if that's possible.
[ "This is the simplest possible xmpp client. It will send a 'hello :)' message. I'm using xmpppy in the example. And connecting to gtalk server. I think the example is self-explanatory:\nimport xmpp\n\nusername = 'username'\npasswd = 'password'\nto='[email protected]'\nmsg='hello :)'\n\n\nclient = xmpp.Client('gmail.com')\nclient.connect(server=('talk.google.com',5223))\nclient.auth(username, passwd, 'botty')\nclient.sendInitPresence()\nmessage = xmpp.Message(to, msg)\nmessage.setAttr('type', 'chat')\nclient.send(message)\n\n", "xmpppy has a number of examples listed on its main page (under \"examples\"), the most basic of which sends a single test message. They make the examples progressively more interesting -- they introduce the callback-oriented API via a chat bot program.\n" ]
[ 39, 1 ]
[]
[]
[ "python", "xmpp" ]
stackoverflow_0000910737_python_xmpp.txt
Q: Get brief human-readable info about XRI OpenID with Python? I'd like to be able to tell the site visitor that comes with his/her OpenID: you are using your XYZ id for the first time on mysite - please create your screen name, where XYZ is a nice token that makes sense. For example - XYZ could be the provider name. I'd like to find a solution that works for OpenID as defined in the standard - i.e. work for the XRI type of ID - extensible resource identifier. urlparse (as suggested by RichieHindle) works for URL-type OpenIDs, but does not work in general, e.g. for i-name IDs like "=somename". There are many other forms of valid OpenID strings that don't look anything like a URL. Thanks. A: Since OpenIDs are URLs, this might be the cleanest way in the absence of built-in support in Janrain: from urlparse import urlparse openid_str = "http://myprovider/myname" # str(openid_obj) parts = urlparse(openid_str) provider_name = parts[1] print (provider_name) # Prints myprovider
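For the i-name case the question raises, a sketch that special-cases the XRI global context symbols before falling back to urlparse; display_token is a hypothetical helper:
from urlparse import urlparse

def display_token(openid_str):
    if openid_str[:1] in ('=', '@', '+', '$', '!'):  # XRI global context symbols
        return openid_str  # i-names/i-numbers are already human-readable
    return urlparse(openid_str)[1]  # URL form: show the provider's host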
Get brief human-readable info about XRI OpenID with Python?
I'd like to be able to tell the site visitor that comes with his/her OpenID: you are using your XYZ id for the first time on mysite - please create your screen name, where XYZ is a nice token that makes sense. For example - XYZ could be the provider name. I'd like to find a solution that works for OpenID as defined in the standard - i.e. work for the XRI type of ID - extensible resource identifier. urlparse (as suggested by RichieHindle) works for URL-type OpenIDs, but does not work in general, e.g. for i-name IDs like "=somename". There are many other forms of valid OpenID strings that don't look anything like a URL. Thanks.
[ "Since OpenIDs are URLs, this might be the cleanest way in the absence of built-in support in Janrain:\nfrom urlparse import urlparse\nopenid_str = \"http://myprovider/myname\" # str(openid_obj)\nparts = urlparse(openid_str)\nprovider_name = parts[1]\nprint (provider_name) # Prints myprovider\n\n" ]
[ 3 ]
[]
[]
[ "janrain", "openid", "python", "xri" ]
stackoverflow_0001083435_janrain_openid_python_xri.txt
Q: Missing datetime.timedelta.to_seconds() -> float in Python? I understand that seconds and microseconds are probably represented separately in datetime.timedelta for efficiency reasons, but I just wrote this simple function: def to_seconds_float(timedelta): """Calculate floating point representation of combined seconds/microseconds attributes in :param:`timedelta`. :raise ValueError: If :param:`timedelta.days` is truthy. >>> to_seconds_float(datetime.timedelta(seconds=1, milliseconds=500)) 1.5 >>> too_big = datetime.timedelta(days=1, seconds=12) >>> to_seconds_float(too_big) # doctest: +ELLIPSIS Traceback (most recent call last): ... ValueError: ('Must not have days', datetime.timedelta(1, 12)) """ if timedelta.days: raise ValueError('Must not have days', timedelta) return timedelta.seconds + timedelta.microseconds / 1E6 This is useful for things like passing a value to time.sleep or select.select. Why isn't something like this part of the datetime.timedelta interface? I may be missing some corner case. Time representation seems to have so many non-obvious corner cases... I rejected days right out to have a reasonable shot at some precision (I'm too lazy to actually work out the math ATM, so this seems like a reasonable compromise ;-). A: A Python float has about 15 significant digits, so with seconds being up to 86400 (5 digits to the left of the decimal point) and microseconds needing 6 digits, you could well include the days (up to several years' worth) without loss of precision. A good mantra is "pi seconds is a nanocentury" -- about 3.14E9 seconds per 100 years, i.e. 3E7 per year, so 3E13 microseconds per year. The mantra is good because it's memorable, even though it does require you to do a little mental arithmetic afterwards (but, like spinach, it's GOOD for you -- keeps you nimble and alert!-). The design philosophy of datetime is somewhat minimalist, so it's not surprising it omits many possible helper methods that boil down to simple arithmetic expressions. A: Your concern for precision is misplaced. Here's a simple two-liner to calculate roughly how many YEARS you can squeeze into what's left of the 53 bits of precsion in an IEEE754 64-bit float: >>> import math >>> 10 ** (math.log10(2 ** 53) - math.log10(60 * 60 * 24) - 6) / 365.25 285.42092094268787 >>> Watch out for round-off; add the smallest non-zero numbers first: return timedelta.seconds + timedelta.microseconds / 1E6 + timedelta.days * 86400
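As an aside not raised in the thread: Python 2.7 and later add timedelta.total_seconds(), which covers this; on older versions the equivalent, days included, is roughly:
def to_seconds_float(timedelta):
    # Days fit comfortably within float precision, per the answers above.
    return timedelta.days * 86400 + timedelta.seconds + timedelta.microseconds / 1E6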
Missing datetime.timedelta.to_seconds() -> float in Python?
I understand that seconds and microseconds are probably represented separately in datetime.timedelta for efficiency reasons, but I just wrote this simple function: def to_seconds_float(timedelta): """Calculate floating point representation of combined seconds/microseconds attributes in :param:`timedelta`. :raise ValueError: If :param:`timedelta.days` is truthy. >>> to_seconds_float(datetime.timedelta(seconds=1, milliseconds=500)) 1.5 >>> too_big = datetime.timedelta(days=1, seconds=12) >>> to_seconds_float(too_big) # doctest: +ELLIPSIS Traceback (most recent call last): ... ValueError: ('Must not have days', datetime.timedelta(1, 12)) """ if timedelta.days: raise ValueError('Must not have days', timedelta) return timedelta.seconds + timedelta.microseconds / 1E6 This is useful for things like passing a value to time.sleep or select.select. Why isn't something like this part of the datetime.timedelta interface? I may be missing some corner case. Time representation seems to have so many non-obvious corner cases... I rejected days right out to have a reasonable shot at some precision (I'm too lazy to actually work out the math ATM, so this seems like a reasonable compromise ;-).
[ "A Python float has about 15 significant digits, so with seconds being up to 86400 (5 digits to the left of the decimal point) and microseconds needing 6 digits, you could well include the days (up to several years' worth) without loss of precision.\nA good mantra is \"pi seconds is a nanocentury\" -- about 3.14E9 seconds per 100 years, i.e. 3E7 per year, so 3E13 microseconds per year. The mantra is good because it's memorable, even though it does require you to do a little mental arithmetic afterwards (but, like spinach, it's GOOD for you -- keeps you nimble and alert!-).\nThe design philosophy of datetime is somewhat minimalist, so it's not surprising it omits many possible helper methods that boil down to simple arithmetic expressions.\n", "Your concern for precision is misplaced. Here's a simple two-liner to calculate roughly how many YEARS you can squeeze into what's left of the 53 bits of precsion in an IEEE754 64-bit float:\n>>> import math\n>>> 10 ** (math.log10(2 ** 53) - math.log10(60 * 60 * 24) - 6) / 365.25\n285.42092094268787\n>>>\n\nWatch out for round-off; add the smallest non-zero numbers first:\nreturn timedelta.seconds + timedelta.microseconds / 1E6 + timedelta.days * 86400\n\n" ]
[ 12, 3 ]
[]
[]
[ "datetime", "python", "timedelta" ]
stackoverflow_0001083402_datetime_python_timedelta.txt
Q: python csv question I'm just testing out the csv component in python, and I am having some trouble with it. I have a fairly standard csv string, and the default options all seem to fit with my test, but shouldn't the result group 1, 2, 3, 4 in one row and 5, 6, 7, 8 in another? Thanks a lot for any enlightenment provided! Python 2.6.2 (r262:71600, Apr 16 2009, 09:17:39) [GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import csv >>> c = "1, 2, 3, 4\n 5, 6, 7, 8\n" >>> test = csv.reader(c) >>> for t in test: ... print t ... ['1'] ['', ''] [' '] ['2'] ['', ''] [' '] ['3'] ['', ''] [' '] ['4'] [] [' '] ['5'] ['', ''] [' '] ['6'] ['', ''] [' '] ['7'] ['', ''] [' '] ['8'] [] >>> A: csv.reader expects an iterable. You gave it "1, 2, 3, 4\n 5, 6, 7, 8\n"; iteration produces characters. Try giving it ["1, 2, 3, 4\n", "5, 6, 7, 8\n"] -- iteration will produce lines. A: csv.reader takes an iterable or iterator returning lines, see the docs. You're passing it a string, which is an iterable returning single characters. So, use csv.reader(c.splitlines()) or similar constructs! A: test = csv.reader(c.split('\n')) A: To make it more file-like try this. import StringIO c= StringIO.StringIO( "1, 2, 3, 4\n 5, 6, 7, 8\n" ) Now c looks like a file. A file is what you use with csv most (if not all) of the time.
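Once the character-iteration issue the answers describe is fixed, the fields still carry the spaces that follow each comma; csv's skipinitialspace option handles those (a small sketch, not from the original answers):
import csv

c = "1, 2, 3, 4\n 5, 6, 7, 8\n"
rows = csv.reader((line.strip() for line in c.splitlines()), skipinitialspace=True)
for row in rows:
    print row  # ['1', '2', '3', '4'] then ['5', '6', '7', '8']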
python csv question
I'm just testing out the csv component in python, and I am having some trouble with it. I have a fairly standard csv string, and the default options all seem to fit with my test, but shouldn't the result group 1, 2, 3, 4 in one row and 5, 6, 7, 8 in another? Thanks a lot for any enlightenment provided! Python 2.6.2 (r262:71600, Apr 16 2009, 09:17:39) [GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import csv >>> c = "1, 2, 3, 4\n 5, 6, 7, 8\n" >>> test = csv.reader(c) >>> for t in test: ... print t ... ['1'] ['', ''] [' '] ['2'] ['', ''] [' '] ['3'] ['', ''] [' '] ['4'] [] [' '] ['5'] ['', ''] [' '] ['6'] ['', ''] [' '] ['7'] ['', ''] [' '] ['8'] [] >>>
[ "csv.reader expects an iterable. You gave it \"1, 2, 3, 4\\n 5, 6, 7, 8\\n\"; iteration produces characters. Try giving it [\"1, 2, 3, 4\\n\", \"5, 6, 7, 8\\n\"] -- iteration will produce lines.\n", "csv.reader takes an iterable or iterator returning lines, see the docs. You're passing it a string, which is an iterable returning single characters.\nSo, use csv.reader(c.splitlines()) or similar constructs!\n", "test = csv.reader(c.split('\\n'))\n", "To make it more file-like try this.\nimport StringIO\nc= StringIO.StringIO( \"1, 2, 3, 4\\n 5, 6, 7, 8\\n\" )\n\nNow c looks like a file. A file is what you use with csv most (if not all) of the time.\n" ]
[ 8, 3, 2, 2 ]
[]
[]
[ "csv", "python" ]
stackoverflow_0001083364_csv_python.txt
Q: Best way to programmatically create image I'm looking for a way to create a graphics file (I don't really mind the file type, as they are easily converted). The input would be the desired resolution, and a list of pixels and colors (x, y, RGB color). Is there a convenient python library for that? What are the pros/cons/pitfalls? A: PIL is the canonical Python Imaging Library. Pros: Everybody wanting to do what you're doing uses PIL. 8-) Cons: None springs to mind. A: Alternatively, you can try ImageMagick. Last time I checked, PIL didn't work on Python 3, which is potentially a con. (I don't know about ImageMagick's API.) I believe an updated version of PIL is expected in the year.
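A minimal PIL sketch of the pixel-list idea (the old-style import; the pixel list is a made-up sample):
import Image  # Python Imaging Library

pixels = [(0, 0, (255, 0, 0)), (10, 20, (0, 255, 0))]  # (x, y, RGB)
img = Image.new('RGB', (640, 480))  # black canvas at the desired resolution
for x, y, rgb in pixels:
    img.putpixel((x, y), rgb)
img.save('out.png')  # format inferred from the file extension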
Best way to programmatically create image
I'm looking for a way to create a graphics file (I don't really mind the file type, as they are easily converted). The input would be the desired resolution, and a list of pixels and colors (x, y, RGB color). Is there a convenient python library for that? What are the pros/cons/pitfalls?
[ "PIL is the canonical Python Imaging Library.\nPros: Everybody wanting to do what you're doing uses PIL. 8-)\nCons: None springs to mind.\n", "Alternatively, you can try ImageMagick.\nLast time I checked, PIL didn't work on Python 3, which is potentially a con. (I don't know about ImageMagick's API.) I believe an updated version of PIL is expected in the year.\n" ]
[ 7, 0 ]
[]
[]
[ "graphics", "python", "python_imaging_library" ]
stackoverflow_0001083943_graphics_python_python_imaging_library.txt
Q: Implementing a custom Python authentication handler The answer to a previous question showed that Nexus implement a custom authentication helper called "NxBASIC". How do I begin to implement a handler in python? Update: Implementing the handler per Alex's suggestion looks to be the right approach, but fails trying to extract the scheme and realm from the authreq. The returned value for authreq is: str: NxBASIC realm="Sonatype Nexus Repository Manager API"" AbstractBasicAuthHandler.rx.search(authreq) is only returning a single tuple: tuple: ('NxBASIC', '"', 'Sonatype Nexus Repository Manager API') so scheme,realm = mo.groups() fails. From my limited regex knowledge it looks like the standard regex from AbstractBasicAuthHandler should match scheme and realm, but it seems not to. The regex is: rx = re.compile('(?:.*,)*[ \t]*([^ \t]+)[ \t]+' 'realm=(["\'])(.*?)\\2', re.I) Update 2: From inspection of AbstractBasicAuthHandler, the default processing is to do: scheme, quote, realm = mo.groups() Changing to this works. I now just need to set the password against the correct realm. Thanks Alex! A: If, as described, name and description are the only differences between this "NxBasic" and good old "Basic", then you could essentially copy-paste-edit some code from urllib2.py (which unfortunately doesn't expose the scheme name as easily overridable in itself), as follows (see urllib2.py's online sources): import urllib2 class HTTPNxBasicAuthHandler(urllib2.HTTPBasicAuthHandler): def http_error_auth_reqed(self, authreq, host, req, headers): # host may be an authority (without userinfo) or a URL with an # authority # XXX could be multiple headers authreq = headers.get(authreq, None) if authreq: mo = AbstractBasicAuthHandler.rx.search(authreq) if mo: scheme, realm = mo.groups() if scheme.lower() == 'nxbasic': return self.retry_http_basic_auth(host, req, realm) def retry_http_basic_auth(self, host, req, realm): user, pw = self.passwd.find_user_password(realm, host) if pw is not None: raw = "%s:%s" % (user, pw) auth = 'NxBasic %s' % base64.b64encode(raw).strip() if req.headers.get(self.auth_header, None) == auth: return None req.add_header(self.auth_header, auth) return self.parent.open(req) else: return None As you can see by inspection, I've just changed two strings from "Basic" to "NxBasic" (and the lowercase equivalents) from what's in urrlib2.py (in the abstract basic auth handler superclass of the http basic auth handler class). Try using this version -- and if it's still not working, at least having it be your code can help you add print/logging statements, breakpoints, etc, to better understand what's breaking and how. Best of luck! (Sorry I can't help further but I don't have any Nexus around to experiment with).
Implementing a custom Python authentication handler
The answer to a previous question showed that Nexus implement a custom authentication helper called "NxBASIC". How do I begin to implement a handler in python? Update: Implementing the handler per Alex's suggestion looks to be the right approach, but fails trying to extract the scheme and realm from the authreq. The returned value for authreq is: str: NxBASIC realm="Sonatype Nexus Repository Manager API"" AbstractBasicAuthHandler.rx.search(authreq) is only returning a single tuple: tuple: ('NxBASIC', '"', 'Sonatype Nexus Repository Manager API') so scheme,realm = mo.groups() fails. From my limited regex knowledge it looks like the standard regex from AbstractBasicAuthHandler should match scheme and realm, but it seems not to. The regex is: rx = re.compile('(?:.*,)*[ \t]*([^ \t]+)[ \t]+' 'realm=(["\'])(.*?)\\2', re.I) Update 2: From inspection of AbstractBasicAuthHandler, the default processing is to do: scheme, quote, realm = mo.groups() Changing to this works. I now just need to set the password against the correct realm. Thanks Alex!
[ "If, as described, name and description are the only differences between this \"NxBasic\" and good old \"Basic\", then you could essentially copy-paste-edit some code from urllib2.py (which unfortunately doesn't expose the scheme name as easily overridable in itself), as follows (see urllib2.py's online sources):\nimport urllib2\n\nclass HTTPNxBasicAuthHandler(urllib2.HTTPBasicAuthHandler):\n\n def http_error_auth_reqed(self, authreq, host, req, headers):\n # host may be an authority (without userinfo) or a URL with an\n # authority\n # XXX could be multiple headers\n authreq = headers.get(authreq, None)\n if authreq:\n mo = AbstractBasicAuthHandler.rx.search(authreq)\n if mo:\n scheme, realm = mo.groups()\n if scheme.lower() == 'nxbasic':\n return self.retry_http_basic_auth(host, req, realm)\n\n def retry_http_basic_auth(self, host, req, realm):\n user, pw = self.passwd.find_user_password(realm, host)\n if pw is not None:\n raw = \"%s:%s\" % (user, pw)\n auth = 'NxBasic %s' % base64.b64encode(raw).strip()\n if req.headers.get(self.auth_header, None) == auth:\n return None\n req.add_header(self.auth_header, auth)\n return self.parent.open(req)\n else:\n return None\n\nAs you can see by inspection, I've just changed two strings from \"Basic\" to \"NxBasic\" (and the lowercase equivalents) from what's in urrlib2.py (in the abstract basic auth handler superclass of the http basic auth handler class).\nTry using this version -- and if it's still not working, at least having it be your code can help you add print/logging statements, breakpoints, etc, to better understand what's breaking and how. Best of luck! (Sorry I can't help further but I don't have any Nexus around to experiment with).\n" ]
[ 1 ]
[]
[]
[ "httplib2", "nexus", "python", "restlet" ]
stackoverflow_0001080920_httplib2_nexus_python_restlet.txt
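To consolidate the fix from "Update 2" of the NxBASIC question above: AbstractBasicAuthHandler's regex captures three groups (scheme, quote character, realm), so the overridden method has to unpack all three. A minimal sketch of the corrected lines, keeping the names from the answer above:

    mo = urllib2.AbstractBasicAuthHandler.rx.search(authreq)
    if mo:
        # rx has three groups: the scheme, the quote character, and the realm
        scheme, quote, realm = mo.groups()
        if scheme.lower() == 'nxbasic':
            return self.retry_http_basic_auth(host, req, realm)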
Q: Django Database Caching I'm working on a small project, and I wanted to provide multiple caching options to the end user. I figured with Django it's pretty simplistic to swap memcached for database or file based caching. My memcached implementation works like a champ without any issues. I placed time stamps on my pages, and curl consistently shows the older timestamps in locations where I want caching to work properly. However, when I switch over to the database caching, I don't get any entries in the database, and caching blatantly doesn't work. From what I see in the documentation all that should be necessary is to change the backend from: CACHE_BACKEND = 'memcached://localhost:11211' To: CACHE_BACKEND = 'db://cache_table' The table exists after running the required manage.py (createcachetable) line, and I can view it just fine. I'm currently in testing, so I am using sqlite3, but that shouldn't matter as far as I can tell. I can confirm that the table is completely empty, and hasn't been written to at any point. Also, as I stated previously, my timestamps are 'wrong' as well, giving me more evidence that something isn't quite right. Any thoughts? I'm using sqlite3, Django 1.0.2, python 2.6, serving via Apache currently on an Ubuntu Jaunty machine. I'm sure I'm just glossing over something simple. Thanks for any help provided. A: According to the documentation you're supposed to create the table not by using syncdb but with the following: python manage.py createcachetable cache_table If you haven't done that, try and see if it doesn't work.
Django Database Caching
I'm working on a small project, and I wanted to provide multiple caching options to the end user. I figured with Django it's pretty simplistic to swap memcached for database or file based caching. My memcached implementation works like a champ without any issues. I placed time stamps on my pages, and curl consistently shows the older timestamps in locations where I want caching to work properly. However, when I switch over to the database caching, I don't get any entries in the database, and caching blatantly doesn't work. From what I see in the documentation all that should be necessary is to change the backend from: CACHE_BACKEND = 'memcached://localhost:11211' To: CACHE_BACKEND = 'db://cache_table' The table exists after running the required manage.py (createcachetable) line, and I can view it just fine. I'm currently in testing, so I am using sqlite3, but that shouldn't matter as far as I can tell. I can confirm that the table is completely empty, and hasn't been written to at any point. Also, as I stated previously, my timestamps are 'wrong' as well, giving me more evidence that something isn't quite right. Any thoughts? I'm using sqlite3, Django 1.0.2, python 2.6, serving via Apache currently on an Ubuntu Jaunty machine. I'm sure I'm just glossing over something simple. Thanks for any help provided.
[ "According to the documentation you're supposed to create the table not by using syncdb but with the following:\npython manage.py createcachetable cache_table\n\nIf you haven't done that, try and see if it doesn't work.\n" ]
[ 8 ]
[]
[]
[ "django", "django_cache", "python" ]
stackoverflow_0001084569_django_django_cache_python.txt
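For the Django caching question above, a quick way to confirm the database backend is actually being hit is to poke the low-level cache API from a manage.py shell (a sketch; the key name and 30-second timeout are arbitrary):

    # settings.py -- backend pointing at the table created by
    # `python manage.py createcachetable cache_table`
    CACHE_BACKEND = 'db://cache_table'

    # Then, inside `python manage.py shell`:
    from django.core.cache import cache
    cache.set('probe', 'value', 30)   # should insert a row into cache_table
    print cache.get('probe')          # prints 'value' if the backend works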
Q: Getting a dict out of a method? I'm trying to get a dict out of a method, so far I'm able to get the method name, and its arguments (using the inspect module), the problem I'm facing is that I'd like to have the default arguments too (or the argument type). This is basically my unit test: class Test: def method1(anon_type, array=[], string="string", integer=12, obj=None): pass target = {"method1": [ {"anon": "None"}, {"array": "[]"}, {"string": "str"}, {"integer": "int"}, {"obj": "None"}] } method1_dict = get_method_dict(Test().method1) self.assertEqual(target, method1_dict) Here, I try to use inspect to get the method: >>> import inspect >>> class Class: ... def method(self, string='str', integer=12): ... pass ... >>> m_desc = inspect.getargspec(Class().method) >>> m_desc ArgSpec(args=['self', 'string', 'integer'], varargs=None, keywords=None, defaults=('str', 12)) >>> but my problem is with the default args, as you see here: >>> class Class: ... def method(self, no_def_args, string='str', integer=12): ... pass ... >>> m_desc = inspect.getargspec(Class().method) >>> m_desc ArgSpec(args=['self', 'no_def_args', 'string', 'integer'], varargs=None, keywords=None, defaults=('str', 12)) As you see the no_def_args is not in the defaults, so it's a problem to try to match the argument with their default arguments. A: what exactly is the problem? all arguments are ordered, keyword arguments should be the last in definition. do you know how to slice a list? A: Here is what I usually do: import inspect, itertools args, varargs, keywords, defaults = inspect.getargspec(method1) print dict(itertools.izip_longest(args[::-1], defaults[::-1], fillvalue=None)) --> {'integer': 12, 'array': [], 'anon_type': None, 'obj': None, 'string': 'string'} This will only work on python2.6
Getting a dict out of a method?
I'm trying to get a dict out of a method, so far I'm able to get the method name, and its arguments (using the inspect module), the problem I'm facing is that I'd like to have the default arguments too (or the argument type). This is basically my unit test: class Test: def method1(anon_type, array=[], string="string", integer=12, obj=None): pass target = {"method1": [ {"anon": "None"}, {"array": "[]"}, {"string": "str"}, {"integer": "int"}, {"obj": "None"}] } method1_dict = get_method_dict(Test().method1) self.assertEqual(target, method1_dict) Here, I try to use inspect to get the method: >>> import inspect >>> class Class: ... def method(self, string='str', integer=12): ... pass ... >>> m_desc = inspect.getargspec(Class().method) >>> m_desc ArgSpec(args=['self', 'string', 'integer'], varargs=None, keywords=None, defaults=('str', 12)) >>> but my problem is with the default args, as you see here: >>> class Class: ... def method(self, no_def_args, string='str', integer=12): ... pass ... >>> m_desc = inspect.getargspec(Class().method) >>> m_desc ArgSpec(args=['self', 'no_def_args', 'string', 'integer'], varargs=None, keywords=None, defaults=('str', 12)) As you see the no_def_args is not in the defaults, so it's a problem to try to match the argument with their default arguments.
[ "what exactly is the problem? all arguments are ordered, keyword arguments should be the last in definition. do you know how to slice a list?\n", "Here is what I usually do:\nimport inspect, itertools\nargs, varargs, keywords, defaults = inspect.getargspec(method1)\nprint dict(itertools.izip_longest(args[::-1], defaults[::-1], fillvalue=None))\n\n-->\n{'integer': 12, 'array': [], 'anon_type': None, 'obj': None, 'string': 'string'}\n\nThis will only work on python2.6\n" ]
[ 2, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001084566_python.txt
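Building on the answers above, here is a sketch of the question's hypothetical get_method_dict that pairs every argument name with its default (None for arguments without one); unlike the izip_longest version it also works before Python 2.6:

    import inspect

    def get_method_dict(method):
        # getargspec unwraps bound methods in Python 2, so 'self' appears too
        args, varargs, keywords, defaults = inspect.getargspec(method)
        defaults = defaults or ()
        # Arguments without defaults come first, so left-pad with None
        padding = (None,) * (len(args) - len(defaults))
        return {method.__name__: dict(zip(args, padding + defaults))}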
Q: C++ or Python as a starting point into GUI programming? I have neglected my programming skills since I left school and now I want to start a few things that are running around in my head. Qt would be the toolkit for me to use but I am undecided if I should use Python (looks to me like the easier one to learn, given a few general ideas about programming) or C++ (the thing to use with Qt). In my school we learned the basics with Turbo Pascal, VB and a voluntary C course, though right now I only know a hint of all the things I learned back then. Can you recommend a way and a site or book (or two) that would bring me on that path (a perfect one would be one that teaches the language with help of the toolkit)? Thank you in advance. A: Being an expert in both C++ and Python, my mantra has long been "Python where I can, C++ where I must": Python is faster (in terms of programmer productivity and development cycle) and easier, C++ can give that extra bit of power when I have to get close to the hardware or be extremely careful about every byte or machine cycle I spend. In your situation, I would recommend Python (and the many excellent books and URLs already recommended in other answers). A: http://wiki.python.org/moin/PyQt You can use PyQt for Qt in Python. They have recommendations for tutorials and references on there. Google "How to learn Qt" and "Learning C++". There are some decent sources on there. A: I have read Rapid GUI Programming with Python and Qt: The Definitive Guide to PyQt Programming by Mark Summerfield; it's cool. For C++: C++ GUI Programming with Qt 4 (2nd Edition). Just my two cents.
C++ or Python as a starting point into GUI programming?
I have neglected my programming skills since I left school and now I want to start a few things that are running around in my head. Qt would be the toolkit for me to use but I am undecided if I should use Python (looks to me like the easier one to learn, given a few general ideas about programming) or C++ (the thing to use with Qt). In my school we learned the basics with Turbo Pascal, VB and a voluntary C course, though right now I only know a hint of all the things I learned back then. Can you recommend a way and a site or book (or two) that would bring me on that path (a perfect one would be one that teaches the language with help of the toolkit)? Thank you in advance.
[ "Being an expert in both C++ and Python, my mantra has long been \"Python where I can, C++ where I must\": Python is faster (in term of programmer productivity and development cycle) and easier, C++ can give that extra bit of power when I have to get close to the hardware or be extremely careful about every byte or machine cycle I spend. In your situation, I would recommend Python (and the many excellent books and URLs already recommended in other answers).\n", "http://wiki.python.org/moin/PyQt\nYou can use PyQT for Qt in Python. They have recommendations for tutorials and references on there.\nGoogle \"How to learn Qt\" and \"Learning C++\".\nThere are some decent sources on there.\n", "I have read Rapid GUI Programming with Python and Qt: The Definitive Guide to PyQt Programming by Mark Summerfield , it's cool.\nfor C++ : C++ GUI Programming with Qt 4 (2nd Edition)\njust my two cents. \n" ]
[ 25, 5, 4 ]
[ "How about Ruby? You can write Qt apps in Ruby allegedly (http://rubyforge.org/projects/korundum), and it gives you a good excuse to look at the very excellent \"Why's Poignant Guide...\" (http://poignantguide.net) which is how Monty Python would have introduced programming....\n(Actually thinking about learning python myself, so feel free to ignore my advice (but visit Why's site anyway))\n" ]
[ -1 ]
[ "c++", "python", "qt" ]
stackoverflow_0001084935_c++_python_qt.txt
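To give a feel for the Python side of the Qt question above, here is a minimal PyQt hello-world (a sketch assuming PyQt4, which matched Qt 4 at the time):

    import sys
    from PyQt4 import QtGui

    app = QtGui.QApplication(sys.argv)   # every Qt GUI needs one application object
    label = QtGui.QLabel("Hello from PyQt")
    label.show()
    sys.exit(app.exec_())                # enter the Qt event loop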
Q: Transition from Python2.4 to Python2.6 on CentOS, module migration problem I have a problem of upgrading python from 2.4 to 2.6: I have CentOS 5 (Full). It has python 2.4 living in /usr/lib/python2.4/ . Additional modules are living in /usr/lib/python2.4/site-packages/ . I've built python 2.6 from sources at /usr/local/lib/python2.6/ . I've set default python to python2.6 . Now old modules for 2.4 are out of pythonpath and are "lost". In particular, yum is broken ("no module named yum"). So what is the right way to migrate/install modules to python2.6? A: They are not broken, they are simply not installed. The solution to that is to install them under 2.6. But first we should see if you really should do that... Yes, Python will, when installed, replace the python command with the version installed (unless you run it with --alt-install). You don't exactly state what your problem is, so I'm going to guess. Your problem is that many local commands using Python now fail, because they get executed with Python 2.6, and not with Python 2.4. Is that correct? If that is so, then simply delete /usr/local/bin/python, and make sure /usr/bin/python is a symbolic link to /usr/bin/python2.4. Then you would have to type python2.6 to run python2.6, but that's OK. That's the best way to do it. Then you only need to install the packages you need in 2.6. But if my guess is wrong, and you really need to install all those packages under 2.6, then don't worry too much. First of all, install setuptools. It includes an easy_install script, and you can then install modules with easy_install <modulename> It will download the module from pypi.python.org and install it. And it will also install any module that is a dependency. easy_install can install any module that is using distutils as an installer, and not many don't. This will make installing 90% of those modules a breeze. If the module has a C-component, it will compile it, and then you need the library headers too, and that will be more work, and all you can do there is install them the standard CentOS way. You shouldn't use symbolic links between versions, because libraries are generally for a particular version. For 2.4 and 2.6 I think the .pyc files are compatible (but I'm not 100% sure), so that may work, but any module that uses C will break. And other versions of Python will have incompatible .pyc files as well. And I'm sure that if you do that, most Python people are not going to help you if you do it. ;-) In general, I try to keep the system python "clean", i.e. I don't install anything there that isn't installed with the packaging tools. Instead I use virtualenv or buildout to let every application have its own python path where its dependencies live. So every single project I have basically has its own set of libraries. It gets easier that way. A: There are a couple of options... If the modules will run under Python 2.6, you can simply create symbolic links to them from the 2.6 site-packages directory to the 2.4 site-packages directory. If they will not run under 2.6, then you may need to re-compile them against 2.6, or install up-to-date versions of them. Just make sure you are using 2.6 when calling "python setup.py" ... You may want to post this on serverfault.com, if you run into additional challenges. A: Some Python libs may still not be accessible, as with Python 2.6 site-packages is changed to dist-packages. The only way in that case is to move all the stuff generated in site-packages (e.g. 
by make install) to dist-packages and create a sym-link. A: easy_install is a good option, but there is also a lower-level way of installing a module: unpack the module source to some directory, then type "python setup.py install". Of course you should do this with the required Python interpreter version; to check which one you have, type: python -V
Transition from Python2.4 to Python2.6 on CentOS, module migration problem
I have a problem of upgrading python from 2.4 to 2.6: I have CentOS 5 (Full). It has python 2.4 living in /usr/lib/python2.4/ . Additional modules are living in /usr/lib/python2.4/site-packages/ . I've built python 2.6 from sources at /usr/local/lib/python2.6/ . I've set default python to python2.6 . Now old modules for 2.4 are out of pythonpath and are "lost". In particular, yum is broken ("no module named yum"). So what is the right way to migrate/install modules to python2.6?
[ "They are not broken, they are simply not installed. The solution to that is to install them under 2.6. But first we should see if you really should do that...\nYes, Python will when installed replace the python command to the version installed (unless you run it with --alt-install). You don't exactly state what your problem is, so I'm going to guess. Your problem is that many local commands using Python now fail, because they get executed with Python 2.6, and not with Python 2.4. Is that correct?\nIf that is so, then simply delete /usr/local/bin/python, and make sure /usr/bin/python is a symbolic link to /usr/bin/python2.4. Then you would have to type python2.6 to run python2,6, but that's OK. That's the best way to do it. Then you only need to install the packages you need in 2.6.\nBut if my guess is wrong, and you really need to install all those packages under 2.6, then don't worry too much. First of all, install setuptools. It includes an easy_install script, and you can then install modules with \neasy_install <modulename>\n\nIt will download the module from pypi.python.org and install it. And it will also install any module that is a dependency. easy_install can install any module that is using distutils as an installer, and not many don't. This will make installing 90% of those modules a breeze.\nIf the module has a C-component, it will compile it, and then you need the library headers too, and that will be more work, and all you can do there is install them the standard CentOS way.\nYou shouldn't use symbolic links between versions, because libraries are generally for a particular version. For 2.4 and 2.6 I think the .pyc files are compatible (but I'm not 100% sure), so that may work, but any module who uses C will break. And other versions of Python will have incompatible .pyc files as well. And I'm sure that if you do that, most Python people are not going to help you if you do it. ;-)\nIn general, I try too keep the system python \"clean\", I.e. I don't install anything there that isn't installed with the packaging tools. Instead I use virtualenv or buildout to let every application have their own python path where it's dependencies live. So every single project I have basically has it's own set of libraries. It gets easier that way.\n", "There are a couple of options...\n\nIf the modules will run under Python 2.6, you can simply create symbolic links to them from the 2.6 site-packages directory to the 2.4 site-packages directory.\nIf they will not run under 2.6, then you may need to re-compile them against 2.6, or install up-to-date versions of them. Just make sure you are using 2.6 when calling \"python setup.py\"\n\n...\nYou may want to post this on serverfault.com, if you run into additional challenges.\n", "Some Python libs may be still not accessible as with Python 2.6 site-packages is changed to dist-packages. \nThe only way in that case is to do move all stuff generated in site-packages (e.g. by make install) to dist-packages and create a sym-link.\n", "easy_install is good one but there are low level way for installing module, just:\n\nunpack module source to some directory\ntype \"python setup.py install\"\n\nOf course you should do this with required installed python interpreter version; for checking it type:\npython -V\n" ]
[ 3, 0, 0, 0 ]
[]
[]
[ "centos", "linux", "python", "python_2.6", "upgrade" ]
stackoverflow_0001081698_centos_linux_python_python_2.6_upgrade.txt
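The accepted answer's advice for the CentOS question above boils down to a few shell commands (a sketch; paths assume the layout described in the question, and the versioned easy_install-2.6 script name depends on how setuptools was installed on your box):

    # Keep the system python on 2.4 so yum keeps working
    rm /usr/local/bin/python
    ln -sf /usr/bin/python2.4 /usr/bin/python

    # Install setuptools under 2.6, then use it for new modules
    /usr/local/bin/python2.6 setup.py install   # from an unpacked setuptools tree
    easy_install-2.6 some_module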
Q: Downloading a Large Number of Files from S3 What's the fastest way to get a large number of files (relatively small, 10-50kB) from Amazon S3 from Python? (On the order of 200,000 - million files). At the moment I am using boto to generate Signed URLs, and using PyCURL to get the files one by one. Would some type of concurrency help? PyCurl.CurlMulti object? I am open to all suggestions. Thanks! A: I don't know anything about python, but in general you would want to break the task down into smaller chunks so that they can be run concurrently. You could break it down by file type, or alphabetical or something, and then run a separate script for each portion of the break down. A: In the case of python, as this is IO bound, multiple threads will make use of the CPU, but it will probably use up only one core. If you have multiple cores, you might want to consider the new multiprocessing module. Even then you may want to have each process use multiple threads. You would have to do some tweaking of the number of processes and threads. If you do use multiple threads, this is a good candidate for the Queue class. A: You might consider using s3fs, and just running concurrent file system commands from Python. A: I've been using txaws with twisted for S3 work, though what you'd probably want is just to get the authenticated URL and use twisted.web.client.DownloadPage (by default will happily go from stream to file without much interaction). Twisted makes it easy to run at whatever concurrency you want. For something on the order of 200,000, I'd probably make a generator and use a cooperator to set my concurrency and just let the generator generate every required download request. If you're not familiar with twisted, you'll find the model takes a bit of time to get used to, but it's oh so worth it. In this case, I'd expect it to take minimal CPU and memory overhead, but you'd have to worry about file descriptors. It's quite easy to mix in perspective broker and farm the work out to multiple machines should you find yourself needing more file descriptors or if you have multiple connections over which you'd like it to pull down. A: What about thread + queue? I love this article: Practical threaded programming with Python A: Each job can be done with appropriate tools :) You want to use Python for stress-testing S3 :), so I suggest finding a large-volume downloader program and passing it a link. On Windows I have experience installing the ReGet program (shareware, from http://reget.com) and creating download tasks via its COM interface. Of course other programs with a usable interface may exist. Regards!
Downloading a Large Number of Files from S3
What's the Fastest way to get a large number of files (relatively small 10-50kB) from Amazon S3 from Python? (In the order of 200,000 - million files). At the moment I am using boto to generate Signed URLs, and using PyCURL to get the files one by one. Would some type of concurrency help? PyCurl.CurlMulti object? I am open to all suggestions. Thanks!
[ "I don't know anything about python, but in general you would want to break the task down into smaller chunks so that they can be run concurrently. You could break it down by file type, or alphabetical or something, and then run a separate script for each portion of the break down.\n", "In the case of python, as this is IO bound, multiple threads will use of the CPU, but it will probably use up only one core. If you have multiple cores, you might want to consider the new multiprocessor module. Even then you may want to have each process use multiple threads. You would have to do some tweaking of number of processors and threads.\nIf you do use multiple threads, this is a good candidate for the Queue class.\n", "You might consider using s3fs, and just running concurrent file system commands from Python.\n", "I've been using txaws with twisted for S3 work, though what you'd probably want is just to get the authenticated URL and use twisted.web.client.DownloadPage (by default will happily go from stream to file without much interaction).\nTwisted makes it easy to run at whatever concurrency you want. For something on the order of 200,000, I'd probably make a generator and use a cooperator to set my concurrency and just let the generator generate every required download request.\nIf you're not familiar with twisted, you'll find the model takes a bit of time to get used to, but it's oh so worth it. In this case, I'd expect it to take minimal CPU and memory overhead, but you'd have to worry about file descriptors. It's quite easy to mix in perspective broker and farm the work out to multiple machines should you find yourself needing more file descriptors or if you have multiple connections over which you'd like it to pull down.\n", "what about thread + queue, I love this article: Practical threaded programming with Python\n", "Each job can be done with appropriate tools :)\nYou want use python for stress testing S3 :), so I suggest find a large volume downloader program and pass link to it.\nOn Windows I have experience for installing ReGet program (shareware, from http://reget.com) and creating downloading tasks via COM interface.\nOf course there may other programs with usable interface exists.\nRegards!\n" ]
[ 2, 1, 1, 0, 0, 0 ]
[]
[]
[ "amazon_s3", "amazon_web_services", "boto", "curl", "python" ]
stackoverflow_0001051275_amazon_s3_amazon_web_services_boto_curl_python.txt
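For the S3 question above, a sketch of the thread-plus-Queue approach several answers suggest, fetching pre-signed URLs with urllib2 (the jobs iterable of (signed_url, local_path) pairs and the worker count of 20 are placeholders to tune):

    import Queue
    import threading
    import urllib2

    def worker(q):
        while True:
            url, path = q.get()
            try:
                data = urllib2.urlopen(url).read()
                open(path, 'wb').write(data)
            finally:
                q.task_done()

    q = Queue.Queue()
    for _ in range(20):                 # worker count: tune for your bandwidth
        t = threading.Thread(target=worker, args=(q,))
        t.setDaemon(True)
        t.start()

    for job in jobs:                    # jobs = iterable of (signed_url, local_path)
        q.put(job)
    q.join()                            # block until every download has finished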
Q: Problem with printing contents of a list I'm having a somewhat odd problem with Python (2.6.2) that I've come to the conclusion is a bug in the Vista port (I can't replicate it in XP or Linux). I have a list of users, encrypted passwords, and their host that I am storing in a larger list (it's acting as a sort of database). This all works fine and dandy, except that there is an inconsistency in how a single user's data is stored and how the group is stored. created by the 'create_user' method ['localhost', 'demo', 'demouserpasswordhash'] created by the 'create_database' method ['\xff\xfel\x00o\x00c\x00a\x00l\x00h\x00o\x00s\x00t\x00', '\x00d\x00e\x00m\x00o\x00', '\x00d\x00e\x00m\x00o\x00u\x00s\x00e\x00r\x00p\x00a\x00s\x00s\x00w\x00o\x00r\x00d\x00h\x00a\x00s\x00h\x00\r\x00\n'] I don't understand why it's doing this, given how simple the code for it is: # ----- base functions def create_user ( user_data ): return user_data.split(":") def show_user ( user_data ): print "Host: ", user_data[0] print "Username: ", user_data[1] print "Password: ", user_data[2] print def create_database ( user_list ): database = [] for user in user_list: database.append( create_user( user ) ) return database def show_database( database ): for row in database: show_user( row ) # ----- test area users = open( "users.txt" ) test_user = create_user( "localhost:demo:demouserpasswordhash" ) db = create_database( users ) print db[0] print test_user # ----- Anyone have any similar experiences with this or is it just me?
Problem with printing contents of a list
I'm having a somewhat odd problem with Python(2.6.2) that I've come to the conclusion is a bug in the Vista port (I cant replicate it in XP or Linux). I have a list of users, encrypted passwords, and their host that I am storing in a larger list (it's acting as a sort of database). This all works fine and dandy, except for that there is an inconsistency in how a single user's data is stored and how the group is stored. created by the 'create_user' method ['localhost', 'demo', 'demouserpasswordhash'] created by the 'create_database' method ['\xff\xfel\x00o\x00c\x00a\x00l\x00h\x00o\x00s\x00t\x00', '\x00d\x00e\x00m\x00o\x00', '\x00d\x00e\x00m\x00o\x00u\x00s\x00e\x00r\x00p\x00a\x00s\x00s\x00w\x00o\x00r\x00d\x00h\x00a\x00s\x00h\x00\r\x00\n'] I don't understand why it's doing this, given how simple the code for it is: # ----- base functions def create_user ( user_data ): return user_data.split(":") def show_user ( user_data ): print "Host: ", user_data[0] print "Username: ", user_data[1] print "Password: ", user_data[2] print def create_database ( user_list ): database = [] for user in user_list: database.append( create_user( user ) ) return database def show_database( database ): for row in database: show_user( row ) # ----- test area users = open( "users.txt" ) test_user = create_user( "localhost:demo:demouserpasswordhash" ) db = create_database( users ) print db[0] print test_user # ----- Anyone have any similar experiences with this or is it just me?
[ "Your file users.txt is in UTF-16, but you're opening it as ASCII.\nEither change it to ASCII, or open it like this:\nimport codecs\nusers = codecs.open( \"users-16.txt\", \"r\", \"utf-16\" )\n\n", "Try replacing\ncreate_user( user )\n\nwith\ncreate_user( user.decode(\"utf16\") )\n\n" ]
[ 6, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001085051_python.txt
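Applying the accepted answer to the question's own create_database, the only change needed is how the file is opened (a sketch, assuming users.txt really is UTF-16, as the \xff\xfe byte-order mark in the output suggests):

    import codecs

    def create_database(path):
        database = []
        # codecs.open decodes each line from UTF-16 before we split it
        for line in codecs.open(path, "r", "utf-16"):
            database.append(create_user(line.strip()))
        return database

    db = create_database("users.txt")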
Q: Does creating separate functions instead of one big one slow processing time? I'm working in the Google App Engine environment and programming in Python. I am creating a function that essentially generates a random number/letter string and then stores to the memcache. def generate_random_string(): # return a random 6-digit long string def check_and_store_to_memcache(): randomstring = generate_random_string() #check against memcache #if ok, then store key value with another value #if not ok, run generate_random_string() again and check again. Does creating two functions instead of just one big one affect performance? I prefer two, as it better matches how I think, but don't mind combining them if that's "best practice". A: Focus on being able to read and easily understand your code. Once you've done this, if you have a performance problem, then look into what might be causing it. Most languages, python included, tend to have fairly low overhead for making method calls. Putting this code into a single function is not going to (dramatically) change the performance metrics - I'd guess that your random number generation will probably be the bulk of the time, not having 2 functions. That being said, splitting functions does have a (very, very minor) impact on performance. However, I'd think of it this way - it may take you from going 80 mph on the highway to 79.99mph (which you'll never really notice). The important things to watch for are avoiding stoplights and traffic jams, since they're going to make you have to stop altogether... A: In almost all cases, "inlining" functions to increase speed is like getting a hair cut to lose weight. A: Reed is right. For the change you're considering, the cost of a function call is a small number of cycles, and you'd have to be doing it 10^8 or so times per second before you'd notice. However, I would caution that often people go to the other extreme, and then it is as if function calls were costly. I've seen this in over-designed systems where there were many layers of abstraction. What happens is there is some human psychology that says if something is easy to call, then it is fast. This leads to writing more function calls than strictly necessary, and when this occurs over multiple layers of abstraction, the wastage can be exponential. Following Reed's driving example, a function call can be like a detour, and if the detour contains detours, and if those also contain detours, soon there is tremendous time being wasted, for no obvious reason, because each function call looks innocent. A: Like others have said, I wouldn't worry about it in this particular scenario. The very small overhead involved in function calls would pale in comparison to what is done inside each function. And as long as these functions don't get called in rapid succession, it probably wouldn't matter much anyway. It is a good question though. In some cases it's best not to break code into multiple functions. For example, when working with math intensive tasks with nested loops it's best to make as few function calls as possible in the inner loop. That's because the simple math operations themselves are very cheap, and next to that the function-call-overhead can cause a noticeable performance penalty. Years ago I discovered the hypot (hypotenuse) function in the math library I was using in a VC++ app was very slow. It seemed ridiculous to me because it's such a simple set of functionality -- return sqrt(a * a + b * b) -- how hard is that? 
So I wrote my own and managed to improve performance 16X over. Then I added the "inline" keyword to the function and made it 3X faster than that (about 50X faster at this point). Then I took the code out of the function and put it in my loop itself and saw yet another small performance increase. So... yeah, those are the types of scenarios where you can see a difference.
Does creating separate functions instead of one big one slow processing time?
I'm working in the Google App Engine environment and programming in Python. I am creating a function that essentially generates a random number/letter string and then stores to the memcache. def generate_random_string(): # return a random 6-digit long string def check_and_store_to_memcache(): randomstring = generate_random_string() #check against memcache #if ok, then store key value with another value #if not ok, run generate_random_string() again and check again. Does creating two functions instead of just one big one affect performance? I prefer two, as it better matches how I think, but don't mind combining them if that's "best practice".
[ "Focus on being able to read and easily understand your code.\nOnce you've done this, if you have a performance problem, then look into what might be causing it.\nMost languages, python included, tend to have fairly low overhead for making method calls. Putting this code into a single function is not going to (dramatically) change the performance metrics - I'd guess that your random number generation will probably be the bulk of the time, not having 2 functions. \nThat being said, splitting functions does have a (very, very minor) impact on performance. However, I'd think of it this way - it may take you from going 80 mph on the highway to 79.99mph (which you'll never really notice). The important things to watch for are avoiding stoplights and traffic jams, since they're going to make you have to stop altogether...\n", "In almost all cases, \"inlining\" functions to increase speed is like getting a hair cut to lose weight.\n", "Reed is right. For the change you're considering, the cost of a function call is a small number of cycles, and you'd have to be doing it 10^8 or so times per second before you'd notice.\nHowever, I would caution that often people go to the other extreme, and then it is as if function calls were costly. I've seen this in over-designed systems where there were many layers of abstraction.\nWhat happens is there is some human psychology that says if something is easy to call, then it is fast. This leads to writing more function calls than strictly necessary, and when this occurs over multiple layers of abstraction, the wastage can be exponential.\nFollowing Reed's driving example, a function call can be like a detour, and if the detour contains detours, and if those also contain detours, soon there is tremendous time being wasted, for no obvious reason, because each function call looks innocent.\n", "Like others have said, I wouldn't worry about it in this particular scenario. The very small overhead involved in function calls would pale in comparison to what is done inside each function. And as long as these functions don't get called in rapid succession, it probably wouldn't matter much anyway.\nIt is a good question though. In some cases it's best not to break code into multiple functions. For example, when working with math intensive tasks with nested loops it's best to make as few function calls as possible in the inner loop. That's because the simple math operations themselves are very cheap, and next to that the function-call-overhead can cause a noticeable performance penalty.\nYears ago I discovered the hypot (hypotenuse) function in the math library I was using in a VC++ app was very slow. It seemed ridiculous to me because it's such a simple set of functionality -- return sqrt(a * a + b * b) -- how hard is that? So I wrote my own and managed to improve performance 16X over. Then I added the \"inline\" keyword to the function and made it 3X faster than that (about 50X faster at this point). Then I took the code out of the function and put it in my loop itself and saw yet another small performance increase. So... yeah, those are the types of scenarios where you can see a difference.\n" ]
[ 38, 24, 4, 2 ]
[]
[]
[ "function", "google_app_engine", "performance", "python" ]
stackoverflow_0001083105_function_google_app_engine_performance_python.txt
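A quick way to put a number on the function-call overhead discussed above is the timeit module (a sketch; absolute figures depend entirely on your machine):

    import timeit

    # A no-op statement versus the same no-op wrapped in a function call
    inline = timeit.Timer("pass")
    called = timeit.Timer("f()", setup="def f(): pass")

    print "inline:", inline.timeit()   # seconds for 1,000,000 iterations
    print "called:", called.timeit()   # the difference is the call overhead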
Q: How to recommend the next achievement Short version: I have a similar setup to StackOverflow. Users get Achievements. I have many more achievements than SO, lets say on the order of 10k, and each user has in the 100s of achievements. Now, how would you recommend (to recommend) the next achievement for a user to try for? Long version: The objects are modeled like this in django (showing only important parts) : class User(models.Model): alias = models.ForeignKey(Alias) class Alias(models.Model): achievements = models.ManyToManyField('Achievement', through='Achiever') class Achievement(models.Model): points = models.IntegerField() class Achiever(models.Model): achievement = models.ForeignKey(Achievement) alias = models.ForeignKey(Alias) count = models.IntegerField(default=1) and my algorithm is just to find every other user that has a shared achievement with the logged in user, and then go through all their achievements and sort by number of occurrences : def recommended(request) : user = request.user.get_profile() // The final response r = {} // Get all the achievements the user's aliases have received // in a set so they aren't double counted achievements = set() for alias in user.alias_set.select_related('achievements').all() : achievements.update(alias.achievements.all()) // Find all other aliases that have gotten at least one of the same // same achievements as the user otherAliases = set() for ach in achievements : otherAliases.update(ach.alias_set.all()) // Find other achievements the other users have gotten in addition to // the shared ones. // And count the number of times each achievement appears for otherAlias in otherAliases : for otherAch in otherAlias.achievements.all() : r[otherAch] = r.get(otherAch, 0) + 1 // Remove all the achievements that the user has already gotten for ach in achievements : r.pop(ach) // Sort by number of times the achievements have been received r = sorted(r.items(), lambda x, y: cmp(x[1], y[1]), reverse=True) // Put in the template for showing on the screen template_values = {} template_values['achievements'] = r But it takes FOREVER to run, and always returns the whole list, which is unneeded. A user would only need the top few achievements to go after. So, I'm welcome to recommendations on other algorithms and/or code improvements. I'll give you an achievement in my system for coming up with the recommendation algorithm :) A: One method you can recommend which achievements to go for is to see how many of your users already have those achievements and recommend those popular ones. When they have achieved those you go down the list and recommend slightly less popular ones. However, this has a naive assumption that everyone wants to go for popular achievements. It might cause popular achievements to be even more popular and less popular ones, well... A consolation is that this doesn't take up much resources and is likely to run very fast. (Just keep a list of achievements + number of times it's achieved) Another method (which attempts to guess which achievements the user is likely to go after based on what achievements he already had) is to use some machine learning algorithms. I think the k-nearest neighbor algorithm will perform quite well here. Select a threshold and just output everything that is above this threshold. 
Now, I don't know if this will run faster than what you already have, but you should just run the recommendation engine once every time the user has made a new achievement, store the top (let's say) five, and just output it back to the user whenever a recommendation is needed. I hope this helps. =) A: I would suggest that you do the first three steps (achievements, otherAliases, count) as one single SQL statement. As it is now, you are issuing a lot of queries and summarising thousands of rows in Python, which is a task you should delegate to the DB. For example the code for otherAlias in otherAliases : #For every single other user for otherAch in otherAlias.achievements.all() : #execute a query r[otherAch] = r.get(otherAch, 0) + 1 Does thousands of huge queries. Instead, you can use SQL to do this by joining Achiever on itself based on Alias id being different and achievement id being the same. You then group by alias id and run a count. In the query below, the table "B" is other users' achievements and "Achiever" is our achievements. If any other user shares an achievement, they appear once in "B" for each achievement they share. We then group those by alias_id and count the number of times they appeared so you get a nice id, count table out. Very very rough code (no SQL available here) SELECT B.Alias_id, COUNT(B.achievement_id) FROM Achiever, Achiever AS B WHERE Achiever.achievement_id = B.achievement_id AND B.Alias_id <> Achiever.Alias_id AND Achiever.Alias_id = <insert current user alias here> GROUP BY B.Alias_id; If that works the way I think it will, you will get a table of other user aliases, along with the number of achievements they share with the current user. The next thing you do is an SQL statement that uses the one above as an "inner select" - call it users. You join that with your achievements table and your Achiever table for the current user. You might want to ignore all but the top 10 users who are similar to the current user. I don't have time to write up a good query right now, but look at the JOIN statement for your DB that joins on achievement_id between the nominated 10 users and the current user - setting that id to NULL if it doesn't exist. Then filter only to rows where it turned up NULL (unachieved achievements).
How to recommend the next achievement
Short version: I have a similar setup to StackOverflow. Users get Achievements. I have many more achievements than SO, lets say on the order of 10k, and each user has in the 100s of achievements. Now, how would you recommend (to recommend) the next achievement for a user to try for? Long version: The objects are modeled like this in django (showing only important parts) : class User(models.Model): alias = models.ForeignKey(Alias) class Alias(models.Model): achievements = models.ManyToManyField('Achievement', through='Achiever') class Achievement(models.Model): points = models.IntegerField() class Achiever(models.Model): achievement = models.ForeignKey(Achievement) alias = models.ForeignKey(Alias) count = models.IntegerField(default=1) and my algorithm is just to find every other user that has a shared achievement with the logged in user, and then go through all their achievements and sort by number of occurrences : def recommended(request) : user = request.user.get_profile() // The final response r = {} // Get all the achievements the user's aliases have received // in a set so they aren't double counted achievements = set() for alias in user.alias_set.select_related('achievements').all() : achievements.update(alias.achievements.all()) // Find all other aliases that have gotten at least one of the same // same achievements as the user otherAliases = set() for ach in achievements : otherAliases.update(ach.alias_set.all()) // Find other achievements the other users have gotten in addition to // the shared ones. // And count the number of times each achievement appears for otherAlias in otherAliases : for otherAch in otherAlias.achievements.all() : r[otherAch] = r.get(otherAch, 0) + 1 // Remove all the achievements that the user has already gotten for ach in achievements : r.pop(ach) // Sort by number of times the achievements have been received r = sorted(r.items(), lambda x, y: cmp(x[1], y[1]), reverse=True) // Put in the template for showing on the screen template_values = {} template_values['achievements'] = r But it takes FOREVER to run, and always returns the whole list, which is unneeded. A user would only need the top few achievements to go after. So, I'm welcome to recommendations on other algorithms and/or code improvements. I'll give you an achievement in my system for coming up with the recommendation algorithm :)
[ "One method you can recommend which achievements to go for is to see how many of your users already have those achievements and recommend those popular ones. When they have achieved those you go down the list and recommend slightly less popular ones. However, this has a naive assumption that everyone wants to go for popular achievements. It might cause popular achievements to be even more popular and less popular ones, well... A consolation is that this doesn't take up much resources and is likely to run very fast. (Just keep a list of achievements + number of times it's achieved)\nAnother method (which attempts to guess which achievements the user is likely to go after based on what achievements he already had) is to use some machine learning algorithms. I think the k-nearest neighbor algorithm will perform quite well here. Select a threshold and just output everything that is above this threshold. Now, I don't know if this will run faster than what you already have, but you should just run the recommendation engine once every time the user has made a new achievement, store the top (let's say) five, and just output it back to the user whenever a recommendation is needed.\nI hope this helps. =)\n", "I would suggest that you do the first three steps (achievements, otherAliases, count) as one single SQL statement. As it is now, you are issuing a lot of queries and summarising thousands of rows in Python which is a task you should delegate to the DB. For example the code\nfor otherAlias in otherAliases : #For every single other user\n for otherAch in otherAlias.achievements.all() : #execute a query\n r[otherAch] = r.get(otherAch, 0) + 1\n\nDoes thousands of huge queries.\nInstead, you can use SQL to do this by joining Achiever on itself based on Alias id being different and achievement id being the same. You then group by achievement id and run a count. \nIn the query below, the table \"B\" is other user's achievements and \"Achiever\" is our achievements. If any other user shares an achievement, they appear once in \"B\" for each achievement they share. We then group those by alias_id and count the number of times they appeared so you get a nice id, count table out.\nVery very rough code (no SQL available here)\nSELECT B.Alias_id, COUNT(B.achievement_id) \n FROM Achiever, Achiever as B \n WHERE Achiever.achievement_id == B.achievement_id \n AND Achiever.Alias_id == <insert current user alias here>;\n GROUP BY B.Alias_id\n\nIf that works the way I think it will, you will get a table of other user aliases, along with the number of achievements they share with the current user.\nThe next thing you do is an SQL statement that uses the one above as an \"inner select\" - call it users. You join that with your achievements table and your Achiever table for the current user. You might want to ignore all but the top 10 users who are similar to the current user.\nI don't have time to write up a good query right now, but look at the JOIN statement for your DB that joins on achievement_id between the nominated 10 users and the current user - setting that id to NULL if it doesn't exist. The filter only to rows where it turned up NULL (unachieved achievements). \n" ]
[ 3, 2 ]
[]
[]
[ "achievements", "django", "optimization", "python" ]
stackoverflow_0001081789_achievements_django_optimization_python.txt
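If Django's aggregation support is available (it landed in 1.1), the counting that the SQL answer above sketches can also be expressed in the ORM. A rough, untested sketch against the question's models, where user_aliases is the current user's alias queryset:

    from django.db.models import Count

    def recommended(user_aliases, limit=10):
        achieved = Achiever.objects.filter(alias__in=user_aliases) \
                                   .values_list('achievement_id', flat=True)
        # Achievements earned by users who share at least one achievement with
        # us, minus what we already have, ranked by how often they were earned
        return (Achievement.objects
                .filter(achiever__alias__achievements__id__in=achieved)
                .exclude(id__in=achieved)
                .annotate(score=Count('achiever'))
                .order_by('-score')[:limit])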
Q: How to check available Python libraries on Google App Engine & add more How to check available Python libraries on Google App Engine & add more? Is SQLite available or we must use GQL with their database system only? Thank you in advance. A: SQLite is there (but since you cannot write to files, you must use it in a read-only way, or on a :memory: database). App engine docs do a good job at documenting what's there. You can add any other pure-python library, typically as a zipfile of .py (NOT .pyc) files to upload in the main directory of your app (you can directly import from inside the zipfile, of course). A few more pure-Python third-party libraries included with app engine are listed and documented here -- the paragraph on zipimport at this URL has a bit more details on the ways and limitations of using zipfiles to add more third-party pure-Python libs to your app. A: Afaik, you can only use the GAE specific database.
How to check available Python libraries on Google App Engine & add more
How to check available Python libraries on Google App Engine & add more? Is SQLite available or we must use GQL with their database system only? Thank you in advance.
[ "SQLite is there (but since you cannot write to files, you must use it in a read-only way, or on a :memory: database).\nApp engine docs do a good job at documenting what's there. You can add any other pure-python library, typically as a zipfile of .py (NOT .pyc) files to upload in the main directory of your app (you can directly import from inside the zipfile, of course).\nA few more pure-Python third-party libraries included with app engine are listed and documented here -- the paragraph on zipimport at this URL has a bit more details on the ways and limitations of using zipfiles to add more third-party pure-Python libs to your app.\n", "Afaik, you can only use the GAE specific database.\n" ]
[ 4, 1 ]
[]
[]
[ "google_app_engine", "python", "sqlite" ]
stackoverflow_0001085538_google_app_engine_python_sqlite.txt
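The zipimport approach mentioned above needs only a sys.path tweak at the top of your App Engine handler script (a sketch; mylib.zip is a hypothetical zip of .py files uploaded alongside the app, and somemodule a module inside it):

    import os
    import sys

    sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'mylib.zip'))

    import somemodule   # now resolved from inside mylib.zip via zipimport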
Q: How do I install with distutils to a specific Python installation? I have a Windows machine with Python 2.3, 2.6 and 3.0 installed and 2.5 installed with Cygwin. I've downloaded the pexpect package but when I run "python setup.py install" it installs to the 2.6 installation. How could I have it install to the Cygwin Python installation, or any other installation? A: Call the specific Python version that you want to install for. For example: $ python2.3 setup.py install should install the package for Python 2.3 A: Calling "python2.3" can fail if another (default) installation has patched PATH to point only to itself. The task can be solved by: finding the full path to the desired Python interpreter (on ActivePython it is C:\Python26 for a default installation of Python 2.6), building the full path to the binary (in this case C:\Python26\python.exe), and executing the module's install command from the unpacked module directory using the full path to the interpreter: C:\Python26\python.exe setup.py install
How do I install with distutils to a specific Python installation?
I have a Windows machine with Python 2.3, 2.6 and 3.0 installed and 2.5 installed with Cygwin. I've downloaded the pexpect package but when I run "python setup.py install" it installs to the 2.6 installation. How could I have it install to the Cygwin Python installation, or any other installation?
[ "call the specific python version that you want to install for. For example:\n$ python2.3 setup.py install\n\nshould install the package for python 2.3\n", "using \"python2.3\" can be wrong if another (default) installation patched PATH to itself only.\nTask can be solved by:\n\nfinding full path to desired python interpreter, on ActivePython it is C:\\Python26 for default installation of Python 2.6\nmake a full path to binary (in this case C:\\Python26\\python.exe)\nexecute module install command from unpacked module directory using full path to interpreter: C:\\Python26\\python.exe setup.py install\n\n" ]
[ 5, 0 ]
[]
[]
[ "distutils", "installation", "python" ]
stackoverflow_0001059594_distutils_installation_python.txt
Q: What's the simplest way to put a python script into the system tray (Windows) What's the simplest way to put a python script into the system tray? My target platform is Windows. I don't want to see the 'cmd.exe' window. A: Those are two questions, actually: Adding a tray icon can be done with Win32 API. Example: SysTrayIcon.py Hiding the cmd.exe window is as easy as using pythonw.exe instead of python.exe to run your scripts.
What's the simplest way to put a python script into the system tray (Windows)
What's the simplest way to put a python script into the system tray? My target platform is Windows. I don't want to see the 'cmd.exe' window.
[ "Those are two questions, actually:\n\nAdding a tray icon can be done with Win32 API. Example: SysTrayIcon.py\nHiding the cmd.exe window is as easy as using pythonw.exe instead of python.exe to run your scripts.\n\n" ]
[ 54 ]
[]
[]
[ "python", "system_tray" ]
stackoverflow_0001085694_python_system_tray.txt
Q: How do I use TLS with asyncore? An asyncore-based XMPP client opens a normal TCP connection to an XMPP server. The server indicates it requires an encrypted connection. The client is now expected to start a TLS handshake so that subsequent requests can be encrypted. tlslite integrates with asyncore, but the sample code is for a server (?) and I don't understand what it's doing. I'm on Python 2.5. How can I get the TLS magic working? Here's what ended up working for me: from tlslite.api import * def handshakeTls(self): """ Encrypt the socket using the tlslite module """ self.logger.info("activating TLS encrpytion") self.socket = TLSConnection(self.socket) self.socket.handshakeClientCert() A: Definitely check out twisted and wokkel. I've been building tons of xmpp bots and components with it and it's a dream. A: I've followed what I believe are all the steps tlslite documents to make an asyncore client work -- I can't actually get it to work since the only asyncore client I have at hand to tweak for the purpose is the example in the Python docs, which is an HTTP 1.0 client, and I believe that because of this I'm trying to set up an HTTPS connection in a very half-baked way. And I have no asyncore XMPP client, nor any XMPP server requesting TLS, to get anywhere close to your situation. Nevertheless I decided to share the fruits of my work anyway because (even though some step may be missing) it does seem to be a bit better than what you previously had -- I think I'm showing all the needed steps in the __init__. BTW, I copied the pem files from the tlslite/test directory. import asyncore, socket from tlslite.api import * s = open("./clientX509Cert.pem").read() x509 = X509() x509.parse(s) certChain = X509CertChain([x509]) s = open("./clientX509Key.pem").read() privateKey = parsePEMKey(s, private=True) class http_client(TLSAsyncDispatcherMixIn, asyncore.dispatcher): ac_in_buffer_size = 16384 def __init__(self, host, path): asyncore.dispatcher.__init__(self) self.create_socket(socket.AF_INET, socket.SOCK_STREAM) self.connect( (host, 80) ) TLSAsyncDispatcherMixIn.__init__(self, self.socket) self.tlsConnection.ignoreAbruptClose = True handshaker = self.tlsConnection.handshakeClientCert( certChain=certChain, privateKey=privateKey, async=True) self.setHandshakeOp(handshaker) self.buffer = 'GET %s HTTP/1.0\r\n\r\n' % path def handle_connect(self): pass def handle_close(self): self.close() def handle_read(self): print self.recv(8192) def writable(self): return (len(self.buffer) > 0) def handle_write(self): sent = self.send(self.buffer) self.buffer = self.buffer[sent:] c = http_client('www.readyhosting.com', '/') asyncore.loop() This is a mix of the asyncore example http client in the Python docs, plus what I've gleaned from the tlslite docs and have been able to reverse engineer from their sources. Hope this (even though incomplete/not working) can at least advance you in your quest... Personally, in your shoes, I'd consider switching from asyncore to twisted -- asyncore is old and rusty, Twisted already integrates a lot of juicy, useful bits (the URL I gave is to a bit in the docs that already does integrate TLS and XMPP for you...).
How do I use TLS with asyncore?
An asyncore-based XMPP client opens a normal TCP connection to an XMPP server. The server indicates it requires an encrypted connection. The client is now expected to start a TLS handshake so that subsequent requests can be encrypted. tlslite integrates with asyncore, but the sample code is for a server (?) and I don't understand what it's doing. I'm on Python 2.5. How can I get the TLS magic working? Here's what ended up working for me: from tlslite.api import * def handshakeTls(self): """ Encrypt the socket using the tlslite module """ self.logger.info("activating TLS encrpytion") self.socket = TLSConnection(self.socket) self.socket.handshakeClientCert()
[ "Definitely check out twisted and wokkel. I've been building tons of xmpp bots and components with it and it's a dream.\n", "I've followed what I believe are all the steps tlslite documents to make an asyncore client work -- I can't actually get it to work since the only asyncore client I have at hand to tweak for the purpose is the example in the Python docs, which is an HTTP 1.0 client, and I believe that because of this I'm trying to set up an HTTPS connection in a very half-baked way. And I have no asyncore XMPP client, nor any XMPP server requesting TLS, to get anywhere close to your situation. Nevertheless I decided to share the fruits of my work anyway because (even though some step may be missing) it does seem to be a bit better than what you previously had -- I think I'm showing all the needed steps in the __init__. BTW, I copied the pem files from the tlslite/test directory.\nimport asyncore, socket\nfrom tlslite.api import *\n\ns = open(\"./clientX509Cert.pem\").read()\nx509 = X509()\nx509.parse(s)\ncertChain = X509CertChain([x509])\n\ns = open(\"./clientX509Key.pem\").read()\nprivateKey = parsePEMKey(s, private=True)\n\n\nclass http_client(TLSAsyncDispatcherMixIn, asyncore.dispatcher):\n ac_in_buffer_size = 16384\n\n def __init__(self, host, path):\n asyncore.dispatcher.__init__(self)\n self.create_socket(socket.AF_INET, socket.SOCK_STREAM)\n self.connect( (host, 80) )\n\n TLSAsyncDispatcherMixIn.__init__(self, self.socket)\n self.tlsConnection.ignoreAbruptClose = True\n handshaker = self.tlsConnection.handshakeClientCert(\n certChain=certChain,\n privateKey=privateKey,\n async=True)\n self.setHandshakeOp(handshaker)\n\n self.buffer = 'GET %s HTTP/1.0\\r\\n\\r\\n' % path\n\n def handle_connect(self):\n pass\n\n def handle_close(self):\n self.close()\n\n def handle_read(self):\n print self.recv(8192)\n\n def writable(self):\n return (len(self.buffer) > 0)\n\n def handle_write(self):\n sent = self.send(self.buffer)\n self.buffer = self.buffer[sent:]\n\nc = http_client('www.readyhosting.com', '/')\n\nasyncore.loop()\n\nThis is a mix of the asyncore example http client in the Python docs, plus what I've gleaned from the tlslite docs and have been able to reverse engineer from their sources. Hope this (even though incomplete/not working) can at least advance you in your quest...\nPersonally, in your shoes, I'd consider switching from asyncore to twisted -- asyncore is old and rusty, Twisted already integrates a lot of juicy, useful bits (the URL I gave is to a bit in the docs that already does integrate TLS and XMPP for you...).\n" ]
[ 4, 2 ]
[]
[]
[ "python", "ssl" ]
stackoverflow_0001085050_python_ssl.txt
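As a hedged postscript to the answers above: since Python 2.6 (so not available to the original 2.5-based asker), the standard-library ssl module can upgrade an already-connected socket, which covers the XMPP STARTTLS case without tlslite. A minimal sketch on modern Python; the host name and port are placeholders, and the <starttls/> negotiation itself is elided:

import socket, ssl

def upgrade_to_tls(sock, server_hostname):
    # Wrap an already-connected plain socket; certificate and host-name
    # checks follow the default context's policy, and the handshake runs
    # during wrap_socket() for blocking sockets.
    context = ssl.create_default_context()
    return context.wrap_socket(sock, server_hostname=server_hostname)

plain = socket.create_connection(("xmpp.example.com", 5222))
# ... exchange the XMPP <starttls/> stanzas over `plain` first, then:
tls = upgrade_to_tls(plain, "xmpp.example.com")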
Q: Where to start an OpenID implementation in Python, and which API suits best? I'm using the Python API in our research project. I've read a lot of presentations and material, and finally understand the concept. Now I have a task to implement a simple function which checks user credentials through an OpenID provider and returns success after the valid user check. A: To add to the recommendation of the Python OpenID library: their docs pages for the server and consumer modules both have useful Overview sections which you should read as a good starting point. The examples directory is also useful; I've written things starting from server.py and consumer.py from there. A: Why not use Python OpenID library?
Where to start an OpenID implementation in Python, and which API suits best?
I'm using the Python API in our research project. I've read a lot of presentations and material, and finally understand the concept. Now I have a task to implement a simple function which checks user credentials through an OpenID provider and returns success after the valid user check.
[ "To add to the recommendation of the Python OpenID library: their docs pages for the server and consumer modules both have useful Overview sections which you should read as a good starting point. The examples directory is also useful; I've written things starting from server.py and consumer.py from there.\n", "Why not use Python OpenID library?\n" ]
[ 3, 2 ]
[]
[]
[ "python" ]
stackoverflow_0001086127_python.txt
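For concreteness, here is a rough sketch of the python-openid consumer flow that the answers point at. It is written from memory, so treat the module paths and method signatures as assumptions to be checked against the library's own docs; all URLs are placeholders:

from openid.consumer import consumer
from openid.store.memstore import MemoryStore

session = {}                       # normally your web framework's session dict
oid_consumer = consumer.Consumer(session, MemoryStore())

auth_request = oid_consumer.begin("https://user.example.com/")  # the user's identity URL
redirect_url = auth_request.redirectURL(
    realm="https://myapp.example.com/",
    return_to="https://myapp.example.com/openid/return")
# Redirect the browser to redirect_url; when the provider sends the user
# back, call oid_consumer.complete(query_args, return_to_url) and check
# that the response status is consumer.SUCCESS.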
Q: Is it possible to return an object or value from a Python script to the hosting application? For example in Lua you can place the following line at the end of a script: return <some-value/object> The value/object that is returned can then be retrieved by the hosting application. I use this pattern so that scripts can represent factories for event handlers. The script-based event handlers are then used to extend the application. For example the hosting application runs a script called 'SomeEventHandler.lua' which defines and returns an object that is an event handler for 'SomeEvent' in your application. Can this be done in Python? Or is there a better way to achieve this? More specifically I am embedding IronPython in my C# application and am looking for a way to instance these script-based event handlers which will allow the application to be extended using Python. A: It's totally possible and a common technique when embedding Python. This article shows the basics, as does this page. The core function is PyObject_CallObject() which calls code written in Python, from C. A: This can be done in Python just the same way. You can require the plugin to provide a getHandler() function / method that returns the event handler: class myPlugin(object): def doIt(self,event,*args): print "Doing important stuff" def getHandler(self,document): print "Initializing plugin" self._doc = document return doIt When the user says "I want to use plugin X now," you know which function to call. If the plugin is not only to be called after a direct command, but also on certain events (like e.g. loading a graphics element), you can also provide the plugin author with possibilities to bind the handler to this very event. A: See some examples in Embedding the Dynamic Language Runtime. A simple example, setting-and-fetching-variables: SourceCodeKind st = SourceCodeKind.Statements; string source = "print 'Hello World'"; script = eng.CreateScriptSourceFromString(source, st); scope = eng.CreateScope(); script.Execute(scope); // The namespace holds the variables that the code creates in the process of executing it. int value = 3; scope.SetVariable("name", value); script.Execute(scope); int result = scope.GetVariable<int>("name"); A: The way I would do it (and the way I've seen it done) is have a function for each event all packed into one module (or across several, doesn't matter), and then call the function through C/C++/C# and use its return value.
Is it possible to return an object or value from a Python script to the hosting application?
For example in Lua you can place the following line at the end of a script: return <some-value/object> The value/object that is returned can then be retrieved by the hosting application. I use this pattern so that scripts can represent factories for event handlers. The script-based event handlers are then used to extend the application. For example the hosting application runs a script called 'SomeEventHandler.lua' which defines and returns an object that is an event handler for 'SomeEvent' in your application. Can this be done in Python? Or is there a better way to achieve this? More specifically I am embedding IronPython in my C# application and am looking for a way to instance these script-based event handlers which will allow the application to be extended using Python.
[ "It's totally possible and a common technique when embedding Python. This article shows the basics, as does this page. The core function is PyObject_CallObject() which calls code written in Python, from C.\n", "This can be done in Python just the same way. You can require the plugin to provide a getHandler() function / method that returns the event handler:\nclass myPlugin(object):\n\n def doIt(self,event,*args):\n print \"Doing important stuff\"\n\n def getHandler(self,document):\n print \"Initializing plugin\"\n self._doc = document\n return doIt\n\nWhen the user says \"I want to use plugin X now,\" you know which function to call. If the plugin is not only to be called after a direct command, but also on certain events (like e.g. loading a graphics element), you can also provide the plugin author with possibilities to bind the handler to this very event.\n", "See some examples in Embedding the Dynamic Language Runtime.\nA simple example, setting-and-fetching-variables:\nSourceCodeKind st = SourceCodeKind.Statements;\nstring source = \"print 'Hello World'\";\nscript = eng.CreateScriptSourceFromString(source, st);\nscope = eng.CreateScope();\nscript.Execute(scope);\n// The namespace holds the variables that the code creates in the process of executing it.\nint value = 3;\nscope.SetVariable(\"name\", value);\n\nscript.Execute(scope);\n\nint result = scope.GetVariable<int>(\"name\");\n\n", "The way I would do it (and the way I've seen it done) is have a function for each event all packed into one module (or across several, doesn't matter), and then call the function through C/C++/C# and use its return value.\n" ]
[ 2, 2, 2, 0 ]
[]
[]
[ "c#", "ironpython", "python" ]
stackoverflow_0001086188_c#_ironpython_python.txt
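To keep the sketch in Python rather than C#: the same factory pattern can be demonstrated with a pure-Python host, which stands in for what the IronPython hosting API does on the C# side. The plugin file name and its get_handler function are hypothetical:

import runpy

# Execute the plugin script in its own namespace and pull out the
# factory it is required to define (the Python analog of fetching a
# variable from a ScriptScope in the IronPython examples above).
namespace = runpy.run_path("some_event_handler.py")
handler = namespace["get_handler"]()
handler("SomeEvent")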
Q: Parsing HTML rows into CSV First off the html row looks like this: <tr class="evenColor"> blahblah TheTextIneed blahblah and ends with </tr> I would show the real html but I am sorry to say don't know how to block it. feels shame Using BeautifulSoup (Python) or any other recommended Screen Scraping/Parsing method I would like to output about 1200 .htm files in the same directory into a CSV format. This will eventually go into an SQL database. Each directory represents a year and I plan to do at least 5 years. I have been goofing around with glob as the best way to do this from some advice. This is what I have so far and am stuck. import glob from BeautifulSoup import BeautifulSoup for filename in glob.glob('/home/phi/data/NHL/pl0708/pl02*.htm'): #these files go from pl020001.htm to pl021230.htm sequentially soup = BeautifulSoup(open(filename["r"])) for row in soup.findAll("tr", attrs={ "class" : "evenColor" }) I realize this is ugly but it's my first time attempting anything like this. This one problem has taken me months to get to this point after realizing that I don't have to manually go through thousands of files copy and pasting into excel. I have also realized that I can kick my computer repeatedly out of frustration and it still works (not recommended). I am getting close and I need to know what to do next to make those CSV files. Please help or my monitor finally gets hammer punched. A: You don't really explain why you are stuck - what's not working exactly? The following line may well be your problem: soup = BeautifulSoup(open(filename["r"])) It looks to me like this should be: soup = BeautifulSoup(open(filename, "r")) The following line: for row in soup.findAll("tr", attrs={ "class" : "evenColor" }) looks like it will only pick out even rows (assuming your even rows have the class 'evenColor' and odd rows have 'oddColor'). Assuming you want all rows with a class of either evenColor or oddColor, you can use a regular expression to match the class value: for row in soup.findAll("tr", attrs={ "class" : re.compile(r"evenColor|oddColor") }) A: You need to import the csv module by adding import csv to the top of your file. Then you'll need something to create a csv file outside your loop of the rows, like so: writer = csv.writer(open("%s.csv" % filename, "wb")) Then you need to actually pull the data out of the html row in your loop, similar to values = (td.fetchText() for td in row) writer.writerow(values) A: That looks fine, and BeautifulSoup is useful for this (although I personally tend to use lxml). You should be able to take that data you get, and make a csv file out of is using the csv module without any obvious problems... I think you need to actually tell us what the problem is. "It still doesn't work" is not a problem descripton.
Parsing HTML rows into CSV
First off the html row looks like this: <tr class="evenColor"> blahblah TheTextIneed blahblah and ends with </tr> I would show the real html but I am sorry to say don't know how to block it. feels shame Using BeautifulSoup (Python) or any other recommended Screen Scraping/Parsing method I would like to output about 1200 .htm files in the same directory into a CSV format. This will eventually go into an SQL database. Each directory represents a year and I plan to do at least 5 years. I have been goofing around with glob as the best way to do this from some advice. This is what I have so far and am stuck. import glob from BeautifulSoup import BeautifulSoup for filename in glob.glob('/home/phi/data/NHL/pl0708/pl02*.htm'): #these files go from pl020001.htm to pl021230.htm sequentially soup = BeautifulSoup(open(filename["r"])) for row in soup.findAll("tr", attrs={ "class" : "evenColor" }) I realize this is ugly but it's my first time attempting anything like this. This one problem has taken me months to get to this point after realizing that I don't have to manually go through thousands of files copy and pasting into excel. I have also realized that I can kick my computer repeatedly out of frustration and it still works (not recommended). I am getting close and I need to know what to do next to make those CSV files. Please help or my monitor finally gets hammer punched.
[ "You don't really explain why you are stuck - what's not working exactly?\nThe following line may well be your problem:\nsoup = BeautifulSoup(open(filename[\"r\"]))\n\nIt looks to me like this should be:\nsoup = BeautifulSoup(open(filename, \"r\"))\n\nThe following line:\nfor row in soup.findAll(\"tr\", attrs={ \"class\" : \"evenColor\" })\n\nlooks like it will only pick out even rows (assuming your even rows have the class 'evenColor' and odd rows have 'oddColor'). Assuming you want all rows with a class of either evenColor or oddColor, you can use a regular expression to match the class value:\nfor row in soup.findAll(\"tr\", attrs={ \"class\" : re.compile(r\"evenColor|oddColor\") })\n\n", "You need to import the csv module by adding import csv to the top of your file.\nThen you'll need something to create a csv file outside your loop of the rows, like so:\nwriter = csv.writer(open(\"%s.csv\" % filename, \"wb\"))\n\nThen you need to actually pull the data out of the html row in your loop, similar to\nvalues = (td.fetchText() for td in row)\nwriter.writerow(values)\n\n", "That looks fine, and BeautifulSoup is useful for this (although I personally tend to use lxml). You should be able to take that data you get, and make a csv file out of is using the csv module without any obvious problems...\nI think you need to actually tell us what the problem is. \"It still doesn't work\" is not a problem descripton.\n" ]
[ 4, 4, 2 ]
[]
[]
[ "beautifulsoup", "csv", "html", "python", "screen_scraping" ]
stackoverflow_0001086266_beautifulsoup_csv_html_python_screen_scraping.txt
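Putting the pieces of the answers together, a sketch of the whole loop with the csv module. It is written for the modern bs4 package rather than the BeautifulSoup 3 import used above; the paths and row classes are taken from the question:

import csv, glob, re
from bs4 import BeautifulSoup

with open("nhl_rows.csv", "w", newline="") as out:
    writer = csv.writer(out)
    for filename in sorted(glob.glob("/home/phi/data/NHL/pl0708/pl02*.htm")):
        with open(filename) as f:
            soup = BeautifulSoup(f, "html.parser")
        # Take both evenColor and oddColor rows, one CSV row per <tr>.
        for row in soup.find_all("tr", class_=re.compile(r"evenColor|oddColor")):
            writer.writerow([td.get_text(strip=True) for td in row.find_all("td")])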
Q: Django: information captured from URLs available in template files? Given: urlpatterns = \ patterns('blog.views', (r'^blog/(?P<year>\d{4})/$', 'year_archive', {'foo': 'bar'}), ) in a urls.py file. (Should it be 'archive_year' instead of 'year_archive' ? - see below for ref.) Is it possible to capture information from the URL matching (the value of "year" in this case) for use in the optional dictionary?. E.g.: the value of year instead 'bar'? Replacing 'bar' with year results in: "NameError ... name 'year' is not defined". The above is just an example; I know that year is available in the template HTML file for archive_year, but this is not the case for archive_month. And there could be custom information represented in the URL that is needed in a template HTML file. (The example is from page "URL dispatcher", section "Passing extra options to view functions", http://docs.djangoproject.com/en/dev/topics/http/urls/, in the Django documentation.) A: No, that's not possible within the URLConf -- the dispatcher has a fixed set of things it does. (It takes the group dictionary from your regex match and passes it as keyword arguments to your view function.) Within your (custom) view function, you should be able to manipulate how those values are passed into the template context. Writing a custom view to map year to "foo" given this URLConf would be something like: def custom_view(request, year, foo): context = RequestContext(request, {'foo': year}) return render_to_response('my_template.tmpl', context) The reason that you get a NameError in the case you're describing is because Python is looking for an identifier called year in the surrounding scope and it doesn't exist there -- it's only a substring in the regex pattern.
Django: information captured from URLs available in template files?
Given: urlpatterns = \ patterns('blog.views', (r'^blog/(?P<year>\d{4})/$', 'year_archive', {'foo': 'bar'}), ) in a urls.py file. (Should it be 'archive_year' instead of 'year_archive' ? - see below for ref.) Is it possible to capture information from the URL matching (the value of "year" in this case) for use in the optional dictionary?. E.g.: the value of year instead 'bar'? Replacing 'bar' with year results in: "NameError ... name 'year' is not defined". The above is just an example; I know that year is available in the template HTML file for archive_year, but this is not the case for archive_month. And there could be custom information represented in the URL that is needed in a template HTML file. (The example is from page "URL dispatcher", section "Passing extra options to view functions", http://docs.djangoproject.com/en/dev/topics/http/urls/, in the Django documentation.)
[ "No, that's not possible within the URLConf -- the dispatcher has a fixed set of things it does. (It takes the group dictionary from your regex match and passes it as keyword arguments to your view function.) Within your (custom) view function, you should be able to manipulate how those values are passed into the template context.\nWriting a custom view to map year to \"foo\" given this URLConf would be something like:\ndef custom_view(request, year, foo):\n context = RequestContext(request, {'foo': year})\n return render_to_response('my_template.tmpl', context)\n\nThe reason that you get a NameError in the case you're describing is because Python is looking for an identifier called year in the surrounding scope and it doesn't exist there -- it's only a substring in the regex pattern.\n" ]
[ 4 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001086531_django_python.txt
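For completeness, a sketch of the custom-view approach in current Django syntax (view and template names are illustrative); the captured year ends up in the template context directly:

# urls.py
from django.urls import path
from . import views

urlpatterns = [
    path("blog/<int:year>/", views.year_archive),
]

# views.py
from django.shortcuts import render

def year_archive(request, year):
    # The captured value is just a keyword argument, so putting it in
    # the context is a one-liner.
    return render(request, "year_archive.html", {"year": year})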
Q: What is the best way to do Bit Field manipulation in Python? I'm reading some MPEG Transport Stream protocol over UDP and it has some funky bitfields in it (length 13 for example). I'm using the "struct" library to do the broad unpacking, but is there a simple way to say "Grab the next 13 bits" rather than have to hand-tweak the bit manipulation? I'd like something like the way C does bit fields (without having to revert to C). Suggestions? A: The bitstring module is designed to address just this problem. It will let you read, modify and construct data using bits as the basic building blocks. The latest versions are for Python 2.6 or later (including Python 3) but version 1.0 supported Python 2.4 and 2.5 as well. A relevant example for you might be this, which strips out all the null packets from a transport stream (and quite possibly uses your 13 bit field?): from bitstring import Bits, BitStream # Opening from a file means that it won't be all read into memory s = Bits(filename='test.ts') outfile = open('test_nonull.ts', 'wb') # Cut the stream into 188 byte packets for packet in s.cut(188*8): # Take a 13 bit slice and interpret as an unsigned integer PID = packet[11:24].uint # Write out the packet if the PID doesn't indicate a 'null' packet if PID != 8191: # The 'bytes' property converts back to a string. outfile.write(packet.bytes) Here's another example including reading from bitstreams: # You can create from hex, binary, integers, strings, floats, files... # This has a hex code followed by two 12 bit integers s = BitStream('0x000001b3, uint:12=352, uint:12=288') # Append some other bits s += '0b11001, 0xff, int:5=-3' # read back as 32 bits of hex, then two 12 bit unsigned integers start_code, width, height = s.readlist('hex:32, 2*uint:12') # Skip some bits then peek at next bit value s.pos += 4 if s.peek(1): flags = s.read(9) You can use standard slice notation to slice, delete, reverse, overwrite, etc. at the bit level, and there are bit level find, replace, split etc. functions. Different endiannesses are also supported. # Replace every '1' bit by 3 bits s.replace('0b1', '0b001') # Find all occurrences of a bit sequence bitposlist = list(s.findall('0b01000')) # Reverse bits in place s.reverse() The full documentation is here. A: It's an often-asked question. There's an ASPN Cookbook entry on it that has served me in the past. And there is an extensive page of requirements one person would like to see from a module doing this.
What is the best way to do Bit Field manipulation in Python?
I'm reading some MPEG Transport Stream protocol over UDP and it has some funky bitfields in it (length 13 for example). I'm using the "struct" library to do the broad unpacking, but is there a simple way to say "Grab the next 13 bits" rather than have to hand-tweak the bit manipulation? I'd like something like the way C does bit fields (without having to revert to C). Suggestions?
[ "The bitstring module is designed to address just this problem. It will let you read, modify and construct data using bits as the basic building blocks. The latest versions are for Python 2.6 or later (including Python 3) but version 1.0 supported Python 2.4 and 2.5 as well.\nA relevant example for you might be this, which strips out all the null packets from a transport stream (and quite possibly uses your 13 bit field?):\nfrom bitstring import Bits, BitStream \n\n# Opening from a file means that it won't be all read into memory\ns = Bits(filename='test.ts')\noutfile = open('test_nonull.ts', 'wb')\n\n# Cut the stream into 188 byte packets\nfor packet in s.cut(188*8):\n # Take a 13 bit slice and interpret as an unsigned integer\n PID = packet[11:24].uint\n # Write out the packet if the PID doesn't indicate a 'null' packet\n if PID != 8191:\n # The 'bytes' property converts back to a string.\n outfile.write(packet.bytes)\n\nHere's another example including reading from bitstreams: \n# You can create from hex, binary, integers, strings, floats, files...\n# This has a hex code followed by two 12 bit integers\ns = BitStream('0x000001b3, uint:12=352, uint:12=288')\n# Append some other bits\ns += '0b11001, 0xff, int:5=-3'\n# read back as 32 bits of hex, then two 12 bit unsigned integers\nstart_code, width, height = s.readlist('hex:32, 2*uint:12')\n# Skip some bits then peek at next bit value\ns.pos += 4\nif s.peek(1):\n flags = s.read(9)\n\nYou can use standard slice notation to slice, delete, reverse, overwrite, etc. at the bit level, and there are bit level find, replace, split etc. functions. Different endiannesses are also supported.\n# Replace every '1' bit by 3 bits\ns.replace('0b1', '0b001')\n# Find all occurrences of a bit sequence\nbitposlist = list(s.findall('0b01000'))\n# Reverse bits in place\ns.reverse()\n\nThe full documentation is here.\n", "It's an often-asked question. There's an ASPN Cookbook entry on it that has served me in the past.\nAnd there is an extensive page of requirements one person would like to see from a module doing this.\n" ]
[ 26, 8 ]
[]
[]
[ "bit", "bit_fields", "python", "udp" ]
stackoverflow_0000039663_bit_bit_fields_python_udp.txt
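If pulling in a third-party module is not an option, the same 13-bit extraction can be done with plain shifts and masks. A small sketch for the MPEG-TS PID field mentioned above; the header bytes are a hand-built null packet:

def ts_pid(packet):
    # The PID is the low 5 bits of byte 1 followed by all 8 bits of
    # byte 2 of a 188-byte transport stream packet.
    return ((packet[1] & 0x1F) << 8) | packet[2]

header = bytes([0x47, 0x1F, 0xFF, 0x10])  # sync byte + null-packet PID
assert ts_pid(header) == 8191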
Q: Python: How can I import all variables? I'm new to Python and programming in general (a couple of weeks at most). Concerning Python and using modules, I realise that functions can be imported using from a import *. So instead of typing a.sayHi() a.sayBye() I can say sayHi() sayBye() which I find simplifies things a great deal. Now, say I have a bunch of variables that I want to use across modules and I have them all defined in one python module. How can I, using a similar method as mentioned above or an equally simple one, import these variables? I don't want to use import a and then be required to prefix all my variables with a.. The following situation would be ideal: a.py name = "Michael" age = 15 b.py some_function if name == "Michael": if age == 15: print("Simple!") Output: Simple! A: You gave the solution yourself: from a import * will work just fine. Python does not differentiate between functions and variables in this respect. >>> from a import * >>> if name == "Michael" and age == 15: ... print('Simple!') ... Simple! A: Just for some context, most linters will flag from module import * with a warning, because it's prone to namespace collisions that will cause headaches down the road. Nobody has noted yet that, as an alternative, you can use the from a import name, age form and then use name and age directly (without the a. prefix). The from [module] import [identifiers] form is more future proof because you can easily see when one import will be overriding another. Also note that "variables" aren't different from functions in Python in terms of how they're addressed -- every identifier like name or sayBye is pointing at some kind of object. The identifier name is pointing at a string object, sayBye is pointing at a function object, and age is pointing at an integer object. When you tell Python: from a import name, age you're saying "take those objects pointed at by name and age within module a and point at them in the current scope with the same identifiers". Similarly, if you want to point at them with different identifiers on import, you can use the from a import sayBye as bidFarewell form. The same function object gets pointed at, except in the current scope the identifier pointing at it is bidFarewell whereas in module a the identifier pointing at it is sayBye. A: Like others have said, from module import * will also import the modules variables. However, you need to understand that you are not importing variables, just references to objects. Assigning something else to the imported names in the importing module won't affect the other modules. Example: assume you have a module module.py containing the following code: a= 1 b= 2 Then you have two other modules, mod1.py and mod2.py which both do the following: from module import * In each module, two names, a and b are created, pointing to the objects 1 and 2, respectively. Now, if somewhere in mod1.py you assign something else to the global name a: a= 3 the name a in module.py and the name a in mod2.py will still point to the object 1. So from module import * will work if you want read-only globals, but it won't work if you want read-write globals. If the latter, you're better off just importing import module and then either getting the value (module.a) or setting the value (module.a= …) prefixed by the module. A: You didn't say this directly, but I'm assuming you're having trouble with manipulating these global variables. If you manipulate global variables from inside a function, you must declare them global a = 10 def x(): global a a = 15 print a x() print a If you don't do that, then a = 15 will just create a local variable and assign it 15, while the global a stays 10
Python: How can I import all variables?
I'm new to Python and programming in general (a couple of weeks at most). Concerning Python and using modules, I realise that functions can be imported using from a import *. So instead of typing a.sayHi() a.sayBye() I can say sayHi() sayBye() which I find simplifies things a great deal. Now, say I have a bunch of variables that I want to use across modules and I have them all defined in one python module. How can I, using a similar method as mentioned above or an equally simple one, import these variables? I don't want to use import a and then be required to prefix all my variables with a.. The following situation would be ideal: a.py name = "Michael" age = 15 b.py some_function if name == "Michael": if age == 15: print("Simple!") Output: Simple!
[ "You gave the solution yourself: from a import * will work just fine. Python does not differentiate between functions and variables in this respect.\n>>> from a import *\n>>> if name == \"Michael\" and age == 15:\n... print('Simple!')\n...\nSimple!\n\n", "Just for some context, most linters will flag from module import * with a warning, because it's prone to namespace collisions that will cause headaches down the road.\nNobody has noted yet that, as an alternative, you can use the\nfrom a import name, age\n\nform and then use name and age directly (without the a. prefix). The from [module] import [identifiers] form is more future proof because you can easily see when one import will be overriding another.\nAlso note that \"variables\" aren't different from functions in Python in terms of how they're addressed -- every identifier like name or sayBye is pointing at some kind of object. The identifier name is pointing at a string object, sayBye is pointing at a function object, and age is pointing at an integer object. When you tell Python:\nfrom a import name, age\n\nyou're saying \"take those objects pointed at by name and age within module a and point at them in the current scope with the same identifiers\".\nSimilarly, if you want to point at them with different identifiers on import, you can use the\nfrom a import sayBye as bidFarewell\n\nform. The same function object gets pointed at, except in the current scope the identifier pointing at it is bidFarewell whereas in module a the identifier pointing at it is sayBye.\n", "Like others have said,\nfrom module import *\n\nwill also import the modules variables.\nHowever, you need to understand that you are not importing variables, just references to objects. Assigning something else to the imported names in the importing module won't affect the other modules.\nExample: assume you have a module module.py containing the following code:\na= 1\nb= 2\n\nThen you have two other modules, mod1.py and mod2.py which both do the following:\nfrom module import *\n\nIn each module, two names, a and b are created, pointing to the objects 1 and 2, respectively.\nNow, if somewhere in mod1.py you assign something else to the global name a:\na= 3\n\nthe name a in module.py and the name a in mod2.py will still point to the object 1.\nSo from module import * will work if you want read-only globals, but it won't work if you want read-write globals. If the latter, you're better off just importing import module and then either getting the value (module.a) or setting the value (module.a= …) prefixed by the module.\n", "You didn't say this directly, but I'm assuming you're having trouble with manipulating these global variables.\nIf you manipulate global variables from inside a function, you must declare them global\na = 10\ndef x():\n global a\n a = 15\n\nprint a\nx()\nprint a\n\nIf you don't do that, then a = 15 will just create a local variable and assign it 15, while the global a stays 10\n" ]
[ 79, 38, 14, 8 ]
[]
[]
[ "import", "module", "python", "variables" ]
stackoverflow_0001084977_import_module_python_variables.txt
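A short sketch of the read-write alternative described in the answers: import the module object itself, so rebinding is visible everywhere. It assumes an a.py with the question's contents (name and age):

import a

def have_birthday():
    # Rebinding through the module object updates the single shared
    # binding, unlike a name copied in with `from a import age`.
    a.age += 1

have_birthday()
print(a.name, a.age)   # Michael 16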
Q: how to make table partitions? I am not very familiar with databases, and so I do not know how to partition a table using SQLAlchemy. Your help would be greatly appreciated. A: There are two kinds of partitioning: Vertical Partitioning and Horizontal Partitioning. From the docs: Vertical Partitioning Vertical partitioning places different kinds of objects, or different tables, across multiple databases: engine1 = create_engine('postgres://db1') engine2 = create_engine('postgres://db2') Session = sessionmaker(twophase=True) # bind User operations to engine 1, Account operations to engine 2 Session.configure(binds={User:engine1, Account:engine2}) session = Session() Horizontal Partitioning Horizontal partitioning partitions the rows of a single table (or a set of tables) across multiple databases. See the “sharding” example in attribute_shard.py Just ask if you need more information on those, preferably providing more information about what you want to do. A: It's quite an advanced subject for somebody not familiar with databases, but try Essential SQLAlchemy (you can read the key parts on Google Book Search -- p 122 to 124; the example on p. 125-126 is not freely readable online, so you'd have to purchase the book or read it on commercial services such as O'Reilly's Safari -- maybe on a free trial -- if you want to read the example). Perhaps you can get better answers if you mention whether you're talking about vertical or horizontal partitioning, why you need partitioning, and what underlying database engines you are considering for the purpose. A: Automatic partitioning is a very database engine specific concept and SQLAlchemy doesn't provide any generic tools to manage partitioning. Mostly because it wouldn't provide anything really useful while being another API to learn. If you want to do database level partitioning then do the CREATE TABLE statements using custom Oracle DDL statements (see Oracle documentation how to create partitioned tables and migrate data to them). You can use a partitioned table in SQLAlchemy just like you would use a normal table, you just need the table declaration so that SQLAlchemy knows what to query. You can reflect the definition from the database, or just duplicate the table declaration in SQLAlchemy code. Very large datasets are usually time-based, with older data becoming read-only or read-mostly and queries usually only look at data from a time interval. If that describes your data, you should probably partition your data using the date field. There's also application level partitioning, or sharding, where you use your application to split data across different database instances. This isn't all that popular in the Oracle world due to the exorbitant pricing models. If you do want to use sharding, then look at SQLAlchemy documentation and examples for that, for how SQLAlchemy can support you in that, but be aware that application level sharding will affect how you need to build your application code.
how to make table partitions?
I am not very familiar with databases, and so I do not know how to partition a table using SQLAlchemy. Your help would be greatly appreciated.
[ "There are two kinds of partitioning: Vertical Partitioning and Horizontal Partitioning.\nFrom the docs:\n\nVertical Partitioning\nVertical partitioning places different\n kinds of objects, or different tables,\n across multiple databases:\nengine1 = create_engine('postgres://db1')\nengine2 = create_engine('postgres://db2')\nSession = sessionmaker(twophase=True)\n# bind User operations to engine 1, Account operations to engine 2\nSession.configure(binds={User:engine1, Account:engine2})\nsession = Session()\n\nHorizontal Partitioning\nHorizontal partitioning partitions the\n rows of a single table (or a set of\n tables) across multiple databases.\nSee the “sharding” example in\n attribute_shard.py\n\nJust ask if you need more information on those, preferably providing more information about what you want to do.\n", "It's quite an advanced subject for somebody not familiar with databases, but try Essential SQLAlchemy (you can read the key parts on Google Book Search -- p 122 to 124; the example on p. 125-126 is not freely readable online, so you'd have to purchase the book or read it on commercial services such as O'Reilly's Safari -- maybe on a free trial -- if you want to read the example).\nPerhaps you can get better answers if you mention whether you're talking about vertical or horizontal partitioning, why you need partitioning, and what underlying database engines you are considering for the purpose.\n", "Automatic partitioning is a very database engine specific concept and SQLAlchemy doesn't provide any generic tools to manage partitioning. Mostly because it wouldn't provide anything really useful while being another API to learn. If you want to do database level partitioning then do the CREATE TABLE statements using custom Oracle DDL statements (see Oracle documentation how to create partitioned tables and migrate data to them). You can use a partitioned table in SQLAlchemy just like you would use a normal table, you just need the table declaration so that SQLAlchemy knows what to query. You can reflect the definition from the database, or just duplicate the table declaration in SQLAlchemy code.\nVery large datasets are usually time-based, with older data becoming read-only or read-mostly and queries usually only look at data from a time interval. If that describes your data, you should probably partition your data using the date field.\nThere's also application level partitioning, or sharding, where you use your application to split data across different database instances. This isn't all that popular in the Oracle world due to the exorbitant pricing models. If you do want to use sharding, then look at SQLAlchemy documentation and examples for that, for how SQLAlchemy can support you in that, but be aware that application level sharding will affect how you need to build your application code.\n" ]
[ 4, 3, 3 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0001085304_python_sqlalchemy.txt
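Since SQLAlchemy has no generic partitioning API, the usual route is to emit the database's own DDL through it, as the last answer suggests. A sketch using PostgreSQL's declarative range partitioning (PostgreSQL 10 or later); the connection string, table and column names are illustrative:

from sqlalchemy import create_engine, text

engine = create_engine("postgresql:///example")
with engine.begin() as conn:
    # Parent table declares the partitioning scheme.
    conn.execute(text("""
        CREATE TABLE measurements (
            id bigint,
            taken_on date,
            value numeric
        ) PARTITION BY RANGE (taken_on)
    """))
    # One child table per time range; queries still target `measurements`.
    conn.execute(text("""
        CREATE TABLE measurements_2009 PARTITION OF measurements
        FOR VALUES FROM ('2009-01-01') TO ('2010-01-01')
    """))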
Q: Boolean value of objects in Python As we know, Python has boolean values for objects: If a class has a __len__ method, every instance of it for which __len__() happens to return 0 will be evaluated as a boolean False (for example, the empty list). In fact, every empty custom iterable object is evaluated as False if it appears in a boolean expression. Now suppose I have a class foo with attribute bar. How can I define its truth value, so that, say, it will be evaluated to True if bar % 2 == 0 and False otherwise? For example: myfoo = foo() myfoo.bar = 3 def a(foo): if foo: print "spam" else: print "eggs" so, a(myfoo) should print "eggs". A: In Python < 3.0 : You have to use __nonzero__ to achieve what you want. It's a method that is called automatically by Python when evaluating an object in a boolean context. It must return a boolean that will be used as the value to evaluate. E.G : class Foo(object): def __init__(self, bar) : self.bar = bar def __nonzero__(self) : return self.bar % 2 == 0 if __name__ == "__main__": if (Foo(2)) : print "yess !" In Python => 3.0 : Same thing, except the method has been renamed to the much more obvious __bool__. A: In Python 2, use __nonzero__: Refer to the Python 2 docs for __nonzero__. class foo(object): def __nonzero__( self) : return self.bar % 2 == 0 def a(foo): if foo: print "spam" else: print "eggs" def main(): myfoo = foo() myfoo.bar = 3 a(myfoo) if __name__ == "__main__": main()
Boolean value of objects in Python
As we know, Python has boolean values for objects: If a class has a __len__ method, every instance of it for which __len__() happens to return 0 will be evaluated as a boolean False (for example, the empty list). In fact, every empty custom iterable object is evaluated as False if it appears in a boolean expression. Now suppose I have a class foo with attribute bar. How can I define its truth value, so that, say, it will be evaluated to True if bar % 2 == 0 and False otherwise? For example: myfoo = foo() myfoo.bar = 3 def a(foo): if foo: print "spam" else: print "eggs" so, a(myfoo) should print "eggs".
[ "In Python < 3.0 :\nYou have to use __nonzero__ to achieve what you want. It's a method that is called automatically by Python when evaluating an object in a boolean context. It must return a boolean that will be used as the value to evaluate.\nE.G :\nclass Foo(object):\n\n def __init__(self, bar) :\n self.bar = bar\n\n def __nonzero__(self) :\n return self.bar % 2 == 0\n\nif __name__ == \"__main__\":\n if (Foo(2)) : print \"yess !\"\n\nIn Python => 3.0 :\nSame thing, except the method has been renamed to the much more obvious __bool__.\n", "In Python 2, use __nonzero__:\nRefer to the Python 2 docs for __nonzero__.\nclass foo(object):\n def __nonzero__( self) :\n return self.bar % 2 == 0\n\ndef a(foo):\n if foo:\n print \"spam\"\n else:\n print \"eggs\"\n\ndef main():\n myfoo = foo()\n myfoo.bar = 3\n a(myfoo)\n\nif __name__ == \"__main__\":\n main()\n\n" ]
[ 56, 39 ]
[]
[]
[ "boolean", "python" ]
stackoverflow_0001087135_boolean_python.txt
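A version that works under both interpreters, combining the two answers, is to write __bool__ and alias __nonzero__ to it:

class Foo(object):
    def __init__(self, bar):
        self.bar = bar

    def __bool__(self):            # consulted by Python 3
        return self.bar % 2 == 0

    __nonzero__ = __bool__         # consulted by Python 2

print(bool(Foo(2)), bool(Foo(3)))  # True False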
Q: Can I be warned when I used a generator function by accident I was working with generator functions and private functions of a class. I am wondering why, when yielding (which in my case was by accident) in __someFunc, this function just appears not to be called from within __someGenerator. Also, what is the terminology I want to use when referring to these aspects of the language? Can the Python interpreter warn of such instances? Below is an example snippet of my scenario. class someClass(): def __init__(self): pass #Copy and paste mistake where yield ended up in a regular function def __someFunc(self): print "hello" #yield True #if yielding in this function it isn't called def __someGenerator (self): for i in range(0, 10): self.__someFunc() yield True yield False def someMethod(self): func = self.__someGenerator() while func.next(): print "next" sc = someClass() sc.someMethod() I got burned on this and spent some time trying to figure out why a function just wasn't getting called. I finally discovered I was yielding in a function I didn't want to yield in. A: A "generator" isn't so much a language feature, as a name for functions that "yield." Yielding is pretty much always legal. There's not really any way for Python to know that you didn't "mean" to yield from some function. This PEP http://www.python.org/dev/peps/pep-0255/ talks about generators, and may help you understand the background better. I sympathize with your experience, but compilers can't figure out what you "meant for them to do", only what you actually told them to do. A: I'll try to answer the first of your questions. A regular function, when called like this: val = func() executes its inside statements until it ends or a return statement is reached. Then the return value of the function is assigned to val. If a compiler recognizes the function to actually be a generator and not a regular function (it does that by looking for yield statements inside the function -- if there's at least one, it's a generator), the scenario when calling it the same way as above has different consequences. Upon calling func(), no code inside the function is executed, and a special <generator> value is assigned to val. Then, the first time you call val.next(), the actual statements of func are being executed until a yield or return is encountered, upon which the execution of the function stops, value yielded is returned and generator waits for another call to val.next(). That's why, in your example, function __someFunc didn't print "hello" -- its statements were not executed, because you haven't called self.__someFunc().next(), but only self.__someFunc(). Unfortunately, I'm pretty sure there's no built-in warning mechanism for programming errors like yours. A: Python doesn't know whether you want to create a generator object for later iteration or call a function. But python isn't your only tool for seeing what's going on with your code. If you're using an editor or IDE that allows customized syntax highlighting, you can tell it to give the yield keyword a different color, or even a bright background, which will help you find your errors more quickly, at least. In vim, for example, you might do: :syntax keyword Yield yield :highlight yield ctermbg=yellow guibg=yellow ctermfg=blue guifg=blue Those are horrendous colors, by the way. I recommend picking something better. Another option, if your editor or IDE won't cooperate, is to set up a custom rule in a code checker like pylint. An example from pylint's source tarball: from pylint.interfaces import IRawChecker from pylint.checkers import BaseChecker class MyRawChecker(BaseChecker): """check for line continuations with '\' instead of using triple quoted string or parenthesis """ __implements__ = IRawChecker name = 'custom_raw' msgs = {'W9901': ('use \\ for line continuation', ('Used when a \\ is used for a line continuation instead' ' of using triple quoted string or parenthesis.')), } options = () def process_module(self, stream): """process a module the module's content is accessible via the stream object """ for (lineno, line) in enumerate(stream): if line.rstrip().endswith('\\'): self.add_message('W9901', line=lineno) def register(linter): """required method to auto register this checker""" linter.register_checker(MyRawChecker(linter)) The pylint manual is available here: http://www.logilab.org/card/pylint_manual And vim's syntax documentation is here: http://www.vim.org/htmldoc/syntax.html A: Because the return keyword is applicable in both generator functions and regular functions, there's nothing you could possibly check (as @Christopher mentions). The return keyword in a generator indicates that a StopIteration exception should be raised. If you try to return with a value from within a generator (which doesn't make sense, since return just means "stop iteration"), the compiler will complain at compile-time -- this may catch some copy-and-paste mistakes: >>> def foo(): ... yield 12 ... return 15 ... File "<stdin>", line 3 SyntaxError: 'return' with argument inside generator I personally just advise against copy and paste programming. :-) From the PEP: Note that return means "I'm done, and have nothing interesting to return", for both generator functions and non-generator functions. A: We do this. Generators have names with "generate" or "gen" in their name. It will have a yield statement in the body. Pretty easy to check visually, since no method is much over 20 lines of code. Other methods don't have "gen" in their name. Also, we do not ever use __ (double underscore) names under any circumstances. 32,000 lines of code. Non __ names. The "generator vs. non-generator" method function is entirely a design question. What did the programmer "intend" to happen. The compiler can't easily validate your intent, it can only validate what you actually typed.
Can I be warned when I used a generator function by accident
I was working with generator functions and private functions of a class. I am wondering why, when yielding (which in my case was by accident) in __someFunc, this function just appears not to be called from within __someGenerator. Also, what is the terminology I want to use when referring to these aspects of the language? Can the Python interpreter warn of such instances? Below is an example snippet of my scenario. class someClass(): def __init__(self): pass #Copy and paste mistake where yield ended up in a regular function def __someFunc(self): print "hello" #yield True #if yielding in this function it isn't called def __someGenerator (self): for i in range(0, 10): self.__someFunc() yield True yield False def someMethod(self): func = self.__someGenerator() while func.next(): print "next" sc = someClass() sc.someMethod() I got burned on this and spent some time trying to figure out why a function just wasn't getting called. I finally discovered I was yielding in a function I didn't want to yield in.
[ "A \"generator\" isn't so much a language feature, as a name for functions that \"yield.\" Yielding is pretty much always legal. There's not really any way for Python to know that you didn't \"mean\" to yield from some function.\nThis PEP http://www.python.org/dev/peps/pep-0255/ talks about generators, and may help you understand the background better. \nI sympathize with your experience, but compilers can't figure out what you \"meant for them to do\", only what you actually told them to do.\n", "I'll try to answer the first of your questions.\nA regular function, when called like this:\nval = func()\n\nexecutes its inside statements until it ends or a return statement is reached. Then the return value of the function is assigned to val.\nIf a compiler recognizes the function to actually be a generator and not a regular function (it does that by looking for yield statements inside the function -- if there's at least one, it's a generator), the scenario when calling it the same way as above has different consequences. Upon calling func(), no code inside the function is executed, and a special <generator> value is assigned to val. Then, the first time you call val.next(), the actual statements of func are being executed until a yield or return is encountered, upon which the execution of the function stops, value yielded is returned and generator waits for another call to val.next().\nThat's why, in your example, function __someFunc didn't print \"hello\" -- its statements were not executed, because you haven't called self.__someFunc().next(), but only self.__someFunc().\nUnfortunately, I'm pretty sure there's no built-in warning mechanism for programming errors like yours.\n", "Python doesn't know whether you want to create a generator object for later iteration or call a function. But python isn't your only tool for seeing what's going on with your code. If you're using an editor or IDE that allows customized syntax highlighting, you can tell it to give the yield keyword a different color, or even a bright background, which will help you find your errors more quickly, at least. In vim, for example, you might do:\n:syntax keyword Yield yield\n:highlight yield ctermbg=yellow guibg=yellow ctermfg=blue guifg=blue\n\nThose are horrendous colors, by the way. I recommend picking something better. Another option, if your editor or IDE won't cooperate, is to set up a custom rule in a code checker like pylint. 
An example from pylint's source tarball: \nfrom pylint.interfaces import IRawChecker\nfrom pylint.checkers import BaseChecker\n\nclass MyRawChecker(BaseChecker):\n \"\"\"check for line continuations with '\\' instead of using triple\n quoted string or parenthesis\n \"\"\"\n\n __implements__ = IRawChecker\n\n name = 'custom_raw'\n msgs = {'W9901': ('use \\\\ for line continuation',\n ('Used when a \\\\ is used for a line continuation instead'\n ' of using triple quoted string or parenthesis.')),\n }\n options = ()\n\n def process_module(self, stream):\n \"\"\"process a module\n\n the module's content is accessible via the stream object\n \"\"\"\n for (lineno, line) in enumerate(stream):\n if line.rstrip().endswith('\\\\'):\n self.add_message('W9901', line=lineno)\n\n\ndef register(linter):\n \"\"\"required method to auto register this checker\"\"\"\n linter.register_checker(MyRawChecker(linter))\n\nThe pylint manual is available here: http://www.logilab.org/card/pylint_manual\nAnd vim's syntax documentation is here: http://www.vim.org/htmldoc/syntax.html\n", "Because the return keyword is applicable in both generator functions and regular functions, there's nothing you could possibly check (as @Christopher mentions). The return keyword in a generator indicates that a StopIteration exception should be raised.\nIf you try to return with a value from within a generator (which doesn't make sense, since return just means \"stop iteration\"), the compiler will complain at compile-time -- this may catch some copy-and-paste mistakes:\n>>> def foo():\n... yield 12\n... return 15\n... \n File \"<stdin>\", line 3\nSyntaxError: 'return' with argument inside generator\n\nI personally just advise against copy and paste programming. :-)\nFrom the PEP:\n\nNote that return means \"I'm done, and have nothing interesting to\n return\", for both generator functions and non-generator functions.\n\n", "We do this.\nGenerators have names with \"generate\" or \"gen\" in their name. It will have a yield statement in the body. Pretty easy to check visually, since no method is much over 20 lines of code.\nOther methods don't have \"gen\" in their name.\nAlso, we do not every use __ (double underscore) names under any circumstances. 32,000 lines of code. Non __ names. \nThe \"generator vs. non-generator\" method function is entirely a design question. What did the programmer \"intend\" to happen. The compiler can't easily validate your intent, it can only validate what you actually typed.\n" ]
[ 6, 2, 2, 1, 0 ]
[]
[]
[ "function", "generator", "language_features", "python" ]
stackoverflow_0001087019_function_generator_language_features_python.txt
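One runtime guard worth adding to the answers: the inspect module can tell whether a function was compiled as a generator, which would have surfaced the accidental yield immediately:

import inspect

def some_func():
    print("hello")
    yield True   # the copy-and-paste mistake

# True here reveals that calling some_func() only builds a generator
# object and never runs the body.
print(inspect.isgeneratorfunction(some_func))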
Q: Can bin() be overloaded like oct() and hex() in Python 2.6? In Python 2.6 (and earlier) the hex() and oct() built-in functions can be overloaded in a class by defining __hex__ and __oct__ special functions. However there is not a __bin__ special function for overloading the behaviour of Python 2.6's new bin() built-in function. I want to know if there is any way of flexibly overloading bin(), and if not I was wondering why the inconsistent interface? I do know that the __index__ special function can be used, but this isn't flexible as it can only return an integer. My particular use case is from the bitstring module, where leading zero bits are considered significant: >>> a = BitString(length=12) # Twelve zero bits >>> hex(a) '0x000' >>> oct(a) '0o0000' >>> bin(a) '0b0' <------ I want it to output '0b000000000000' I suspect that there's no way of achieving this, but I thought it wouldn't hurt to ask! A: As you've already discovered, you can't override bin(), but it doesn't sound like you need to do that. You just want a 0-padded binary value. Unfortunately in python 2.5 and previous, you couldn't use "%b" to indicate binary, so you can't use the "%" string formatting operator to achieve the result you want. Luckily python 2.6 does offer what you want, in the form of the new str.format() method. I believe that this particular bit of line-noise is what you're looking for: >>> '{0:010b}'.format(19) '0000010011' The syntax for this mini-language is under "format specification mini-language" in the docs. To save you some time, I'll explain the string that I'm using: parameter zero (i.e. 19) should be formatted, using a magic "0" to indicate that I want 0-padded, right-aligned number, with 10 digits of precision, in binary format. You can use this syntax to achieve a variety of creative versions of alignment and padding. A: I think the short answer is 'No, bin() can't be overloaded like oct() and hex().' As to why, the answer must lie with Python 3.0, which uses __index__ to overload hex(), oct() and bin(), and has removed the __oct__ and __hex__ special functions altogether. So the Python 2.6 bin() looks very much like it's really a Python 3.0 feature that has been back-ported without much consideration that it's doing things the new Python 3 way rather than the old Python 2 way. I'd also guess that it's unlikely to get fixed, even if it is considered to be a bug. A: The bin function receives it's value from the object's __index__ function. So for an object, you can define the value converted to binary, but you can't define the format of the string. A: You could achieve the same behaviour as for hex and oct by overriding/replacing the built in bin() function with your own implementation that attempted to call bin on the object being passed and fell back to the standard bin() function if the object didn't provide bin. However, on the basis that explicit is better than implicit, coding your application to depend on a custom version of bin() is probably not a good idea so maybe just give the function a different name e.g. def mybin(n): try: return n.__bin__() except AttributeError: return bin(n) As for why the inconsistency in the interface, I'm not sure. Maybe it's because bin() was added more recently so it's a slight oversight?
Can bin() be overloaded like oct() and hex() in Python 2.6?
In Python 2.6 (and earlier) the hex() and oct() built-in functions can be overloaded in a class by defining __hex__ and __oct__ special functions. However there is not a __bin__ special function for overloading the behaviour of Python 2.6's new bin() built-in function. I want to know if there is any way of flexibly overloading bin(), and if not I was wondering why the inconsistent interface? I do know that the __index__ special function can be used, but this isn't flexible as it can only return an integer. My particular use case is from the bitstring module, where leading zero bits are considered significant: >>> a = BitString(length=12) # Twelve zero bits >>> hex(a) '0x000' >>> oct(a) '0o0000' >>> bin(a) '0b0' <------ I want it to output '0b000000000000' I suspect that there's no way of achieving this, but I thought it wouldn't hurt to ask!
[ "As you've already discovered, you can't override bin(), but it doesn't sound like you need to do that. You just want a 0-padded binary value. Unfortunately in python 2.5 and previous, you couldn't use \"%b\" to indicate binary, so you can't use the \"%\" string formatting operator to achieve the result you want.\nLuckily python 2.6 does offer what you want, in the form of the new str.format() method. I believe that this particular bit of line-noise is what you're looking for:\n>>> '{0:010b}'.format(19)\n'0000010011'\n\nThe syntax for this mini-language is under \"format specification mini-language\" in the docs. To save you some time, I'll explain the string that I'm using:\n\nparameter zero (i.e. 19) should be formatted, using\na magic \"0\" to indicate that I want 0-padded, right-aligned number, with\n10 digits of precision, in\nbinary format.\n\nYou can use this syntax to achieve a variety of creative versions of alignment and padding.\n", "I think the short answer is 'No, bin() can't be overloaded like oct() and hex().'\nAs to why, the answer must lie with Python 3.0, which uses __index__ to overload hex(), oct() and bin(), and has removed the __oct__ and __hex__ special functions altogether.\nSo the Python 2.6 bin() looks very much like it's really a Python 3.0 feature that has been back-ported without much consideration that it's doing things the new Python 3 way rather than the old Python 2 way. I'd also guess that it's unlikely to get fixed, even if it is considered to be a bug.\n", "The bin function receives it's value from the object's __index__ function. So for an object, you can define the value converted to binary, but you can't define the format of the string.\n", "You could achieve the same behaviour as for hex and oct by overriding/replacing the built in bin() function with your own implementation that attempted to call bin on the object being passed and fell back to the standard bin() function if the object didn't provide bin. However, on the basis that explicit is better than implicit, coding your application to depend on a custom version of bin() is probably not a good idea so maybe just give the function a different name e.g.\ndef mybin(n):\n try:\n return n.__bin__()\n except AttributeError:\n return bin(n)\n\nAs for why the inconsistency in the interface, I'm not sure. Maybe it's because bin() was added more recently so it's a slight oversight?\n" ]
[ 12, 7, 1, 0 ]
[]
[]
[ "binary", "overloading", "python", "python_2.6" ]
stackoverflow_0001002116_binary_overloading_python_python_2.6.txt
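To illustrate the __index__ route in Python 3, where bin(), oct() and hex() all consult it; the zero-padding still has to come from a format spec applied to the underlying integer, since __index__ alone cannot carry the length:

import operator

class BitString:
    def __init__(self, value, length):
        self.value, self.length = value, length

    def __index__(self):
        return self.value

a = BitString(0, 12)
print(bin(a))                                  # 0b0
print(format(operator.index(a), "#014b"))      # 0b000000000000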
Q: file I/O in Spidermonkey Thanks to python-spidermonkey, using JavaScript code from Python is really easy. However, instead of using Python to read JS code from a file and passing the string to Spidermonkey, is there a way to read the file from within Spidermonkey (or pass the filepath as an argument, as in Rhino)? A: The SpiderMonkey as a library allows that by calling the JS_EvaluateScript with a non-NULL filename argument. However, the interfacing code of python-spidermonkey calls JS_EvaluateScript only inside the eval_script method, which as coded supplies source only as a string. You should address your issue to the python-spidermonkey developer, or —better, if possible!— provide a patch for a, say, eval_file_script method :) A: Turns out you can just bind a Python function and use it from within Spidermonkey: http://davisp.lighthouseapp.com/projects/26898/tickets/23-support-for-file-io-js_evaluatescript import spidermonkey def loadfile(fname): return open(fname).read() rt = spidermonkey.Runtime() cx = rt.new_context() cx.add_global("loadfile", loadfile) ret = cx.execute('var contents = loadfile("foo.js"); eval(contents);')
file I/O in Spidermonkey
Thanks to python-spidermonkey, using JavaScript code from Python is really easy. However, instead of using Python to read JS code from a file and passing the string to Spidermonkey, is there a way to read the file from within Spidermonkey (or pass the filepath as an argument, as in Rhino)?
[ "The SpiderMonkey as a library allows that by calling the JS_EvaluateScript with a non-NULL filename argument.\nHowever, the interfacing code of python-spidermonkey calls JS_EvaluateScript only inside the eval_script method, which as coded supplies source only as a string.\nYou should address your issue to the python-spidermonkey developer, or —better, if possible!— provide a patch for a, say, eval_file_script method :)\n", "Turns out you can just bind a Python function and use it from within Spidermonkey:\nhttp://davisp.lighthouseapp.com/projects/26898/tickets/23-support-for-file-io-js_evaluatescript\nimport spidermonkey\n\ndef loadfile(fname):\n return open(fname).read()\n\nrt = spidermonkey.Runtime()\ncx = rt.new_context()\ncx.add_global(\"loadfile\", loadfile)\nret = cx.execute('var contents = loadfile(\"foo.js\"); eval(contents);')\n\n" ]
[ 2, 2 ]
[]
[]
[ "javascript", "python", "spidermonkey" ]
stackoverflow_0001055850_javascript_python_spidermonkey.txt
Q: Reportlab page x of y NumberedCanvas and Images I had been using the reportlab NumberedCanvas given at http://code.activestate.com/recipes/546511/ . However, when i try to build a PDF that contains Image flowables, the images do not show, although enough vertical space is left for the image to fit. Is there any solution for this? A: See my new, improved recipe for this, which includes a simple test with images. Here's an excerpt from the recipe (which omits the test code): from reportlab.pdfgen import canvas from reportlab.lib.units import mm class NumberedCanvas(canvas.Canvas): def __init__(self, *args, **kwargs): canvas.Canvas.__init__(self, *args, **kwargs) self._saved_page_states = [] def showPage(self): self._saved_page_states.append(dict(self.__dict__)) self._startPage() def save(self): """add page info to each page (page x of y)""" num_pages = len(self._saved_page_states) for state in self._saved_page_states: self.__dict__.update(state) self.draw_page_number(num_pages) canvas.Canvas.showPage(self) canvas.Canvas.save(self) def draw_page_number(self, page_count): self.setFont("Helvetica", 7) self.drawRightString(200*mm, 20*mm, "Page %d of %d" % (self._pageNumber, page_count))
Reportlab page x of y NumberedCanvas and Images
I had been using the reportlab NumberedCanvas given at http://code.activestate.com/recipes/546511/ . However, when I try to build a PDF that contains Image flowables, the images do not show, although enough vertical space is left for the image to fit. Is there any solution for this?
[ "See my new, improved recipe for this, which includes a simple test with images. Here's an excerpt from the recipe (which omits the test code):\nfrom reportlab.pdfgen import canvas\nfrom reportlab.lib.units import mm\n\nclass NumberedCanvas(canvas.Canvas):\n def __init__(self, *args, **kwargs):\n canvas.Canvas.__init__(self, *args, **kwargs)\n self._saved_page_states = []\n\n def showPage(self):\n self._saved_page_states.append(dict(self.__dict__))\n self._startPage()\n\n def save(self):\n \"\"\"add page info to each page (page x of y)\"\"\"\n num_pages = len(self._saved_page_states)\n for state in self._saved_page_states:\n self.__dict__.update(state)\n self.draw_page_number(num_pages)\n canvas.Canvas.showPage(self)\n canvas.Canvas.save(self)\n\n def draw_page_number(self, page_count):\n self.setFont(\"Helvetica\", 7)\n self.drawRightString(200*mm, 20*mm,\n \"Page %d of %d\" % (self._pageNumber, page_count))\n\n" ]
[ 13 ]
[]
[]
[ "python", "reportlab" ]
stackoverflow_0001087495_python_reportlab.txt
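For completeness, here is how the recipe's NumberedCanvas is usually hooked up, assuming the class definition above is in scope and reportlab is installed; 'report.pdf' and the image path are placeholders. SimpleDocTemplate.build() takes a canvasmaker argument, and Image flowables go into the story like any other flowable.
from reportlab.lib.pagesizes import A4
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.platypus import SimpleDocTemplate, Paragraph  # , Image

style = getSampleStyleSheet()['Normal']
story = [Paragraph('Paragraph %d' % i, style) for i in range(200)]
# story.append(Image('photo.jpg'))  # hypothetical image file

doc = SimpleDocTemplate('report.pdf', pagesize=A4)
doc.build(story, canvasmaker=NumberedCanvas)  # NumberedCanvas from the answer above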
Q: testing python applications that use mysql I want to write some unittests for an application that uses MySQL. However, I do not want to connect to a real mysql database, but rather to a temporary one that doesn't require any SQL server at all. Any library (I could not find anything on google)? Any design pattern? Note that DIP doesn't work since I will still have to test the injected class. A: There isn't a good way to do that. You want to run your queries against a real MySQL server, otherwise you don't know if they will work or not. However, that doesn't mean you have to run them against a production server. We have scripts that create a Unit Test database, and then tear it down once the unit tests have run. That way we don't have to maintain a static test database, but we still get to test against the real server. A: I've used python-mock and mox for such purposes (extremely lightweight tests that absolutely cannot require ANY SQL server), but for more extensive/in-depth testing, starting and populating a local instance of MySQL isn't too bad either. Unfortunately SQLite's SQL dialect and MySQL's differ enough that trying to use the former for tests is somewhat impractical, unless you're using some ORM (Django, SQLObject, SQLAlchemy, ...) that can hide the dialect differences.
testing python applications that use mysql
I want to write some unittests for an application that uses MySQL. However, I do not want to connect to a real mysql database, but rather to a temporary one that doesn't require any SQL server at all. Any library (I could not find anything on google)? Any design pattern? Note that DIP doesn't work since I will still have to test the injected class.
[ "There isn't a good way to do that. You want to run your queries against a real MySQL server, otherwise you don't know if they will work or not.\nHowever, that doesn't mean you have to run them against a production server. We have scripts that create a Unit Test database, and then tear it down once the unit tests have run. That way we don't have to maintain a static test database, but we still get to test against the real server.\n", "I've used python-mock and mox for such purposes (extremely lightweight tests that absolutely cannot require ANY SQL server), but for more extensive/in-depth testing, starting and populating a local instance of MySQL isn't too bad either.\nUnfortunately SQLite's SQL dialect and MySQL's differ enough that trying to use the former for tests is somewhat impractical, unless you're using some ORM (Django, SQLObject, SQLAlchemy, ...) that can hide the dialect differences.\n" ]
[ 12, 4 ]
[]
[]
[ "mysql", "python", "unit_testing" ]
stackoverflow_0001088077_mysql_python_unit_testing.txt
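A minimal sketch of the create-and-tear-down approach from the first answer, using MySQLdb against a local server. The host, credentials, and schema are placeholders; the point is that each test run gets a throwaway database.
import unittest
import MySQLdb

class MySQLTestCase(unittest.TestCase):
    def setUp(self):
        self.conn = MySQLdb.connect(host='localhost', user='test', passwd='test')
        cur = self.conn.cursor()
        cur.execute('CREATE DATABASE unit_test_db')
        cur.execute('USE unit_test_db')
        cur.execute('CREATE TABLE t (id INT PRIMARY KEY, name VARCHAR(32))')

    def tearDown(self):
        # drop the scratch database so the next run starts clean
        self.conn.cursor().execute('DROP DATABASE unit_test_db')
        self.conn.close()

    def test_roundtrip(self):
        cur = self.conn.cursor()
        cur.execute("INSERT INTO t VALUES (1, 'x')")
        cur.execute('SELECT name FROM t WHERE id = 1')
        self.assertEqual(cur.fetchone()[0], 'x')

if __name__ == '__main__':
    unittest.main()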
Q: Sorting a Python list by key... while checking for string OR float? Ok, I've got a list like this (just a sample of data): data = {"NAME": "James", "RANK": "3.0", "NUM": "27.5" ... } Now, if I run something like this: sortby = "NAME" //this gets passed to the function, hence why I am using a variable sortby instead data.sort(key=itemgetter(sortby)) I get all the strings sorted properly - alphabetically. However, when "sortby" is any of the floating values (RANK or NUM or any other), sort is done again, alphabetically, instead of numerically, so my sorted list looks something like this then: 0.441 101.404 107.558 107.558 108.48 108.945 11.195 12.143 12.801 131.73 which is obviously wrong. Now, how can I do a sort like that (most efficiently in terms of speed and resources/computations taken) but have it cast the value somehow to float when it's a float, and leave it as a string when it's a string... possible? And no, removing quotes from float values in the list is not an option - I don't have control over the source list, unfortunately (and I know, that would've been an easy solution). A: If you want a general function which you can pass as a parameter to sort(key=XXX), then here's a candidate complete with test: DATA = [ { 'name' : 'A', 'value' : '10.0' }, { 'name' : 'B', 'value' : '2.0' }, ] def get_attr(name): def inner_func(o): try: rv = float(o[name]) except ValueError: rv = o[name] return rv return inner_func for attrname in ('name', 'value'): DATA.sort(key=get_attr(attrname)) print "%r-sorted: %s" % (attrname, DATA) When you run the above script, you get: 'name'-sorted: [{'name': 'A', 'value': '10.0'}, {'name': 'B', 'value': '2.0'}] 'value'-sorted: [{'name': 'B', 'value': '2.0'}, {'name': 'A', 'value': '10.0'}] A: If you can't save your data properly (floats as floats), something like this works: from operator import itemgetter sorters = { "NAME" : itemgetter("NAME"), "RANK" : lambda x: float(x["RANK"]), "NUM" : lambda x: float(x["NUM"]) } data.sort(key=sorters[sortby]) A: Slightly more verbose than just passing a field name, but this is an option: sort_by_name = lambda x: x['name'] sort_by_rank = lambda x: float(x['RANK']) # etc... data.sort(key=sort_by_rank) If the data is much more dense than what you've posted, you might want a separate dictionary mapping field names to data types, and then a factory function to produce sorters suitable for the key argument to list.sort()
Sorting a Python list by key... while checking for string OR float?
Ok, I've got a list like this (just a sample of data): data = {"NAME": "James", "RANK": "3.0", "NUM": "27.5" ... } Now, if I run something like this: sortby = "NAME" //this gets passed to the function, hence why I am using a variable sortby instead data.sort(key=itemgetter(sortby)) I get all the strings sorted properly - alphabetically. However, when "sortby" is any of the floating values (RANK or NUM or any other), sort is done again, alphabetically, instead of numerically, so my sorted list looks something like this then: 0.441 101.404 107.558 107.558 108.48 108.945 11.195 12.143 12.801 131.73 which is obviously wrong. Now, how can I do a sort like that (most efficiently in terms of speed and resources/computations taken) but have it cast the value somehow to float when it's a float, and leave it as a string when it's a string... possible? And no, removing quotes from float values in the list is not an option - I don't have control over the source list, unfortunately (and I know, that would've been an easy solution).
[ "If you want a general function which you can pass as a parameter to sort(key=XXX), then here's a candidate complete with test:\nDATA = [\n { 'name' : 'A', 'value' : '10.0' },\n { 'name' : 'B', 'value' : '2.0' },\n]\n\ndef get_attr(name):\n def inner_func(o):\n try:\n rv = float(o[name])\n except ValueError:\n rv = o[name]\n return rv\n return inner_func\n\nfor attrname in ('name', 'value'):\n DATA.sort(key=get_attr(attrname))\n print \"%r-sorted: %s\" % (attrname, DATA)\n\nWhen you run the above script, you get:\n'name'-sorted: [{'name': 'A', 'value': '10.0'}, {'name': 'B', 'value': '2.0'}]\n'value'-sorted: [{'name': 'B', 'value': '2.0'}, {'name': 'A', 'value': '10.0'}]\n\n", "if you cant save your data properly ( floats as floats ), something like this\nsorters = { \"NAME\" : itemgetter(\"NAME\"), \n \"RANK\" : lambda x: float(x[\"RANK\"]),\n \"NUM\" : lambda x: float(x[\"NUM\"])\n}\n\ndata.sort(key=sorters[sortby])\n\n", "Slightly more verbose than just passing a field name, but this is an option:\nsort_by_name = lambda x: x['name']\nsort_by_rank = lambda x: float(x['RANK'])\n# etc...\n\ndata.sort(key=sort_by_rank)\n\nIf the data is much more dense than what you've posted, you might want a separate dictionary mapping field names to data types, and then a factory function to produce sorters suitable for the key argument to list.sort()\n" ]
[ 7, 4, 0 ]
[]
[]
[ "list", "python", "sorting" ]
stackoverflow_0001088392_list_python_sorting.txt
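The try-float-else-string idea from the first answer, folded into one reusable key factory for the asker's sortby pattern; the sample rows below are illustrative.
def field_key(sortby):
    """Return a sort key that compares a field numerically when possible."""
    def key(row):
        try:
            return float(row[sortby])
        except ValueError:
            return row[sortby]
    return key

data = [{'NAME': 'James', 'RANK': '3.0'}, {'NAME': 'Ann', 'RANK': '27.5'}]
data.sort(key=field_key('RANK'))
print([row['NAME'] for row in data])   # ['James', 'Ann'] -- 3.0 < 27.5, numerically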
Q: Python Libraries for FTP Upload/Download? Okay, a bit of background first: we have a service/daemon written in Python that monitors remote FTP sites. These sites are not under our command; on some of them we do NOT have del/rename/write access, and some are running extremely old FTP software, such that certain commands do not work. There is no standardization among any of these FTP sites, and they are out of our control (government). About a year ago I wrote an in-house FTP wrapper library that basically adds in stuff like resume upload/resume download/verifying files are not currently being written to, etc. The problem is, we soon found out that because so many of the FTP servers run weird/non-standard software, we were constantly fighting with the wrapper library/ftplib. Basically I've given up on ftplib. Is there an alternative? I've looked at most of the FTP alternatives; all of them are missing one or another key component of functionality. Whatever the choice is, it must run on Python 2.5.2 (we cannot change) and must run on Linux/Windows/HP-UX. Update: Sorry, I forgot to list the alternatives I looked at: ftputil: the problem is it does not support resume upload/download and stuff like partially downloading files given an offset. PycURL looked good; I'll look at it again. A: You don't mention which alternatives you've looked at already. Is ftputil one of them? http://ftputil.sschwarzer.net/trac/wiki/Documentation If you're trying to code around edge cases from various server implementations, you might be better off looking at the code used by Mozilla/Firefox. I imagine this is one of the things they have to deal with constantly. A: You may have better luck with one of the cURL bindings such as pycURL.
Python Libraries for FTP Upload/Download?
Okay, a bit of background first: we have a service/daemon written in Python that monitors remote FTP sites. These sites are not under our command; on some of them we do NOT have del/rename/write access, and some are running extremely old FTP software, such that certain commands do not work. There is no standardization among any of these FTP sites, and they are out of our control (government). About a year ago I wrote an in-house FTP wrapper library that basically adds in stuff like resume upload/resume download/verifying files are not currently being written to, etc. The problem is, we soon found out that because so many of the FTP servers run weird/non-standard software, we were constantly fighting with the wrapper library/ftplib. Basically I've given up on ftplib. Is there an alternative? I've looked at most of the FTP alternatives; all of them are missing one or another key component of functionality. Whatever the choice is, it must run on Python 2.5.2 (we cannot change) and must run on Linux/Windows/HP-UX. Update: Sorry, I forgot to list the alternatives I looked at: ftputil: the problem is it does not support resume upload/download and stuff like partially downloading files given an offset. PycURL looked good; I'll look at it again.
[ "You don't mention which alternatives you've looked at already. Is ftputil one of them?\nhttp://ftputil.sschwarzer.net/trac/wiki/Documentation\nIf you're trying to code around edge cases from various server implementations, you might be better off looking at the code used by Mozilla/Firefox. I imagine this is one of the things they have to deal with constantly.\n", "You may have better luck with one of the cURL bindings such as pycURL.\n" ]
[ 2, 1 ]
[]
[]
[ "ftp", "ftplib", "python" ]
stackoverflow_0001088518_ftp_ftplib_python.txt
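For what it's worth, plain ftplib can already resume a download via the REST offset, which covers one of the asker's missing features. A rough sketch with placeholder host/paths and no handling for servers that reject REST:
import os
from ftplib import FTP

def resume_download(host, remote_path, local_path):
    ftp = FTP(host)
    ftp.login()  # anonymous; use login(user, passwd) otherwise
    # start from however much we already have on disk
    offset = os.path.getsize(local_path) if os.path.exists(local_path) else 0
    f = open(local_path, 'ab')  # append from where we left off
    try:
        ftp.retrbinary('RETR ' + remote_path, f.write, 8192, rest=offset)
    finally:
        f.close()
        ftp.quit()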
Q: How to print output using python? When this .exe file runs it prints a screen full of information and I want to print a particular line out to the screen, here on line "6": cmd = ' -a ' + str(a) + ' -b ' + str(b) + str(Output) process = Popen(cmd, shell=True, stderr=STDOUT, stdout=PIPE) outputstring = process.communicate()[0] outputlist = outputstring.splitlines() Output = outputlist[5] print cmd This works fine: cmd = ' -a ' + str(a) + ' -b ' + str(b) This doesn't work: cmd = ' -a ' + str(a) + ' -b ' + str(b) + str(Output) I get an error saying Output isn't defined. But when I cut and paste: outputstring = process.communicate()[0] outputlist = outputstring.splitlines() Output = outputlist[5] before the cmd statement it tells me the process isn't defined. str(Output) should be what is printed on line 6 when the .exe is ran. A: You're trying to append the result of a call into the call itself. You have to run the command once without the + str(Output) part to get the output in the first place. Think about it this way. Let's say I was adding some numbers together. z = 5 + b b = z + 2 I have to define either z or b before the statements, depending on the order of the two statements. I can't use a variable before I know what it is. You're doing the same thing, using the Output variable before you define it. A: It's not supposed to be a "dance" to move things around. It's a matter of what's on the left side of the "=". If it's on the left side, it's getting created; if it's on the right side it's being used. As it is, your example can't work even a little bit because line one wants part of output, which isn't created until the end. The easiest way to understand this is to work backwards. You want to see as the final result? print output[5] Right? So to get there, you have to get this from a larger string, right? output= outputstring.splitlines() print output[5] So where did outputstring come from? It was from some subprocess. outputstring = process.communicate()[0] output= outputstring.splitlines() print output[5] So where did process come from? It was created by subprocess Popen process = Popen(cmd, shell=True, stderr=STDOUT, stdout=PIPE) outputstring = process.communicate()[0] output= outputstring.splitlines() print output[5] So where did cmd come from? I can't tell. Your example doesn't make sense on what command is being executed. cmd = ? process = Popen(cmd, shell=True, stderr=STDOUT, stdout=PIPE) outputstring = process.communicate()[0] output= outputstring.splitlines() print output[5] A: Just change your first line to: cmd = ' -a ' + str(a) + ' -b ' + str(b) and the print statement at the end to: print cmd + str(Output) This is without knowing exactly what it is you want to print... It -seems- as if your problem is trying to use Output before you actually define what the Output variable is (as the posts above) A: Like you said, a variable has to be declared before you can use it. Therefore when you call str(Output) ABOVE Output = outputlist[5], Output doesn't exist yet. You need the actually call first: cmd = ' -a ' + str(a) + ' -b ' + str(b) then you can print the output of that command: cmd_return = ' -a ' + str(a) + ' -b ' + str(b) + str(Output) should be the line directly above print cmd_return.
How to print output using python?
When this .exe file runs, it prints a screen full of information and I want to print a particular line out to the screen, here on line "6": cmd = ' -a ' + str(a) + ' -b ' + str(b) + str(Output) process = Popen(cmd, shell=True, stderr=STDOUT, stdout=PIPE) outputstring = process.communicate()[0] outputlist = outputstring.splitlines() Output = outputlist[5] print cmd This works fine: cmd = ' -a ' + str(a) + ' -b ' + str(b) This doesn't work: cmd = ' -a ' + str(a) + ' -b ' + str(b) + str(Output) I get an error saying Output isn't defined. But when I cut and paste: outputstring = process.communicate()[0] outputlist = outputstring.splitlines() Output = outputlist[5] before the cmd statement it tells me the process isn't defined. str(Output) should be what is printed on line 6 when the .exe is run.
[ "You're trying to append the result of a call into the call itself. You have to run the command once without the + str(Output) part to get the output in the first place.\nThink about it this way. Let's say I was adding some numbers together.\n z = 5 + b\n b = z + 2\n\nI have to define either z or b before the statements, depending on the order of the two statements. I can't use a variable before I know what it is. You're doing the same thing, using the Output variable before you define it.\n", "It's not supposed to be a \"dance\" to move things around. It's a matter of what's on the left side of the \"=\". If it's on the left side, it's getting created; if it's on the right side it's being used. \nAs it is, your example can't work even a little bit because line one wants part of output, which isn't created until the end.\nThe easiest way to understand this is to work backwards. You want to see as the final result?\nprint output[5]\n\nRight? So to get there, you have to get this from a larger string, right?\noutput= outputstring.splitlines()\nprint output[5]\n\nSo where did outputstring come from? It was from some subprocess.\noutputstring = process.communicate()[0]\noutput= outputstring.splitlines()\nprint output[5]\n\nSo where did process come from? It was created by subprocess Popen\nprocess = Popen(cmd, shell=True, stderr=STDOUT, stdout=PIPE)\noutputstring = process.communicate()[0]\noutput= outputstring.splitlines()\nprint output[5]\n\nSo where did cmd come from? I can't tell. Your example doesn't make sense on what command is being executed.\ncmd = ? \nprocess = Popen(cmd, shell=True, stderr=STDOUT, stdout=PIPE)\noutputstring = process.communicate()[0]\noutput= outputstring.splitlines()\nprint output[5]\n\n", "Just change your first line to:\ncmd = ' -a ' + str(a) + ' -b ' + str(b)\nand the print statement at the end to:\nprint cmd + str(Output)\nThis is without knowing exactly what it is you want to print...\nIt -seems- as if your problem is trying to use Output before you actually define what the Output variable is (as the posts above)\n", "Like you said, a variable has to be declared before you can use it. Therefore when you call str(Output) ABOVE Output = outputlist[5], Output doesn't exist yet. You need the actually call first:\ncmd = ' -a ' + str(a) + ' -b ' + str(b)\n\nthen you can print the output of that command:\ncmd_return = ' -a ' + str(a) + ' -b ' + str(b) + str(Output)\n\nshould be the line directly above print cmd_return.\n" ]
[ 5, 1, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001088764_python.txt
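Putting the answers together as one runnable shape: run the command first, then use its sixth line. 'tool.exe' and the flags are placeholders for the asker's program.
from subprocess import Popen, PIPE, STDOUT

a, b = 1, 2  # whatever the real arguments are
cmd = 'tool.exe -a %s -b %s' % (a, b)

process = Popen(cmd, shell=True, stderr=STDOUT, stdout=PIPE)
outputlist = process.communicate()[0].splitlines()
Output = outputlist[5]        # "line 6" is index 5

print(cmd + ' ' + str(Output))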
Q: Decorator to mark a method to be executed no more than once even if called several times I will go straight to the example: class Foo: @execonce def initialize(self): print 'Called' >>> f1 = Foo() >>> f1.initialize() Called >>> f1.initialize() >>> f2 = Foo() >>> f2.initialize() Called >>> f2.initialize() >>> I tried to define execonce but could not write one that works with methods. PS: I cannot define the code in __init__ for initialize has to be called sometime after the object is initialized. cf - cmdln issue 13 A: import functools def execonce(f): @functools.wraps(f) def donothing(*a, **k): pass @functools.wraps(f) def doit(self, *a, **k): try: return f(self, *a, **k) finally: setattr(self, f.__name__, donothing) return doit A: You could do something like this: class Foo: def __init__(self): self.initialize_called = False def initialize(self): if self.initialize_called: return self.initialize_called = True print 'Called' This is straightforward and easy to read. There is another instance variable and some code required in the __init__ function, but it would satisfy your requirements. A: Try something similar to this: def foo(): try: foo.called except: print "called" foo.called = True Methods and functions are objects. You can add attributes to them. This can be useful for your case. If you want a decorator, just have the decorator allocate the method but first check the flag. If the flag is found, a null method is returned and consequently executed.
Decorator to mark a method to be executed no more than once even if called several times
I will go straight to the example: class Foo: @execonce def initialize(self): print 'Called' >>> f1 = Foo() >>> f1.initialize() Called >>> f1.initialize() >>> f2 = Foo() >>> f2.initialize() Called >>> f2.initialize() >>> I tried to define execonce but could not write one that works with methods. PS: I cannot define the code in __init__ for initialize has to be called sometime after the object is initialized. cf - cmdln issue 13
[ "import functools\n\ndef execonce(f):\n\n @functools.wraps(f)\n def donothing(*a, **k):\n pass\n\n @functools.wraps(f)\n def doit(self, *a, **k):\n try:\n return f(self, *a, **k)\n finally:\n setattr(self, f.__name__, donothing)\n\n return doit\n\n", "You could do something like this:\nclass Foo:\n def __init__(self):\n self.initialize_called = False\n def initialize(self):\n if self.initalize_called:\n return\n self.initialize_called = True\n print 'Called'\n\nThis is straightforward and easy to read. There is another instance variable and some code required in the __init__ function, but it would satisfy your requirements.\n", "try something similar to this\ndef foo():\n try:\n foo.called\n except:\n print \"called\"\n foo.called = True\n\nmethods and functions are objects. you can add methods and attributes on them. This can be useful for your case. If you want a decorator, just have the decorator allocate the method but first, check the flag. If the flag is found, a null method is returned and consequently executed.\n" ]
[ 6, 0, 0 ]
[]
[]
[ "class", "decorator", "methods", "python" ]
stackoverflow_0001089023_class_decorator_methods_python.txt
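A quick check of the accepted execonce decorator (assumes the definition above is in scope); it confirms the once-per-instance behaviour the question asks for.
class Foo(object):
    @execonce
    def initialize(self):
        print('Called')

f1 = Foo()
f1.initialize()   # prints 'Called'
f1.initialize()   # does nothing: the instance attribute now shadows the method
f2 = Foo()
f2.initialize()   # prints 'Called' again -- the state is per instance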
Q: Pythonic way to log specific things to a file? I understand that Python loggers cannot be instantiated directly, as the documentation suggests: Note that Loggers are never instantiated directly, but always through the module-level function logging.getLogger(name) .. which is reasonable, as you are expected not to create logger objects for every class/module for there is a better alternative. However, there are cases where I want to create a logger object and attach a file to it exclusively for logging some app-specific output to that file; and then close the log file. For instance, I have a program that builds all packages in PyPI. So basically assume there is a for loop going over every package. Inside the loop, I want to "create" a logger, attach a file handler (eg: /var/logs/pypi/django/20090302_1324.build.log and send the output of python setup.py build (along with other things) to this log file. Once that is done, I want to close/destroy the logger and continue building other packages in similar fashion. So you see .. the normal Pythonic way of calling logging.getLogger does not apply here. One needs to create temporary logger objects. Currently, I achieve this by passing the file name itself as the logger name: >>> packagelog = logging.getLogger('/var/..../..34.log') >>> # attach handler, etc.. I want to ask .. is there a better way to do this? A: Instead of many loggers, you could use one logger and many handlers. For example: log = logging.getLogger(name) while some_condition: try: handler = make_handler(filename) log.addHandler(handler) # do something and log finally: log.removeHandler(handler) handler.close() A: There are two issues here: Being able to direct output to different log files during different phases of the process. Being able to redirect stdout/stderr of arbitrary commands to those same log files. For point 1, I would go along with ars's answer: he's spot on about just using multiple handlers and one logger. his formatting is a little messed up so I'll reiterate below: logger = logging.getLogger("pypibuild") now_as_string = datetime.datetime.utcnow().strftime("%Y%m%d_%H%M") for package in get_pypi_packages(): fn = '/var/logs/pypi/%s/%s.log' % (package, now_as_string) h = logging.FileHandler(fn, 'w') logger.addHandler(h) perform_build(package) logger.removeHandler(h) h.close() As for point 2, the perform_build() step, I'll assume for simplicity's sake that we don't need to worry about a multicore environment. Then, the subprocess module is your friend. In the snippet below, I've left out error handling, fancy formatting and a couple of other niceties, but it should give you a fair idea. def perform_build(package): logger.debug("Starting build for package %r", package) command_line = compute_command_line_for_package(package) process = subprocess.Popen(command_line, shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE) stdout, stderr = process.communicate() logger.debug("Build stdout contents: %r", stdout) logger.debug("Build stderr contents: %r", stderr) logger.debug("Finished build for package %r", package) That's about it. A: Assuming you're calling to setup.py build as a subprocess I think you really just want output redirection, which you can get via the subprocess invocation. from subprocess import Popen with open('/var/logs/pypi/django/%s.build.log' % time_str, 'w') as fh: Popen('python setup.py build'.split(), stdout=fh, stderr=fh).communicate() If you're calling setup.py build as a Python subroutine (i.e. 
importing that module and invoking its main routine) then you could try to add another logging.Handler (FileHandler) to a logger in that module if such a logger exists. Update: Per the answer comment, it sounds like you just want to add a new FileHandler to your current module's logger, then log things into that, then remove it from the logger later on. Is that more what you're looking for?
Pythonic way to log specific things to a file?
I understand that Python loggers cannot be instantiated directly, as the documentation suggests: Note that Loggers are never instantiated directly, but always through the module-level function logging.getLogger(name) .. which is reasonable, as you are expected not to create logger objects for every class/module for there is a better alternative. However, there are cases where I want to create a logger object and attach a file to it exclusively for logging some app-specific output to that file; and then close the log file. For instance, I have a program that builds all packages in PyPI. So basically assume there is a for loop going over every package. Inside the loop, I want to "create" a logger, attach a file handler (eg: /var/logs/pypi/django/20090302_1324.build.log and send the output of python setup.py build (along with other things) to this log file. Once that is done, I want to close/destroy the logger and continue building other packages in similar fashion. So you see .. the normal Pythonic way of calling logging.getLogger does not apply here. One needs to create temporary logger objects. Currently, I achieve this by passing the file name itself as the logger name: >>> packagelog = logging.getLogger('/var/..../..34.log') >>> # attach handler, etc.. I want to ask .. is there a better way to do this?
[ "Instead of many loggers, you could use one logger and many handlers. For example:\nlog = logging.getLogger(name)\nwhile some_condition:\n try:\n handler = make_handler(filename)\n log.addHandler(handler)\n # do something and log\n\n finally:\n log.removeHandler(handler)\n handler.close()\n\n", "There are two issues here:\n\nBeing able to direct output to different log files during different phases of the process.\nBeing able to redirect stdout/stderr of arbitrary commands to those same log files.\n\nFor point 1, I would go along with ars's answer: he's spot on about just using multiple handlers and one logger. his formatting is a little messed up so I'll reiterate below:\nlogger = logging.getLogger(\"pypibuild\")\nnow_as_string = datetime.datetime.utcnow().strftime(\"%Y%m%d_%H%M\")\nfor package in get_pypi_packages():\n fn = '/var/logs/pypi/%s/%s.log' % (package, now_as_string)\n h = logging.FileHandler(fn, 'w')\n logger.addHandler(h)\n perform_build(package)\n logger.removeHandler(h)\n h.close()\n\nAs for point 2, the perform_build() step, I'll assume for simplicity's sake that we don't need to worry about a multicore environment. Then, the subprocess module is your friend. In the snippet below, I've left out error handling, fancy formatting and a couple of other niceties, but it should give you a fair idea.\ndef perform_build(package):\n logger.debug(\"Starting build for package %r\", package)\n command_line = compute_command_line_for_package(package)\n process = subprocess.Popen(command_line, shell=True,\n stdin=PIPE, stdout=PIPE, stderr=PIPE)\n stdout, stderr = process.communicate()\n logger.debug(\"Build stdout contents: %r\", stdout)\n logger.debug(\"Build stderr contents: %r\", stderr)\n logger.debug(\"Finished build for package %r\", package)\n\nThat's about it.\n", "Assuming you're calling to setup.py build as a subprocess I think you really just want output redirection, which you can get via the subprocess invocation.\nfrom subprocess import Popen\nwith open('/var/logs/pypi/django/%s.build.log' % time_str, 'w') as fh:\n Popen('python setup.py build'.split(), stdout=fh, stderr=fh).communicate()\n\nIf you're calling setup.py build as a Python subroutine (i.e. importing that module and invoking it's main routine) then you could try to add another logging.Handler (FileHandler) to a logger in that module if such a logger exists.\nUpdate\nPer answer comment, it sounds like you just want to add a new FileHandler to your current module's logger, then log things into that, then remove it from the logger later on. Is that more what you're looking for?\n" ]
[ 4, 1, 0 ]
[]
[]
[ "logging", "python" ]
stackoverflow_0001089269_logging_python.txt
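The add-then-remove handler dance from the answers, packaged as a context manager so it cannot be forgotten on an exception path. This is a sketch; the logger name and file path are illustrative, and Python 2.5 would also need `from __future__ import with_statement`.
import logging
from contextlib import contextmanager

@contextmanager
def log_to_file(logger, filename):
    handler = logging.FileHandler(filename, 'w')
    logger.addHandler(handler)
    try:
        yield logger
    finally:
        # always detach and close, even if the body raises
        logger.removeHandler(handler)
        handler.close()

log = logging.getLogger('pypibuild')
log.setLevel(logging.DEBUG)
with log_to_file(log, '/tmp/django.build.log'):
    log.debug('building...')   # goes to the temporary file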
Q: App Engine db.model reference question How can I get at the Labels data from within my Task model? class Task(db.Model): title = db.StringProperty() class Label(db.Model): name = db.StringProperty() class Tasklabel(db.Model): task = db.ReferenceProperty(Task) label = db.ReferenceProperty(Label) creating the association is no problem, but how can I get at the labels associated with a task like: task = Task.get('...') for label in task.labels A: This worked for me with your current datamodel: taskObject = db.Query(Task).get() for item in taskObject.tasklabel_set: item.label.name Or you could remove the Label class and just do a one-to-many relationship between Task and TaskLabel: class Task(db.Model): title = db.StringProperty() class TaskLabel(db.Model): task = db.ReferenceProperty(Task) label = db.StringProperty() Then taskObject = db.Query(Task).get() for item in taskObject.tasklabel_set: item.label Here is a tip from the Google article on modeling relationships in the datastore By defining it as a ReferenceProperty, you have created a property that can only be assigned values of type 'Task'. Every time you define a reference property, it creates an implicit collection property on the referenced class. By default, this collection is called _set. In this case, it would make a property Task.tasklabel_set. The article can be found here. I also recommend playing around with this code in the interactive console on the dev appserver. A: Don't you want a ListProperty on Task like this to do a many-to-many? class Label(db.Model) name = db.StringProperty() @property def members(self): return Task.gql("WHERE labels = :1", self.key()) class Task(db.Model) title = db.StringProperty(); labels = db.ListProperty(db.Key) Then you could do foo_label = Label.gql("WHERE name = 'foo'").get() task1 = Task.gql("WHERE title = 'task 1'").get() if foo_label.key() not in task1.labels: task1.labels.append(foo_label.key()) task1.put() There's a thorough article about modeling entity relationships on Google code. I stole the code above from this article.
App Engine db.model reference question
How can I get at the Labels data from within my Task model? class Task(db.Model): title = db.StringProperty() class Label(db.Model): name = db.StringProperty() class Tasklabel(db.Model): task = db.ReferenceProperty(Task) label = db.ReferenceProperty(Label) creating the association is no problem, but how can I get at the labels associated with a task like: task = Task.get('...') for label in task.labels
[ "This worked for me with your current datamodel:\ntaskObject = db.Query(Task).get()\nfor item in taskObject.tasklabel_set:\n item.label.name\n\nOr you could remove the Label class and just do a one-to-many relationship between Task and TaskLabel:\nclass Task(db.Model):\n title = db.StringProperty()\n\nclass TaskLabel(db.Model):\n task = db.ReferenceProperty(Task)\n label = db.StringProperty()\n\nThen\ntaskObject = db.Query(Task).get()\nfor item in taskObject.tasklabel_set:\n item.label\n\nHere is a tip from the Google article on modeling relationships in the datastore\n\nBy defining it as a ReferenceProperty, you have created a property that can only be assigned values of type 'Task'. Every time you define a reference property, it creates an implicit collection property on the referenced class. By default, this collection is called _set. In this case, it would make a property Task.tasklabel_set.\n\nThe article can be found here.\nI also recommend playing around with this code in the interactive console on the dev appserver.\n", "Don't you want a ListProperty on Task like this to do a many-to-many?\nclass Label(db.Model)\n name = db.StringProperty()\n\n @property\n def members(self):\n return Task.gql(\"WHERE labels = :1\", self.key())\n\nclass Task(db.Model)\n title = db.StringProperty();\n labels = db.ListProperty(db.Key)\n\nThen you could do\nfoo_label = Label.gql(\"WHERE name = 'foo'\").get()\ntask1 = Task.gql(\"WHERE title = 'task 1'\").get()\nif foo_label.key() not in task1.labels:\n task1.labels.append(foo_label.key())\ntask1.put()\n\nThere's a thorough article about modeling entity relationships on Google code. I stole the code above from this article.\n" ]
[ 2, 1 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0001088678_google_app_engine_python.txt
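Under the ListProperty design from the second answer, fetching a task's labels needs no join at all: db.get() accepts the whole key list in one batch. This sketch assumes the Task/Label models as defined above and runs inside App Engine.
from google.appengine.ext import db

task = Task.gql("WHERE title = 'task 1'").get()
labels = db.get(task.labels)              # one batch fetch for all label keys
print([label.name for label in labels])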
Q: Django: Dynamic LOGIN_URL variable Currently, in my settings module I have this: LOGIN_URL = '/login' If I ever decide to change the login URL in urls.py, I'll have to change it here as well. Is there any more dynamic way of doing this? A: Settings IS where you are setting your dynamic login url. Make sure to import LOGIN_URL from settings.py in your urls.py and use that instead. from projectname.settings import LOGIN_URL A: This works for me ... with LOGIN_URL = '/accounts/login' If the problem is that settings.py has ... LOGIN_URL = '/login/' # <-- remember trailing slash! ... but, urls.py wants ... url(r'^login/$', auth_views.login, {'template_name': '/foo.html'}, name='auth_login'), Then do this: # - up top in the urls.py from django.conf import settings # - down below, in the list of URLs ... # - blindly remove the leading '/' & trust that you have a trailing '/' url(r'^%s$' % settings.LOGIN_URL[1:], auth_views.login, {'template_name': '/foo.html'}, name='auth_login'), If you can't trust whomever edits your settings.py ... then check LOGIN_URL startswith a slash & snip it off, or not. ... and then check for trailing slash LOGIN_URL endswith a slash & tack it on, or not ... and and then tack on the '$'
Django: Dynamic LOGIN_URL variable
Currently, in my settings module I have this: LOGIN_URL = '/login' If I ever decide to change the login URL in urls.py, I'll have to change it here as well. Is there any more dynamic way of doing this?
[ "Settings IS where you are setting your dynamic login url. Make sure to import LOGIN_URL from settings.py in your urls.py and use that instead.\nfrom projectname.settings import LOGIN_URL\n\n", "This works for me ... with LOGIN_URL = '/accounts/login'\nIf the problem is that settings.py has ...\nLOGIN_URL = '/login/' # <-- remember trailing slash!\n\n... but, urls.py wants ... \nurl(r'^login/$', \n auth_views.login, {'template_name': '/foo.html'}, \n name='auth_login'),\n\nThen do this:\n# - up top in the urls.py\nfrom django.conf import settings\n\n# - down below, in the list of URLs ...\n# - blindly remove the leading '/' & trust that you have a trailing '/'\nurl(r'^%s$' % settings.LOGIN_URL[1:], \n auth_views.login, {'template_name': '/foo.html'}, \n name='auth_login'),\n\nIf you can't trust whomever edits your settings.py\n... then check LOGIN_URL startswith a slash & snip it off, or not.\n... and then check for trailing slash LOGIN_URL endswith a slash & tack it on, or not\n... and and then tack on the '$'\n" ]
[ 7, 4 ]
[]
[]
[ "django", "django_urls", "python" ]
stackoverflow_0001088913_django_django_urls_python.txt
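A compact version of the second answer's idea, with the slash handling done defensively via strip('/'); the template name is a placeholder, the setting is assumed to contain no regex metacharacters, and the import paths match Django of this era (django.conf.urls.defaults).
from django.conf import settings
from django.conf.urls.defaults import patterns, url
from django.contrib.auth import views as auth_views

urlpatterns = patterns('',
    url(r'^%s/$' % settings.LOGIN_URL.strip('/'),
        auth_views.login, {'template_name': 'login.html'},
        name='auth_login'),
)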
Q: How to make custom PhotoEffects in Django Photologue? I'm creating an image gallery in Django using the Photologue application. There are a number of PhotoEffects that come with it. I'd like to extend these and make my own so that I can do more complicated effects such as adding drop shadows, glossy overlays, etc. Is is possible to create custom effects that Photologue can then use to process images that are uploaded? A: I'm the developer of Photologue. I would suggest you look at the 3.x branch of Photologue and more specifically, django-imagekit, the new Library it's based on: http://bitbucket.org/jdriscoll/django-imagekit/wiki/Home. One of the goals of ImageKit was to make it easier to extend Photologue. All effects and manipulations are now implemented as "Processors" which are just a class wrapping a function that takes a PIL image, does something, and returns it. These processors are then chained together in whatever configuration you like. The 3.x branch is early and has been neglected lately (I'll spare you the excuses) but it shouldn't be hard to drop in the latest release of ImageKit and have close to feature parity with Photologue 2.x. A: Looks like you could define another preset effect in the utils file, and then import it into models.py. Then you'd want to add it as an option to the PhotoEffect class in models.py. This would of course make your Photologue a bit custom to your needs though.
How to make custom PhotoEffects in Django Photologue?
I'm creating an image gallery in Django using the Photologue application. There are a number of PhotoEffects that come with it. I'd like to extend these and make my own so that I can do more complicated effects such as adding drop shadows, glossy overlays, etc. Is it possible to create custom effects that Photologue can then use to process images that are uploaded?
[ "I'm the developer of Photologue. I would suggest you look at the 3.x branch of Photologue and more specifically, django-imagekit, the new Library it's based on: http://bitbucket.org/jdriscoll/django-imagekit/wiki/Home. One of the goals of ImageKit was to make it easier to extend Photologue. All effects and manipulations are now implemented as \"Processors\" which are just a class wrapping a function that takes a PIL image, does something, and returns it. These processors are then chained together in whatever configuration you like. The 3.x branch is early and has been neglected lately (I'll spare you the excuses) but it shouldn't be hard to drop in the latest release of ImageKit and have close to feature parity with Photologue 2.x.\n", "Looks like you could define another preset effect in the utils file, and then import it into models.py. Then you'd want to add it as an option to the PhotoEffect class in models.py. This would of course make your Photologue a bit custom to your needs though.\n" ]
[ 2, 1 ]
[]
[]
[ "django", "photologue", "python", "python_imaging_library" ]
stackoverflow_0001089304_django_photologue_python_python_imaging_library.txt
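To make the 'processor' idea concrete: it is just a callable that takes a PIL image and returns a new one. The example below is plain PIL, offered only to illustrate the shape, not ImageKit's exact API; the border effect and file names are made up.
from PIL import Image, ImageOps

def add_border(image, border=8, fill='black'):
    """Toy effect: pad the image with a solid border."""
    return ImageOps.expand(image.convert('RGB'), border=border, fill=fill)

# usage sketch:
# processed = add_border(Image.open('photo.jpg'))
# processed.save('photo_bordered.jpg')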
Q: How does Python store lists internally? How are lists in Python stored internally? Is it an array? A linked list? Something else? Or does the interpreter guess at the right structure for each instance based on length, etc.? If the question is implementation dependent, what about the classic CPython? A: From Core Python Containers: Under the Hood: the list implementation is a fixed-length array of pointers. When the array grows or shrinks, it calls realloc() and, if necessary, copies all of the items to the new space. Source code: Include/listobject.h and Objects/listobject.c. BTW: here is the video
How does Python store lists internally?
How are lists in Python stored internally? Is it an array? A linked list? Something else? Or does the interpreter guess at the right structure for each instance based on length, etc.? If the question is implementation dependent, what about the classic CPython?
[ "from Core Python Containers: Under the Hood\nList Implementation:\nFixed-length array of pointers\n* When the array grows or shrinks, calls realloc() and, if necessary, copies all of the items to the new space\nsource code: Include/listobject.h and Objects/listobject.c\nbtw: here is the video\n" ]
[ 31 ]
[]
[]
[ "data_structures", "python" ]
stackoverflow_0001090104_data_structures_python.txt
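You can watch the realloc()-driven over-allocation from pure Python: sys.getsizeof (CPython 2.6+) jumps in steps rather than on every append.
import sys

lst = []
last = sys.getsizeof(lst)
for i in range(32):
    lst.append(i)
    size = sys.getsizeof(lst)
    if size != last:
        # the list grew its backing array; note it happens in chunks
        print('len=%2d -> %d bytes' % (len(lst), size))
        last = size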
Q: Python: Inflate and Deflate implementations I am interfacing with a server that requires that data sent to it is compressed with Deflate algorithm (Huffman encoding + LZ77) and also sends data that I need to Inflate. I know that Python includes Zlib, and that the C libraries in Zlib support calls to Inflate and Deflate, but these apparently are not provided by the Python Zlib module. It does provide Compress and Decompress, but when I make a call such as the following: result_data = zlib.decompress( base64_decoded_compressed_string ) I receive the following error: Error -3 while decompressing data: incorrect header check Gzip does no better; when making a call such as: result_data = gzip.GzipFile( fileobj = StringIO.StringIO( base64_decoded_compressed_string ) ).read() I receive the error: IOError: Not a gzipped file which makes sense as the data is a Deflated file not a true Gzipped file. Now I know that there is a Deflate implementation available (Pyflate), but I do not know of an Inflate implementation. It seems that there are a few options: Find an existing implementation (ideal) of Inflate and Deflate in Python Write my own Python extension to the zlib c library that includes Inflate and Deflate Call something else that can be executed from the command line (such as a Ruby script, since Inflate/Deflate calls in zlib are fully wrapped in Ruby) ? I am seeking a solution, but lacking a solution I will be thankful for insights, constructive opinions, and ideas. Additional information: The result of deflating (and encoding) a string should, for the purposes I need, give the same result as the following snippet of C# code, where the input parameter is an array of UTF bytes corresponding to the data to compress: public static string DeflateAndEncodeBase64(byte[] data) { if (null == data || data.Length < 1) return null; string compressedBase64 = ""; //write into a new memory stream wrapped by a deflate stream using (MemoryStream ms = new MemoryStream()) { using (DeflateStream deflateStream = new DeflateStream(ms, CompressionMode.Compress, true)) { //write byte buffer into memorystream deflateStream.Write(data, 0, data.Length); deflateStream.Close(); //rewind memory stream and write to base 64 string byte[] compressedBytes = new byte[ms.Length]; ms.Seek(0, SeekOrigin.Begin); ms.Read(compressedBytes, 0, (int)ms.Length); compressedBase64 = Convert.ToBase64String(compressedBytes); } } return compressedBase64; } Running this .NET code for the string "deflate and encode me" gives the result 7b0HYBxJliUmL23Ke39K9UrX4HShCIBgEyTYkEAQ7MGIzeaS7B1pRyMpqyqBymVWZV1mFkDM7Z28995777333nvvvfe6O51OJ/ff/z9cZmQBbPbOStrJniGAqsgfP358Hz8iZvl5mbV5mi1nab6cVrM8XeT/Dw== When "deflate and encode me" is run through the Python Zlib.compress() and then base64 encoded, the result is "eJxLSU3LSSxJVUjMS1FIzUvOT0lVyE0FAFXHB6k=". It is clear that zlib.compress() is not an implementation of the same algorithm as the standard Deflate algorithm. More Information: The first 2 bytes of the .NET deflate data ("7b0HY..."), after b64 decoding are 0xEDBD, which does not correspond to Gzip data (0x1f8b), BZip2 (0x425A) data, or Zlib (0x789C) data. The first 2 bytes of the Python compressed data ("eJxLS..."), after b64 decoding are 0x789C. This is a Zlib header. SOLVED To handle the raw deflate and inflate, without header and checksum, the following things needed to happen: On deflate/compress: strip the first two bytes (header) and the last four bytes (checksum). On inflate/decompress: there is a second argument for window size. 
If this value is negative it suppresses headers. here are my methods currently, including the base64 encoding/decoding - and working properly: import zlib import base64 def decode_base64_and_inflate( b64string ): decoded_data = base64.b64decode( b64string ) return zlib.decompress( decoded_data , -15) def deflate_and_base64_encode( string_val ): zlibbed_str = zlib.compress( string_val ) compressed_string = zlibbed_str[2:-4] return base64.b64encode( compressed_string ) A: You can still use the zlib module to inflate/deflate data. The gzip module uses it internally, but adds a file-header to make it into a gzip-file. Looking at the gzip.py file, something like this could work: import zlib def deflate(data, compresslevel=9): compress = zlib.compressobj( compresslevel, # level: 0-9 zlib.DEFLATED, # method: must be DEFLATED -zlib.MAX_WBITS, # window size in bits: # -15..-8: negate, suppress header # 8..15: normal # 16..30: subtract 16, gzip header zlib.DEF_MEM_LEVEL, # mem level: 1..8/9 0 # strategy: # 0 = Z_DEFAULT_STRATEGY # 1 = Z_FILTERED # 2 = Z_HUFFMAN_ONLY # 3 = Z_RLE # 4 = Z_FIXED ) deflated = compress.compress(data) deflated += compress.flush() return deflated def inflate(data): decompress = zlib.decompressobj( -zlib.MAX_WBITS # see above ) inflated = decompress.decompress(data) inflated += decompress.flush() return inflated I don't know if this corresponds exactly to whatever your server requires, but those two functions are able to round-trip any data I tried. The parameters maps directly to what is passed to the zlib library functions. Python ⇒ C zlib.compressobj(...) ⇒ deflateInit(...) compressobj.compress(...) ⇒ deflate(...) zlib.decompressobj(...) ⇒ inflateInit(...) decompressobj.decompress(...) ⇒ inflate(...) The constructors create the structure and populate it with default values, and pass it along to the init-functions. The compress/decompress methods update the structure and pass it to inflate/deflate. A: This is an add-on to MizardX's answer, giving some explanation and background. See http://www.chiramattel.com/george/blog/2007/09/09/deflatestream-block-length-does-not-match.html According to RFC 1950, a zlib stream constructed in the default manner is composed of: a 2-byte header (e.g. 0x78 0x9C) a deflate stream -- see RFC 1951 an Adler-32 checksum of the uncompressed data (4 bytes) The C# DeflateStream works on (you guessed it) a deflate stream. MizardX's code is telling the zlib module that the data is a raw deflate stream. Observations: (1) One hopes the C# "deflation" method producing a longer string happens only with short input (2) Using the raw deflate stream without the Adler-32 checksum? Bit risky, unless replaced with something better. Updates error message Block length does not match with its complement If you are trying to inflate some compressed data with the C# DeflateStream and you get that message, then it is quite possible that you are giving it a a zlib stream, not a deflate stream. See How do you use a DeflateStream on part of a file? Also copy/paste the error message into a Google search and you will get numerous hits (including the one up the front of this answer) saying much the same thing. The Java Deflater ... used by "the website" ... C# DeflateStream "is pretty straightforward and has been tested against the Java implementation". Which of the following possible Java Deflater constructors is the website using? public Deflater(int level, boolean nowrap) Creates a new compressor using the specified compression level. 
If 'nowrap' is true then the ZLIB header and checksum fields will not be used in order to support the compression format used in both GZIP and PKZIP. public Deflater(int level) Creates a new compressor using the specified compression level. Compressed data will be generated in ZLIB format. public Deflater() Creates a new compressor with the default compression level. Compressed data will be generated in ZLIB format. A one-line deflater after throwing away the 2-byte zlib header and the 4-byte checksum: uncompressed_string.encode('zlib')[2:-4] # does not work in Python 3.x or zlib.compress(uncompressed_string)[2:-4]
Python: Inflate and Deflate implementations
I am interfacing with a server that requires that data sent to it is compressed with Deflate algorithm (Huffman encoding + LZ77) and also sends data that I need to Inflate. I know that Python includes Zlib, and that the C libraries in Zlib support calls to Inflate and Deflate, but these apparently are not provided by the Python Zlib module. It does provide Compress and Decompress, but when I make a call such as the following: result_data = zlib.decompress( base64_decoded_compressed_string ) I receive the following error: Error -3 while decompressing data: incorrect header check Gzip does no better; when making a call such as: result_data = gzip.GzipFile( fileobj = StringIO.StringIO( base64_decoded_compressed_string ) ).read() I receive the error: IOError: Not a gzipped file which makes sense as the data is a Deflated file not a true Gzipped file. Now I know that there is a Deflate implementation available (Pyflate), but I do not know of an Inflate implementation. It seems that there are a few options: Find an existing implementation (ideal) of Inflate and Deflate in Python Write my own Python extension to the zlib c library that includes Inflate and Deflate Call something else that can be executed from the command line (such as a Ruby script, since Inflate/Deflate calls in zlib are fully wrapped in Ruby) ? I am seeking a solution, but lacking a solution I will be thankful for insights, constructive opinions, and ideas. Additional information: The result of deflating (and encoding) a string should, for the purposes I need, give the same result as the following snippet of C# code, where the input parameter is an array of UTF bytes corresponding to the data to compress: public static string DeflateAndEncodeBase64(byte[] data) { if (null == data || data.Length < 1) return null; string compressedBase64 = ""; //write into a new memory stream wrapped by a deflate stream using (MemoryStream ms = new MemoryStream()) { using (DeflateStream deflateStream = new DeflateStream(ms, CompressionMode.Compress, true)) { //write byte buffer into memorystream deflateStream.Write(data, 0, data.Length); deflateStream.Close(); //rewind memory stream and write to base 64 string byte[] compressedBytes = new byte[ms.Length]; ms.Seek(0, SeekOrigin.Begin); ms.Read(compressedBytes, 0, (int)ms.Length); compressedBase64 = Convert.ToBase64String(compressedBytes); } } return compressedBase64; } Running this .NET code for the string "deflate and encode me" gives the result 7b0HYBxJliUmL23Ke39K9UrX4HShCIBgEyTYkEAQ7MGIzeaS7B1pRyMpqyqBymVWZV1mFkDM7Z28995777333nvvvfe6O51OJ/ff/z9cZmQBbPbOStrJniGAqsgfP358Hz8iZvl5mbV5mi1nab6cVrM8XeT/Dw== When "deflate and encode me" is run through the Python Zlib.compress() and then base64 encoded, the result is "eJxLSU3LSSxJVUjMS1FIzUvOT0lVyE0FAFXHB6k=". It is clear that zlib.compress() is not an implementation of the same algorithm as the standard Deflate algorithm. More Information: The first 2 bytes of the .NET deflate data ("7b0HY..."), after b64 decoding are 0xEDBD, which does not correspond to Gzip data (0x1f8b), BZip2 (0x425A) data, or Zlib (0x789C) data. The first 2 bytes of the Python compressed data ("eJxLS..."), after b64 decoding are 0x789C. This is a Zlib header. SOLVED To handle the raw deflate and inflate, without header and checksum, the following things needed to happen: On deflate/compress: strip the first two bytes (header) and the last four bytes (checksum). On inflate/decompress: there is a second argument for window size. If this value is negative it suppresses headers. 
Here are my methods currently, including the base64 encoding/decoding, and working properly: import zlib import base64 def decode_base64_and_inflate( b64string ): decoded_data = base64.b64decode( b64string ) return zlib.decompress( decoded_data , -15) def deflate_and_base64_encode( string_val ): zlibbed_str = zlib.compress( string_val ) compressed_string = zlibbed_str[2:-4] return base64.b64encode( compressed_string )
[ "You can still use the zlib module to inflate/deflate data. The gzip module uses it internally, but adds a file-header to make it into a gzip-file. Looking at the gzip.py file, something like this could work:\nimport zlib\n\ndef deflate(data, compresslevel=9):\n compress = zlib.compressobj(\n compresslevel, # level: 0-9\n zlib.DEFLATED, # method: must be DEFLATED\n -zlib.MAX_WBITS, # window size in bits:\n # -15..-8: negate, suppress header\n # 8..15: normal\n # 16..30: subtract 16, gzip header\n zlib.DEF_MEM_LEVEL, # mem level: 1..8/9\n 0 # strategy:\n # 0 = Z_DEFAULT_STRATEGY\n # 1 = Z_FILTERED\n # 2 = Z_HUFFMAN_ONLY\n # 3 = Z_RLE\n # 4 = Z_FIXED\n )\n deflated = compress.compress(data)\n deflated += compress.flush()\n return deflated\n\ndef inflate(data):\n decompress = zlib.decompressobj(\n -zlib.MAX_WBITS # see above\n )\n inflated = decompress.decompress(data)\n inflated += decompress.flush()\n return inflated\n\nI don't know if this corresponds exactly to whatever your server requires, but those two functions are able to round-trip any data I tried.\nThe parameters maps directly to what is passed to the zlib library functions.\nPython ⇒ C\nzlib.compressobj(...) ⇒ deflateInit(...)\ncompressobj.compress(...) ⇒ deflate(...)\nzlib.decompressobj(...) ⇒ inflateInit(...)\ndecompressobj.decompress(...) ⇒ inflate(...)\nThe constructors create the structure and populate it with default values, and pass it along to the init-functions.\nThe compress/decompress methods update the structure and pass it to inflate/deflate.\n", "This is an add-on to MizardX's answer, giving some explanation and background.\nSee http://www.chiramattel.com/george/blog/2007/09/09/deflatestream-block-length-does-not-match.html\nAccording to RFC 1950, a zlib stream constructed in the default manner is composed of:\n\na 2-byte header (e.g. 0x78 0x9C) \na deflate stream -- see RFC 1951 \nan Adler-32 checksum of the uncompressed data (4 bytes) \n\nThe C# DeflateStream works on (you guessed it) a deflate stream. MizardX's code is telling the zlib module that the data is a raw deflate stream.\nObservations: (1) One hopes the C# \"deflation\" method producing a longer string happens only with short input (2) Using the raw deflate stream without the Adler-32 checksum? Bit risky, unless replaced with something better.\nUpdates\nerror message Block length does not match with its complement\nIf you are trying to inflate some compressed data with the C# DeflateStream and you get that message, then it is quite possible that you are giving it a a zlib stream, not a deflate stream. \nSee How do you use a DeflateStream on part of a file?\nAlso copy/paste the error message into a Google search and you will get numerous hits (including the one up the front of this answer) saying much the same thing.\nThe Java Deflater ... used by \"the website\" ... C# DeflateStream \"is pretty straightforward and has been tested against the Java implementation\". Which of the following possible Java Deflater constructors is the website using?\n\npublic Deflater(int level, boolean nowrap)\nCreates a new compressor using the specified compression level. If 'nowrap' is true then the ZLIB header and checksum fields will not be used in order to support the compression format used in both GZIP and PKZIP.\npublic Deflater(int level)\nCreates a new compressor using the specified compression level. Compressed data will be generated in ZLIB format.\npublic Deflater()\nCreates a new compressor with the default compression level. 
Compressed data will be generated in ZLIB format. \n\nA one-line deflater after throwing away the 2-byte zlib header and the 4-byte checksum:\n\nuncompressed_string.encode('zlib')[2:-4] # does not work in Python 3.x\n\nor\n\nzlib.compress(uncompressed_string)[2:-4]\n\n" ]
[ 29, 25 ]
[]
[]
[ "c#", "compression", "python", "zlib" ]
stackoverflow_0001089662_c#_compression_python_zlib.txt
Q: Python remove all lines which have common value in fields
I have lines of data comprising 4 fields
aaaa bbb1 cccc dddd
aaaa bbb2 cccc dddd
aaaa bbb3 cccc eeee
aaaa bbb4 cccc ffff
aaaa bbb5 cccc gggg
aaaa bbb6 cccc dddd

Please bear with me. The first and third fields are always the same - but I don't need them; the 4th field can be the same or different. The thing is, I only want the 2nd and 4th fields from lines which don't share the common field, for example like this from the above data:
bbb3 eeee
bbb4 ffff
bbb5 gggg

Now I don't mean deduplication, as that would leave one of the entries in. If the 4th field shares a value with another line, I don't want any line which ever had that value.
Humblest apologies once again for asking what is probably simple.
A: Here you go:
from collections import defaultdict

LINES = """\
aaaa bbb1 cccc dddd
aaaa bbb2 cccc dddd
aaaa bbb3 cccc eeee
aaaa bbb4 cccc ffff
aaaa bbb5 cccc gggg
aaaa bbb6 cccc dddd""".split('\n')

# Count how many lines each unique value of the fourth field appears in.
d_counts = defaultdict(int)
for line in LINES:
    a, b, c, d = line.split()
    d_counts[d] += 1

# Print only those lines with a unique value for the fourth field.
for line in LINES:
    a, b, c, d = line.split()
    if d_counts[d] == 1:
        print b, d

# Prints
# bbb3 eeee
# bbb4 ffff
# bbb5 gggg

A: For your amplified requirement, you can avoid reading the file twice or saving it in a list:
LINES = """\
aaaa bbb1 cccc dddd
aaaa bbb2 cccc dddd
aaaa bbb3 cccc eeee
aaaa bbb4 cccc ffff
aaaa bbb5 cccc gggg
aaaa bbb6 cccc dddd""".split('\n')

import collections
adict = collections.defaultdict(list)
for line in LINES:  # or file ...
    a, b, c, d = line.split()
    adict[d].append(b)

map_b_to_d = dict((blist[0], d) for d, blist in adict.items() if len(blist) == 1)
print(map_b_to_d)

# alternative; saves some memory

xdict = {}
duplicated = object()
for line in LINES:  # or file ...
    a, b, c, d = line.split()
    xdict[d] = duplicated if d in xdict else b

map_b_to_d2 = dict((b, d) for d, b in xdict.items() if b is not duplicated)
print(map_b_to_d2)
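If the data actually lives in a file rather than a string, the same two-pass counting idea carries over directly; this is a sketch under that assumption, with 'data.txt' as a placeholder file name:
from collections import defaultdict

def unique_pairs(path):
    # First pass: count occurrences of each 4th-field value.
    counts = defaultdict(int)
    with open(path) as f:
        for line in f:
            counts[line.split()[3]] += 1
    # Second pass: emit (2nd, 4th) only where the 4th field is unique.
    with open(path) as f:
        for line in f:
            a, b, c, d = line.split()
            if counts[d] == 1:
                yield b, d

for b, d in unique_pairs('data.txt'):
    print b, d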
Python remove all lines which have common value in fields
I have lines of data comprising 4 fields
aaaa bbb1 cccc dddd
aaaa bbb2 cccc dddd
aaaa bbb3 cccc eeee
aaaa bbb4 cccc ffff
aaaa bbb5 cccc gggg
aaaa bbb6 cccc dddd

Please bear with me. The first and third fields are always the same - but I don't need them; the 4th field can be the same or different. The thing is, I only want the 2nd and 4th fields from lines which don't share the common field, for example like this from the above data:
bbb3 eeee
bbb4 ffff
bbb5 gggg

Now I don't mean deduplication, as that would leave one of the entries in. If the 4th field shares a value with another line, I don't want any line which ever had that value.
Humblest apologies once again for asking what is probably simple.
[ "Here you go:\nfrom collections import defaultdict\n\nLINES = \"\"\"\\\naaaa bbb1 cccc dddd\naaaa bbb2 cccc dddd\naaaa bbb3 cccc eeee\naaaa bbb4 cccc ffff\naaaa bbb5 cccc gggg\naaaa bbb6 cccc dddd\"\"\".split('\\n')\n\n# Count how many lines each unique value of the fourth field appears in.\nd_counts = defaultdict(int)\nfor line in LINES:\n a, b, c, d = line.split()\n d_counts[d] += 1\n\n# Print only those lines with a unique value for the fourth field.\nfor line in LINES:\n a, b, c, d = line.split()\n if d_counts[d] == 1:\n print b, d\n\n# Prints\n# bbb3 eeee\n# bbb4 ffff\n# bbb5 gggg\n\n", "For your amplified requirement, you can avoid reading the file twice or saving it in a list:\nLINES = \"\"\"\\\naaaa bbb1 cccc dddd\naaaa bbb2 cccc dddd\naaaa bbb3 cccc eeee\naaaa bbb4 cccc ffff\naaaa bbb5 cccc gggg\naaaa bbb6 cccc dddd\"\"\".split('\\n')\n\nimport collections\nadict = collections.defaultdict(list)\nfor line in LINES: # or file ...\n a, b, c, d = line.split()\n adict[d].append(b)\n\nmap_b_to_d = dict((blist[0], d) for d, blist in adict.items() if len(blist) == 1)\nprint(map_b_to_d)\n\n# alternative; saves some memory\n\nxdict = {}\nduplicated = object()\nfor line in LINES: # or file ...\n a, b, c, d = line.split()\n xdict[d] = duplicated if d in xdict else b\n\nmap_b_to_d2 = dict((b, d) for d, b in xdict.items() if b is not duplicated)\nprint(map_b_to_d2)\n\n" ]
[ 6, 0 ]
[]
[]
[ "duplicate_removal", "python" ]
stackoverflow_0001089550_duplicate_removal_python.txt
Q: Special (magic) methods in Python
What are all the special (magic) methods in Python? The __xxx__ methods, that is. I'm often looking for a way to override something which I know is possible to do through one of these methods, but I'm having a hard time finding out how, since as far as I can tell there is no definitive list of these methods, PLUS their names are not really Google friendly. So I think having a list of those here on SO would be a good idea.
A: At the python level, most of them are documented in the language reference. At the C level, you can find it under the object protocol section (strictly speaking, you only have a subset here, though).
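To make the idea concrete, here is a toy class of my own (not taken from the references above) showing how two of the __xxx__ methods hook in; the interpreter calls them implicitly for + and for printing:
class Money(object):
    def __init__(self, amount):
        self.amount = amount
    def __add__(self, other):    # invoked by: Money(2) + Money(3)
        return Money(self.amount + other.amount)
    def __repr__(self):          # invoked by repr() and, absent __str__, by print
        return 'Money(%r)' % self.amount

print Money(2) + Money(3)        # prints Money(5)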
Special (magic) methods in Python
What are all the special (magic) methods in Python? The __xxx__ methods, that is. I'm often looking for a way to override something which I know is possible to do through one of these methods, but I'm having a hard time finding out how, since as far as I can tell there is no definitive list of these methods, PLUS their names are not really Google friendly. So I think having a list of those here on SO would be a good idea.
[ "At the python level, most of them are documented in the language reference. At the C level, you can find it under the object protocol section (strictly speaking, you only have a subset here, though).\n" ]
[ 52 ]
[]
[]
[ "python" ]
stackoverflow_0001090620_python.txt
Q: Python operators
I have been learning Python for the past few days, and I have written this piece of code to evaluate a postfix expression.
postfix_expression = "34*34*+"
stack = []
for char in postfix_expression:
    try:
        char = int(char)
        stack.append(char)
    except ValueError:
        if char == '+':
            stack.append(stack.pop() + stack.pop())
        elif char == '-':
            stack.append(stack.pop() - stack.pop())
        elif char == '*':
            stack.append(stack.pop() * stack.pop())
        elif char == '/':
            stack.append(stack.pop() / stack.pop())
print stack.pop()

Is there a way I can avoid that huge if-else block? As in, is there a module that takes a mathematical operator in string form and invokes the corresponding mathematical operator, or some Python idiom that makes this simple?
A: The operator module has functions that implement the standard arithmetic operators. With that, you can set up a mapping like:
import operator

OperatorFunctions = {
    '+': operator.add,
    '-': operator.sub,
    '*': operator.mul,
    '/': operator.div,
    # etc
}

Then your main loop can look something like this:
for char in postfix_expression:
    if char in OperatorFunctions:
        stack.append(OperatorFunctions[char](stack.pop(), stack.pop()))
    else:
        stack.append(char)

You will want to take care to ensure that the operands to subtraction and division are popped off the stack in the correct order.
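Putting the answer's pieces together, a complete evaluator might look like the sketch below (my own assembly, not the answer's verbatim code); note that the right operand is popped first, which is exactly the ordering pitfall mentioned for - and /:
import operator

OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': operator.div}

def evaluate(postfix_expression):
    stack = []
    for char in postfix_expression:
        if char in OPS:
            right = stack.pop()   # popped first: the right-hand operand
            left = stack.pop()
            stack.append(OPS[char](left, right))
        else:
            stack.append(int(char))
    return stack.pop()

print evaluate("34*34*+")   # (3*4) + (3*4) = 24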
Python operators
I have been learning Python for the past few days, and I have written this piece of code to evaluate a postfix expression.
postfix_expression = "34*34*+"
stack = []
for char in postfix_expression:
    try:
        char = int(char)
        stack.append(char)
    except ValueError:
        if char == '+':
            stack.append(stack.pop() + stack.pop())
        elif char == '-':
            stack.append(stack.pop() - stack.pop())
        elif char == '*':
            stack.append(stack.pop() * stack.pop())
        elif char == '/':
            stack.append(stack.pop() / stack.pop())
print stack.pop()

Is there a way I can avoid that huge if-else block? As in, is there a module that takes a mathematical operator in string form and invokes the corresponding mathematical operator, or some Python idiom that makes this simple?
[ "The operator module has functions that implement the standard arithmetic operators. With that, you can set up a mapping like:\nOperatorFunctions = {\n '+': operator.add,\n '-': operator.sub,\n '*': operator.mul,\n '/': operator.div,\n # etc\n}\n\nThen your main loop can look something like this:\nfor char in postfix_expression:\n if char in OperatorFunctions:\n stack.append(OperatorFunctions[char](stack.pop(), stack.pop()))\n else:\n stack.append(char)\n\nYou will want to take care to ensure that the operands to subtraction and division are popped off the stack in the correct order.\n" ]
[ 17 ]
[ "Just use eval along with string generation:\npostfix_expression = \"34*34*+\"\nstack = []\nfor char in postfix_expression:\n if char in '+-*/':\n expression = '%d%s%d' % (stack.pop(), char, stack.pop())\n stack.append(eval(expression))\n else:\n stack.append(int(char))\nprint stack.pop()\n\nEDIT: made an even nicer version without the exception handling.\n", "# This code is untested\nfrom operator import add, sub, mul, div\n# read the docs; this is a tiny part of the operator module\n\ndespatcher = {\n '+': add,\n '-': sub,\n # etc\n }\n\nopfunc = despatcher[op_char]\noperand2 = stack.pop()\nstack[-1] = opfunc(stack[-1], operand2)\n\n" ]
[ -1, -1 ]
[ "operators", "python" ]
stackoverflow_0001090863_operators_python.txt
Q: Drawbacks of storing an integer as a string in a database
I have id values for products that I need to store. Right now they are all integers, but I'm not sure if the data provider in the future will introduce letters or symbols into that mix, so I'm debating whether to store it now as integer or string. Are there performance or other disadvantages to saving the values as strings?
A: Unless you really need the features of an integer (that is, the ability to do arithmetic), then it is probably better for you to store the product IDs as strings. You will never need to do anything like add two product IDs together, or compute the average of a group of product IDs, so there is no need for an actual numeric type.
It is unlikely that storing product IDs as strings will cause a measurable difference in performance. While there will be a slight increase in storage size, the size of a product ID string is likely to be much smaller than the data in the rest of your database row anyway.
Storing product IDs as strings today will save you much pain in the future if the data provider decides to start using alphabetic or symbol characters. There is no real downside.
A: Do NOT consider performance. Consider meaning.
ID "numbers" are not numeric except that they are written with an alphabet of all digits.
If I have part number 12 and part number 14, what is the difference between the two? Is part number 2 or -2 meaningful? No.
Part numbers (and anything that doesn't have units of measure) are not "numeric". They're just strings of digits.
Zip codes in the US, for example. Phone numbers. Social security numbers. These are not numbers. In my town the difference between zip code 12345 and 12309 isn't the distance from my house to downtown.
Do not conflate numbers -- with units -- where sums and differences mean something with strings of digits without sums or differences.
Part ID numbers are -- properly -- strings. Not integers. They'll never be integers because they don't have sums, differences or averages.
A: It really depends on what kind of id you are talking about. If it's a code like a phone number it would actually be better to use a varchar for the id and then have your own id be a serial for the db and use it for the primary key. In a case where the integer has no numerical value, varchars are generally preferred.
A: I've just spent the last year dealing with a database that has almost all IDs as strings, some with digits only, and others mixed. These are the problems:
Grossly restricted ID space. A 4 char (digit-only) ID has capacity for 10,000 unique values. A 4 byte numeric has capacity for over 4 billion.
Unpredictable ID space coverage. Once IDs start including non-digits it becomes hard to predict where you can create new IDs without collisions.
Conversion and display problems in certain circumstances, when scripting or on export for instance. If the ID gets interpreted as a number and there is a leading zero, the ID gets altered.
Sorting problems. You can't rely on the natural order being helpful.
Of course, if you run out of IDs, or don't know how to create new IDs, your app is dead. I suggest that if you can't control the format of your incoming IDs then you need to create your own (numeric) IDs and relate the user-provided ID to that. You can then ensure that your own ID is reliable and unique (and numeric) but provide a user-viewable ID that can have whatever format your users want, and doesn't even have to be unique across the whole app. This is more work, but if you'd been through what I have you'd know which way to go.
Anil G
A: I'm not sure how good databases are at comparing whether one string is greater than another, like they can with integers. Try a query like this:
SELECT * FROM my_table WHERE integer_as_string > '100';
A: The space an integer would take up would be much less than a string. For example 2^32-1 = 4,294,967,295. This would take 10 bytes to store, whereas the integer would take 4 bytes to store. For a single entry this is not very much space, but when you start in the millions... As many other posts suggest there are several other issues to consider, but this is one drawback of the string representation.
A: You won't be able to do comparisons correctly. "... where x > 500" is not the same as ".. where x > '500'" because "500" > "100000".
Performance-wise, strings would be a hit, especially if you use indexes, as integer indexes are much faster than string indexes.
On the other hand it really depends upon your situation. If you intend to store something like phone numbers or student enrollment numbers, then it makes perfect sense to use strings.
A: Integers are more efficient from a storage and performance perspective. However, if there is a remote chance that alpha characters may be introduced, then you should use a string. In my opinion, the efficiency and performance benefits are likely to be negligible, whereas the time it takes to modify your code may not be.
A: As answered in Integer vs String in database
In my country, post-codes are also always 4 digits. But the first digit can be zero.
If you store "0700" as an integer, you can get a lot of problems:
It may be read as an octal value.
If it is read correctly as a decimal value, it gets turned into "700".
When you get the value "700", you must remember to add the zero.
If you don't add the zero, later on, how will you know if "700" is "0700", or someone mistyped "7100"?
Technically, our post codes are actual strings, even if it is always 4 digits.
You can store them as integers, to save space. But remember this is a simple DB trick, and be careful about leading zeroes.
But what about for storing how many files are in a torrent? Integer or string?
That's clearly an integer.
If the ID would ever start with zero, store it as an integer.
A: Better use an independent ID and add a string ID if necessary: if there's a business indicator you need to include, why make it the system ID?
Main drawbacks:
Integer operations and indexing always show better performance on large scales of data (more than 1k rows in a table, not to speak of connected tables).
You'll have to make additional checks to restrict numeric-only values in a column: these can be regex whether on client or database side. Anyway, you'll have to guarantee somehow that there's actually an integer.
And you will create an additional context layer for developers to know, and anyway someone will always mess this up :)
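The comparison and leading-zero pitfalls raised above are easy to demonstrate at a Python prompt, since string comparison there is lexicographic just as it is in SQL; this snippet is illustrative only:
print 500 > 100000          # False, numeric comparison
print '500' > '100000'      # True  -- '5' sorts after '1'
print '0700' == '700'       # False -- strings keep the leading zero
print int('0700') == 700    # True  -- numeric conversion drops it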
Drawbacks of storing an integer as a string in a database
I have id values for products that I need to store. Right now they are all integers, but I'm not sure if the data provider in the future will introduce letters or symbols into that mix, so I'm debating whether to store it now as integer or string. Are there performance or other disadvantages to saving the values as strings?
[ "Unless you really need the features of an integer (that is, the ability to do arithmetic), then it is probably better for you to store the product IDs as strings. You will never need to do anything like add two product IDs together, or compute the average of a group of product IDs, so there is no need for an actual numeric type.\nIt is unlikely that storing product IDs as strings will cause a measurable difference in performance. While there will be a slight increase in storage size, the size of a product ID string is likely to be much smaller than the data in the rest of your database row anyway.\nStoring product IDs as strings today will save you much pain in the future if the data provider decides to start using alphabetic or symbol characters. There is no real downside.\n", "Do NOT consider performance. Consider meaning.\nID \"numbers\" are not numeric except that they are written with an alphabet of all digits.\nIf I have part number 12 and part number 14, what is the difference between the two? Is part number 2 or -2 meaningful? No.\nPart numbers (and anything that doesn't have units of measure) are not \"numeric\". They're just strings of digits.\nZip codes in the US, for example. Phone numbers. Social security numbers. These are not numbers. In my town the difference between zip code 12345 and 12309 isn't the distance from my house to downtown. \nDo not conflate numbers -- with units -- where sums and differences mean something with strings of digits without sums or differences.\nPart ID numbers are -- properly -- strings. Not integers. They'll never be integers because they don't have sums, differences or averages.\n", "It really depends on what kind of id you are talking about. If it's a code like a phone number it would actually be better to use a varchar for the id and then have your own id to be a serial for the db and use for primary key. In a case where the integer have no numerical value, varchars are generally prefered. \n", "I've just spent the last year dealing with a database that has almost all IDs as strings, some with digits only, and others mixed. These are the problems:\n\nGrossly restricted ID space. A 4 char (digit-only) ID has capacity for 10,000 unique values. A 4 byte numeric has capacity for over 4 billion.\nUnpredictable ID space coverage. Once IDs start including non-digits it becomes hard to predict where you can create new IDs without collisions.\nConversion and display problems in certain circumstances, when scripting or on export for instance. If the ID gets interpreted as a number and there is a leading zero, the ID gets altered.\nSorting problems. You can't rely on the natural order being helpful.\n\nOf course, if you run out of IDs, or don't know how to create new IDs, your app is dead. I suggest that if you can't control the format of your incoming IDs then you need to create your own (numeric) IDs and relate the user provided ID to that. You can then ensure that your own ID is reliable and unique (and numeric) but provide a user-viewable ID that can have whatever format your users want, and doesn't even have to be unique across the whole app. This is more work, but if you'd been through what I have you'd know which way to go.\nAnil G\n", "I'm not sure how good databases are at comparing whether one string is greater than another, like it can with integers. Try a query like this:\nSELECT * FROM my_table WHERE integer_as_string > '100';\n\n", "The space an integer would take up would me much less than a string. For example 2^32-1 = 4,294,967,295. 
This would take 10 bytes to store, where as the integer would take 4 bytes to store. For a single entry this is not very much space, but when you start in the millions... As many other posts suggest there are several other issues to consider, but this is one drawback of the string representation. \n", "\nYou won't be able to do comparisons correctly. \"... where x > 500\" is not same as \".. where x > '500'\" because \"500\" > \"100000\"\nPerformance wise string it would be a hit especially if you use indexes as integer indexes are much faster than string indexes.\n\nOn the other hand it really depends upon your situation. If you intend to store something like phone numbers or student enrollment numbers, then it makes perfect sense to use strings.\n", "Integers are more efficient from a storage and performance perspective. However, if there is a remote chance that alpha characters may be introduced, then you should use a string. In my opinion, the efficiency and performance benefits are likely to be negligible, whereas the time it takes to modify your code may not be.\n", "As answered in Integer vs String in database\nIn my country, post-codes are also always 4 digits. But the first digit can be zero.\n\nIf you store \"0700\" as an integer, you can get a lot of problems:\nIt may be read as an octal value\n If it is read correctly as a decimal value, it gets turned into \"700\"\n When you get the value \"700\", you must remember to add the zero\n I you don't add the zero, later on, how will you know if \"700\" is \"0700\", or someone mistyped \"7100\"?\n Technically, our post codes is actual strings, even if it is always 4 digits.\nYou can store them as integers, to save space. But remember this is a simple DB-trick, and be careful about leading zeroes.\nBut what about for storing how many files are in a torrent? Integer or string?\nThat's clearly an integer.\n\nIf the ID would ever start with zero, store it as in interger. \n", "Better use independent ID and add string ID if necessary: if there's a business indicator you need to include, why make it system ID?\nMain drawbacks:\n\nInteger operations and indexing always show better performance on large scales of data (more than 1k rows in a table, not to speak of connected tables)\nYou'll have to make additional checks to restrict numeric-only values in a column: these can be regex whether on client or database side. Anyway, you'll have to guarantee somehow that there's actually integer.\nAnd you will create additional context layer for developers to know, and anyway someone will always mess this up :)\n\n" ]
[ 37, 18, 3, 3, 1, 1, 1, 0, 0, 0 ]
[]
[]
[ "database", "database_design", "mysql", "python" ]
stackoverflow_0001090022_database_database_design_mysql_python.txt
Q: How to communicate between Python and C# using XML-RPC?
Assume I have a simple XML-RPC service that is implemented with Python:
from SimpleXMLRPCServer import SimpleXMLRPCServer  # Python 2

def getTest():
    return 'test message'

if __name__ == '__main__':
    server = SimpleXMLRPCServer(('localhost', 8888))
    server.register_function(getTest)
    server.serve_forever()

Can anyone tell me how to call the getTest() function from C#?
A: Not to toot my own horn, but: http://liboxide.svn.sourceforge.net/viewvc/liboxide/trunk/Oxide.Net/Rpc/
class XmlRpcTest : XmlRpcClient
{
    private static Uri remoteHost = new Uri("http://localhost:8888/");

    [RpcCall]
    public string GetTest()
    {
        return (string)DoRequest(remoteHost,
            CreateRequest("getTest", null));
    }
}

static class Program
{
    static void Main(string[] args)
    {
        XmlRpcTest test = new XmlRpcTest();
        Console.WriteLine(test.GetTest());
    }
}

That should do the trick... Note, the above library is LGPL, which may or may not be good enough for you.
A: Thank you for the answer; I tried the xml-rpc library from darin's link. I can call the getTest function with the following code:
using CookComputing.XmlRpc;
...

namespace Hello
{
    /* proxy interface */
    [XmlRpcUrl("http://localhost:8888")]
    public interface IStateName : IXmlRpcProxy
    {
        [XmlRpcMethod("getTest")]
        string getTest();
    }

    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }
        private void button1_Click(object sender, EventArgs e)
        {
            /* implement section */
            IStateName proxy = (IStateName)XmlRpcProxyGen.Create(typeof(IStateName));
            string message = proxy.getTest();
            MessageBox.Show(message);
        }
    }
}

A: In order to call the getTest method from C# you will need an XML-RPC client library. XML-RPC is an example of such a library.
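Incidentally, before involving C# at all, you can confirm the service responds using Python's own standard-library client (Python 2's xmlrpclib); this check is mine, not part of the question:
import xmlrpclib

proxy = xmlrpclib.ServerProxy('http://localhost:8888/')
print proxy.getTest()   # should print: test message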
How to communicate between Python and C# using XML-RPC?
Assume I have a simple XML-RPC service that is implemented with Python:
from SimpleXMLRPCServer import SimpleXMLRPCServer  # Python 2

def getTest():
    return 'test message'

if __name__ == '__main__':
    server = SimpleXMLRPCServer(('localhost', 8888))
    server.register_function(getTest)
    server.serve_forever()

Can anyone tell me how to call the getTest() function from C#?
[ "Not to toot my own horn, but: http://liboxide.svn.sourceforge.net/viewvc/liboxide/trunk/Oxide.Net/Rpc/\nclass XmlRpcTest : XmlRpcClient\n{\n private static Uri remoteHost = new Uri(\"http://localhost:8888/\");\n\n [RpcCall]\n public string GetTest()\n {\n return (string)DoRequest(remoteHost, \n CreateRequest(\"getTest\", null));\n }\n}\n\nstatic class Program\n{\n static void Main(string[] args)\n {\n XmlRpcTest test = new XmlRpcTest();\n Console.WriteLine(test.GetTest());\n }\n}\n\nThat should do the trick... Note, the above library is LGPL, which may or may not be good enough for you.\n", "Thank you for answer, I try xml-rpc library from darin link. I can call getTest function with following code\nusing CookComputing.XmlRpc;\n...\n\n namespace Hello\n {\n /* proxy interface */\n [XmlRpcUrl(\"http://localhost:8888\")]\n public interface IStateName : IXmlRpcProxy\n {\n [XmlRpcMethod(\"getTest\")]\n string getTest();\n }\n\n public partial class Form1 : Form\n {\n public Form1()\n {\n InitializeComponent();\n }\n private void button1_Click(object sender, EventArgs e)\n {\n /* implement section */\n IStateName proxy = (IStateName)XmlRpcProxyGen.Create(typeof(IStateName));\n string message = proxy.getTest();\n MessageBox.Show(message);\n }\n }\n }\n\n", "In order to call the getTest method from c# you will need an XML-RPC client library. XML-RPC is an example of such a library.\n" ]
[ 3, 3, 2 ]
[]
[]
[ "c#", "python", "xml_rpc" ]
stackoverflow_0001090792_c#_python_xml_rpc.txt