content: string (length 85 to 101k)
title: string (length 0 to 150)
question: string (length 15 to 48k)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (length 35 to 137)
Q: Error while I use math.atan in Python! When I do 1/2 in Python why does it give me zero? Even if I coerce it with float(1/2) still I get zero. Why? And how can I get around it? When I give arctan(1/2) I get 0 as answer, but when I give arctan(.5) I get the correct answer! A: Because Python 2.x uses integer division for integers, so: 1/2 == 0 evaluates to True. You want to do: 1.0/2 or do a from __future__ import division A: First, 1/2 is integer division. Until Python 3.0. >>> 1/2 0 >>> 1.0/2.0 0.5 >>> Second, use math.atan2 for this kind of thing. >>> math.atan2(1,2) 0.46364760900080609 >>> math.atan(.5) 0.46364760900080609 A: atan(float(1)/2) If you do: atan(float(1/2)) in Python 2.x, but without: from __future__ import division the 1/2 is evaluated first as 0, then 0 is converted to a float, then atan(0.0) is called. This changes in Python 3, which uses float division by default even for integers. The short portable solution is what I first gave. A: float(1)/float(2) If you divide int / int you get an int, so float(0) still gives you 0.0 A: From the standard: The / (division) and // (floor division) operators yield the quotient of their arguments. The numeric arguments are first converted to a common type. Plain or long integer division yields an integer of the same type; the result is that of mathematical division with the ‘floor’ function applied to the result. A: As these answers are implying, 1/2 doesn't return what you are expecting. It returns zero, because 1 and 2 are integers (integer division causes numbers to round down). Python 3 changes this behavior, by the way. A: Your coercing doesn't stand a chance because the answer is already zero before you hand it to float. Try 1./2 A: In Python, dividing integers yields an integer -- 0 in this case. There are two possible solutions. One is to force them into floats: 1/2. (note the trailing dot) or float(1)/2. Another is to use "from future import division" at the top of your code, and use the behavior you need. python -c 'from future import division;import math;print math.atan(1/2)' yields the correct 0.463647609001 A: If 1/2 == 0 then float(1/2) will be 0.0. If you coerce it to float after it's been truncated it'll still be truncated. There are a few options: Add the following import: from __future__ import division. This will make the / operator divide "correctly" in that module. You can use // if you need truncating division. Coerce either of the operands to a float. eg: float(1)/2 If you're actually using constants then just add a decimal point instead of using float(), eg: 1.0/2 or 1/2.0 or 1.0/2.0
Error while I use math.atan in Python!
When I do 1/2 in Python why does it give me zero? Even if I coerce it with float(1/2) still I get zero. Why? And how can I get around it? When I give arctan(1/2) I get 0 as answer, but when I give arctan(.5) I get the correct answer!
[ "Because Python 2.x uses integer division for integers, so:\n1/2 == 0\n\nevaluates to True.\nYou want to do:\n1.0/2\n\nor do a\nfrom __future__ import division\n\n", "First, 1/2 is integer division. Until Python 3.0.\n>>> 1/2\n0\n>>> 1.0/2.0\n0.5\n>>> \n\nSecond, use math.atan2 for this kind of thing.\n>>> math.atan2(1,2)\n0.46364760900080609\n>>> math.atan(.5)\n0.46364760900080609\n\n", "atan(float(1)/2)\n\nIf you do: \natan(float(1/2))\n\nin Python 2.x, but without:\nfrom __future__ import division\n\nthe 1/2 is evaluated first as 0, then 0 is converted to a float, then atan(0.0) is called. This changes in Python 3, which uses float division by default even for integers. The short portable solution is what I first gave.\n", "float(1)/float(2)\n\nIf you divide int / int you get an int, so float(0) still gives you 0.0\n", "From the standard:\nThe / (division) and // (floor division) operators yield the quotient of their arguments. The numeric arguments are first converted to a common type. Plain or long integer division yields an integer of the same type; the result is that of mathematical division with the ‘floor’ function applied to the result.\n", "As these answers are implying, 1/2 doesn't return what you are expecting. It returns zero, because 1 and 2 are integers (integer division causes numbers to round down). Python 3 changes this behavior, by the way.\n", "Your coercing doesn't stand a chance because the answer is already zero before you hand it to float.\nTry 1./2\n", "In Python, dividing integers yields an integer -- 0 in this case.\nThere are two possible solutions. One is to force them into floats: 1/2. (note the trailing dot) or float(1)/2.\nAnother is to use \"from future import division\" at the top of your code, and use the behavior you need.\npython -c 'from future import division;import math;print math.atan(1/2)' yields the correct 0.463647609001\n", "If 1/2 == 0 then float(1/2) will be 0.0. If you coerce it to float after it's been truncated it'll still be truncated.\nThere are a few options:\n\nAdd the following import: from __future__ import division. This will make the / operator divide \"correctly\" in that module. You can use // if you need truncating division.\nCoerce either of the operands to a float. eg: float(1)/2\nIf you're actually using constants then just add a decimal point instead of using float(), eg: 1.0/2 or 1/2.0 or 1.0/2.0\n\n" ]
[ 7, 6, 3, 2, 2, 1, 1, 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0000993274_python.txt
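A compact sketch of the behaviour discussed in the answers above, using Python 2.x print statements to match the thread; the values in the comments are the standard results for these expressions:

import math

print 1 / 2                  # integer division in Python 2.x: 0
print 1.0 / 2                # one float operand forces true division: 0.5
print float(1) / 2           # coerce before dividing, not after: 0.5
print math.atan(1.0 / 2)     # 0.463647609001
print math.atan2(1, 2)       # same angle, with no division needed

Putting from __future__ import division at the very top of the module (before any other statement) makes / behave as it does in Python 3, while // remains available for floor division.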
Q: User Authentication And Text Parsing in Python Well I am working on a multistage program... I am having trouble getting the first stage done.. What I want to do is log on to Twitter.com, and then read all the direct messages on the user's page. Eventually I am going to be reading all the direct messages looking for certain thing, but that shouldn't be hard. This is my code so far import urllib import urllib2 import httplib import sys userName = "notmyusername" password = "notmypassword" URL = "http://twitter.com/#inbox" password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm() password_mgr.add_password(None, "http://twitter.com/", userName, password) handler = urllib2.HTTPBasicAuthHandler(password_mgr) pageshit = urllib2.urlopen(URL, "80").readlines() print pageshit So a little insight and and help on what I am doing wrong would be quite helpful. A: Twitter does not use HTTP Basic Authentication to authenticate its users. It would be better, in this case, to use the Twitter API. A tutorial for using Python with the Twitter API is here: [http://www.webmonkey.com/tutorial/Get_Started_With_the_Twitter_API](http://www.webmonkey.com/tutorial/Get_Started_With_the_Twitter_API() A: The regular web interface of Twitter does not use basic authentication, so requesting pages from the web interface using this method won't work. According to the Twitter API docs, you can retrieve private messages by fetching this URL: http://twitter.com/direct_messages.format Format can be xml, json, rss or atom. This URL does accept basic authentication. Also, your code does not use the handler object that it builds at all. Here is a working example that corrects both problems. It fetches private messages in json format: import urllib2 username = "USERNAME" password = "PASSWORD" URL = "http://twitter.com/direct_messages.json" password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm() password_mgr.add_password(None, "http://twitter.com/", username, password) handler = urllib2.HTTPBasicAuthHandler(password_mgr) opener = urllib2.build_opener(handler) try: file_obj = opener.open(URL) messages = file_obj.read() print messages except IOError, e: print "Error: ", e
User Authentication And Text Parsing in Python
Well I am working on a multistage program... I am having trouble getting the first stage done. What I want to do is log on to Twitter.com, and then read all the direct messages on the user's page. Eventually I am going to be reading all the direct messages looking for certain things, but that shouldn't be hard. This is my code so far: import urllib import urllib2 import httplib import sys userName = "notmyusername" password = "notmypassword" URL = "http://twitter.com/#inbox" password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm() password_mgr.add_password(None, "http://twitter.com/", userName, password) handler = urllib2.HTTPBasicAuthHandler(password_mgr) pageshit = urllib2.urlopen(URL, "80").readlines() print pageshit So a little insight and help on what I am doing wrong would be quite helpful.
[ "Twitter does not use HTTP Basic Authentication to authenticate its users. It would be better, in this case, to use the Twitter API. \nA tutorial for using Python with the Twitter API is here: [http://www.webmonkey.com/tutorial/Get_Started_With_the_Twitter_API](http://www.webmonkey.com/tutorial/Get_Started_With_the_Twitter_API()\n", "The regular web interface of Twitter does not use basic authentication, so requesting pages from the web interface using this method won't work.\nAccording to the Twitter API docs, you can retrieve private messages by fetching this URL:\nhttp://twitter.com/direct_messages.format\n\nFormat can be xml, json, rss or atom. This URL does accept basic authentication.\nAlso, your code does not use the handler object that it builds at all.\nHere is a working example that corrects both problems. It fetches private messages in json format:\nimport urllib2\n\nusername = \"USERNAME\"\npassword = \"PASSWORD\"\nURL = \"http://twitter.com/direct_messages.json\"\n\npassword_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()\npassword_mgr.add_password(None, \"http://twitter.com/\", username, password)\nhandler = urllib2.HTTPBasicAuthHandler(password_mgr)\nopener = urllib2.build_opener(handler)\ntry:\n file_obj = opener.open(URL)\n messages = file_obj.read()\n print messages\nexcept IOError, e:\n print \"Error: \", e\n\n" ]
[ 5, 3 ]
[]
[]
[ "authentication", "http", "python", "urllib2" ]
stackoverflow_0000993619_authentication_http_python_urllib2.txt
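The accepted answer's opener pattern translates almost line for line to Python 3's urllib.request; the sketch below uses a placeholder host, since the basic-auth Twitter endpoint quoted above has long since been retired.

# Python 3 version of the opener pattern from the accepted answer.
# The URL below is purely illustrative, not a real Twitter endpoint.
import urllib.request

username = "USERNAME"
password = "PASSWORD"
url = "https://example.com/direct_messages.json"

password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, "https://example.com/", username, password)
handler = urllib.request.HTTPBasicAuthHandler(password_mgr)
opener = urllib.request.build_opener(handler)

try:
    with opener.open(url) as response:
        print(response.read().decode("utf-8"))
except OSError as exc:
    print("Error:", exc)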
Q: How do I wait for an image to load after an ajax call using jquery? I have a Python script that is doing some manipulation on a JPEG image. I pass some parameters to this script and call it from my HTML page. The script returns an img src="newimage.jpg tag. I know how to wait for the reply from the script but I don't know how to tell when the image is fully loaded (when it is, I want to display it). What I get now is the image loading slowly so the user is seeing this "loading" process. Instead, I want to have a msg telling the user to wait while the image is loading, only then I want to display the image. A: You can dynamically create a new image, bind something to its load event, and set the source: $('<img>').bind('load', function() { $(this).appendTo('body'); }).attr('src', image_source); A: Image Loading Wait for ajaxRequest A: The other answers have mentioned how to do so with jQuery, but regardless of library that you use, ultimately you will be tying into the load event of the image. Without a library, you could do something like this: var el = document.getElementById('ImgLocation'); var img = document.createElement('img'); img.onload = function() { this.style.display = 'block'; } img.src = '/path/to/image.jpg'; img.style.display = 'none'; el.appendChild(img);
How do I wait for an image to load after an ajax call using jquery?
I have a Python script that does some manipulation on a JPEG image. I pass some parameters to this script and call it from my HTML page. The script returns an <img src="newimage.jpg"> tag. I know how to wait for the reply from the script, but I don't know how to tell when the image is fully loaded (when it is, I want to display it). What I get now is the image loading slowly, so the user sees this "loading" process. Instead, I want to show a message telling the user to wait while the image is loading, and only display the image once it has fully loaded.
[ "You can dynamically create a new image, bind something to its load event, and set the source:\n$('<img>').bind('load', function() {\n $(this).appendTo('body');\n}).attr('src', image_source);\n\n", "Image Loading\nWait for ajaxRequest\n", "The other answers have mentioned how to do so with jQuery, but regardless of library that you use, ultimately you will be tying into the load event of the image.\nWithout a library, you could do something like this:\nvar el = document.getElementById('ImgLocation');\n\nvar img = document.createElement('img');\nimg.onload = function() {\n this.style.display = 'block';\n}\nimg.src = '/path/to/image.jpg';\nimg.style.display = 'none';\n\nel.appendChild(img);\n\n" ]
[ 4, 2, 0 ]
[]
[]
[ "ajax", "jquery", "python" ]
stackoverflow_0000993712_ajax_jquery_python.txt
Q: How to tell a panel that it is being resized when using wx.aui I'm using wx.aui to build my user interface. I'm defining a class that inherits from wx.Panel and I need to change the content of that panel when its window pane is resized. I'm using code very similar to the code below (which is a modified version of sample code found here). My question is: is there a wx.Panel method being called behind the scenes by the AuiManager that I can overload? If not, how can my ControlPanel object know that it's being resized? For instance, if I run this code and drag up the horizontal divider between the upper and lower panes on the right, how is the upper right panel told that its size just changed? import wx import wx.aui class ControlPanel(wx.Panel): def __init__(self, *args, **kwargs): wx.Panel.__init__(self, *args, **kwargs) class MyFrame(wx.Frame): def __init__(self, *args, **kwargs): wx.Frame.__init__(self, *args, **kwargs) self.mgr = wx.aui.AuiManager(self) leftpanel = ControlPanel(self, -1, size = (200, 150)) rightpanel = ControlPanel(self, -1, size = (200, 150)) bottompanel = ControlPanel(self, -1, size = (200, 150)) self.mgr.AddPane(leftpanel, wx.aui.AuiPaneInfo().Bottom()) self.mgr.AddPane(rightpanel, wx.aui.AuiPaneInfo().Left().Layer(1)) self.mgr.AddPane(bottompanel, wx.aui.AuiPaneInfo().Center().Layer(2)) self.mgr.Update() class MyApp(wx.App): def OnInit(self): frame = MyFrame(None, -1, '07_wxaui.py') frame.Show() self.SetTopWindow(frame) return 1 if __name__ == "__main__": app = MyApp(0) app.MainLoop() A: According to the wx.Panel docs, wx.Panel.Layout is called "automatically by the default EVT_SIZE handler when the window is resized." EDIT: However, the above doesn't work as I would expect, so try manually binding EVT_SIZE: class ControlPanel(wx.Panel): def __init__(self, *args, **kwargs): wx.Panel.__init__(self, *args, **kwargs) self.Bind(wx.EVT_SIZE, self.OnResize) def OnResize(self, *args, **kwargs): print "Resizing"
How to tell a panel that it is being resized when using wx.aui
I'm using wx.aui to build my user interface. I'm defining a class that inherits from wx.Panel and I need to change the content of that panel when its window pane is resized. I'm using code very similar to the code below (which is a modified version of sample code found here). My question is: is there a wx.Panel method being called behind the scenes by the AuiManager that I can overload? If not, how can my ControlPanel object know that it's being resized? For instance, if I run this code and drag up the horizontal divider between the upper and lower panes on the right, how is the upper right panel told that its size just changed? import wx import wx.aui class ControlPanel(wx.Panel): def __init__(self, *args, **kwargs): wx.Panel.__init__(self, *args, **kwargs) class MyFrame(wx.Frame): def __init__(self, *args, **kwargs): wx.Frame.__init__(self, *args, **kwargs) self.mgr = wx.aui.AuiManager(self) leftpanel = ControlPanel(self, -1, size = (200, 150)) rightpanel = ControlPanel(self, -1, size = (200, 150)) bottompanel = ControlPanel(self, -1, size = (200, 150)) self.mgr.AddPane(leftpanel, wx.aui.AuiPaneInfo().Bottom()) self.mgr.AddPane(rightpanel, wx.aui.AuiPaneInfo().Left().Layer(1)) self.mgr.AddPane(bottompanel, wx.aui.AuiPaneInfo().Center().Layer(2)) self.mgr.Update() class MyApp(wx.App): def OnInit(self): frame = MyFrame(None, -1, '07_wxaui.py') frame.Show() self.SetTopWindow(frame) return 1 if __name__ == "__main__": app = MyApp(0) app.MainLoop()
[ "According to the wx.Panel docs, wx.Panel.Layout is called \"automatically by the default EVT_SIZE handler when the window is resized.\"\nEDIT: However, the above doesn't work as I would expect, so try manually binding EVT_SIZE:\nclass ControlPanel(wx.Panel):\n def __init__(self, *args, **kwargs):\n wx.Panel.__init__(self, *args, **kwargs)\n self.Bind(wx.EVT_SIZE, self.OnResize)\n\n def OnResize(self, *args, **kwargs):\n print \"Resizing\"\n\n" ]
[ 3 ]
[]
[]
[ "python", "wxpython" ]
stackoverflow_0000993923_python_wxpython.txt
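A slightly fuller version of the EVT_SIZE binding from the answer, showing how the handler can read the new size while still letting the default size handling run; rebuild_contents is a hypothetical hook standing in for whatever the panel needs to redo when its pane is resized.

import wx

class ControlPanel(wx.Panel):
    def __init__(self, *args, **kwargs):
        wx.Panel.__init__(self, *args, **kwargs)
        self.Bind(wx.EVT_SIZE, self.OnResize)

    def OnResize(self, event):
        size = event.GetSize()          # the new size assigned by the AuiManager
        self.rebuild_contents(size)
        event.Skip()                    # keep the default EVT_SIZE handling (Layout) running

    def rebuild_contents(self, size):
        # hypothetical hook: rebuild or re-lay out whatever the panel displays
        print "Resizing to %dx%d" % (size.width, size.height)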
Q: Python regular expression with [:numeric:] I am having some trouble with Python giving me a result I do not expect. Here is a sample code : number = re.search(" [0-9] ", "test test2 test_ 2 333") print number.groups() number = re.search(" [[:digit:]] ", "test test2 test_ 2 333") print number.groups() In the first block I get an object returned but with nothing in it. Where I think I should get the string "2". In the second block I don't even get an object, where I am expection the string "2". While when I do this in bash everything looks fine : echo "test test2 test_ 2 333" | grep " [[:digit:]] " echo "test test2 test_ 2 333" | grep " [0-9] " Can somebody help me please? A: The groups() method returns the capture groups. It does not return group 0, in case that's what you were expecting. Use parens to indicate capture groups. eg: >>> number = re.search(" ([0-9]) ", "test test2 test_ 2 333") >>> print number.groups() ('2',) For your second example, Python's re module doesn't recognize the "[:digit:]" syntax. Use \d. eg: >>> number = re.search(r" (\d) ", "test test2 test_ 2 333") >>> print number.groups() ('2',) A: You are missing the () which capture the contents for use with the groups() (and other) function(s). number = re.search(" ([0-9]) ", "test test2 test_ 2 333") print number.groups() This however won't work because python does not support the [[:number:]] notation number = re.search(" ([[:digit:]]) ", "test test2 test_ 2 333") print number.groups() A: Is this what you're looking for? >>> re.findall(r'([0-9])', "test test2 test_ 2 333") ['2', '2', '3', '3', '3'] A: number = re.search(" [0-9] ", "test test2 test_ 2 333") print number.group(0) groups() only returns groups 1 and up (a bit odd if you're used to other languages). A: .groups() returns values inside of matched parentheses. This regex doesn't have any regions defined by parens so groups returns nothing. You want: m = re.search(" ([0-9]) ", "test test2 test_ 2 333") m.groups() ('2',)
Python regular expression with [:numeric:]
I am having some trouble with Python giving me a result I do not expect. Here is some sample code: number = re.search(" [0-9] ", "test test2 test_ 2 333") print number.groups() number = re.search(" [[:digit:]] ", "test test2 test_ 2 333") print number.groups() In the first block I get an object returned but with nothing in it, where I think I should get the string "2". In the second block I don't even get an object, where I am expecting the string "2". Yet when I do this in bash everything looks fine: echo "test test2 test_ 2 333" | grep " [[:digit:]] " echo "test test2 test_ 2 333" | grep " [0-9] " Can somebody help me please?
[ "The groups() method returns the capture groups. It does not return group 0, in case that's what you were expecting. Use parens to indicate capture groups. eg:\n>>> number = re.search(\" ([0-9]) \", \"test test2 test_ 2 333\")\n>>> print number.groups()\n('2',)\n\nFor your second example, Python's re module doesn't recognize the \"[:digit:]\" syntax. Use \\d. eg:\n>>> number = re.search(r\" (\\d) \", \"test test2 test_ 2 333\")\n>>> print number.groups()\n('2',)\n\n", "You are missing the () which capture the contents for use with the groups() (and other) function(s).\nnumber = re.search(\" ([0-9]) \", \"test test2 test_ 2 333\")\nprint number.groups()\n\nThis however won't work because python does not support the [[:number:]] notation\nnumber = re.search(\" ([[:digit:]]) \", \"test test2 test_ 2 333\")\nprint number.groups()\n\n", "Is this what you're looking for?\n>>> re.findall(r'([0-9])', \"test test2 test_ 2 333\")\n['2', '2', '3', '3', '3']\n\n", "number = re.search(\" [0-9] \", \"test test2 test_ 2 333\")\nprint number.group(0)\n\ngroups() only returns groups 1 and up (a bit odd if you're used to other languages).\n", ".groups() returns values inside of matched parentheses. This regex doesn't have any regions defined by parens so groups returns nothing. You want:\n\n\n\nm = re.search(\" ([0-9]) \", \"test test2 test_ 2 333\")\n m.groups()\n ('2',)\n\n\n\n" ]
[ 3, 2, 1, 1, 0 ]
[]
[]
[ "bash", "python", "regex" ]
stackoverflow_0000994178_bash_python_regex.txt
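For completeness, a small sketch contrasting group(0) with numbered capture groups, plus a word-boundary variant that does not rely on literal spaces (Python 2 print syntax, matching the thread):

import re

m = re.search(r" (\d+) ", "test test2 test_ 2 333")
if m:
    print m.group(0)    # ' 2 '  (entire match, including the surrounding spaces)
    print m.group(1)    # '2'    (first capture group)
    print m.groups()    # ('2',)

# \b word boundaries avoid depending on literal spaces around the digits:
print re.findall(r"\b\d+\b", "test test2 test_ 2 333")   # ['2', '333']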
Q: How Much Traffic Can Shared Web Hosting (for a Python Django site) support? Someone in this thread How Much Traffic Can Shared Web Hosting Take? stated that a $5/mo shared hosting account on Reliablesite.net can support 10,000 - 20,000 unique users/day and 100,000 - 200,000 pageviews/day. That seems awfully high for a $5/mo account. And someone else told me it's far less than that. What's your experience? I have a site based on Python, Django, MySQL/Postgresql. It doesn't have any video or other bandwidth heavy elements, but the whole site is dynamic, each page takes about 5 to 10 DB query, 90% reads, 10% writes. Reliablesite.net is an ASP.NET hosting company. Any Python/LAMP hosting firm that can support 100-200,000 pageviews on a shared hosting account? If not, what kind of numbers am I looking at? Any suggestions for good hosting firms? Thanks A: 100,000 - 200,000 pageviews/day is on average 2 pageviews/s, at most you'll get 10-20 pageviews/s during busy hours. That's not a lot to handle, especially if you have caching. Anyways, I'd go for VPS. The problem with shared server is that you can never know the pattern of use the other ppl have. A: Webfaction hosting hosts nearly 10 sites of ours handling over 10k users each day, easily. I am also told that Slicehost is just as good. Webfaction and Slicehost are often looked upto for mod_wsgi python hosting, which is fast becoming the preferred way to host django apps. These hosts seem to be on a slightly higher side of the charges/month; but its worth it, as they are reliable. A: Most hosts support multiple sites without extra charge. Don't pick GoDaddy because of that. I never used GoDaddy hosting, but use them for domain registration, and they are absolutely terrible. Terrible UI, terrible performance. I would never trust them to host a website. The only reason I use them for domain registration is that they seem to be the cheapest option. For shared web hosting, especially Python/Django, I recommend WEBFACTION. A: I have been using mysql on shared hosting for a while mainly on informational websites that have gotten at most 300 visits per day. What I have found is that the hosting was barely sufficient to support more than 3 or 4 people on the website at one time without it almost crashing. Theoretically i think shared hosting with most services could support about about 60 users per hour max efficiently if your users all came one or two at a time. This would equal out to about about 1500 users in one day. This is highly unlikely however because alot of users tend to be online at certain times of the day and you also have to throw in the fact that shared servers get sloppy alot due to abuse from others on the server. I have heard from reliable sources that some vps hosting thats 40-50 dollars per month have supported 500,000 hits per month. I'm not sure what the websites configurations were though, i doubt the sites ran many dynamic db queries or possibly were simply static. One other thing that is common on shared hosting is breaking up the file managers with the database hosting. Sometimes your files will do well appearing online but the database that runs your actual website will be lagging extremely due to abuse from your neighbors. A: I'm with GoDaddy.com and it's true that you can have an unlimited number of sites on the same hosting plan but you are limited to 100 unique users. 
This means that it doesn't matter if you have 1 website or 1000 websites on your hosting plan, you can only have 100 visitors at the same time throughout all of your sites combined. Some visitors could leave and some new ones can arrive but never more than 100 at a time on your GoDaddy hosting plan. I have 10 sites myself so if I have 100 visitors to http://www.milliondollarmysterychallenge.com then none of my other sites can get any traffic until someone leaves! this stinks! if anybody knows of a better place to host please post a link in the comments. A: If your application is optimized, you shared hosting account can handle 10k unique visitors per day. You can find a great hosting for your needs at WFT (WebHostingTalk) One of the biggest hosting provider is GoDaddy (I RECOMMEND IT). Their shared hosting plan with Python starts from $7/month. With them you can host multiple websites on the same account without extra charge. http://www.godaddy.com/gdshop/hosting/shared.asp?ci=9009 And also take a look at this offer: http://mediatemple.net/webhosting/gs/features/ (mt) MediaTemplate company is not that big as GoDaddy but is also in good standing. Reliable.net is too small. So, here recommended options are: GoDaddy - Info - Alexa Rank: ~410 HostGator - Info - Alexa Rank: ~670 A: I am hosting with Godaddy. But I am not aware of this 100 users at the same time thing. A: That sounds like a stretch for a $5/month shared hosting service. I'd suggest looking in to MediaTemple Grid-Service which is a bit more expensive at $20/month but is more likely to be able to handle your volume and grow with you.
How Much Traffic Can Shared Web Hosting (for a Python Django site) support?
Someone in this thread How Much Traffic Can Shared Web Hosting Take? stated that a $5/mo shared hosting account on Reliablesite.net can support 10,000 - 20,000 unique users/day and 100,000 - 200,000 pageviews/day. That seems awfully high for a $5/mo account. And someone else told me it's far less than that. What's your experience? I have a site based on Python, Django, MySQL/Postgresql. It doesn't have any video or other bandwidth heavy elements, but the whole site is dynamic, each page takes about 5 to 10 DB query, 90% reads, 10% writes. Reliablesite.net is an ASP.NET hosting company. Any Python/LAMP hosting firm that can support 100-200,000 pageviews on a shared hosting account? If not, what kind of numbers am I looking at? Any suggestions for good hosting firms? Thanks
[ "100,000 - 200,000 pageviews/day is on average 2 pageviews/s, at most you'll get 10-20 pageviews/s during busy hours. That's not a lot to handle, especially if you have caching.\nAnyways, I'd go for VPS. The problem with shared server is that you can never know the pattern of use the other ppl have.\n", "Webfaction hosting hosts nearly 10 sites of ours handling over 10k users each day, easily. I am also told that Slicehost is just as good.\nWebfaction and Slicehost are often looked upto for mod_wsgi python hosting, which is fast becoming the preferred way to host django apps.\nThese hosts seem to be on a slightly higher side of the charges/month; but its worth it, as they are reliable.\n", "Most hosts support multiple sites without extra charge. Don't pick GoDaddy because of that. I never used GoDaddy hosting, but use them for domain registration, and they are absolutely terrible. Terrible UI, terrible performance. I would never trust them to host a website. The only reason I use them for domain registration is that they seem to be the cheapest option.\nFor shared web hosting, especially Python/Django, I recommend WEBFACTION.\n", "I have been using mysql on shared hosting for a while mainly on informational websites that have gotten at most 300 visits per day. What I have found is that the hosting was barely sufficient to support more than 3 or 4 people on the website at one time without it almost crashing.\nTheoretically i think shared hosting with most services could support about about 60 users per hour max efficiently if your users all came one or two at a time. This would equal out to about about 1500 users in one day. This is highly unlikely however because alot of users tend to be online at certain times of the day and you also have to throw in the fact that shared servers get sloppy alot due to abuse from others on the server.\nI have heard from reliable sources that some vps hosting thats 40-50 dollars per month have supported 500,000 hits per month. I'm not sure what the websites configurations were though, i doubt the sites ran many dynamic db queries or possibly were simply static.\nOne other thing that is common on shared hosting is breaking up the file managers with the database hosting. Sometimes your files will do well appearing online but the database that runs your actual website will be lagging extremely due to abuse from your neighbors.\n", "I'm with GoDaddy.com and it's true that you can have an unlimited number of sites on the same hosting plan but you are limited to 100 unique users. \nThis means that it doesn't matter if you have 1 website or 1000 websites on your hosting plan, you can only have 100 visitors at the same time throughout all of your sites combined.\nSome visitors could leave and some new ones can arrive but never more than 100 at a time on your GoDaddy hosting plan. I have 10 sites myself so if I have 100 visitors to http://www.milliondollarmysterychallenge.com then none of my other sites can get any traffic until someone leaves! this stinks! if anybody knows of a better place to host please post a link in the comments.\n", "If your application is optimized, you shared hosting account can handle 10k unique visitors per day.\nYou can find a great hosting for your needs at WFT (WebHostingTalk)\nOne of the biggest hosting provider is GoDaddy (I RECOMMEND IT). Their shared hosting plan with Python starts from $7/month. 
With them you can host multiple websites on the same account without extra charge.\nhttp://www.godaddy.com/gdshop/hosting/shared.asp?ci=9009\nAnd also take a look at this offer: http://mediatemple.net/webhosting/gs/features/\n(mt) MediaTemplate company is not that big as GoDaddy but is also in good standing. Reliable.net is too small.\nSo, here recommended options are:\n\nGoDaddy - Info - Alexa Rank: ~410\nHostGator - Info - Alexa Rank: ~670\n\n", "I am hosting with Godaddy. But I am not aware of this 100 users at the same time thing.\n", "That sounds like a stretch for a $5/month shared hosting service. I'd suggest looking in to MediaTemple Grid-Service which is a bit more expensive at $20/month but is more likely to be able to handle your volume and grow with you.\n" ]
[ 3, 2, 2, 1, 1, 0, 0, 0 ]
[]
[]
[ "hosting", "python", "shared_hosting", "web_hosting" ]
stackoverflow_0000708799_hosting_python_shared_hosting_web_hosting.txt
Q: How can I explicitly disable compilation of _tkinter.c when compiling Python 2.4.3 on CentOS 5? I'm trying to explicitly disable the compilation of the _tkinter module when compiling Python 2.4.3. It's easy enough to do by modifying the makefile but I'd rather just append a configuration option to avoid supplying a patch. I do not understand the complex interplay between Modules/Setup*, setup.py and their contribution to the generation of makefile. A: Unfortunately I suspect you can't do it without editing some file or other -- it's not a configure option we wrote in as far as I recall (I hope I'm wrong and somebody else snuck it in while I wasn't looking but a quick look at the configure file seems to confirm they didnt'). Sorry -- we never thought that somebody (with all the tk libraries installed, otherwise tkinter gets skipped) would need to deliberately avoid building _tkinter:-(. In retrospect, we clearly were wrong, so I apologize.
How can I explicitly disable compilation of _tkinter.c when compiling Python 2.4.3 on CentOS 5?
I'm trying to explicitly disable the compilation of the _tkinter module when compiling Python 2.4.3. It's easy enough to do by modifying the makefile but I'd rather just append a configuration option to avoid supplying a patch. I do not understand the complex interplay between Modules/Setup*, setup.py and their contribution to the generation of makefile.
[ "Unfortunately I suspect you can't do it without editing some file or other -- it's not a configure option we wrote in as far as I recall (I hope I'm wrong and somebody else snuck it in while I wasn't looking but a quick look at the configure file seems to confirm they didnt'). Sorry -- we never thought that somebody (with all the tk libraries installed, otherwise tkinter gets skipped) would need to deliberately avoid building _tkinter:-(. In retrospect, we clearly were wrong, so I apologize.\n" ]
[ 5 ]
[]
[]
[ "compilation", "python", "tkinter" ]
stackoverflow_0000994278_compilation_python_tkinter.txt
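For reference, a commonly suggested way to skip a specific extension module is to list it in disabled_module_list near the top of the CPython setup.py before running make; whether that hook is present in the 2.4.3 tree is an assumption here, so check the file before relying on it.

# In setup.py at the root of the Python source tree (assumption: this release
# already defines disabled_module_list; verify before editing).
disabled_module_list = ['_tkinter']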
Q: If it is decided that our system needs an overhaul, what is the best way to go about it? We are mainting a web application that is built on Classic ASP using VBScript as the primary language. We are in agreement that our backend (framework if you will) is out dated and doesn't provide us with the proper tools to move forward in a quick manner. We have pretty much embraced the current webMVC pattern that is all over the place, and cannot do it, in a reasonable manner, with the current technology. The big missing features are proper dispatching and templating with inheritance, amongst others. Currently there are two paths being discussed: Port the existing application to Classic ASP using JScript, which will allow us to hopefully go from there to .NET MSJscript without too much trouble, and eventually end up on the .NET platform (preferably the MVC stuff will be done by then, ASP.NET isn't much better than were we are on now, in our opinions). This has been argued as the safer path with less risk than the next option, albeit it might take slightly longer. Completely rewrite the application using some other technology, right now the leader of the pack is Python WSGI with a custom framework, ORM, and a good templating solution. There is wiggle room here for even django and other pre-built solutions. This method would hopefully be the quickest solution, as we would probably run a beta beside the actual product, but it does have the potential for a big waste of time if we can't/don't get it right. This does not mean that our logic is gone, as what we have built over the years is fairly stable, as noted just difficult to deal with. It is built on SQL Server 2005 with heavy use of stored procedures and published on IIS 6, just for a little more background. Now, the question. Has anyone taken either of the two paths above? If so, was it successful, how could it have been better, etc. We aren't looking to deviate much from doing one of those two things, but some suggestions or other solutions would potentially be helpful. A: Don't throw away your code! It's the single worst mistake you can make (on a large codebase). See Things You Should Never Do, Part 1. You've invested a lot of effort into that old code and worked out many bugs. Throwing it away is a classic developer mistake (and one I've done many times). It makes you feel "better", like a spring cleaning. But you don't need to buy a new apartment and all new furniture to outfit your house. You can work on one room at a time... and maybe some things just need a new paintjob. Hence, this is where refactoring comes in. For new functionality in your app, write it in C# and call it from your classic ASP. You'll be forced to be modular when you rewrite this new code. When you have time, refactor parts of your old code into C# as well, and work out the bugs as you go. Eventually, you'll have replaced your app with all new code. You could also write your own compiler. We wrote one for our classic ASP app a long time ago to allow us to output PHP. It's called Wasabi and I think it's the reason Jeff Atwood thought Joel Spolsky went off his rocker. Actually, maybe we should just ship it, and then you could use that. It allowed us to switch our entire codebase to .NET for the next release while only rewriting a very small portion of our source. It also caused a bunch of people to call us crazy, but writing a compiler is not that complicated, and it gave us a lot of flexibility. Also, if this is an internal only app, just leave it. 
Don't rewrite it - you are the only customer and if the requirement is you need to run it as classic asp, you can meet that requirement. A: Use this as an opportunity to remove unused features! Definitely go with the new language. Call it 2.0. It will be a lot less work to rebuild the 80% of it that you really need. Start by wiping your brain clean of the whole application. Sit down with a list of its overall goals, then decide which features are needed based on which ones are used. Then redesign it with those features in mind, and build. (I love to delete code.) A: It works out better than you'd believe. Recently I did a large reverse-engineering job on a hideous old collection of C code. Function by function I reallocated the features that were still relevant into classes, wrote unit tests for the classes, and built up what looked like a replacement application. It had some of the original "logic flow" through the classes, and some classes were poorly designed [Mostly this was because of a subset of the global variables that was too hard to tease apart.] It passed unit tests at the class level and at the overall application level. The legacy source was mostly used as a kind of "specification in C" to ferret out the really obscure business rules. Last year, I wrote a project plan for replacing 30-year old COBOL. The customer was leaning toward Java. I prototyped the revised data model in Python using Django as part of the planning effort. I could demo the core transactions before I was done planning. Note: It was quicker to build a the model and admin interface in Django than to plan the project as a whole. Because of the "we need to use Java" mentality, the resulting project will be larger and more expensive than finishing the Django demo. With no real value to balance that cost. Also, I did the same basic "prototype in Django" for a VB desktop application that needed to become a web application. I built the model in Django, loaded legacy data, and was up and running in a few weeks. I used that working prototype to specify the rest of the conversion effort. Note: I had a working Django implementation (model and admin pages only) that I used to plan the rest of the effort. The best part about doing this kind of prototyping in Django is that you can mess around with the model, unit tests and admin pages until you get it right. Once the model's right, you can spend the rest of your time fiddling around with the user interface until everyone's happy. A: Whatever you do, see if you can manage to follow a plan where you do not have to port the application all in one big bang. It is tempting to throw it all away and start from scratch, but if you can manage to do it gradually the mistakes you do will not cost so much and cause so much panic. A: Half a year ago I took over a large web application (fortunately already in Python) which had some major architectural deficiencies (templates and code mixed, code duplication, you name it...). My plan is to eventually have the system respond to WSGI, but I am not there yet. I found the best way to do it, is in small steps. Over the last 6 month, code reuse has gone up and progress has accelerated. General principles which have worked for me: Throw away code which is not used or commented out Throw away all comments which are not useful Define a layer hierarchy (models, business logic, view/controller logic, display logic, etc.) of your application. 
This has not to be very clear cut architecture but rather should help you think about the various parts of your application and help you better categorize your code. If something grossly violates this hierarchy, change the offending code. Move the code around, recode it at another place, etc. At the same time adjust the rest of your application to use this code instead of the old one. Throw the old one away if not used anymore. Keep you APIs simple! Progress can be painstakingly slow, but should be worth it. A: I would not recommend JScript as that is definitely the road less traveled. ASP.NET MVC is rapidly maturing, and I think that you could begin a migration to it, simultaneously ramping up on the ASP.NET MVC framework as its finalization comes through. Another option would be to use something like ASP.NET w/Subsonic or NHibernate. A: Don't try and go 2.0 ( more features then currently exists or scheduled) instead build your new platform with the intent of resolving the current issues with the code base (maintainability/speed/wtf) and go from there. A: A good place to begin if you're considering the move to Python is to rewrite your administrator interface in Django. This will help you get some of the kinks worked out in terms of getting Python up and running with IIS (or to migrate it to Apache). Speaking of which, I recommend isapi-wsgi. It's by far the easiest way to get up and running with IIS. A: I agree with Michael Pryor and Joel that it's almost always a better idea to continue evolving your existing code base rather than re-writing from scratch. There are typically opportunities to just re-write or re-factor certain components for performance or flexibility.
If it is decided that our system needs an overhaul, what is the best way to go about it?
We are maintaining a web application that is built on Classic ASP using VBScript as the primary language. We are in agreement that our backend (framework if you will) is outdated and doesn't provide us with the proper tools to move forward in a quick manner. We have pretty much embraced the current webMVC pattern that is all over the place, and cannot do it, in a reasonable manner, with the current technology. The big missing features are proper dispatching and templating with inheritance, amongst others. Currently there are two paths being discussed: Port the existing application to Classic ASP using JScript, which will allow us to hopefully go from there to .NET MSJscript without too much trouble, and eventually end up on the .NET platform (preferably the MVC stuff will be done by then, ASP.NET isn't much better than where we are now, in our opinions). This has been argued as the safer path with less risk than the next option, albeit it might take slightly longer. Completely rewrite the application using some other technology; right now the leader of the pack is Python WSGI with a custom framework, ORM, and a good templating solution. There is wiggle room here for even Django and other pre-built solutions. This method would hopefully be the quickest solution, as we would probably run a beta beside the actual product, but it does have the potential for a big waste of time if we can't/don't get it right. This does not mean that our logic is gone, as what we have built over the years is fairly stable, as noted just difficult to deal with. It is built on SQL Server 2005 with heavy use of stored procedures and published on IIS 6, just for a little more background. Now, the question. Has anyone taken either of the two paths above? If so, was it successful, how could it have been better, etc. We aren't looking to deviate much from doing one of those two things, but some suggestions or other solutions would potentially be helpful.
[ "Don't throw away your code!\nIt's the single worst mistake you can make (on a large codebase). See Things You Should Never Do, Part 1.\nYou've invested a lot of effort into that old code and worked out many bugs. Throwing it away is a classic developer mistake (and one I've done many times). It makes you feel \"better\", like a spring cleaning. But you don't need to buy a new apartment and all new furniture to outfit your house. You can work on one room at a time... and maybe some things just need a new paintjob. Hence, this is where refactoring comes in.\nFor new functionality in your app, write it in C# and call it from your classic ASP. You'll be forced to be modular when you rewrite this new code. When you have time, refactor parts of your old code into C# as well, and work out the bugs as you go. Eventually, you'll have replaced your app with all new code.\nYou could also write your own compiler. We wrote one for our classic ASP app a long time ago to allow us to output PHP. It's called Wasabi and I think it's the reason Jeff Atwood thought Joel Spolsky went off his rocker. Actually, maybe we should just ship it, and then you could use that. \nIt allowed us to switch our entire codebase to .NET for the next release while only rewriting a very small portion of our source. It also caused a bunch of people to call us crazy, but writing a compiler is not that complicated, and it gave us a lot of flexibility.\nAlso, if this is an internal only app, just leave it. Don't rewrite it - you are the only customer and if the requirement is you need to run it as classic asp, you can meet that requirement.\n", "Use this as an opportunity to remove unused features! Definitely go with the new language. Call it 2.0. It will be a lot less work to rebuild the 80% of it that you really need.\nStart by wiping your brain clean of the whole application. Sit down with a list of its overall goals, then decide which features are needed based on which ones are used. Then redesign it with those features in mind, and build.\n(I love to delete code.)\n", "It works out better than you'd believe. \nRecently I did a large reverse-engineering job on a hideous old collection of C code. Function by function I reallocated the features that were still relevant into classes, wrote unit tests for the classes, and built up what looked like a replacement application. It had some of the original \"logic flow\" through the classes, and some classes were poorly designed [Mostly this was because of a subset of the global variables that was too hard to tease apart.]\nIt passed unit tests at the class level and at the overall application level. The legacy source was mostly used as a kind of \"specification in C\" to ferret out the really obscure business rules.\nLast year, I wrote a project plan for replacing 30-year old COBOL. The customer was leaning toward Java. I prototyped the revised data model in Python using Django as part of the planning effort. I could demo the core transactions before I was done planning.\nNote: It was quicker to build a the model and admin interface in Django than to plan the project as a whole.\nBecause of the \"we need to use Java\" mentality, the resulting project will be larger and more expensive than finishing the Django demo. With no real value to balance that cost.\nAlso, I did the same basic \"prototype in Django\" for a VB desktop application that needed to become a web application. I built the model in Django, loaded legacy data, and was up and running in a few weeks. 
I used that working prototype to specify the rest of the conversion effort.\nNote: I had a working Django implementation (model and admin pages only) that I used to plan the rest of the effort.\nThe best part about doing this kind of prototyping in Django is that you can mess around with the model, unit tests and admin pages until you get it right. Once the model's right, you can spend the rest of your time fiddling around with the user interface until everyone's happy.\n", "Whatever you do, see if you can manage to follow a plan where you do not have to port the application all in one big bang. It is tempting to throw it all away and start from scratch, but if you can manage to do it gradually the mistakes you do will not cost so much and cause so much panic.\n", "Half a year ago I took over a large web application (fortunately already in Python) which had some major architectural deficiencies (templates and code mixed, code duplication, you name it...).\nMy plan is to eventually have the system respond to WSGI, but I am not there yet. I found the best way to do it, is in small steps. Over the last 6 month, code reuse has gone up and progress has accelerated. \nGeneral principles which have worked for me:\n\nThrow away code which is not used or commented out\nThrow away all comments which are not useful\nDefine a layer hierarchy (models, business logic, view/controller logic, display logic, etc.) of your application. This has not to be very clear cut architecture but rather should help you think about the various parts of your application and help you better categorize your code.\nIf something grossly violates this hierarchy, change the offending code. Move the code around, recode it at another place, etc. At the same time adjust the rest of your application to use this code instead of the old one. Throw the old one away if not used anymore.\nKeep you APIs simple!\n\nProgress can be painstakingly slow, but should be worth it. \n", "I would not recommend JScript as that is definitely the road less traveled.\nASP.NET MVC is rapidly maturing, and I think that you could begin a migration to it, simultaneously ramping up on the ASP.NET MVC framework as its finalization comes through.\nAnother option would be to use something like ASP.NET w/Subsonic or NHibernate.\n", "Don't try and go 2.0 ( more features then currently exists or scheduled) instead build your new platform with the intent of resolving the current issues with the code base (maintainability/speed/wtf) and go from there. \n", "A good place to begin if you're considering the move to Python is to rewrite your administrator interface in Django. This will help you get some of the kinks worked out in terms of getting Python up and running with IIS (or to migrate it to Apache). Speaking of which, I recommend isapi-wsgi. It's by far the easiest way to get up and running with IIS.\n", "I agree with Michael Pryor and Joel that it's almost always a better idea to continue evolving your existing code base rather than re-writing from scratch. There are typically opportunities to just re-write or re-factor certain components for performance or flexibility.\n" ]
[ 7, 3, 3, 2, 1, 0, 0, 0, 0 ]
[]
[]
[ "asp_classic", "python", "vbscript" ]
stackoverflow_0000087522_asp_classic_python_vbscript.txt
Q: How to synthesize sounds? I'd like to produce sounds that would resemble audio from real instruments. The problem is that I have very little clue how to get that. What I know this far from real instruments is that sounds they output are rarely clean. But how to produce such unclean sounds? This far I've gotten to do this, it produces quite plain sound from which I'm not sure it's even using the alsa correctly. import numpy from numpy.fft import fft, ifft from numpy.random import random_sample from alsaaudio import PCM, PCM_NONBLOCK, PCM_FORMAT_FLOAT_LE pcm = PCM()#mode=PCM_NONBLOCK) pcm.setrate(44100) pcm.setformat(PCM_FORMAT_FLOAT_LE) pcm.setchannels(1) pcm.setperiodsize(4096) def sine_wave(x, freq=100): sample = numpy.arange(x*4096, (x+1)*4096, dtype=numpy.float32) sample *= numpy.pi * 2 / 44100 sample *= freq return numpy.sin(sample) for x in xrange(1000): sample = sine_wave(x, 100) pcm.write(sample.tostring()) A: Sound synthesis is a complex topic which requires many years of study to master. It is also not an entirely solved problem, although relatively recent developments (such as physical modelling synthesis) have made progress in imitating real-world instruments. There are a number of options open to you. If you are sure that you want to explore synthesis further, then I suggest you start by learning about FM synthesis. It is relatively easy to learn and implement in software, at least in basic forms, and produces a wide range of interesting sounds. Also, check out the book "The Computer Music Tutorial" by Curtis Roads. It's a bible for all things computer music, and although it's a few years old it is the book of choice for learning the fundamentals. If you want a quicker way to produce life-like sound, consider using sampling techniques: that is, record the instruments you want to reproduce (or use a pre-existing sample bank), and just play back the samples. It's a much more straightforward (and often more effective) approach. A: Cheery, if you want to generate (from scratch) something that really sounds "organic", i.e. like a physical object, you're probably best off to learn a bit about how these sounds are generated. For a solid introduction, you could have a look at a book such as Fletcher and Rossings The Physics of Musical Instruments. There's lots of stuff on the web too, you might want to have a look at a the primer James Clark has here Having at least a skim over this sort of stuff will give you an idea of what you are up against. Modeling physical instruments accurately is very difficult! If what you want to do is have something that sounds physical, rather something that sounds like instrument X, your job is a bit easier. You can build up frequencies quite easily and stack them together, add a little noise, and you'll get something that at least doesn't sound anything like a pure tone. Reading a bit about Fourier analysis in general will help, as will Frequency Modulation (FM) techniques. Have fun! A: I agree that this is very non-trivial and there's no set "right way", but you should consider starting with a (or making your own) MIDI SoundFont. A: As other people said, not a trivial topic at all. There are challenges both at the programming side of things (especially if you care about low-latency) and the synthesis part. A goldmine for sound synthesis is the page by Julius O. Smith. There is a lot of techniques for synthesis http://ccrma-www.stanford.edu/~jos/.
How to synthesize sounds?
I'd like to produce sounds that would resemble audio from real instruments. The problem is that I have very little clue how to achieve that. What I know so far about real instruments is that the sounds they output are rarely clean. But how do I produce such unclean sounds? So far I've gotten this; it produces quite a plain sound, and I'm not sure it's even using ALSA correctly. import numpy from numpy.fft import fft, ifft from numpy.random import random_sample from alsaaudio import PCM, PCM_NONBLOCK, PCM_FORMAT_FLOAT_LE pcm = PCM()#mode=PCM_NONBLOCK) pcm.setrate(44100) pcm.setformat(PCM_FORMAT_FLOAT_LE) pcm.setchannels(1) pcm.setperiodsize(4096) def sine_wave(x, freq=100): sample = numpy.arange(x*4096, (x+1)*4096, dtype=numpy.float32) sample *= numpy.pi * 2 / 44100 sample *= freq return numpy.sin(sample) for x in xrange(1000): sample = sine_wave(x, 100) pcm.write(sample.tostring())
[ "Sound synthesis is a complex topic which requires many years of study to master. \nIt is also not an entirely solved problem, although relatively recent developments (such as physical modelling synthesis) have made progress in imitating real-world instruments.\nThere are a number of options open to you. If you are sure that you want to explore synthesis further, then I suggest you start by learning about FM synthesis. It is relatively easy to learn and implement in software, at least in basic forms, and produces a wide range of interesting sounds. Also, check out the book \"The Computer Music Tutorial\" by Curtis Roads. It's a bible for all things computer music, and although it's a few years old it is the book of choice for learning the fundamentals.\nIf you want a quicker way to produce life-like sound, consider using sampling techniques: that is, record the instruments you want to reproduce (or use a pre-existing sample bank), and just play back the samples. It's a much more straightforward (and often more effective) approach. \n", "Cheery, if you want to generate (from scratch) something that really sounds \"organic\", i.e. like a physical object, you're probably best off to learn a bit about how these sounds are generated. For a solid introduction, you could have a look at a book such as Fletcher and Rossings The Physics of Musical Instruments. There's lots of stuff on the web too, you might want to have a look at a the primer James Clark has here\nHaving at least a skim over this sort of stuff will give you an idea of what you are up against. Modeling physical instruments accurately is very difficult!\nIf what you want to do is have something that sounds physical, rather something that sounds like instrument X, your job is a bit easier. You can build up frequencies quite easily and stack them together, add a little noise, and you'll get something that at least doesn't sound anything like a pure tone.\nReading a bit about Fourier analysis in general will help, as will Frequency Modulation (FM) techniques.\nHave fun!\n", "I agree that this is very non-trivial and there's no set \"right way\", but you should consider starting with a (or making your own) MIDI SoundFont.\n", "As other people said, not a trivial topic at all. There are challenges both at the programming side of things (especially if you care about low-latency) and the synthesis part. A goldmine for sound synthesis is the page by Julius O. Smith. There is a lot of techniques for synthesis http://ccrma-www.stanford.edu/~jos/.\n" ]
[ 16, 8, 1, 0 ]
[]
[]
[ "alsa", "numpy", "python" ]
stackoverflow_0000790960_alsa_numpy_python.txt
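A minimal additive-synthesis sketch in the spirit of the answers to the sound-synthesis question above: stack a few harmonics on the fundamental, add a little noise, and shape the result with an envelope. It reuses numpy and the 44100 Hz rate from the question; the harmonic weights, noise level and decay constant are arbitrary illustrative choices rather than anything the answers prescribe.

import numpy

RATE = 44100

def organic_tone(freq=220.0, duration=1.0,
                 harmonics=(1.0, 0.5, 0.25, 0.125), noise=0.01):
    # Fundamental plus weaker harmonics, light noise, exponential decay.
    t = numpy.arange(int(RATE * duration), dtype=numpy.float32) / RATE
    signal = numpy.zeros_like(t)
    for n, amplitude in enumerate(harmonics):
        signal += amplitude * numpy.sin(2 * numpy.pi * freq * (n + 1) * t)
    signal += noise * (numpy.random.random_sample(len(t)) - 0.5)  # a little noise
    signal *= numpy.exp(-3.0 * t)                                 # simple decay envelope
    return (signal / numpy.abs(signal).max()).astype(numpy.float32)

The returned buffer can be written to the PCM device set up in the question in the same 4096-frame chunks, e.g. pcm.write(organic_tone()[i*4096:(i+1)*4096].tostring()) inside a loop.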
Q: trouble with pamie I'm having some strange trouble with pamie: http://pamie.sourceforge.net/ . I have written a script to do some port (25) forwarding based on a recepie that I found on the web, Here is the code that matters: # forwardc2s(source, destination): # forwards from client to server. # Tries to post the message to ICE. def forwardc2s(source, destination): string = ' ' message = '' while string: string = source.recv(1024) if string: if string[:4] == 'DATA' or message <> '': # Put the entire text of the email into a variable: message message = message + string destination.sendall(string) else: posttotracker(message) # post message to tracker. source.shutdown(socket.SHUT_RD) destination.shutdown(socket.SHUT_WR) The 'posttotracker' function is not yet complete... all that it contains is: def posttotracker(message): ie = PAMIE('http://google.com/') This gives me an error as follows: Unhandled exception in thread started by <function forwardc2s at 0x00E6C0B0> Traceback (most recent call last): File "main.py", line 2398, in forwardc2s posttotracker(message) # post message to tracker. File "main.py", line 2420, in posttotracker ie = PAMIE('http://google.com/') File "main.py", line 58, in __init__ self._ie = win32com.client.dynamic.Dispatch('InternetExplorer.Application') File "c:\Python26\lib\site-packages\win32com\client\dynamic.py", line 112, in Dispatch IDispatch, userName = _GetGoodDispatchAndUserName(IDispatch,userName,clsctx) File "c:\Python26\lib\site-packages\win32com\client\dynamic.py", line 104, in _GetGoodDispatchAndUserName return (_GetGoodDispatch(IDispatch, clsctx), userName) File "c:\Python26\lib\site-packages\win32com\client\dynamic.py", line 84, in _ GetGoodDispatch IDispatch = pythoncom.CoCreateInstance(IDispatch, None, clsctx, pythoncom.II D_IDispatch) pywintypes.com_error: (-2147221008, 'CoInitialize has not been called.', None, N one) The funny thing is that if I do the very same thing outside of this function (in the main function for example) the library works exactly as expected. Any ideas? Pardon me if this information is not sufficient, I'm just a starting python coder. A: The PAMIE object does not work within threads!!! I was originally starting forwardc2s as a thread. When I just call it as a function instead, everything works fine! Please consider this question resolved... with great thanks to the rubber duck.
trouble with pamie
I'm having some strange trouble with pamie: http://pamie.sourceforge.net/ . I have written a script to do some port (25) forwarding based on a recepie that I found on the web, Here is the code that matters: # forwardc2s(source, destination): # forwards from client to server. # Tries to post the message to ICE. def forwardc2s(source, destination): string = ' ' message = '' while string: string = source.recv(1024) if string: if string[:4] == 'DATA' or message <> '': # Put the entire text of the email into a variable: message message = message + string destination.sendall(string) else: posttotracker(message) # post message to tracker. source.shutdown(socket.SHUT_RD) destination.shutdown(socket.SHUT_WR) The 'posttotracker' function is not yet complete... all that it contains is: def posttotracker(message): ie = PAMIE('http://google.com/') This gives me an error as follows: Unhandled exception in thread started by <function forwardc2s at 0x00E6C0B0> Traceback (most recent call last): File "main.py", line 2398, in forwardc2s posttotracker(message) # post message to tracker. File "main.py", line 2420, in posttotracker ie = PAMIE('http://google.com/') File "main.py", line 58, in __init__ self._ie = win32com.client.dynamic.Dispatch('InternetExplorer.Application') File "c:\Python26\lib\site-packages\win32com\client\dynamic.py", line 112, in Dispatch IDispatch, userName = _GetGoodDispatchAndUserName(IDispatch,userName,clsctx) File "c:\Python26\lib\site-packages\win32com\client\dynamic.py", line 104, in _GetGoodDispatchAndUserName return (_GetGoodDispatch(IDispatch, clsctx), userName) File "c:\Python26\lib\site-packages\win32com\client\dynamic.py", line 84, in _ GetGoodDispatch IDispatch = pythoncom.CoCreateInstance(IDispatch, None, clsctx, pythoncom.II D_IDispatch) pywintypes.com_error: (-2147221008, 'CoInitialize has not been called.', None, N one) The funny thing is that if I do the very same thing outside of this function (in the main function for example) the library works exactly as expected. Any ideas? Pardon me if this information is not sufficient, I'm just a starting python coder.
[ "The PAMIE object does not work within threads!!!\nI was originally starting forwardc2s as a thread. When I just call it as a function instead, everything works fine!\nPlease consider this question resolved... with great thanks to the rubber duck.\n" ]
[ 1 ]
[]
[]
[ "debugging", "pamie", "python" ]
stackoverflow_0000994627_debugging_pamie_python.txt
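A note on the pamie record above: the underlying complaint in the traceback, 'CoInitialize has not been called', is what pywin32 raises when a COM object is created in a thread that never initialised COM, which fits the observation that the same call works outside the threaded path. If the threaded version is ever needed again, the usual pywin32 pattern is to initialise COM per thread; a sketch reusing the PAMIE class from the question (the try/finally shape is only a suggestion):

import pythoncom

def posttotracker(message):
    pythoncom.CoInitialize()       # COM must be initialised in each new thread
    try:
        ie = PAMIE('http://google.com/')
        # ... drive the browser and post the message here ...
    finally:
        pythoncom.CoUninitialize()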
Q: Building a Python shared object binding with cmake, which depends upon external libraries We have a c file called dbookpy.c, which will provide a Python binding some C functions. Next we decided to build a proper .so with cmake, but it seems we are doing something wrong with regards to linking the external library 'libdbook' in the binding: The CMakeLists.txt is as follows: PROJECT(dbookpy) FIND_PACKAGE(PythonInterp) FIND_PACKAGE(PythonLibs) INCLUDE_DIRECTORIES(${PYTHON_INCLUDE_PATH}) INCLUDE_DIRECTORIES("/usr/local/include") LINK_DIRECTORIES(/usr/local/lib) OPTION(BUILD_SHARED_LIBS "turn OFF for .a libs" ON) ADD_LIBRARY(dbookpy dbookpy) SET_TARGET_PROPERTIES(dbookpy PROPERTIES IMPORTED_LINK_INTERFACE_LIBRARIES dbook) SET_TARGET_PROPERTIES(dbookpy PROPERTIES LINKER_LANGUAGE C) #SET_TARGET_PROPERTIES(dbookpy PROPERTIES LINK_INTERFACE_LIBRARIES dbook) #SET_TARGET_PROPERTIES(dbookpy PROPERTIES ENABLE_EXPORTS ON) #TARGET_LINK_LIBRARIES(dbookpy LINK_INTERFACE_LIBRARIES dbook) SET_TARGET_PROPERTIES(dbookpy PROPERTIES SOVERSION 0.1 VERSION 0.1 ) Then we build: x31% mkdir build x31% cd build x31% cmake .. -- Check for working C compiler: /usr/bin/gcc -- Check for working C compiler: /usr/bin/gcc -- works -- Check size of void* -- Check size of void* - done -- Check for working CXX compiler: /usr/bin/c++ -- Check for working CXX compiler: /usr/bin/c++ -- works -- Configuring done -- Generating done -- Build files have been written to: /home/edd/dbook2/dbookpy/build x31% make Scanning dependencies of target dbookpy [100%] Building C object CMakeFiles/dbookpy.dir/dbookpy.o Linking C shared library libdbookpy.so [100%] Built target dbookpy So far so good. Test in Python: x31% python Python 2.5.4 (r254:67916, Apr 24 2009, 15:28:40) [GCC 3.3.5 (propolice)] on openbsd4 Type "help", "copyright", "credits" or "license" for more information. >>> import libdbookpy python:./libdbookpy.so: undefined symbol 'dbook_isbn_13_to_10' python:./libdbookpy.so: undefined symbol 'dbook_isbn_10_to_13' python:./libdbookpy.so: undefined symbol 'dbook_sanitize' python:./libdbookpy.so: undefined symbol 'dbook_check_isbn' python:./libdbookpy.so: undefined symbol 'dbook_get_isbn_details' Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: Cannot load specified object Hmmm. Linker error. Looks like it is not linking libdbook: x31% ldd libdbookpy.so libdbookpy.so: Start End Type Open Ref GrpRef Name 05ae8000 25aec000 dlib 1 0 0 /home/edd/dbook2/dbookpy/build/libdbookpy.so.0.1 No it is not. A proper linkage to libdbook looks like this: x31% ldd /usr/local/bin/dbook-test /usr/local/bin/dbook-test: Start End Type Open Ref GrpRef Name 1c000000 3c004000 exe 1 0 0 /usr/local/bin/dbook-test 08567000 28571000 rlib 0 2 0 /usr/lib/libm.so.5.0 09ef7000 29efb000 rlib 0 1 0 /usr/local/lib/libdbook.so.0.1 053a0000 253d8000 rlib 0 1 0 /usr/lib/libc.so.50.1 0c2bc000 0c2bc000 rtld 0 1 0 /usr/libexec/ld.so Does anyone have any ideas why this is not working? Many thanks. Edd A: You need to link dbookpy against dbook: target_link_libraries(dbookpy dbook) Adding that just after the line ADD_LIBRARY(dbookpy dbookpy) should do it. I see you are using IMPORTED - the help for IMPORTED_LINK_INTERFACE_LIBRARIES reads: Lists libraries whose interface is included when an IMPORTED library target is linked to another target. The libraries will be included on the link line for the target. Unlike the LINK_INTERFACE_LIBRARIES property, this property applies to all imported target types, including STATIC libraries. 
This property is ignored for non-imported targets. So that means that "dbook", which is in /usr/local/lib, should be an imported library: add_library(dbook SHARED IMPORTED) Is that really what you wanted? I mean, imported libraries are ones that are built outside CMake but are included as part of your source tree. The dbook library seems to be installed or at least expected to be installed. I don't think you need imports here - it seems to be a regular linkage problem. But this may just be a side effect of creating a minimal example to post here. By the sounds of it, in order to get the linked libraries and link directories sorted out, I would probably use find_library(), which will look in sensible default places like /usr/local/lib, and then append that to the link libraries. find_library(DBOOK_LIBRARY dbook REQUIRED) target_link_libraries(dbookpy ${DBOOK_LIBRARY}) Anyway, seems like you have it sorted now.
Building a Python shared object binding with cmake, which depends upon external libraries
We have a c file called dbookpy.c, which will provide a Python binding some C functions. Next we decided to build a proper .so with cmake, but it seems we are doing something wrong with regards to linking the external library 'libdbook' in the binding: The CMakeLists.txt is as follows: PROJECT(dbookpy) FIND_PACKAGE(PythonInterp) FIND_PACKAGE(PythonLibs) INCLUDE_DIRECTORIES(${PYTHON_INCLUDE_PATH}) INCLUDE_DIRECTORIES("/usr/local/include") LINK_DIRECTORIES(/usr/local/lib) OPTION(BUILD_SHARED_LIBS "turn OFF for .a libs" ON) ADD_LIBRARY(dbookpy dbookpy) SET_TARGET_PROPERTIES(dbookpy PROPERTIES IMPORTED_LINK_INTERFACE_LIBRARIES dbook) SET_TARGET_PROPERTIES(dbookpy PROPERTIES LINKER_LANGUAGE C) #SET_TARGET_PROPERTIES(dbookpy PROPERTIES LINK_INTERFACE_LIBRARIES dbook) #SET_TARGET_PROPERTIES(dbookpy PROPERTIES ENABLE_EXPORTS ON) #TARGET_LINK_LIBRARIES(dbookpy LINK_INTERFACE_LIBRARIES dbook) SET_TARGET_PROPERTIES(dbookpy PROPERTIES SOVERSION 0.1 VERSION 0.1 ) Then we build: x31% mkdir build x31% cd build x31% cmake .. -- Check for working C compiler: /usr/bin/gcc -- Check for working C compiler: /usr/bin/gcc -- works -- Check size of void* -- Check size of void* - done -- Check for working CXX compiler: /usr/bin/c++ -- Check for working CXX compiler: /usr/bin/c++ -- works -- Configuring done -- Generating done -- Build files have been written to: /home/edd/dbook2/dbookpy/build x31% make Scanning dependencies of target dbookpy [100%] Building C object CMakeFiles/dbookpy.dir/dbookpy.o Linking C shared library libdbookpy.so [100%] Built target dbookpy So far so good. Test in Python: x31% python Python 2.5.4 (r254:67916, Apr 24 2009, 15:28:40) [GCC 3.3.5 (propolice)] on openbsd4 Type "help", "copyright", "credits" or "license" for more information. >>> import libdbookpy python:./libdbookpy.so: undefined symbol 'dbook_isbn_13_to_10' python:./libdbookpy.so: undefined symbol 'dbook_isbn_10_to_13' python:./libdbookpy.so: undefined symbol 'dbook_sanitize' python:./libdbookpy.so: undefined symbol 'dbook_check_isbn' python:./libdbookpy.so: undefined symbol 'dbook_get_isbn_details' Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: Cannot load specified object Hmmm. Linker error. Looks like it is not linking libdbook: x31% ldd libdbookpy.so libdbookpy.so: Start End Type Open Ref GrpRef Name 05ae8000 25aec000 dlib 1 0 0 /home/edd/dbook2/dbookpy/build/libdbookpy.so.0.1 No it is not. A proper linkage to libdbook looks like this: x31% ldd /usr/local/bin/dbook-test /usr/local/bin/dbook-test: Start End Type Open Ref GrpRef Name 1c000000 3c004000 exe 1 0 0 /usr/local/bin/dbook-test 08567000 28571000 rlib 0 2 0 /usr/lib/libm.so.5.0 09ef7000 29efb000 rlib 0 1 0 /usr/local/lib/libdbook.so.0.1 053a0000 253d8000 rlib 0 1 0 /usr/lib/libc.so.50.1 0c2bc000 0c2bc000 rtld 0 1 0 /usr/libexec/ld.so Does anyone have any ideas why this is not working? Many thanks. Edd
[ "You need to link dbookpy against dbook:\ntarget_link_libraries(dbookpy dbook)\n\nAdding that just after the line ADD_LIBRARY(dbookpy dbookpy) should do it.\nI see you are using IMPORTED - the help for IMPORTED_LINK_INTERFACE_LIBRARIES reads:\n Lists libraries whose interface is included when an IMPORTED library target is\n linked to another target. The libraries will be included on the link line for\n the target. Unlike the LINK_INTERFACE_LIBRARIES property, this property\n applies to all imported target types, including STATIC libraries. This\n property is ignored for non-imported targets.\n\nSo that means that \"dbook\", which is in /usr/local/lib, should be an imported library:\n add_library(dbook SHARED IMPORTED)\n\nIs that really what you wanted? I mean, imported libraries are ones that are built outside CMake but are included as part of your source tree. The dbook library seems to be installed or at least expected to be installed. I don't think you need imports here - it seems to be a regular linkage problem. But this may just be a side effect of creating a minimal example to post here.\nBy the sounds of it, in order to get the linked libraries and link directories sorted out, I would probably use find_library(), which will look in sensible default places like /usr/local/lib, and then append that to the link libraries.\nfind_library(DBOOK_LIBRARY dbook REQUIRED)\ntarget_link_libraries(dbookpy ${DBOOK_LIBRARY}) \n\nAnyway, seems like you have it sorted now.\n" ]
[ 4 ]
[]
[]
[ "c", "cmake", "linker", "python", "unix" ]
stackoverflow_0000992068_c_cmake_linker_python_unix.txt
Q: How to strip the 8th bit in a KOI8-R encoded character? How to strip the 8th bit in a KOI8-R encoded character so as to have translit for a Russian letter? In particular, how to make it in Python? A: Assuming s is a KOI8-R encoded string you could try this: >>> s = u'Код Обмена Информацией, 8 бит'.encode('koi8-r') >>> s >>> '\xeb\xcf\xc4 \xef\xc2\xcd\xc5\xce\xc1 \xe9\xce\xc6\xcf\xd2\xcd\xc1\xc3\xc9\xc5\xca, 8 \xc2\xc9\xd4' >>> print ''.join([chr(ord(c) & 0x7F) for c in s]) >>> kOD oBMENA iNFORMACIEJ, 8 BIT The 8th bit is stripped by the (ord(c) & 0x7F). A: I'm not exactly sure what you want, but if you want to zero the 8th bit, it can be done like this: character = character & ~(1 << 7) A: Here is one way: import array mask = ~(1 << 7) def convert(koistring): bytes = array.array('B', koistring) for i in range(len(bytes)): bytes[i] &= mask return bytes.tostring() test = u'Русский Текст'.encode('koi8-r') print convert(test) # rUSSKIJ tEKST I don't know if Python provides a cleaner way to do this kind of operations :)
How to strip the 8th bit in a KOI8-R encoded character?
How to strip the 8th bit in a KOI8-R encoded character so as to have translit for a Russian letter? In particular, how to make it in Python?
[ "Assuming s is a KOI8-R encoded string you could try this:\n>>> s = u'Код Обмена Информацией, 8 бит'.encode('koi8-r')\n>>> s\n>>> '\\xeb\\xcf\\xc4 \\xef\\xc2\\xcd\\xc5\\xce\\xc1 \\xe9\\xce\\xc6\\xcf\\xd2\\xcd\\xc1\\xc3\\xc9\\xc5\\xca, 8 \\xc2\\xc9\\xd4'\n\n>>> print ''.join([chr(ord(c) & 0x7F) for c in s])\n>>> kOD oBMENA iNFORMACIEJ, 8 BIT\n\nThe 8th bit is stripped by the (ord(c) & 0x7F).\n", "I'm not exactly sure what you want, but if you want to zero the 8th bit, it can be done like this:\ncharacter = character & ~(1 << 7)\n\n", "Here is one way:\nimport array\n\nmask = ~(1 << 7)\n\ndef convert(koistring):\n bytes = array.array('B', koistring)\n for i in range(len(bytes)):\n bytes[i] &= mask\n\n return bytes.tostring()\n\ntest = u'Русский Текст'.encode('koi8-r')\nprint convert(test) # rUSSKIJ tEKST\n\nI don't know if Python provides a cleaner way to do this kind of operations :)\n" ]
[ 3, 1, 1 ]
[]
[]
[ "encoding", "python" ]
stackoverflow_0000994710_encoding_python.txt
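For the KOI8-R bit-stripping question above there is also a table-based spelling of the same operation using str.translate, which may be the "cleaner way" the last answer was wondering about; the behaviour matches the loop versions shown in the answers.

import string

# Map every byte value to the same value with the high bit cleared.
STRIP_HIGH_BIT = string.maketrans(
    ''.join(chr(i) for i in range(256)),
    ''.join(chr(i & 0x7F) for i in range(256)))

def strip_high_bit(koistring):
    return koistring.translate(STRIP_HIGH_BIT)

print strip_high_bit(u'Русский Текст'.encode('koi8-r'))  # rUSSKIJ tEKST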
Q: Redirecting console output to a Python string Possible Duplicate: How can I capture the stdout output of a child process? I'm running a cat-like program in bash from Python: import os os.system('cat foo.txt') How do I get the output of the shell command back in the Python script, something like: s = somefunction('cat foo.txt') ? UPD: Here is a related thread. A: Use the subprocess module. from subprocess import Popen, PIPE (stdout, stderr) = Popen(["cat","foo.txt"], stdout=PIPE).communicate() print stdout
Redirecting console output to a Python string
Possible Duplicate: How can I capture the stdout output of a child process? I'm running a cat-like program in bash from Python: import os os.system('cat foo.txt') How do I get the output of the shell command back in the Python script, something like: s = somefunction('cat foo.txt') ? UPD: Here is a related thread.
[ "Use the subprocess module.\nfrom subprocess import Popen, PIPE\n\n(stdout, stderr) = Popen([\"cat\",\"foo.txt\"], stdout=PIPE).communicate()\nprint stdout\n\n" ]
[ 16 ]
[]
[]
[ "bash", "python" ]
stackoverflow_0000994902_bash_python.txt
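The subprocess answer above can be wrapped into exactly the somefunction(...) shape the question asks for; shell=True lets the command be given as a single string. A sketch, not part of the original answer:

from subprocess import Popen, PIPE

def run(command):
    # Run a shell command string and return its standard output.
    process = Popen(command, shell=True, stdout=PIPE, stderr=PIPE)
    stdout, stderr = process.communicate()
    return stdout

s = run('cat foo.txt')
print s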
Q: Any good AJAX framework for Google App Engine apps? I am trying to implement AJAX in my Google App Engine application, and so I am looking for a good AJAX framework that will help me. Anyone has any idea? I am thinking about Google Web Toolkit, how good it is in terms of creating AJAX for Google App Engine? A: As Google Web Toolkit is a subset of Java it works best when you Java at the backend too. Since Google App Engine is currently Python only I think you'd have to do a lot of messing about to get your server and client to talk nicely to each other. jQuery seems to be the most popular JavaScript library option in the AJAX Tag at DjangoSnippets.com. Edit: The above is only true of Google App Engine applications written in Python. As Google App Engine now supports Java, GWT could now be a good choice for writing an AJAX front end. Google even have a tutorial showing you how to do it. A: A nice way is to use an AJAX library is to take advantage of Google's AJAX Libraries API service. This is a bit faster and cleaner than downloading the JS and putting it in your /static/ folder and doesn't eat into your disk quota. In your javascript you would just put, for example: google.load("jquery", "1.3.2"); and/or google.load(google.load("dojo", "1.3.0"); Somewhere in your header you would put something like: <script src="http://www.google.com/jsapi?key=your-key-here"></script> And that's all you need to use Google's API libraries. A: There is no reason why you shouldn't use GAE and Google Web Toolkit (GWT) together. You write your backend code in Python and the frontend code in Java (and possibly some JavaScript), which is then compiled to JavaScript. When using another AJAX framework you will also have this difference between server and client side language. GWT has features that make remote invocation of java code on the server easier, but these are entirely optional. You can just use JSON or XML interfaces, just like with other AJAX frameworks. GWT 1.5 also comes with JavaScript Overlay Types, that basically allow you to treat a piece of JSON data like a Java object when developing the client side code. You can read more about this here. Update: Now that Google has added Java support for Google App Engine, you can develop both backend and frontend code in Java on a full Google stack - if you like. There is a nice Eclipse plugin from Google that makes it very easy to develop and deploy applications that use GAE, GWT or both. A: Here is how we've implemented Ajax on the Google App Engine, but the idea can be generalized to other platforms. We have a handler script for Ajax requests that responds -mostly- with JSON responses. The structure looks something like this (this is an excerpt from a standard GAE handler script): def Get(self, user): self.handleRequest() def Post(self, user): self.handleRequest() def handleRequest(self): ''' A dictionary that maps an operation name to a command. aka: a dispatcher map. ''' operationMap = {'getfriends': [GetFriendsCommand], 'requestfriend': [RequestFriendCommand, [self.request.get('id')]], 'confirmfriend': [ConfirmFriendCommand, [self.request.get('id')]], 'ignorefriendrequest': [IgnoreFriendRequestCommand, [self.request.get('id')]], 'deletefriend': [DeleteFriendCommand, [self.request.get('id')]]} # Delegate the request to the matching command class here. The commands are a simple implementation of the command pattern: class Command(): """ A simple command pattern. """ _valid = False def validate(self): """ Validates input. Sanitize user input here. 
""" self._valid = True def _do_execute(self): """ Executes the command. Override this in subclasses. """ pass @property def valid(self): return self._valid def execute(self): """ Override _do_execute rather than this. """ try: self.validate() except: raise return self._do_execute() # Make it easy to invoke commands: # So command() is equivalent to command.execute() __call__ = execute On the client side, we create an Ajax delegate. Prototype.js makes this easy to write and understand. Here is an excerpt: /** * Ajax API * * You should create a new instance for every call. */ var AjaxAPI = Class.create({ /* Service URL */ url: HOME_PATH+"ajax/", /* Function to call on results */ resultCallback: null, /* Function to call on faults. Implementation not shown */ faultCallback: null, /* Constructor/Initializer */ initialize: function(resultCallback, faultCallback){ this.resultCallback = resultCallback; this.faultCallback = faultCallback; }, requestFriend: function(friendId){ return new Ajax.Request(this.url + '?op=requestFriend', {method: 'post', parameters: {'id': friendId}, onComplete: this.resultCallback }); }, getFriends: function(){ return new Ajax.Request(this.url + '?op=getfriends', {method: 'get', onComplete: this.resultCallback }); } }); to call the delegate, you do something like: new AjaxApi(resultHandlerFunction, faultHandlerFunction).getFriends() I hope this helps! A: I'd recommend looking into a pure javascript framework (probably Jquery) for your client-side code, and write JSON services in python- that seems to be the easiest / bestest way to go. Google Web Toolkit lets you write the UI in Java and compile it to javascript. As Dave says, it may be a better choice where the backend is in Java, as it has nice RPC hooks for that case. A: You may want to have a look at Pyjamas (http://pyjs.org/), which is "GWT for Python". A: try also GQuery for GWT. This is Java code: public void onModuleLoad() { $("div").css("color", "red").click(new Function() { public void f(Element e) { Window.alert("Hello"); $(e).as(Effects).fadeOut(); } }); } Being Java code resulting in somewhat expensive compile-time (Java->JavaScript) optimizations and easier refactoring. Nice, it isn't? A: jQuery is a fine library, but also check out the Prototype JavaScript framework. It really turns JavaScript from being an occasionally awkward language into a beautiful and elegant language. A: If you want to be able to invoke method calls from JavaScript to Python, JSON-RPC works well with Google App Engine. See Google's article, "Using AJAX to Enable Client RPC Requests", for details. A: I'm currently using JQuery for my GAE app and it works beautifully for me. I have a chart (google charts) that is dynamic and uses an Ajax call to grab a JSON string. It really seems to work fine for me. A: Google has recently announced the Java version of Google App Engine. This release also provides an Eclipse plugin that makes developing GAE applications with GWT easier. See details here: http://code.google.com/appengine/docs/java/overview.html Of course, it would require you to rewrite your application in Java instead of python, but as someone who's worked with GWT, let me tell you, the advantages of using a modern IDE on your AJAX codebase are totally worth it.
Any good AJAX framework for Google App Engine apps?
I am trying to implement AJAX in my Google App Engine application, and so I am looking for a good AJAX framework that will help me. Anyone has any idea? I am thinking about Google Web Toolkit, how good it is in terms of creating AJAX for Google App Engine?
[ "As Google Web Toolkit is a subset of Java it works best when you Java at the backend too. Since Google App Engine is currently Python only I think you'd have to do a lot of messing about to get your server and client to talk nicely to each other.\njQuery seems to be the most popular JavaScript library option in the AJAX Tag at DjangoSnippets.com.\nEdit: The above is only true of Google App Engine applications written in Python. As Google App Engine now supports Java, GWT could now be a good choice for writing an AJAX front end. Google even have a tutorial showing you how to do it.\n", "A nice way is to use an AJAX library is to take advantage of Google's AJAX Libraries API service. This is a bit faster and cleaner than downloading the JS and putting it in your /static/ folder and doesn't eat into your disk quota.\nIn your javascript you would just put, for example:\ngoogle.load(\"jquery\", \"1.3.2\");\n\nand/or\ngoogle.load(google.load(\"dojo\", \"1.3.0\");\n\nSomewhere in your header you would put something like:\n<script src=\"http://www.google.com/jsapi?key=your-key-here\"></script>\n\nAnd that's all you need to use Google's API libraries.\n", "There is no reason why you shouldn't use GAE and Google Web Toolkit (GWT) together. You write your backend code in Python and the frontend code in Java (and possibly some JavaScript), which is then compiled to JavaScript. When using another AJAX framework you will also have this difference between server and client side language.\nGWT has features that make remote invocation of java code on the server easier, but these are entirely optional. You can just use JSON or XML interfaces, just like with other AJAX frameworks.\nGWT 1.5 also comes with JavaScript Overlay Types, that basically allow you to treat a piece of JSON data like a Java object when developing the client side code. You can read more about this here.\nUpdate:\nNow that Google has added Java support for Google App Engine, you can develop both backend and frontend code in Java on a full Google stack - if you like. There is a nice Eclipse plugin from Google that makes it very easy to develop and deploy applications that use GAE, GWT or both.\n", "Here is how we've implemented Ajax on the Google App Engine, but the idea can be generalized to other platforms.\nWe have a handler script for Ajax requests that responds -mostly- with JSON responses. The structure looks something like this (this is an excerpt from a standard GAE handler script):\ndef Get(self, user):\n self.handleRequest()\n\ndef Post(self, user):\n self.handleRequest()\n\n\ndef handleRequest(self): \n '''\n A dictionary that maps an operation name to a command.\n aka: a dispatcher map.\n '''\n operationMap = {'getfriends': [GetFriendsCommand],\n 'requestfriend': [RequestFriendCommand, [self.request.get('id')]],\n 'confirmfriend': [ConfirmFriendCommand, [self.request.get('id')]],\n 'ignorefriendrequest': [IgnoreFriendRequestCommand, [self.request.get('id')]],\n 'deletefriend': [DeleteFriendCommand, [self.request.get('id')]]}\n\n # Delegate the request to the matching command class here.\n\nThe commands are a simple implementation of the command pattern:\nclass Command():\n \"\"\" A simple command pattern.\n \"\"\"\n _valid = False\n def validate(self):\n \"\"\" Validates input. Sanitize user input here.\n \"\"\"\n self._valid = True\n\n def _do_execute(self):\n \"\"\" Executes the command. 
\n Override this in subclasses.\n \"\"\"\n pass\n\n @property\n def valid(self):\n return self._valid\n\n def execute(self):\n \"\"\" Override _do_execute rather than this.\n \"\"\" \n try:\n self.validate()\n except:\n raise\n return self._do_execute()\n\n # Make it easy to invoke commands:\n # So command() is equivalent to command.execute()\n __call__ = execute\n\nOn the client side, we create an Ajax delegate. Prototype.js makes this easy to write and understand. Here is an excerpt:\n/** \n * Ajax API\n *\n * You should create a new instance for every call.\n */\nvar AjaxAPI = Class.create({\n /* Service URL */\n url: HOME_PATH+\"ajax/\",\n\n /* Function to call on results */\n resultCallback: null,\n\n /* Function to call on faults. Implementation not shown */\n faultCallback: null,\n\n /* Constructor/Initializer */\n initialize: function(resultCallback, faultCallback){\n this.resultCallback = resultCallback;\n this.faultCallback = faultCallback;\n },\n\n requestFriend: function(friendId){\n return new Ajax.Request(this.url + '?op=requestFriend', \n {method: 'post',\n parameters: {'id': friendId},\n onComplete: this.resultCallback\n }); \n },\n\n getFriends: function(){\n return new Ajax.Request(this.url + '?op=getfriends', \n {method: 'get',\n onComplete: this.resultCallback\n }); \n }\n\n});\n\nto call the delegate, you do something like:\nnew AjaxApi(resultHandlerFunction, faultHandlerFunction).getFriends()\n\nI hope this helps!\n", "I'd recommend looking into a pure javascript framework (probably Jquery) for your client-side code, and write JSON services in python- that seems to be the easiest / bestest way to go.\nGoogle Web Toolkit lets you write the UI in Java and compile it to javascript. As Dave says, it may be a better choice where the backend is in Java, as it has nice RPC hooks for that case.\n", "You may want to have a look at Pyjamas (http://pyjs.org/), which is \"GWT for Python\". \n", "try also GQuery for GWT. This is Java code:\npublic void onModuleLoad() { \n $(\"div\").css(\"color\", \"red\").click(new Function() { \n public void f(Element e) { \n Window.alert(\"Hello\"); \n $(e).as(Effects).fadeOut(); \n } \n }); \n} \n\nBeing Java code resulting in somewhat expensive compile-time (Java->JavaScript) optimizations and easier refactoring. \nNice, it isn't?\n", "jQuery is a fine library, but also check out the Prototype JavaScript framework. It really turns JavaScript from being an occasionally awkward language into a beautiful and elegant language.\n", "If you want to be able to invoke method calls from JavaScript to Python, JSON-RPC works well with Google App Engine. See Google's article, \"Using AJAX to Enable Client RPC Requests\", for details.\n", "I'm currently using JQuery for my GAE app and it works beautifully for me. I have a chart (google charts) that is dynamic and uses an Ajax call to grab a JSON string. It really seems to work fine for me.\n", "Google has recently announced the Java version of Google App Engine. This release also provides an Eclipse plugin that makes developing GAE applications with GWT easier.\nSee details here: http://code.google.com/appengine/docs/java/overview.html\nOf course, it would require you to rewrite your application in Java instead of python, but as someone who's worked with GWT, let me tell you, the advantages of using a modern IDE on your AJAX codebase are totally worth it.\n" ]
[ 12, 7, 4, 4, 3, 2, 2, 1, 0, 0, 0 ]
[]
[]
[ "ajax", "google_app_engine", "python" ]
stackoverflow_0000053997_ajax_google_app_engine_python.txt
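One of the App Engine answers above describes an Ajax handler built around a dispatcher map that returns JSON. Collapsed into a single webapp handler it might look roughly like this; the class, method and operation names are illustrative, and the simplejson import assumes the copy bundled with the SDK's Django:

from google.appengine.ext import webapp
from django.utils import simplejson

class AjaxHandler(webapp.RequestHandler):
    # Dispatches ?op=... requests and answers with JSON.

    def get(self):
        self._handle()

    def post(self):
        self._handle()

    def _handle(self):
        operations = {'getfriends': self._get_friends}
        operation = operations.get(self.request.get('op'))
        if operation is None:
            self.error(400)
            return
        self.response.headers['Content-Type'] = 'application/json'
        self.response.out.write(simplejson.dumps(operation()))

    def _get_friends(self):
        # Placeholder: a real handler would query the datastore here.
        return ['alice', 'bob']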
Q: ctypes in python, problem calling a function in a DLL Hey! as you might have noticed I have an annoying issue with ctypes. I'm trying to communicate with an instrument and to do so I have to use ctypes to communicate with the DLL driver. so far I've managed to export the DLL by doing this: >>> from ctypes import * >>>maury = WinDLL( 'MLibTuners') >>> maury (WinDLL 'MlibTuners', handle 10000000 at 9ef9d0) >>> maury.get_tuner_driver_version() (_FuncPtr object at 0x009F6738) >>> version_string = create_string_buffer(80) >>> maury.get_tuner_driver_version(version_string) 2258920 >>> print version_string.value 'Maury Microwave MT993V04 Tuner Driver DLL, Version 1.60.00, 07/25/2007' And it works pretty well, according to the documentation it is supposed to save the Tuner Driver DLL in the 80 byte string given as a parameter. However when I try to use the function called add_tuner it fails. This is what the documentation says: short add_tuner(short tuner_number, char model[], short serial_number, short ctlr_num, short ctlr_port, short *no_of_motors, long max_range[], double *fmin, double *fmax, double *fcrossover, char error_string[]) this is how I tried to call the function above: The parameters that are changed are all the pointers and max_range[], according to the manual the values below are correct too, i just don't know why I keep getting a windows access violation writing 0x00000000 no_motors = pointer(c_short()) f_min = pointer(c_double()) f_max = pointer(c_double()) f_crossover = pointer(c_double()) maury.add_tuner(c_short(0), c_char_p('MT982EU'), c_short(serial_number), c_short(0), c_short(1),no_motors, c_long(), f_min,f_max,f_crossover, create_string_buffer(80)) The serial number is given however censored by refering to a variable. Someone know what to do?, do you see any errors with my input? Thanks /Mazdak A: I figure it's the value you pass at the long max_range[] argument. The function expects a pointer to a long integer there (it asks for an array of long integers), but you're passing a long value of zero (result of the c_long() call), which is implicitly cast to a null pointer. I suspect the function then tries to write to the address passed at max_range, ie. the null pointer, hence the access violation at address 0x00000000. To create an array of longs to pass in max_range, you first create the array type by multiplying the array data type with the size of the array (somewhat verbose for clarity): array_size = 3 ThreeLongsArrayType = c_long * array_size You can then instantiate an array like you would with any other Python class: array = ThreeLongsArrayType()
ctypes in python, problem calling a function in a DLL
Hey! as you might have noticed I have an annoying issue with ctypes. I'm trying to communicate with an instrument and to do so I have to use ctypes to communicate with the DLL driver. so far I've managed to export the DLL by doing this: >>> from ctypes import * >>>maury = WinDLL( 'MLibTuners') >>> maury (WinDLL 'MlibTuners', handle 10000000 at 9ef9d0) >>> maury.get_tuner_driver_version() (_FuncPtr object at 0x009F6738) >>> version_string = create_string_buffer(80) >>> maury.get_tuner_driver_version(version_string) 2258920 >>> print version_string.value 'Maury Microwave MT993V04 Tuner Driver DLL, Version 1.60.00, 07/25/2007' And it works pretty well, according to the documentation it is supposed to save the Tuner Driver DLL in the 80 byte string given as a parameter. However when I try to use the function called add_tuner it fails. This is what the documentation says: short add_tuner(short tuner_number, char model[], short serial_number, short ctlr_num, short ctlr_port, short *no_of_motors, long max_range[], double *fmin, double *fmax, double *fcrossover, char error_string[]) this is how I tried to call the function above: The parameters that are changed are all the pointers and max_range[], according to the manual the values below are correct too, i just don't know why I keep getting a windows access violation writing 0x00000000 no_motors = pointer(c_short()) f_min = pointer(c_double()) f_max = pointer(c_double()) f_crossover = pointer(c_double()) maury.add_tuner(c_short(0), c_char_p('MT982EU'), c_short(serial_number), c_short(0), c_short(1),no_motors, c_long(), f_min,f_max,f_crossover, create_string_buffer(80)) The serial number is given however censored by refering to a variable. Someone know what to do?, do you see any errors with my input? Thanks /Mazdak
[ "I figure it's the value you pass at the long max_range[] argument. The function expects a pointer to a long integer there (it asks for an array of long integers), but you're passing a long value of zero (result of the c_long() call), which is implicitly cast to a null pointer. I suspect the function then tries to write to the address passed at max_range, ie. the null pointer, hence the access violation at address 0x00000000.\nTo create an array of longs to pass in max_range, you first create the array type by multiplying the array data type with the size of the array (somewhat verbose for clarity):\narray_size = 3\nThreeLongsArrayType = c_long * array_size\n\nYou can then instantiate an array like you would with any other Python class:\narray = ThreeLongsArrayType()\n\n" ]
[ 3 ]
[]
[]
[ "ctypes", "dll", "pointers", "python" ]
stackoverflow_0000995332_ctypes_dll_pointers_python.txt
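Putting the answer to the ctypes question above back into the original call: the scalar output arguments can be passed with byref, and max_range becomes a real c_long array instead of a bare c_long(). The array length of 8 is only a guess -- the tuner documentation should say how many entries the driver writes -- and maury and serial_number are the same objects as in the question:

from ctypes import c_short, c_double, c_long, c_char_p, byref, create_string_buffer

no_motors = c_short()
f_min = c_double()
f_max = c_double()
f_crossover = c_double()
max_range = (c_long * 8)()            # array of 8 longs; length is a guess
error_string = create_string_buffer(80)

maury.add_tuner(c_short(0), c_char_p('MT982EU'), c_short(serial_number),
                c_short(0), c_short(1), byref(no_motors), max_range,
                byref(f_min), byref(f_max), byref(f_crossover), error_string)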
Q: What is the equivalent of object oriented constructs in python? How does python handle object oriented constructs such as abstract, virtual, pure virtual etc Examples and links would really be good. A: An abstract method is one that (in the base class) raises NotImplementedError. An abstract class, like in C++, is any class that has one or more abstract methods. All methods in Python are virtual (i.e., all can be overridden by subclasses). A "pure virtual" method would presumably be the same thing as an abstract one. In each case you could attempt deep black magic to fight against the language, but it would be (generally speaking) exceedingly silly to do so. I've striven to deal with the "etc" part in two books, a dozen videos, two dozen essays and PDFs and other presentations, and I can't spend the next few days summarizing it all here. Ask specific questions, and I'll be glad to try and answer! A: "How does python handle object oriented constructs such as abstract, virtual, pure virtual etc." These are language constructs more than OO constructs. One can argue that abstract is a language-agnostic concept (even though Python doesn't need it.) Virtual and Pure Virtual are implementation details for C++. There are two OO constructs that aren't necessary in Python but sometimes helpful. The notion of "Interface" makes sense when (1) you have single inheritance and (2) you have static type-checking. Since Python has multiple inheritance and no static type checking, the concept is almost irrelevant. You can, however, define "interface"-like superclasses which don't actually do anything except define the interface. It's handy for documentation. One idiom is the following. class InterfaceMixin( object ): def requiredMethod( self ): raise NotImplemntedError() class RealClass( SuperClass, InterfaceMixin ): def requiredMethod( self ): # actual implementation. The notion of "Abstract" only makes sense when you have static type checking and you need to alert the compiler that there's no body in one or more methods in this class definition. It also alerts the compiler that you can't create instances. You don't need this in Python because the methods are located dynamically at run-time. Attempting to use an undefined method is just an AttributeError. The closest you can do this kind of thing. class AbstractSuperclass( object ): def abstractMethod( self ): raise NotImplementedError() It isn't completely like Java or C++ abstract. It's a class with a method that raises an error. But it behaves enough like an abstract class to be useful. To match Java, you'd have to prevent creating instances. This requires you to override __new__. If you did this, your concrete subclasses would then need to implement __new__, which is a pain in the neck, so we rarely take active steps to prevent creating instances of something that's supposed to be abstract. The concept of "virtual" and "pure virtual" are C++ optimizations that force a method lookup. Python always does this. Edit Example of Abstract without the explicit method definition. >>> class Foo( object ): ... pass ... >>> f= Foo() >>> f.bar() Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'Foo' object has no attribute 'bar'
What is the equivalent of object oriented constructs in python?
How does Python handle object-oriented constructs such as abstract, virtual, and pure virtual methods? Examples and links would really be good.
[ "An abstract method is one that (in the base class) raises NotImplementedError.\nAn abstract class, like in C++, is any class that has one or more abstract methods.\nAll methods in Python are virtual (i.e., all can be overridden by subclasses).\nA \"pure virtual\" method would presumably be the same thing as an abstract one.\nIn each case you could attempt deep black magic to fight against the language, but it would be (generally speaking) exceedingly silly to do so.\nI've striven to deal with the \"etc\" part in two books, a dozen videos, two dozen essays and PDFs and other presentations, and I can't spend the next few days summarizing it all here. Ask specific questions, and I'll be glad to try and answer!\n", "\"How does python handle object oriented constructs such as abstract, virtual, pure virtual etc.\"\nThese are language constructs more than OO constructs. One can argue that abstract is a language-agnostic concept (even though Python doesn't need it.) Virtual and Pure Virtual are implementation details for C++.\nThere are two OO constructs that aren't necessary in Python but sometimes helpful.\nThe notion of \"Interface\" makes sense when (1) you have single inheritance and (2) you have static type-checking. Since Python has multiple inheritance and no static type checking, the concept is almost irrelevant.\nYou can, however, define \"interface\"-like superclasses which don't actually do anything except define the interface. It's handy for documentation. One idiom is the following.\nclass InterfaceMixin( object ):\n def requiredMethod( self ): raise NotImplemntedError()\n\nclass RealClass( SuperClass, InterfaceMixin ):\n def requiredMethod( self ):\n # actual implementation.\n\nThe notion of \"Abstract\" only makes sense when you have static type checking and you need to alert the compiler that there's no body in one or more methods in this class definition. It also alerts the compiler that you can't create instances. You don't need this in Python because the methods are located dynamically at run-time. Attempting to use an undefined method is just an AttributeError.\nThe closest you can do this kind of thing.\nclass AbstractSuperclass( object ):\n def abstractMethod( self ):\n raise NotImplementedError()\n\nIt isn't completely like Java or C++ abstract. It's a class with a method that raises an error. But it behaves enough like an abstract class to be useful.\nTo match Java, you'd have to prevent creating instances. This requires you to override __new__. If you did this, your concrete subclasses would then need to implement __new__, which is a pain in the neck, so we rarely take active steps to prevent creating instances of something that's supposed to be abstract.\nThe concept of \"virtual\" and \"pure virtual\" are C++ optimizations that force a method lookup. Python always does this.\n\nEdit \nExample of Abstract without the explicit method definition.\n>>> class Foo( object ):\n... pass\n... \n>>> f= Foo()\n>>> f.bar()\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nAttributeError: 'Foo' object has no attribute 'bar'\n\n" ]
[ 31, 7 ]
[]
[]
[ "oop", "python" ]
stackoverflow_0000994476_oop_python.txt
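Related to the abstract-method discussion above: from Python 2.6 onwards the standard library's abc module gives roughly the Java-style behaviour described there -- instantiating the abstract class itself fails with a TypeError -- without the raise NotImplementedError boilerplate. A minimal sketch:

import abc

class AbstractSuperclass(object):
    __metaclass__ = abc.ABCMeta

    @abc.abstractmethod
    def abstract_method(self):
        # Subclasses must provide this.
        pass

class Concrete(AbstractSuperclass):
    def abstract_method(self):
        return 42

# AbstractSuperclass() raises TypeError: can't instantiate abstract class
print Concrete().abstract_method()   # prints 42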
Q: mod_python caching of variables I'm using mod_python to run Trac in Apache. I'm developing a plugin and am not sure how global variables are stored/cached. I am new to python and have googled the subject and found that mod_python caches python modules (I think). However, I would expect that cache to be reset when the web service is restarted, but it doesn't appear to be. I'm saying this becasue I have a global variable that is a list, I test the list to see if a value exists and if it doesn't then I add it. The first time I ran this, it added three entries to the list. Subsequently, the list has three entries from the start. For example: globalList = [] class globalTest: def addToTheList(itemToAdd): print(len(globalTest)) if itemToAdd not in globalList: globalList.append(itemToAdd) def doSomething(): addToTheList("I am new entry one") addToTheList("I am new entry two") addToTheList("I am new entry three") The code above is just an example of what I'm doing, not the actual code ;-). But essentially the doSomething() method is called by Trac. The first time it ran, it added all three entries. Now - even after restarting the web server the len(globalList) command is always 3. I suspect the answer may be that my session (and therefore the global variable) is being cached because Trac is remembering my login details when I refresh the page in Trac after the web server restart. If that's the case - how do I force the cache to be cleared. Note that I don't want to reset the globalList variable manually i.e. globalList.length = 0 Can anyone offer any insight as to what is happening? Thank you A: Obligatory: Switch to wsgi using mod_wsgi. Don't use mod_python. There is Help available for configuring mod_wsgi with trac. A: read the mod-python faq it says Global objects live inside mod_python for the life of the apache process, which in general is much longer than the life of a single request. This means if you expect a global variable to be initialised every time you will be surprised.... go to link http://www.modpython.org/FAQ/faqw.py?req=show&file=faq03.005.htp so question is why you want to use global variable?
mod_python caching of variables
I'm using mod_python to run Trac in Apache. I'm developing a plugin and am not sure how global variables are stored/cached. I am new to python and have googled the subject and found that mod_python caches python modules (I think). However, I would expect that cache to be reset when the web service is restarted, but it doesn't appear to be. I'm saying this becasue I have a global variable that is a list, I test the list to see if a value exists and if it doesn't then I add it. The first time I ran this, it added three entries to the list. Subsequently, the list has three entries from the start. For example: globalList = [] class globalTest: def addToTheList(itemToAdd): print(len(globalTest)) if itemToAdd not in globalList: globalList.append(itemToAdd) def doSomething(): addToTheList("I am new entry one") addToTheList("I am new entry two") addToTheList("I am new entry three") The code above is just an example of what I'm doing, not the actual code ;-). But essentially the doSomething() method is called by Trac. The first time it ran, it added all three entries. Now - even after restarting the web server the len(globalList) command is always 3. I suspect the answer may be that my session (and therefore the global variable) is being cached because Trac is remembering my login details when I refresh the page in Trac after the web server restart. If that's the case - how do I force the cache to be cleared. Note that I don't want to reset the globalList variable manually i.e. globalList.length = 0 Can anyone offer any insight as to what is happening? Thank you
[ "Obligatory:\nSwitch to wsgi using mod_wsgi. \nDon't use mod_python.\nThere is Help available for configuring mod_wsgi with trac.\n", "read the mod-python faq it says\n\nGlobal objects live inside mod_python\n for the life of the apache process,\n which in general is much longer than\n the life of a single request. This\n means if you expect a global variable\n to be initialised every time you will\n be surprised....\n\ngo to link\nhttp://www.modpython.org/FAQ/faqw.py?req=show&file=faq03.005.htp\nso question is why you want to use global variable?\n" ]
[ 4, 3 ]
[]
[]
[ "caching", "mod_python", "python", "trac" ]
stackoverflow_0000995416_caching_mod_python_python_trac.txt
Q: Which reactor should I use for Qt4? I am using Twisted and now I want to build a pretty UI with Qt. A: You want to use Glen Tarbox's qt4reactor. A: You need a qt4reactor, for example this one (but that's a sandbox and thus not good for production use -- tx @Glyph for clarifying this!). As @Glyph says, the proper one to use is the one at launchpad.
Which reactor should I use for Qt4?
I am using Twisted and now I want to build a pretty UI with Qt.
[ "You want to use Glen Tarbox's qt4reactor.\n", "You need a qt4reactor, for example this one (but that's a sandbox and thus not good for production use -- tx @Glyph for clarifying this!).\nAs @Glyph says, the proper one to use is the one at launchpad.\n" ]
[ 5, 4 ]
[]
[]
[ "python", "qt4", "twisted" ]
stackoverflow_0000992169_python_qt4_twisted.txt
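For the Qt question above, the usual pattern with any alternate Twisted reactor is to install it before twisted.internet.reactor is imported anywhere. With the qt4reactor the answers link to, that would look roughly like the following; the module and install() names follow the common reactor convention and are assumptions to be checked against the version actually downloaded:

from PyQt4 import QtGui
app = QtGui.QApplication([])      # the Qt application object must exist first

import qt4reactor                 # third-party module from the answers (name assumed)
qt4reactor.install()              # must run before twisted.internet.reactor is imported

from twisted.internet import reactor
reactor.run()                     # now drives both Twisted and the Qt event loop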
Q: IOError "no such file or folder" even though files are present I wrote a script in Python 2.6.2 that scans a directory for SVG's and resizes them if they are too large. I wrote this on my home machine (Vista, Python 2.6.2) and processed a few folders with no problems. Today, I tried this on my work computer (XP SP2, Python 2.6.2) and I get IOErrors for every file, even though files are in the directory. I think I've tried everything, and am unsure where to go from here. I am a beginner so this may be something simple. Any help would be appreciated. import xml.etree.ElementTree as ET import os import tkFileDialog #-------------------------------------- #~~~variables #-------------------------------------- max_height = 500 max_width = 428 extList = ["svg"] proc_count = 0 resize_count = 0 #-------------------------------------- #~~~functions #-------------------------------------- def landscape_or_portrait(): resize_count +=1 if float(main_width_old)/float(main_height_old) >= 1.0: print "picture is landscape" resize_width() else: print "picture is not landscape" resize_height() return def resize_height(): print "picture too tall" #calculate viewBox and height viewBox_height_new = max_height scaleFactor = (float(main_height_old) - max_height)/max_height viewBox_width_new = float(main_width_old) * scaleFactor #calculate main width and height main_height_new = str(viewBox_height_new) + "px" main_width_new = str(viewBox_width_new) + "px" viewBox = "0 0 " + str(viewBox_width_new) + " " + str(viewBox_height_new) inputFile = file(tfile, 'r') data = inputFile.read() inputFile.close() data = data.replace(str(tmain_height_old), str(main_height_new)) data = data.replace(str(tmain_width_old), str(main_width_new)) #data = data.replace(str(tviewBox), str(viewBox)) outputFile = file(tfile, 'w') outputFile.write(data) outputFile.close() return def resize_width(): print "picture too wide" #calculate viewBox width and height viewBox_width_new = max_width scaleFactor = (float(main_width_old) - max_width)/max_width viewBox_height_new = float(main_height_old) * scaleFactor #calculate main width and height main_height_new = str(viewBox_height_new) + "px" main_width_new = str(viewBox_width_new) + "px" viewBox = "0 0 " + str(viewBox_width_new) + " " + str(viewBox_height_new) inputFile = file(tfile, 'r') data = inputFile.read() inputFile.close() data = data.replace(str(tmain_height_old), str(main_height_new)) data = data.replace(str(tmain_width_old), str(main_width_new)) #data = data.replace(str(tviewBox), str(viewBox)) outputFile = file(tfile, 'w') outputFile.write(data) outputFile.close() return #-------------------------------------- #~~~operations #-------------------------------------- path = tkFileDialog.askdirectory() for tfile in os.listdir(path): #print tfile t2file = tfile if tfile.find(".") >= 0: try : if t2file.split(".")[1] in extList: print "now processing " + tfile tree = ET.parse(tfile) proc_count+=1 # Get the values of the root(svg) attributes root = tree.getroot() tmain_height_old = root.get("height") tmain_width_old = root.get("width") tviewBox = root.get("viewBox") #clean up variables, remove px for float conversion main_height_old = tmain_height_old.replace("px", "", 1) main_width_old = tmain_width_old.replace("px", "", 1) #check if they are too large if float(main_height_old) > max_height or float(main_width_old) > max_width: landscape_or_portrait() except Exception,e: print e A: It looks to me like you are missing a os.path.join(path, tfile) to get the full path to the file you want to open. 
Currently it should only work for files in the current directory. A: Perhaps it's a security issue? Perhaps you don't have the rights to create files in the folder
IOError "no such file or folder" even though files are present
I wrote a script in Python 2.6.2 that scans a directory for SVG's and resizes them if they are too large. I wrote this on my home machine (Vista, Python 2.6.2) and processed a few folders with no problems. Today, I tried this on my work computer (XP SP2, Python 2.6.2) and I get IOErrors for every file, even though files are in the directory. I think I've tried everything, and am unsure where to go from here. I am a beginner so this may be something simple. Any help would be appreciated. import xml.etree.ElementTree as ET import os import tkFileDialog #-------------------------------------- #~~~variables #-------------------------------------- max_height = 500 max_width = 428 extList = ["svg"] proc_count = 0 resize_count = 0 #-------------------------------------- #~~~functions #-------------------------------------- def landscape_or_portrait(): resize_count +=1 if float(main_width_old)/float(main_height_old) >= 1.0: print "picture is landscape" resize_width() else: print "picture is not landscape" resize_height() return def resize_height(): print "picture too tall" #calculate viewBox and height viewBox_height_new = max_height scaleFactor = (float(main_height_old) - max_height)/max_height viewBox_width_new = float(main_width_old) * scaleFactor #calculate main width and height main_height_new = str(viewBox_height_new) + "px" main_width_new = str(viewBox_width_new) + "px" viewBox = "0 0 " + str(viewBox_width_new) + " " + str(viewBox_height_new) inputFile = file(tfile, 'r') data = inputFile.read() inputFile.close() data = data.replace(str(tmain_height_old), str(main_height_new)) data = data.replace(str(tmain_width_old), str(main_width_new)) #data = data.replace(str(tviewBox), str(viewBox)) outputFile = file(tfile, 'w') outputFile.write(data) outputFile.close() return def resize_width(): print "picture too wide" #calculate viewBox width and height viewBox_width_new = max_width scaleFactor = (float(main_width_old) - max_width)/max_width viewBox_height_new = float(main_height_old) * scaleFactor #calculate main width and height main_height_new = str(viewBox_height_new) + "px" main_width_new = str(viewBox_width_new) + "px" viewBox = "0 0 " + str(viewBox_width_new) + " " + str(viewBox_height_new) inputFile = file(tfile, 'r') data = inputFile.read() inputFile.close() data = data.replace(str(tmain_height_old), str(main_height_new)) data = data.replace(str(tmain_width_old), str(main_width_new)) #data = data.replace(str(tviewBox), str(viewBox)) outputFile = file(tfile, 'w') outputFile.write(data) outputFile.close() return #-------------------------------------- #~~~operations #-------------------------------------- path = tkFileDialog.askdirectory() for tfile in os.listdir(path): #print tfile t2file = tfile if tfile.find(".") >= 0: try : if t2file.split(".")[1] in extList: print "now processing " + tfile tree = ET.parse(tfile) proc_count+=1 # Get the values of the root(svg) attributes root = tree.getroot() tmain_height_old = root.get("height") tmain_width_old = root.get("width") tviewBox = root.get("viewBox") #clean up variables, remove px for float conversion main_height_old = tmain_height_old.replace("px", "", 1) main_width_old = tmain_width_old.replace("px", "", 1) #check if they are too large if float(main_height_old) > max_height or float(main_width_old) > max_width: landscape_or_portrait() except Exception,e: print e
[ "It looks to me like you are missing a os.path.join(path, tfile) to get the full path to the file you want to open. Currently it should only work for files in the current directory.\n", "Perhaps it's a security issue? Perhaps you don't have the rights to create files in the folder\n" ]
[ 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0000995985_python.txt
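Applied to the directory-scanning question above, the first answer's suggestion amounts to joining the chosen directory onto each name before handing it to ElementTree. A trimmed-down version of the loop, with the extension check simplified and reusing os, path and ET from the question:

for tfile in os.listdir(path):
    full_path = os.path.join(path, tfile)     # tfile alone is relative to the CWD
    if tfile.lower().endswith(".svg"):
        print "now processing " + full_path
        tree = ET.parse(full_path)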
Q: How can this be written on a single line? I've seen some Python list comprehensions before, but can this be done in a single line of Python? errs = {} for f in form: if f.errors: errs[f.auto_id] = f.errors A: errs = dict((f.auto_id, f.errors) for f in form if f.errors) A: Python 3.0 has dict comprehensions as a shorter/more readable form of the anser provided by Steef: errs = {f.auto_id: f.errors for f in form if f.errors} A: It probably could be, but as per the “Readability counts.” rule (PEP 20), I'd say it's a bad idea. :) On the other hand you have “Flat is better than nested.” and “Sparse is better than dense.”, so I guess it's a matter of taste :) A: Both ways are quite readable, however you should think of future maintainers of the code. Sometimes explicit is better. List comprehensions rule though :)
How can this be written on a single line?
I've seen some Python list comprehensions before, but can this be done in a single line of Python? errs = {} for f in form: if f.errors: errs[f.auto_id] = f.errors
[ "errs = dict((f.auto_id, f.errors) for f in form if f.errors)\n\n", "Python 3.0 has dict comprehensions as a shorter/more readable form of the anser provided by Steef:\nerrs = {f.auto_id: f.errors for f in form if f.errors}\n\n", "It probably could be, but as per the “Readability counts.” rule (PEP 20), I'd say it's a bad idea. :)\nOn the other hand you have “Flat is better than nested.” and “Sparse is better than dense.”, so I guess it's a matter of taste :)\n", "Both ways are quite readable, however you should think of future maintainers of the code. Sometimes explicit is better. List comprehensions rule though :)\n" ]
[ 20, 9, 4, 0 ]
[]
[]
[ "dictionary", "django", "list_comprehension", "python" ]
stackoverflow_0000995234_dictionary_django_list_comprehension_python.txt
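Both one-liners from the answers above, shown side by side with a tiny stand-in for the form object so the snippet runs on its own (the Field class and sample data are purely illustrative):

    class Field(object):
        def __init__(self, auto_id, errors):
            self.auto_id, self.errors = auto_id, errors

    form = [Field("id_name", ["required"]), Field("id_age", [])]

    # works on Python 2.4+: dict() over a generator expression
    errs = dict((f.auto_id, f.errors) for f in form if f.errors)

    # dict comprehension, available from Python 2.7 / 3.0 onwards
    errs = {f.auto_id: f.errors for f in form if f.errors}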
Q: Code reuse between django and appengine Model classes I created a custom django.auth User class which works with Google Appengine, but it involves a fair amount of copied code (practically every method). It isn't possible to create a subclass because appengine and django have different database models with their own metaclass magic. So my question is this: is there an elegant way to copy methods from django.auth's User class? from google.appengine.ext import db from django.contrib.auth import models class User(db.Model): password = db.StringProperty() ... # copied method set_password = models.User.set_password.im_func A: Im not sure I understand your question right. Why would you need to define another "User" class if Django already provides the same functionality ? You could also just import the "User" class and add a ForeignKey to each model requiring a "user" attribute. A: You might want to take a look at what the django helper or app-engine-patch does. Helper: http://code.google.com/p/google-app-engine-django/ Patch: http://code.google.com/p/app-engine-patch/
Code reuse between django and appengine Model classes
I created a custom django.auth User class which works with Google Appengine, but it involves a fair amount of copied code (practically every method). It isn't possible to create a subclass because appengine and django have different database models with their own metaclass magic. So my question is this: is there an elegant way to copy methods from django.auth's User class? from google.appengine.ext import db from django.contrib.auth import models class User(db.Model): password = db.StringProperty() ... # copied method set_password = models.User.set_password.im_func
[ "Im not sure I understand your question right. Why would you need to define\nanother \"User\" class if Django already provides the same functionality ? \nYou could also just import the \"User\" class and add a ForeignKey to each model\nrequiring a \"user\" attribute.\n", "You might want to take a look at what the django helper or app-engine-patch does.\nHelper: http://code.google.com/p/google-app-engine-django/\nPatch: http://code.google.com/p/app-engine-patch/\n" ]
[ 0, 0 ]
[]
[]
[ "django", "django_authentication", "google_app_engine", "python" ]
stackoverflow_0000991611_django_django_authentication_google_app_engine_python.txt
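A framework-free sketch of the idea behind the im_func assignment in the question: keep the shared logic in one module-level function and attach it to each class, so nothing has to be copied (both classes and the fake hashing below are hypothetical stand-ins, not the real Django or App Engine models):

    def _set_password(self, raw_password):
        # stand-in for the real password hashing
        self.password = "hashed$" + raw_password

    class DjangoStyleUser(object):
        set_password = _set_password

    class AppEngineStyleUser(object):
        set_password = _set_password

    u = AppEngineStyleUser()
    u.set_password("secret")        # both classes share a single implementation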
Q: JavaScript implementation that allows access to [[Call]] The ECMA standard defines a hidden, internal property [[Call]], which, if implemented, mean the object is callable / is a function. In Python, something similar takes place, except that you can override it yourself to create your own callable objects: >>> class B: ... def __call__(self, x,y): print x,y ... >>> inst = B() >>> inst(1,2) 1, 2 Is there any similar mechanism available in standard JavaScript? If not, what about any of the current JavaScript implementations? A: As far as I know it's not possible. It is supposed to be an internal property of an object and not exposed to the script itself. The only way I know is to define a function. However, since functions is first class citizens you can add properties to them: function myfunc(){ var myself = arguments.callee; myself.anotherfunc(); } myfunc.avalue=5; myfunc.anotherfunc=function(){ alert(this.avalue); } myfunc(); //Alerts 5 myfunc.anotherfunc(); //Alerts 5 A: [[Call]] is an internal property used to describe a particular piece of functionality in the language specification. There's no guarantee that such a property is even available in an interpreter. There are many other properties and objects that are referenced in the spec, such as the Completion object, which is only necessary if you have implemented the language as an AST interpreter which is what KJS and JavaScriptCore (JSC == WebKit fork of KJS) used to do. Non-AST based interpreters (SpiderMonkey, the new KJS and JavaScriptCore execution engines FrostByte and SquirrelFish, probably the Opera JS engine, and V8) have no need a lot of these constructs as they are used primarily for the purpose of describing behaviour -- not implementation. There is another reason that such access isn't available -- A lot of these properties are so intrinsic to the interpreter that allowing custom behaviour can impact performance whether those features are used or not -- for instance the JSC API provides a mechanism for an embedder to override a number of these properties, and supporting that even at the C level actually has a measurable performance impact. [Edit: minor note, when i say "intrepreter" i mean in the general sense -- it could be interpreting an AST, bytecode, or machine code (eg. jit)] A: Applications that use the Mozilla Spidermonkey implementation of Javascript 1.5 or greater (e.g. Firefox) have access to the __noSuchMethod__ mechanism: c:\>jsdb js>x = {}; [object Object] js>x.__noSuchMethod__ = function(id,args) {writeln('you just called: '+id+'()');} function (id, args) { writeln("you just called: " + id + "()"); } js>x.foo() you just called: foo() js>x.bar() you just called: bar() js> A: It is not possible yet: [[Call]] is hidden and not directly available in all conformat implementations. It may change in the new standard of ECMAScript.
JavaScript implementation that allows access to [[Call]]
The ECMA standard defines a hidden, internal property [[Call]], which, if implemented, mean the object is callable / is a function. In Python, something similar takes place, except that you can override it yourself to create your own callable objects: >>> class B: ... def __call__(self, x,y): print x,y ... >>> inst = B() >>> inst(1,2) 1, 2 Is there any similar mechanism available in standard JavaScript? If not, what about any of the current JavaScript implementations?
[ "As far as I know it's not possible. It is supposed to be an internal property of an object and not exposed to the script itself. The only way I know is to define a function.\nHowever, since functions is first class citizens you can add properties to them:\nfunction myfunc(){\n var myself = arguments.callee;\n myself.anotherfunc();\n}\n\nmyfunc.avalue=5;\n\nmyfunc.anotherfunc=function(){\n alert(this.avalue);\n}\n\nmyfunc(); //Alerts 5\nmyfunc.anotherfunc(); //Alerts 5\n\n", "[[Call]] is an internal property used to describe a particular piece of functionality in the language specification. There's no guarantee that such a property is even available in an interpreter. There are many other properties and objects that are referenced in the spec, such as the Completion object, which is only necessary if you have implemented the language as an AST interpreter which is what KJS and JavaScriptCore (JSC == WebKit fork of KJS) used to do. Non-AST based interpreters (SpiderMonkey, the new KJS and JavaScriptCore execution engines FrostByte and SquirrelFish, probably the Opera JS engine, and V8) have no need a lot of these constructs as they are used primarily for the purpose of describing behaviour -- not implementation.\nThere is another reason that such access isn't available -- A lot of these properties are so intrinsic to the interpreter that allowing custom behaviour can impact performance whether those features are used or not -- for instance the JSC API provides a mechanism for an embedder to override a number of these properties, and supporting that even at the C level actually has a measurable performance impact.\n[Edit: minor note, when i say \"intrepreter\" i mean in the general sense -- it could be interpreting an AST, bytecode, or machine code (eg. jit)]\n", "Applications that use the Mozilla Spidermonkey implementation of Javascript 1.5 or greater (e.g. Firefox) have access to the __noSuchMethod__ mechanism:\nc:\\>jsdb\njs>x = {};\n[object Object]\njs>x.__noSuchMethod__ = function(id,args) {writeln('you just called: '+id+'()');}\nfunction (id, args) {\n writeln(\"you just called: \" + id + \"()\");\n}\njs>x.foo()\nyou just called: foo()\njs>x.bar()\nyou just called: bar()\njs>\n\n", "It is not possible yet: [[Call]] is hidden and not directly available in all conformat implementations. It may change in the new standard of ECMAScript.\n" ]
[ 2, 1, 1, 0 ]
[]
[]
[ "function", "javascript", "python" ]
stackoverflow_0000383189_function_javascript_python.txt
Q: Adding SSL support to Python 2.6 I tried using the ssl module in Python 2.6 but I was told that it wasn't available. After installing OpenSSL, I recompiled 2.6 but the problem persists. Any suggestions? A: Did you install the OpenSSL development libraries? I had to install openssl-devel on CentOS, for example. On Ubuntu, sudo apt-get build-dep python2.5 did the trick (even for Python 2.6).
Adding SSL support to Python 2.6
I tried using the ssl module in Python 2.6 but I was told that it wasn't available. After installing OpenSSL, I recompiled 2.6 but the problem persists. Any suggestions?
[ "Did you install the OpenSSL development libraries? I had to install openssl-devel on CentOS, for example. On Ubuntu, sudo apt-get build-dep python2.5 did the trick (even for Python 2.6).\n" ]
[ 4 ]
[ "Use the binaries provided by python.org or by your OS distributor. It's a lot easier than building it yourself, and all the features are usually compiled in.\nIf you really need to build it yourself, you'll need to provide more information here about what build options you provided, what your environment is like, and perhaps provide some logs.\n", "Use pexpect with the openssl binary.\n" ]
[ -1, -4 ]
[ "openssl", "python", "ssl" ]
stackoverflow_0000979551_openssl_python_ssl.txt
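A quick check for whether a rebuilt interpreter actually picked up OpenSSL — if the development headers were missing at compile time, the import simply fails (illustrative snippet, not part of the build itself):

    try:
        import ssl
    except ImportError:
        print("no ssl module - install the OpenSSL development headers and rebuild Python")
    else:
        print("ssl module is available")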
Q: py2exe windows service problem I have successfully converted my python project to a service. When using the usual options of install and start/stop, everything works correctly. However, I wish to compile the project using py2exe, which seems to work correctly until you install the EXE as a service and try and run it. You get the following error message: Starting service CherryPyService Error starting service: The service did not respond to the start or control request in a timely fashion. My compile python file (which links to the main project) is as follows: from distutils.core import setup import py2exe setup(console=['webserver.py']) Any help would be greatly appreciated. A: You setup.py file should contain setup(service=["webserver.py"]) as shown in the "old" py2exe docs A: You will find an example in the py2exe package, look in site-packages\py2exe\samples\advanced.
py2exe windows service problem
I have successfully converted my Python project to a service. When using the usual options of install and start/stop, everything works correctly. However, I wish to compile the project using py2exe, which seems to work correctly until you install the EXE as a service and try to run it. You get the following error message: Starting service CherryPyService Error starting service: The service did not respond to the start or control request in a timely fashion. My py2exe setup script (which links to the main project) is as follows: from distutils.core import setup import py2exe setup(console=['webserver.py']) Any help would be appreciated.
[ "You setup.py file should contain\nsetup(service=[\"webserver.py\"])\n\nas shown in the \"old\" py2exe docs\n", "You will find an example in the py2exe package, look in site-packages\\py2exe\\samples\\advanced.\n" ]
[ 4, 1 ]
[]
[]
[ "py2exe", "python", "windows_services" ]
stackoverflow_0000996129_py2exe_python_windows_services.txt
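A minimal setup.py along the lines of the first answer, switching the target from console to service (webserver.py is the module name used in the question; for the exact service options py2exe supports, the bundled samples mentioned in the second answer are the authoritative reference):

    from distutils.core import setup
    import py2exe  # registers the py2exe command with distutils

    setup(service=["webserver.py"])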
Q: Pixmap transparency in PyGTK How can I create PyGTK pixmaps with one pixel value set to transparent? I know it has something to do with creating a pixmap of depth 1 and setting it as a mask, but all I find is that it either does nothing or totally erases my pixmap when drawn. At the moment, I make a pixmap with r = self.get_allocation() p1 = gtk.gdk.Pixmap(self.window,r.width,r.height) p1_c = p1.cairo_create() then draw black lines all over it using Cairo. What I'd like to be able to do is to have all of the area not covered by lines transparent (making white the transparent colour, say), so that when I draw it to the window with draw_drawable, it leaves everything 'underneath' intact. The FAQs and mailing list postings regarding this issue are most unhelpful as they are so outdated. Someone must know here! A: I don't think you can do what you want with a Pixmap or Pixbuf, but here are two strategies for implementing scribbling on top of an existing Widget. The most obvious one is just to catch the draw event and draw straight onto the Widget's Drawable, with no retained image in the middle: from gtk import Window, Button, main from math import pi import cairo w = Window() b = Button("Draw on\ntop of me!") def scribble_on(cr): cr.set_source_rgb(0, 0, 0) cr.rectangle(10, 10, 30, 30) cr.fill() cr.arc(50, 50, 10, 0, pi) cr.stroke() def expose_handler(widget, event): cr = widget.window.cairo_create() cr.rectangle(event.area.x, event.area.y, event.area.width, event.area.height) cr.clip() scribble_on(cr) return False b.connect_after("expose_event", expose_handler) w.add(b) w.set_size_request(100, 100) w.show_all() main() A second option, if you want to have an intermediary ARGB image that you don't have to update each time a redraw is requested, would be to pre-render the image to an ImageSurface. Here's a replacement for expose_handler, above, that only draws the image once: import cairo surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 100, 100) scribble_on(cairo.Context(surface)) def expose_image_handler(widget, event): cr = widget.window.cairo_create() cr.rectangle(event.area.x, event.area.y, event.area.width, event.area.height) cr.clip() cr.set_source_surface(surface) cr.paint() If this is the sort of thing you're looking for, I would recommend updating the title of the question to reflect your real need :). A: It looks like you want to use a Pixbuf and not a Pixmap. The Pixbuf includes an alpha setting, which will give you transparency, whereas the Pixmap does not.
Pixmap transparency in PyGTK
How can I create PyGTK pixmaps with one pixel value set to transparent? I know it has something to do with creating a pixmap of depth 1 and setting it as a mask, but all I find is that it either does nothing or totally erases my pixmap when drawn. At the moment, I make a pixmap with r = self.get_allocation() p1 = gtk.gdk.Pixmap(self.window,r.width,r.height) p1_c = p1.cairo_create() then draw black lines all over it using Cairo. What I'd like to be able to do is to have all of the area not covered by lines transparent (making white the transparent colour, say), so that when I draw it to the window with draw_drawable, it leaves everything 'underneath' intact. The FAQs and mailing list postings regarding this issue are most unhelpful as they are so outdated. Someone must know here!
[ "I don't think you can do what you want with a Pixmap or Pixbuf, but here are two strategies for implementing scribbling on top of an existing Widget. The most obvious one is just to catch the draw event and draw straight onto the Widget's Drawable, with no retained image in the middle:\nfrom gtk import Window, Button, main\nfrom math import pi\nimport cairo\n\nw = Window()\nb = Button(\"Draw on\\ntop of me!\")\n\ndef scribble_on(cr):\n cr.set_source_rgb(0, 0, 0)\n cr.rectangle(10, 10, 30, 30)\n cr.fill()\n cr.arc(50, 50, 10, 0, pi)\n cr.stroke()\n\ndef expose_handler(widget, event):\n cr = widget.window.cairo_create()\n cr.rectangle(event.area.x, event.area.y,\n event.area.width, event.area.height)\n cr.clip()\n scribble_on(cr)\n return False\n\nb.connect_after(\"expose_event\", expose_handler)\nw.add(b)\nw.set_size_request(100, 100)\nw.show_all()\nmain()\n\nA second option, if you want to have an intermediary ARGB image that you don't have to update each time a redraw is requested, would be to pre-render the image to an ImageSurface. Here's a replacement for expose_handler, above, that only draws the image once:\nimport cairo\nsurface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 100, 100)\nscribble_on(cairo.Context(surface))\n\ndef expose_image_handler(widget, event):\n cr = widget.window.cairo_create()\n cr.rectangle(event.area.x, event.area.y,\n event.area.width, event.area.height)\n cr.clip()\n cr.set_source_surface(surface)\n cr.paint()\n\nIf this is the sort of thing you're looking for, I would recommend updating the title of the question to reflect your real need :).\n", "It looks like you want to use a Pixbuf and not a Pixmap. The Pixbuf includes an alpha setting, which will give you transparency, whereas the Pixmap does not.\n" ]
[ 2, 1 ]
[]
[]
[ "pygtk", "python", "transparency" ]
stackoverflow_0000973073_pygtk_python_transparency.txt
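If drawing onto a transparent surface is acceptable, a cairo ARGB32 surface (cairo is already used in the question's code) starts out fully transparent, so only the drawn lines end up opaque — a small sketch with arbitrary sizes and coordinates:

    import cairo

    WIDTH, HEIGHT = 200, 200
    surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, WIDTH, HEIGHT)  # transparent background
    cr = cairo.Context(surface)
    cr.set_source_rgb(0, 0, 0)     # opaque black lines
    cr.move_to(10, 10)
    cr.line_to(190, 190)
    cr.stroke()
    # later, inside an expose handler: cr_win.set_source_surface(surface); cr_win.paint()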
Q: Refactor this block cipher keying function I found a simple pure python blowfish implementation that meets my needs for a particular project. There's just one part of it that bothers me: def initialize(key): """ Use key to setup subkeys -- requires 521 encryptions to set p and s boxes. key is a hex number corresponding to a string of 32 up to 448 1s and 0s -- keylen says how long """ # Note that parray and sboxes are globals that have been pre-initialized. hexkey = hex(key)[2:] if hexkey[-1]=='L': hexkey = hexkey[:-1] if len(hexkey)%2==1: hexkey = '0'+hexkey lenkey = len(hexkey)/8 if lenkey==0: pos=0 # XOR key segments with P-boxes for i in range(18): if lenkey>0: pos = (i%lenkey)*8 # offset into key gives subkey subkey = eval('0x'+hexkey[pos:pos+8]+'L') parray[i] ^= subkey # immediate XOR -- Python 2.0+ syntax # encrypt 0-data, then keep re-encrypting and reassigning P-boxes output = 0L for i in range(0,17,2): output = bfencrypt(output) parray[i], parray[i+1] = output>>32, output & 0xFFFFFFFFL # re-encrypt and reassign through all the S-boxes for i in range(4): for j in range(0,255,2): output = bfencrypt(output) sbox[i][j],sbox[i][j+1] = output>>32, output & 0xFFFFFFFFL # print "Initialization complete" subkey = eval('0x'+hexkey[pos:pos+8]+'L')? Please tell me there's a better way to do this. Isn't there a way to refactor this to use an actual integer type rather than hex values in a string? A: Yes. Use int() with a base of 16. >>> int('ffffffff',16) 4294967295L so: subkey = int(hexkey[pos:pos+8], 16) should do the same thing without needing eval. [Edit] In fact, there's generally no reason why you'd need to convert to a string representation at all, given an integer - you can simply extract out each 32 bit value by ANDing with 0xffffffff and shifting the key right by 32 bits in a loop. eg: subkeys = [] while key: subkeys.append(key & 0xffffffff) key >>= 32 if not subkeys: subkeys = [0] # Handle 0 case subkeys.reverse() # Use same order as before (BUT SEE BELOW) However, this initialization process seems a bit odd - it's using the hex digits starting from the left, with no zero padding to round to a multiple of 8 hex digits (so the number 0x123456789 would be split into 0x12345678 and 0x9, rather than the more customary 0x00000001 and 0x23456789. It also repeats these numbers, rather than treating it as a single large number. You should check that this code is actually performing the correct algorithm. A: Don't use this code, much less try to improve it. Using crypto code found on the internet is likely to cause serious security failures in your software. See Jeff Atwood's little series on the topic. It is much better to use a proven crypto library at the highest possible level of abstraction. Ideally one that implements all the key handling in C and takes care of destroying key material after use. One problem of doing crypto in Python is that you have no control over proliferation of key material in memory due to the nature of Python strings and the garbage collection process. A: An alternative is "int('0x111', 0)". int's second argument is the base. "0" means "use the usual rules: no prefix is decimal, 0 prefix is octal and 0x is hexa -- just like eval". This is the preferred way to "emulate" the eval operation for intifying strings. A: You can do this with the long function: subkey = long(hexkey[pos:pos+8], 16) From help(long): class long(object) | long(x[, base]) -> integer | | Convert a string or number to a long integer, if possible. 
A floating | point argument will be truncated towards zero (this does not include a | string representation of a floating point number!) When converting a | string, use the optional base. It is an error to supply a base when | converting a non-string.
Refactor this block cipher keying function
I found a simple pure python blowfish implementation that meets my needs for a particular project. There's just one part of it that bothers me: def initialize(key): """ Use key to setup subkeys -- requires 521 encryptions to set p and s boxes. key is a hex number corresponding to a string of 32 up to 448 1s and 0s -- keylen says how long """ # Note that parray and sboxes are globals that have been pre-initialized. hexkey = hex(key)[2:] if hexkey[-1]=='L': hexkey = hexkey[:-1] if len(hexkey)%2==1: hexkey = '0'+hexkey lenkey = len(hexkey)/8 if lenkey==0: pos=0 # XOR key segments with P-boxes for i in range(18): if lenkey>0: pos = (i%lenkey)*8 # offset into key gives subkey subkey = eval('0x'+hexkey[pos:pos+8]+'L') parray[i] ^= subkey # immediate XOR -- Python 2.0+ syntax # encrypt 0-data, then keep re-encrypting and reassigning P-boxes output = 0L for i in range(0,17,2): output = bfencrypt(output) parray[i], parray[i+1] = output>>32, output & 0xFFFFFFFFL # re-encrypt and reassign through all the S-boxes for i in range(4): for j in range(0,255,2): output = bfencrypt(output) sbox[i][j],sbox[i][j+1] = output>>32, output & 0xFFFFFFFFL # print "Initialization complete" subkey = eval('0x'+hexkey[pos:pos+8]+'L')? Please tell me there's a better way to do this. Isn't there a way to refactor this to use an actual integer type rather than hex values in a string?
[ "Yes. Use int() with a base of 16.\n>>> int('ffffffff',16)\n4294967295L\n\nso:\nsubkey = int(hexkey[pos:pos+8], 16)\n\nshould do the same thing without needing eval.\n[Edit] In fact, there's generally no reason why you'd need to convert to a string representation at all, given an integer - you can simply extract out each 32 bit value by ANDing with 0xffffffff and shifting the key right by 32 bits in a loop. eg:\nsubkeys = []\nwhile key:\n subkeys.append(key & 0xffffffff)\n key >>= 32\n\nif not subkeys: subkeys = [0] # Handle 0 case\nsubkeys.reverse() # Use same order as before (BUT SEE BELOW)\n\nHowever, this initialization process seems a bit odd - it's using the hex digits starting from the left, with no zero padding to round to a multiple of 8 hex digits (so the number 0x123456789 would be split into 0x12345678 and 0x9, rather than the more customary 0x00000001 and 0x23456789. It also repeats these numbers, rather than treating it as a single large number. You should check that this code is actually performing the correct algorithm.\n", "Don't use this code, much less try to improve it.\nUsing crypto code found on the internet is likely to cause serious security failures in your software. See Jeff Atwood's little series on the topic.\nIt is much better to use a proven crypto library at the highest possible level of abstraction. Ideally one that implements all the key handling in C and takes care of destroying key material after use.\nOne problem of doing crypto in Python is that you have no control over proliferation of key material in memory due to the nature of Python strings and the garbage collection process.\n", "An alternative is \"int('0x111', 0)\". int's second argument is the base. \"0\" means \"use the usual rules: no prefix is decimal, 0 prefix is octal and 0x is hexa -- just like eval\".\nThis is the preferred way to \"emulate\" the eval operation for intifying strings.\n", "You can do this with the long function:\nsubkey = long(hexkey[pos:pos+8], 16) \n\nFrom help(long):\n\nclass long(object)\n | long(x[, base]) -> integer\n |\n | Convert a string or number to a long integer, if possible. A floating\n | point argument will be truncated towards zero (this does not include a\n | string representation of a floating point number!) When converting a\n | string, use the optional base. It is an error to supply a base when\n | converting a non-string. \n\n" ]
[ 5, 3, 1, 0 ]
[]
[]
[ "encryption", "python" ]
stackoverflow_0000996965_encryption_python.txt
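The two eval replacements discussed in the answers, shown on a throwaway key value (the key below is arbitrary; note, as the first answer warns, that splitting the integer directly in 32-bit chunks does not reproduce the original code's unpadded left-to-right string slicing exactly):

    hexkey = "0123456789abcdef"

    # instead of eval('0x' + hexkey[0:8] + 'L'):
    subkey = int(hexkey[0:8], 16)

    # or skip the hex-string round trip and mask/shift the integer key:
    key = 0x0123456789ABCDEF
    subkeys = []
    while key:
        subkeys.append(key & 0xFFFFFFFF)
        key >>= 32
    subkeys.reverse()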
Q: Using python regex to extract namespaces from C++ sources I am trying to extract the namespaces defined in C++ files. Basically, if my C++ file contains: namespace n1 { ... namespace n2 { ... } // end namespace n2 ... namespace n3 { ...} //end namespace n3 ... } //end namespace n1 I want to be able to retrieve: n1, n1::n2, n1::n3. Does someone have any suggestion of how I could do that using python-regex? Thanks. A: Searching for the namespace names is pretty easy with a regular expression. However, to determine the nesting level you will have to keep track of the curly bracket nesting level in the source file. This is a parsing problem, one that cannot be solved (sanely) with regular expressions. Also, you may have to deal with any C preprocessor directives in the file which can definitely affect parsing. C++ is a notoriously tricky language to parse completely, but you may be able to get by with a tokeniser and a curly bracket counter. A: The need is simple enough that you may not need a complex parser. You need to: extract the namespace names count the open/close braces to keep track of where your namespace is defined. This simple approach works if the other conditions are met: you don't get spurious namespace like strings inside comments or inside strings you don't get unmatched open/closeing braces inside comments or strings I don't think this is too much asking from your source. A: You cannot completely ignore preprocessor directives, as they may introduce additional namespaces. I have seen a lot of code like: #define __NAMESPACE_SYSTEM__ namespace system __NAMESPACE_SYSTEM__ { // actual code here... } Yet, I don't see any reason for using such directives, other than defeating regular expression parsing strategy... A: You could write a basic lexer for it. It's not that hard. A: Most of the time when someone asks how to do something with regex, they're doing something very wrong. I don't think this case is different. If you want to parse c++, you need to use a c++ parser. There are many things that can be done that will defeat a regex but still be valid c++. A: That is what I did earlier today: Extract the comment out of the C++ files Use regex to extract the namespace definition Use a simple string search to get the open & close braces positions The various sanity check added show that I am successfully processing 99.925% of my files (5 failures ouf of 6678 files). The issues are due to mismatches in numbers of { and } cause by few '{' or '}' in strings, and unclean usage of the preprocessor instruction. However, I am only dealing with header files, and I own the code. That limits the number of scenari that could cause some issues and I can manually modify the ones I don't cover. Of course I know there are plenty of cases where it would fail but it is probably enough for what I want to achieve. Thanks for your answers.
Using python regex to extract namespaces from C++ sources
I am trying to extract the namespaces defined in C++ files. Basically, if my C++ file contains: namespace n1 { ... namespace n2 { ... } // end namespace n2 ... namespace n3 { ...} //end namespace n3 ... } //end namespace n1 I want to be able to retrieve: n1, n1::n2, n1::n3. Does anyone have any suggestions on how I could do that using Python regular expressions? Thanks.
[ "Searching for the namespace names is pretty easy with a regular expression. However, to determine the nesting level you will have to keep track of the curly bracket nesting level in the source file. This is a parsing problem, one that cannot be solved (sanely) with regular expressions. Also, you may have to deal with any C preprocessor directives in the file which can definitely affect parsing.\nC++ is a notoriously tricky language to parse completely, but you may be able to get by with a tokeniser and a curly bracket counter.\n", "The need is simple enough that you may not need a complex parser. You need to:\n\nextract the namespace names\ncount the open/close braces to keep track of where your namespace is defined.\n\nThis simple approach works if the other conditions are met:\n\nyou don't get spurious namespace like strings inside comments or inside strings\nyou don't get unmatched open/closeing braces inside comments or strings\n\nI don't think this is too much asking from your source.\n", "You cannot completely ignore preprocessor directives, as they may introduce additional namespaces. I have seen a lot of code like:\n#define __NAMESPACE_SYSTEM__ namespace system\n\n__NAMESPACE_SYSTEM__ {\n // actual code here...\n}\n\nYet, I don't see any reason for using such directives, other than defeating regular expression parsing strategy...\n", "You could write a basic lexer for it. It's not that hard.\n", "Most of the time when someone asks how to do something with regex, they're doing something very wrong. I don't think this case is different.\nIf you want to parse c++, you need to use a c++ parser. There are many things that can be done that will defeat a regex but still be valid c++.\n", "That is what I did earlier today: \n\nExtract the comment out of the C++ files\nUse regex to extract the namespace definition\nUse a simple string search to get the open & close braces positions\n\nThe various sanity check added show that I am successfully processing 99.925% of my files (5 failures ouf of 6678 files). The issues are due to mismatches in numbers of { and } cause by few '{' or '}' in strings, and unclean usage of the preprocessor instruction.\nHowever, I am only dealing with header files, and I own the code. That limits the number of scenari that could cause some issues and I can manually modify the ones I don't cover.\nOf course I know there are plenty of cases where it would fail but it is probably enough for what I want to achieve.\nThanks for your answers.\n" ]
[ 6, 2, 1, 1, 0, 0 ]
[]
[]
[ "c++", "namespaces", "python", "regex" ]
stackoverflow_0000995165_c++_namespaces_python_regex.txt
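A rough sketch of the regex-plus-brace-counting approach the question author settled on in the last answer above; it deliberately ignores comments, string literals and preprocessor tricks, so it only suits reasonably clean headers (the sample source and function name are illustrative):

    import re

    def namespaces(source):
        opener = re.compile(r'\bnamespace\s+(\w+)\s*\{')
        stack = []                  # None marks a brace that is not a namespace
        found = []
        pos = 0
        while pos < len(source):
            m = opener.match(source, pos)
            if m:
                stack.append(m.group(1))
                found.append("::".join(n for n in stack if n))
                pos = m.end()
            elif source[pos] == '{':
                stack.append(None)
                pos += 1
            elif source[pos] == '}':
                if stack:
                    stack.pop()
                pos += 1
            else:
                pos += 1
        return found

    print(namespaces("namespace n1 { void f() { } namespace n2 { } namespace n3 { } }"))
    # -> ['n1', 'n1::n2', 'n1::n3']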
Q: Hash method and UnicodeEncodeError In Python 2.5, I have the following hash function: def __hash__(self): return hash(str(self)) It works well for my needs, but now I started to get the following error message. Any idea of what is going on? return hash(str(self)) UnicodeEncodeError: 'ascii' codec can't encode character u'\ufeff' in position 16: ordinal not in range(128) How could I fix this? Thanks! A: The problem is that you are trying to hash a string that is not convertible to ASCII. The str method takes a unicode object and, by default, converts it to ASCII. To fix this problem you need to either hash the unicode object directly, or else convert the string using the correct codec. For example, you might do this if you are reading unicode from the console on a US Windows localized system: return hash(mystring.encode("cp437")) On the other hand, data from the registry or API functions might be encoded as: return hash(mystring.encode("cp1252")) Please note that the encoding for the local system varies depending on the localization, so you will need to find out what that is using the locale library. I noticed that you were converting str(self), which means you will need to override the __str__ method to do the encoding there, and probably in __repr__ for the affected objects. http://boodebr.org/main/python/all-about-python-and-unicode Is a nice link that has a lot of useful information about Python and unicode. See in particular the section on "Why doesn't print work?" A: The error doesn't seem to be in the __hash__ function, but in the __str__ function. Try str(yourobject) in the object with the problem and you'll see what I mean. Please edit the question and add your __str__ function (and relevant data) so we can point you on how to correct it.
Hash method and UnicodeEncodeError
In Python 2.5, I have the following hash function: def __hash__(self): return hash(str(self)) It works well for my needs, but now I started to get the following error message. Any idea of what is going on? return hash(str(self)) UnicodeEncodeError: 'ascii' codec can't encode character u'\ufeff' in position 16: ordinal not in range(128) How could I fix this? Thanks!
[ "The problem is that you are trying to hash a string that is not convertible to ASCII. The str method takes a unicode object and, by default, converts it to ASCII.\nTo fix this problem you need to either hash the unicode object directly, or else convert the string using the correct codec.\nFor example, you might do this if you are reading unicode from the console on a US Windows localized system:\nreturn hash(mystring.encode(\"cp437\"))\n\nOn the other hand, data from the registry or API functions might be encoded as:\nreturn hash(mystring.encode(\"cp1252\"))\n\nPlease note that the encoding for the local system varies depending on the localization, so you will need to find out what that is using the locale library.\nI noticed that you were converting str(self), which means you will need to override the __str__ method to do the encoding there, and probably in __repr__ for the affected objects.\nhttp://boodebr.org/main/python/all-about-python-and-unicode\nIs a nice link that has a lot of useful information about Python and unicode. See in particular the section on \"Why doesn't print work?\"\n", "The error doesn't seem to be in the __hash__ function, but in the __str__ function.\nTry str(yourobject) in the object with the problem and you'll see what I mean.\nPlease edit the question and add your __str__ function (and relevant data) so we can point you on how to correct it.\n" ]
[ 2, 1 ]
[]
[]
[ "hash", "python", "string", "unicode" ]
stackoverflow_0000998302_hash_python_string_unicode.txt
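A small Python 2.x illustration of the failure and the two workarounds described in the first answer (the sample text is arbitrary):

    u = u'\ufeff' + u'some text'

    try:
        hash(str(u))                 # str() tries to encode to ASCII and fails
    except UnicodeEncodeError:
        pass

    h1 = hash(u)                     # workaround 1: hash the unicode object directly
    h2 = hash(u.encode('utf-8'))     # workaround 2: encode with a codec that can represent the text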
Q: Why is my bubble sort in Python so slow? I have the following code thats use bubble sort to invert a list and has a worst time performance: for i in xrange(len(l)): for j in xrange(len(l)): if l[i]>l[j]: l[i], l[j] = l[j], l[i] In some cases (when len(l) = 100000) the code spend more then 2h to complete execute, I think its so strange, please correct my code or give some suggestions. numpy and numarray solutions are welcome. A: Bubble sort is a horrible algorithm to sort with. That is quite possibly the reason. If speed is necessary, I would try another algorithm like quick sort or merge sort. A: That's not quite a bubble sort... unless I've made a trivial error, this would be closer to a python bubble sort: swapped = True while swapped: swapped = False for i in xrange(len(l)-1): if l[i] > l[i+1]: l[i],l[i+1] = l[i+1],l[i] swapped = True Note that the whole idea is that the "bubble" moves along the array, swapping adjacent values until it moves through the list, with nothing swapped. There are a few optimizations that can be made (such as shrinking the size of the inner loop), but they are usually only worth bothering with when you are "homework oriented". Edit: Fixed length() -> len() A: Bubble sort may be horrible and slow etc, but would you rather have an O(N^2) algorithm over 100 items, or O(1) one that required a dial-up connection? And a list of 100 items shouldnt take 2 hours. I don't know python, but are you by any chance copying entire lists when you make those assignments? Here's a bubble sort in Python (from Google because I am lazy): def bubbleSort(theList, max): for n in range(0,max): #upper limit varies based on size of the list temp = 0 for i in range(1, max): #keep this for bounds purposes temp = theList[i] if theList[i] < theList[i-1]: theList[i] = theList[i-1] theList[i-1] = temp and another, from wikipedia: def bubblesort(l): "Sorts l in place and returns it." for passesLeft in range(len(l)-1, 0, -1): for index in range(passesLeft): if l[index] < l[index + 1]: l[index], l[index + 1] = l[index + 1], l[index] return l The order of bubble sort is N(N-1). This is essentially N^2, because for every element you require to scan the list and compare every element. By the way, you will probably find C++ to be the fastest, then Java, then Python. A: What do you mean by numpy solution ? Numpy has some sort facilities, which are instantenous for those reasonably small arrays: import numpy as np a = np.random.randn(100000) # Take a few ms on a decent computer np.sort(a) There are 3 sorts of sort algorithms available, all are Nlog(N) on average. A: I believe you mentioned that you were trying to use that as a benchmark to compare speeds. I think generally Python is a bit faster than Ruby, but not really near Java/C/C++/C#. Java is within 2x of the C's, but all the interpreted languages were around 100x slower. You might Google "Programming Language Game" for a LOT of comparisons of apps/languages/etc. Check out a Python JIT for possibly better performance. You might also compare it to Ruby to see a more fair test. 
Edit: Just for fun (nothing to do with the question) check this-- public class Test { public static void main(String[]s) { int size=Integer.valueOf(s[0]).intValue(); Random r=new Random(); int[] l=new int[size]; for(int i=0;i<size;i++) l[i]=r.nextInt(); long ms=(new Date()).getTime(); System.out.println("built"); if(fast) { Arrays.sort(l); else { int temp; for(int i=0;i<size;i++) for(int j=0;j<size;j++) if(l[i]>l[j]) { temp=l[i]; l[j]=l[i]; l[j]=temp; } } ms=(new Date()).getTime()-ms; System.out.println("done in "+ms/1000); } } The fun thing about this: The Java run times are on the order of: Array size Slow Time Fast time 100k 2s 0s 1M 23s 0s 10M 39m 2s 100M NO 23s Not that this addition has anything to do with the question, but holy cow the built-in impelemntation is FAST. I think it took longer to generate than sort (Guess that makes sense with calls to Random and memory allocation.) Had to go into the CLI and -Xmx1000M to get that last one to even run. A: Bubble sort makes O(N2) compare operations (or iterations). For N = 100,000, that means that there will be 10,000,000,000 iterations. If that takes 2 hours (call it 10,000 seconds), then it means you get 1,000,000 iterations per second - or 1 microsecond per iteration. That's not great speed, but it isn't too bad. And I'm waving hands and ignoring constant multiplication factors. If you used a quicksort, then you'd get Nlog(N) iterations, which would mean about 1,000,000 iterations, which would take 1 second in total. (log10(N) is 5; I rounded it up to 10 for simplicity.) So, you have just amply demonstrated why bubble sort is inappropriate for large data sets, and 100,000 items is large enough to demonstrate that. A: For one, you're doing too many loops. Your inner loop should proceed from i + 1 to the end of the list, not from 0. Secondly, as noted by others, bubble sort has a O(N^2) complexity so for 100000 elements, you are looping 10,000,000,000 times. This is compounded by the fact that looping is one of the areas where interpreted languages have the worst performance. It all adds up to incredibly poor performance. This is why any computations that require such tight looping are usually written in C/C++ and wrapped for use by languages like Python. A: Here some code I put together to compare a base bubble sort against a more streamlined version (base vs modified) - the modified is about 2-3 times faster, still a slow sort, but faster from array import * from random import * from time import * def randarray(typecode, numElements, minValue, maxValue): a = array(typecode) for i in xrange(0, numElements): a.append(randint(minValue, maxValue)) return a def basesort(l): for i in xrange(len(l)): for j in xrange(len(l)): if l[i]<l[j]: l[i], l[j] = l[j], l[i] return l def modifiedsort(l): NotComplete = True i = 0 arange = xrange(len(l)) while NotComplete: NotComplete = False for j in xrange(len(l) - i): if l[i]<l[j]: l[i], l[j] = l[j], l[i] NotComplete = True i += 1 Num = 1000 b = randarray('i', Num, 1, 100000) m = b[:] print 'perform base bubble sort' t = time() basesort(b) basetime = time() - t print basetime #print a print 'complete' print 'perform modified bubble sort' t = time() modifiedsort(m) modtime = time() - t print modtime #print a print 'complete' print 'mod sort is ', basetime / modtime,' fast then base sort' A: I think that you are basically wasting your time using bubble on such a large dataset. 
There are 3 reasons why it is slow: 1) Python is slow 2) Bubble sort is slow 3) The bubble sort listed is coded incorrectly/inefficiently. Regardless of how it is coded, it will be O(N^2). Why not use a merge/tree sort ..or if you want to try quicksort (also worst case O(N^2)) it might be faster for your particular dataset. I believe quicksort is empirically faster if the data already has a lot of ordering in it. A: Bubblesort in general does not scale well to most possible inputs as the number of elements in the input grows. (I.e., it's O(N^2).) As N grows, given a random input array of size N, you are much less likely to get an array that sorts quickly with bubblesort (e.g., almost sorted arrays). You are far more likely to get an array that takes a long time to sort. However, the real kicker here is that the code you posted is not a bubble sort. Traditionally, bubblesort will terminate early if no swaps were made as well as not attempt to swap values that are already sorted. (After P number of passes, the P last items will be in the correct order, so you don't need to process them.) The actual code posted will always examine every pair in the array, so it will always run the inner loop N^2 times. For 100000 elements, that's 10000000000 iterations. A: If you're interested in making your own sort, you can change a bubble sort to a comb sort with just a couple lines of code. Comb sort is nearly as good as the best sorts. Of course, making your own sort is best left as a learning exercise. Comb sort improves on bubble sort, and rivals in speed more complex algorithms like Quicksort. http://en.wikipedia.org/wiki/Comb_sort A: That doesn't look like bubble sort to me, and if it is, it's a very inefficient implementation of it. A: Because it is going execute the comparison and possibly the swap 100,000 x 100,000 times. If the computer is fast enough to execute the innermost statement 1,000,000 times per second, that still is 167 minutes which is slightly short of 3 hours. On a side note, why are there so many of these inane questions on SO? Isn't being able to do simple algebra a prerequisite for programming? ;-) A: First of all, for the purpose of this reply, I'm assuming - since you claim it yourself - that you're only doing this to benchmark different languages. So I won't go into "bubble sort is just slow" territory. The real question is why it's so much slower in Python. The answer is that Python is inherently much slower than C++ or even Java. You don't see it in a typical event-driven or I/O-bound application, since there most time is spent either idling while waiting for input, or waiting for I/O calls to complete. In your case, however, the algorithm is entirely CPU bound, and thus you are directly measuring the performance of Python bytecode interpreter. Which, by some estimates, is 20-30x slower than executing the corresponding native code, which is what happens with both C++ and Java. In general, any time you write a long-running CPU-bound loop in Python, you should expect this kind of performance. The only way to fix this is to move the entire loop into C. Moving just the body (e.g. using NumPy) won't help you much, since loop iteration itself will still be executed by Python intepreter. A: Like the other posts say, bubble sort is horrible. It pretty much should be avoided at all costs due to the bad proformance, like you're experiencing. Luckily for you there are lots of other sorting algorithms, http://en.wikipedia.org/wiki/Sorting_algorithm, for examples. 
In my experience in school is that quicksort and mergesort are the other two basic sorting algorithms introduced with, or shortly after, bubble sort. So I would recommend you look into those for learning more effective ways to sort. A: If you must code your own, use an insertion sort. Its about the same amount of code, but several times faster. A: I forgot to add, if you have some idea of the size of the dataset and the distribution of keys then you can use a radix sort which would be O(N). To get the idea of radix sort, consider the case where you are sorting say numbers more or less distributed between 0, 100,000. Then you just create something similar to a hash table, say an array of 100,000 lists, and add each number to the bucket. Here's an implementation I wrote in a few minutes that generates some random data, sorts it, and prints out a random segment. The time is less than 1 sec to execute for an array of 100,000 integers. Option Strict On Option Explicit On Module Module1 Private Const MAX_SIZE As Integer = 100000 Private m_input(MAX_SIZE) As Integer Private m_table(MAX_SIZE) As List(Of Integer) Private m_randomGen As New Random() Private m_operations As Integer = 0 Private Sub generateData() ' fill with random numbers between 0 and MAX_SIZE - 1 For i = 0 To MAX_SIZE - 1 m_input(i) = m_randomGen.Next(0, MAX_SIZE - 1) Next End Sub Private Sub sortData() For i As Integer = 0 To MAX_SIZE - 1 Dim x = m_input(i) If m_table(x) Is Nothing Then m_table(x) = New List(Of Integer) End If m_table(x).Add(x) ' clearly this is simply going to be MAX_SIZE -1 m_operations = m_operations + 1 Next End Sub Private Sub printData(ByVal start As Integer, ByVal finish As Integer) If start < 0 Or start > MAX_SIZE - 1 Then Throw New Exception("printData - start out of range") End If If finish < 0 Or finish > MAX_SIZE - 1 Then Throw New Exception("printData - finish out of range") End If For i As Integer = start To finish If m_table(i) IsNot Nothing Then For Each x In m_table(i) Console.WriteLine(x) Next End If Next End Sub ' run the entire sort, but just print out the first 100 for verification purposes Private Sub test() m_operations = 0 generateData() Console.WriteLine("Time started = " & Now.ToString()) sortData() Console.WriteLine("Time finished = " & Now.ToString & " Number of operations = " & m_operations.ToString()) ' print out a random 100 segment from the sorted array Dim start As Integer = m_randomGen.Next(0, MAX_SIZE - 101) printData(start, start + 100) End Sub Sub Main() test() Console.ReadLine() End Sub End Module Time started = 6/15/2009 4:04:08 PM Time finished = 6/15/2009 4:04:08 PM Number of operations = 100000 21429 21430 21430 21431 21431 21432 21433 21435 21435 21435 21436 21437 21437 21439 21441 ... A: You can do l.reverse() Script ee.py: l = [] for i in xrange(100000): l.append(i) l.reverse() lyrae@localhost:~/Desktop$ time python ee.py real 0m0.047s user 0m0.044s sys 0m0.004s
Why is my bubble sort in Python so slow?
I have the following code that uses bubble sort to invert a list and shows worst-case performance: for i in xrange(len(l)): for j in xrange(len(l)): if l[i]>l[j]: l[i], l[j] = l[j], l[i] In some cases (when len(l) = 100000) the code spends more than 2 hours to complete, which seems very strange; please correct my code or give some suggestions. numpy and numarray solutions are welcome.
[ "Bubble sort is a horrible algorithm to sort with. That is quite possibly the reason. If speed is necessary, I would try another algorithm like quick sort or merge sort. \n", "That's not quite a bubble sort... unless I've made a trivial error, this would be closer to a python bubble sort:\nswapped = True\nwhile swapped:\n swapped = False\n for i in xrange(len(l)-1):\n if l[i] > l[i+1]:\n l[i],l[i+1] = l[i+1],l[i]\n swapped = True\n\nNote that the whole idea is that the \"bubble\" moves along the array, swapping adjacent values until it moves through the list, with nothing swapped. There are a few optimizations that can be made (such as shrinking the size of the inner loop), but they are usually only worth bothering with when you are \"homework oriented\".\nEdit: Fixed length() -> len()\n", "Bubble sort may be horrible and slow etc, but would you rather have an O(N^2) algorithm over 100 items, or O(1) one that required a dial-up connection?\nAnd a list of 100 items shouldnt take 2 hours. I don't know python, but are you by any chance copying entire lists when you make those assignments?\nHere's a bubble sort in Python (from Google because I am lazy):\ndef bubbleSort(theList, max):\n for n in range(0,max): #upper limit varies based on size of the list\n temp = 0\n for i in range(1, max): #keep this for bounds purposes\n temp = theList[i]\n if theList[i] < theList[i-1]:\n theList[i] = theList[i-1]\n theList[i-1] = temp\n\nand another, from wikipedia:\ndef bubblesort(l):\n \"Sorts l in place and returns it.\"\n for passesLeft in range(len(l)-1, 0, -1):\n for index in range(passesLeft):\n if l[index] < l[index + 1]:\n l[index], l[index + 1] = l[index + 1], l[index]\n return l\n\nThe order of bubble sort is N(N-1). This is essentially N^2, because for every element you require to scan the list and compare every element.\nBy the way, you will probably find C++ to be the fastest, then Java, then Python.\n", "What do you mean by numpy solution ? Numpy has some sort facilities, which are instantenous for those reasonably small arrays:\nimport numpy as np\na = np.random.randn(100000)\n# Take a few ms on a decent computer\nnp.sort(a)\n\nThere are 3 sorts of sort algorithms available, all are Nlog(N) on average.\n", "I believe you mentioned that you were trying to use that as a benchmark to compare speeds.\nI think generally Python is a bit faster than Ruby, but not really near Java/C/C++/C#. Java is within 2x of the C's, but all the interpreted languages were around 100x slower. \nYou might Google \"Programming Language Game\" for a LOT of comparisons of apps/languages/etc. Check out a Python JIT for possibly better performance.\nYou might also compare it to Ruby to see a more fair test. 
\nEdit: Just for fun (nothing to do with the question) check this--\npublic class Test {\n public static void main(String[]s) {\n int size=Integer.valueOf(s[0]).intValue();\n Random r=new Random();\n int[] l=new int[size];\n for(int i=0;i<size;i++)\n l[i]=r.nextInt();\n long ms=(new Date()).getTime();\n System.out.println(\"built\");\n if(fast) {\n Arrays.sort(l);\n else {\n int temp;\n for(int i=0;i<size;i++)\n for(int j=0;j<size;j++)\n if(l[i]>l[j]) { \n temp=l[i];\n l[j]=l[i];\n l[j]=temp; \n }\n }\n ms=(new Date()).getTime()-ms;\n System.out.println(\"done in \"+ms/1000);\n }\n}\n\nThe fun thing about this: The Java run times are on the order of:\n\nArray size Slow Time Fast time\n 100k 2s 0s\n 1M 23s 0s\n 10M 39m 2s\n100M NO 23s\n\nNot that this addition has anything to do with the question, but holy cow the built-in impelemntation is FAST. I think it took longer to generate than sort (Guess that makes sense with calls to Random and memory allocation.)\nHad to go into the CLI and -Xmx1000M to get that last one to even run.\n", "Bubble sort makes O(N2) compare operations (or iterations). For N = 100,000, that means that there will be 10,000,000,000 iterations. If that takes 2 hours (call it 10,000 seconds), then it means you get 1,000,000 iterations per second - or 1 microsecond per iteration. That's not great speed, but it isn't too bad. And I'm waving hands and ignoring constant multiplication factors.\nIf you used a quicksort, then you'd get Nlog(N) iterations, which would mean about 1,000,000 iterations, which would take 1 second in total. (log10(N) is 5; I rounded it up to 10 for simplicity.)\nSo, you have just amply demonstrated why bubble sort is inappropriate for large data sets, and 100,000 items is large enough to demonstrate that.\n", "For one, you're doing too many loops. Your inner loop should proceed from i + 1 to the end of the list, not from 0. Secondly, as noted by others, bubble sort has a O(N^2) complexity so for 100000 elements, you are looping 10,000,000,000 times. This is compounded by the fact that looping is one of the areas where interpreted languages have the worst performance. It all adds up to incredibly poor performance. 
This is why any computations that require such tight looping are usually written in C/C++ and wrapped for use by languages like Python.\n", "Here some code I put together to compare a base bubble sort against a more streamlined version (base vs modified) - the modified is about 2-3 times faster, still a slow sort, but faster\nfrom array import *\nfrom random import *\nfrom time import *\n\ndef randarray(typecode, numElements, minValue, maxValue):\n a = array(typecode)\n for i in xrange(0, numElements):\n a.append(randint(minValue, maxValue))\n return a\n\ndef basesort(l):\n for i in xrange(len(l)):\n for j in xrange(len(l)):\n if l[i]<l[j]:\n l[i], l[j] = l[j], l[i]\n return l\n\ndef modifiedsort(l):\n NotComplete = True\n i = 0\n arange = xrange(len(l))\n while NotComplete:\n NotComplete = False\n for j in xrange(len(l) - i):\n if l[i]<l[j]:\n l[i], l[j] = l[j], l[i]\n NotComplete = True\n i += 1\n\nNum = 1000\nb = randarray('i', Num, 1, 100000)\nm = b[:]\n\nprint 'perform base bubble sort'\nt = time()\nbasesort(b)\nbasetime = time() - t\nprint basetime\n#print a\nprint 'complete'\n\nprint 'perform modified bubble sort'\nt = time()\nmodifiedsort(m)\nmodtime = time() - t\nprint modtime\n#print a\nprint 'complete'\n\nprint 'mod sort is ', basetime / modtime,' fast then base sort'\n\n", "I think that you are basically wasting your time using bubble on such a large dataset. There are 3 reasons why it is slow:\n1) Python is slow\n2) Bubble sort is slow\n3) The bubble sort listed is coded incorrectly/inefficiently.\nRegardless of how it is coded, it will be O(N^2). Why not use a merge/tree sort ..or if you want to try quicksort (also worst case O(N^2)) it might be faster for your particular dataset. I believe quicksort is empirically faster if the data already has a lot of ordering in it.\n", "Bubblesort in general does not scale well to most possible inputs as the number of elements in the input grows. (I.e., it's O(N^2).)\nAs N grows, given a random input array of size N, you are much less likely to get an array that sorts quickly with bubblesort (e.g., almost sorted arrays). You are far more likely to get an array that takes a long time to sort.\nHowever, the real kicker here is that the code you posted is not a bubble sort. Traditionally, bubblesort will terminate early if no swaps were made as well as not attempt to swap values that are already sorted. (After P number of passes, the P last items will be in the correct order, so you don't need to process them.) The actual code posted will always examine every pair in the array, so it will always run the inner loop N^2 times. For 100000 elements, that's 10000000000 iterations.\n", "If you're interested in making your own sort, you can change a bubble sort to a comb sort with just a couple lines of code. Comb sort is nearly as good as the best sorts. Of course, making your own sort is best left as a learning exercise.\n\nComb sort improves on bubble sort, and\n rivals in speed more complex\n algorithms like Quicksort.\n\nhttp://en.wikipedia.org/wiki/Comb_sort\n", "That doesn't look like bubble sort to me, and if it is, it's a very inefficient implementation of it.\n", "Because it is going execute the comparison and possibly the swap 100,000 x 100,000 times. If the computer is fast enough to execute the innermost statement 1,000,000 times per second, that still is 167 minutes which is slightly short of 3 hours.\nOn a side note, why are there so many of these inane questions on SO? 
Isn't being able to do simple algebra a prerequisite for programming? ;-)\n", "First of all, for the purpose of this reply, I'm assuming - since you claim it yourself - that you're only doing this to benchmark different languages. So I won't go into \"bubble sort is just slow\" territory. The real question is why it's so much slower in Python.\nThe answer is that Python is inherently much slower than C++ or even Java. You don't see it in a typical event-driven or I/O-bound application, since there most time is spent either idling while waiting for input, or waiting for I/O calls to complete. In your case, however, the algorithm is entirely CPU bound, and thus you are directly measuring the performance of Python bytecode interpreter. Which, by some estimates, is 20-30x slower than executing the corresponding native code, which is what happens with both C++ and Java.\nIn general, any time you write a long-running CPU-bound loop in Python, you should expect this kind of performance. The only way to fix this is to move the entire loop into C. Moving just the body (e.g. using NumPy) won't help you much, since loop iteration itself will still be executed by Python intepreter.\n", "Like the other posts say, bubble sort is horrible. It pretty much should be avoided at all costs due to the bad proformance, like you're experiencing.\nLuckily for you there are lots of other sorting algorithms, http://en.wikipedia.org/wiki/Sorting_algorithm, for examples.\nIn my experience in school is that quicksort and mergesort are the other two basic sorting algorithms introduced with, or shortly after, bubble sort. So I would recommend you look into those for learning more effective ways to sort.\n", "If you must code your own, use an insertion sort. Its about the same amount of code, but several times faster.\n", "I forgot to add, if you have some idea of the size of the dataset and the distribution of keys then you can use a radix sort which would be O(N). To get the idea of radix sort, consider the case where you are sorting say numbers more or less distributed between 0, 100,000. Then you just create something similar to a hash table, say an array of 100,000 lists, and add each number to the bucket. Here's an implementation I wrote in a few minutes that generates some random data, sorts it, and prints out a random segment. 
The time is less than 1 sec to execute for an array of 100,000 integers.\nOption Strict On\nOption Explicit On\nModule Module1\nPrivate Const MAX_SIZE As Integer = 100000\nPrivate m_input(MAX_SIZE) As Integer\nPrivate m_table(MAX_SIZE) As List(Of Integer)\nPrivate m_randomGen As New Random()\nPrivate m_operations As Integer = 0\n\nPrivate Sub generateData()\n ' fill with random numbers between 0 and MAX_SIZE - 1\n For i = 0 To MAX_SIZE - 1\n m_input(i) = m_randomGen.Next(0, MAX_SIZE - 1)\n Next\n\nEnd Sub\n\nPrivate Sub sortData()\n For i As Integer = 0 To MAX_SIZE - 1\n Dim x = m_input(i)\n If m_table(x) Is Nothing Then\n m_table(x) = New List(Of Integer)\n End If\n m_table(x).Add(x)\n ' clearly this is simply going to be MAX_SIZE -1\n m_operations = m_operations + 1\n Next\nEnd Sub\n\n Private Sub printData(ByVal start As Integer, ByVal finish As Integer)\n If start < 0 Or start > MAX_SIZE - 1 Then\n Throw New Exception(\"printData - start out of range\")\n End If\n If finish < 0 Or finish > MAX_SIZE - 1 Then\n Throw New Exception(\"printData - finish out of range\")\n End If\n For i As Integer = start To finish\n If m_table(i) IsNot Nothing Then\n For Each x In m_table(i)\n Console.WriteLine(x)\n Next\n End If\n Next\nEnd Sub\n\n' run the entire sort, but just print out the first 100 for verification purposes\nPrivate Sub test()\n m_operations = 0\n generateData()\n Console.WriteLine(\"Time started = \" & Now.ToString())\n sortData()\n Console.WriteLine(\"Time finished = \" & Now.ToString & \" Number of operations = \" & m_operations.ToString())\n ' print out a random 100 segment from the sorted array\n Dim start As Integer = m_randomGen.Next(0, MAX_SIZE - 101)\n printData(start, start + 100)\nEnd Sub\n\nSub Main()\n test()\n Console.ReadLine()\nEnd Sub\n\nEnd Module\nTime started = 6/15/2009 4:04:08 PM\nTime finished = 6/15/2009 4:04:08 PM Number of operations = 100000\n21429\n21430\n21430\n21431\n21431\n21432\n21433\n21435\n21435\n21435\n21436\n21437\n21437\n21439\n21441\n...\n", "You can do\nl.reverse()\n\nScript ee.py:\nl = []\nfor i in xrange(100000):\n l.append(i)\n\nl.reverse()\n\nlyrae@localhost:~/Desktop$ time python ee.py\nreal 0m0.047s\nuser 0m0.044s\nsys 0m0.004s\n\n" ]
[ 25, 13, 6, 5, 4, 3, 2, 2, 2, 2, 1, 1, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "bubble_sort", "python" ]
stackoverflow_0000997322_bubble_sort_python.txt
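The comb-sort suggestion in the answers above comes without code; a minimal sketch of the idea looks like this (the 1.3 shrink factor is the commonly quoted value, and the function name is made up, not taken from the answer):

    def combsort(l):
        gap = len(l)
        swapped = True
        while gap > 1 or swapped:
            gap = max(1, int(gap / 1.3))   # shrink the gap each pass; 1.3 is the usual factor
            swapped = False
            for i in xrange(len(l) - gap):
                if l[i] > l[i + gap]:
                    l[i], l[i + gap] = l[i + gap], l[i]
                    swapped = True
        return l

    print combsort([5, 1, 4, 2, 8])   # [1, 2, 4, 5, 8]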
Q: How can I get the order of an element attribute list using Python xml.sax? How can I get the order of an element attribute list? It's not totally necessary for the final processing, but it's nice to: in a filter, not to gratuitously reorder the attribute list while debugging, print the data in the same order as it appears in the input Here's my current attribute processor which does a dictionary-like pass over the attributes. class MySaxDocumentHandler(xml.sax.handler.ContentHandler): def startElement(self, name, attrs): for attrName in attrs.keys(): ... A: I don't think it can be done with SAX (at least as currently supported by Python). It could be done with expat, setting the ordered_attributes attribute of the parser object to True (the attributes are then two parallel lists, one of names and one of values, in the same order as in the XML source). A: Unfortunately, it's impossible in the Python implementation of Sax. This code from the Python library (v2.5) tells you all you need to know: class AttributesImpl: def __init__(self, attrs): """Non-NS-aware implementation. attrs should be of the form {name : value}.""" self._attrs = attrs The StartElement handler is passed an object implementing the AttributeImpl specification, which uses a plain ol' Python dict type to store key/value pairs. Python dict types do not guarantee order of keys.
How can I get the order of an element attribute list using Python xml.sax?
How can I get the order of an element attribute list? It's not totally necessary for the final processing, but it's nice to: in a filter, not to gratuitously reorder the attribute list while debugging, print the data in the same order as it appears in the input Here's my current attribute processor which does a dictionary-like pass over the attributes. class MySaxDocumentHandler(xml.sax.handler.ContentHandler): def startElement(self, name, attrs): for attrName in attrs.keys(): ...
[ "I don't think it can be done with SAX (at least as currently supported by Python). It could be done with expat, setting the ordered_attributes attribute of the parser object to True (the attributes are then two parallel lists, one of names and one of values, in the same order as in the XML source).\n", "Unfortunately, it's impossible in the Python implementation of Sax.\nThis code from the Python library (v2.5) tells you all you need to know:\nclass AttributesImpl:\n\n def __init__(self, attrs):\n \"\"\"Non-NS-aware implementation.\n attrs should be of the form {name : value}.\"\"\"\n\n self._attrs = attrs\n\nThe StartElement handler is passed an object implementing the AttributeImpl specification, which uses a plain ol' Python dict type to store key/value pairs. Python dict types do not guarantee order of keys.\n" ]
[ 1, 1 ]
[]
[]
[ "python", "sax", "xml" ]
stackoverflow_0000998514_python_sax_xml.txt
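The expat route mentioned in the first answer above can be sketched in a few lines; with ordered_attributes switched on, expat hands the handler a flat [name, value, name, value, ...] list in source order (the sample XML string below is only an illustration):

    import xml.parsers.expat

    def start_element(name, attrs):
        # attrs is a flat list: [name1, value1, name2, value2, ...] in document order
        for attr_name, attr_value in zip(attrs[0::2], attrs[1::2]):
            print attr_name, attr_value

    parser = xml.parsers.expat.ParserCreate()
    parser.ordered_attributes = True
    parser.StartElementHandler = start_element
    parser.Parse('<row Id="37501" PostId="135577" Text="hello"/>', True)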
Q: Is  a valid character in XML? On this data: <row Id="37501" PostId="135577" Text="...uses though.&#x10;"/> I'm getting an error with the Python sax parser: xml.sax._exceptions.SAXParseException: comments.xml:29776:332: reference to invalid character number I trimmed the example; 332 points to "&#x10;". Is the parser correct in rejecting this character? A: As others have stated, you probably meant &#10;. The reason why &#x10; (0x10 = 10h = 16) is invalid is that it's explicitly excluded by the XML 1.0 standard: (http://www.w3.org/TR/xml/#NT-Char) Char ::= #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF] A: &#10; is the linefeed character, which seems to be the intent. &#x10; would be the same as &#16; (10 hex is 16 decimal) and would refer to the DLE (data link escape) character. DLE is a transmission control character used to control the interpretation of data being transmitted.
Is  a valid character in XML?
On this data: <row Id="37501" PostId="135577" Text="...uses though.&#x10;"/> I'm getting an error with the Python sax parser: xml.sax._exceptions.SAXParseException: comments.xml:29776:332: reference to invalid character number I trimmed the example; 332 points to "&#x10;". Is the parser correct in rejecting this character?
[ "As others have stated, you probably meant &#10;. The reason why &#x10; (0x10 = 10h = 16) is invalid is that it's explicitly excluded by the XML 1.0 standard: (http://www.w3.org/TR/xml/#NT-Char)\nChar ::= #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF]\n\n", "&#10; is the linefeed character, which seems to be the intent.\n&#x10; would be the same as &#16; (10 hex is 16 decimal) and would refer to the DLE (data link escape) character.\nDLE is a transmission control character used to control the interpretation of data being transmitted.\n" ]
[ 14, 6 ]
[]
[]
[ "python", "xml" ]
stackoverflow_0000998950_python_xml.txt
Q: Python urllib2 timeout when using Tor as proxy? I am using Python's urllib2 with Tor as a proxy to access a website. When I open the site's main page it works fine but when I try to view the login page (not actually log-in but just view it) I get the following error... URLError: <urlopen error (10060, 'Operation timed out')> To counteract this I did the following: import socket socket.setdefaulttimeout(None). I still get the same timeout error. Does this mean the website is timing out on the server side? (I don't know much about http processes so sorry if this is a dumb question) Is there any way I can correct it so that Python is able to view the page? Thanks, Rob A: According to the Python Socket Documentation the default is no timeout so specifying a value of "None" is redundant. There are a number of possible reasons that your connection is dropping. One could be that your user-agent is "Python-urllib" which may very well be blocked. To change your user agent: request = urllib2.Request('site.com/login') request.add_header('User-Agent','Mozilla/5.0 (X11; U; Linux i686; it-IT; rv:1.9.0.2) Gecko/2008092313 Ubuntu/9.04 (jaunty) Firefox/3.5') You may also want to try overriding the proxy settings before you try and open the url using something along the lines of: proxy = urllib2.ProxyHandler({"http":"http://127.0.0.1:8118"}) opener = urllib2.build_opener(proxy) urllib2.install_opener(opener) A: I don't know enough about Tor to be sure, but the timeout may not happen on the server side, but on one of the Tor nodes somewhere between you and the server. In that case there is nothing you can do other than to retry the connection. A: urllib2.urlopen(url[, data][, timeout]) The optional timeout parameter specifies a timeout in seconds for blocking operations like the connection attempt (if not specified, the global default timeout setting will be used). This actually only works for HTTP, HTTPS, FTP and FTPS connections. http://docs.python.org/library/urllib2.html
Python urllib2 timeout when using Tor as proxy?
I am using Python's urllib2 with Tor as a proxy to access a website. When I open the site's main page it works fine but when I try to view the login page (not actually log-in but just view it) I get the following error... URLError: <urlopen error (10060, 'Operation timed out')> To counteract this I did the following: import socket socket.setdefaulttimeout(None). I still get the same timeout error. Does this mean the website is timing out on the server side? (I don't know much about http processes so sorry if this is a dumb question) Is there any way I can correct it so that Python is able to view the page? Thanks, Rob
[ "According to the Python Socket Documentation the default is no timeout so specifying a value of \"None\" is redundant. \nThere are a number of possible reasons that your connection is dropping. One could be that your user-agent is \"Python-urllib\" which may very well be blocked. To change your user agent:\nrequest = urllib2.Request('site.com/login')\nrequest.add_header('User-Agent','Mozilla/5.0 (X11; U; Linux i686; it-IT; rv:1.9.0.2) Gecko/2008092313 Ubuntu/9.04 (jaunty) Firefox/3.5')\n\nYou may also want to try overriding the proxy settings before you try and open the url using something along the lines of:\nproxy = urllib2.ProxyHandler({\"http\":\"http://127.0.0.1:8118\"}) \nopener = urllib2.build_opener(proxy)\nurllib2.install_opener(opener)\n\n", "I don't know enough about Tor to be sure, but the timeout may not happen on the server side, but on one of the Tor nodes somewhere between you and the server. In that case there is nothing you can do other than to retry the connection.\n", "\nurllib2.urlopen(url[, data][, timeout])\nThe optional timeout parameter specifies a timeout in seconds for blocking operations like the connection attempt (if not specified, the global default timeout setting will be used). This actually only works for HTTP, HTTPS, FTP and FTPS connections.\n\nhttp://docs.python.org/library/urllib2.html\n" ]
[ 3, 0, 0 ]
[]
[]
[ "python", "timeout", "tor", "urllib2" ]
stackoverflow_0000997969_python_timeout_tor_urllib2.txt
Q: System theme icons and PyQt4 I'm writing a basic program in python using the PyQt4 module. I'd like to be able to use my system theme's icons for things like the preference dialog's icon, but i have no idea how to do this. So my question is, how do you get the location of an icon, but make sure it changes with the system's icon theme? If it matters, i'm developing this under ubuntu 9.04, so i am using the gnome desktop. A: Unfortunately, It appears that Qt does not support getting icons for a specific theme. There are ways to do this for both KDE and Gnome. The KDE way is quite elegant, which makes sense considering that Qt is KDE's toolkit. Instead of using the PyQt4.QtGui class QIcon, you instead use the PyKDE4.kdeui class KIcon. An example of this is: from PyKDE4.kdeui import * icon = KIcon("*The Icon Name*") see the PyKDE documentation for this class, here. One way to gain support for this for gnome is to use the python gtk package. It is not as nice as the kde way, but it works none the less. It can be used like this: from PyQt4 import QtGui from gtk import icon_theme_get_default iconTheme = icon_theme_get_default() iconInfo = iconTheme.lookup_icon("*The Icon Name*", *Int of the icon size*, 0) icon = QtGui.QIcon(iconInfo.get_filename()) See the documentation for the Icon Theme class and Icon Info class. EDIT: thanks for the correction CesarB A: Use the PyKDE4 KIcon class: http://api.kde.org/pykde-4.2-api/kdeui/KIcon.html A: I spent a decent amount of researching this myself not long ago, and my conclusion was that, unfortunately, Qt doesn't provide this functionality in a cross-platform fashion. Ideally the QIcon class would have defaults for file open, save, '+', '-', preferences, etc, but considering it doesn't you'll have to grab the appropriate icon for your desktop environment.
System theme icons and PyQt4
I'm writing a basic program in python using the PyQt4 module. I'd like to be able to use my system theme's icons for things like the preference dialog's icon, but i have no idea how to do this. So my question is, how do you get the location of an icon, but make sure it changes with the system's icon theme? If it matters, i'm developing this under ubuntu 9.04, so i am using the gnome desktop.
[ "Unfortunately, It appears that Qt does not support getting icons for a specific theme. There are ways to do this for both KDE and Gnome.\nThe KDE way is quite elegant, which makes sense considering that Qt is KDE's toolkit. Instead of using the PyQt4.QtGui class QIcon, you instead use the PyKDE4.kdeui class KIcon. An example of this is:\nfrom PyKDE4.kdeui import *\nicon = KIcon(\"*The Icon Name*\")\n\nsee the PyKDE documentation for this class, here.\nOne way to gain support for this for gnome is to use the python gtk package. It is not as nice as the kde way, but it works none the less. It can be used like this:\nfrom PyQt4 import QtGui\nfrom gtk import icon_theme_get_default\n\niconTheme = icon_theme_get_default()\niconInfo = iconTheme.lookup_icon(\"*The Icon Name*\", *Int of the icon size*, 0)\nicon = QtGui.QIcon(iconInfo.get_filename())\n\nSee the documentation for the Icon Theme class and Icon Info class.\nEDIT: thanks for the correction CesarB\n", "Use the PyKDE4 KIcon class:\nhttp://api.kde.org/pykde-4.2-api/kdeui/KIcon.html\n", "I spent a decent amount of researching this myself not long ago, and my conclusion was that, unfortunately, Qt doesn't provide this functionality in a cross-platform fashion. Ideally the QIcon class would have defaults for file open, save, '+', '-', preferences, etc, but considering it doesn't you'll have to grab the appropriate icon for your desktop environment.\n" ]
[ 7, 0, 0 ]
[]
[]
[ "icons", "pyqt4", "python" ]
stackoverflow_0000997904_icons_pyqt4_python.txt
Q: Looking for Windows Text Editor which supports GIT I am looking for a Text Editor on Windows which is integrated with GIT (check out, check in from the UI). Also, it would be nice is this editor could also support Python syntax highlighting. Is there anything like that available? Thanks! A: Here's a list of Editors and IDEs that integrate GIT, not sure if there's something that fits your need. The most fitting would be the Eclipse Plugin. A: Eclipse should be able to fit the bill. I know there's a decent python plugin, and I'm sure there's one for git by now. A: E Text Editor is the text editor for windows that i fell in love with. It "has support" for GIT via the project pane, because the project pane works with TortoiseGit. Depending on your preferences this may or may not be the optimal setup. A: Have you tried vim with this plugin? http://github.com/motemen/git-vim
Looking for Windows Text Editor which supports GIT
I am looking for a Text Editor on Windows which is integrated with GIT (check out, check in from the UI). Also, it would be nice is this editor could also support Python syntax highlighting. Is there anything like that available? Thanks!
[ "Here's a list of Editors and IDEs that integrate GIT, not sure if there's something that fits your need. The most fitting would be the Eclipse Plugin.\n", "Eclipse should be able to fit the bill. I know there's a decent python plugin, and I'm sure there's one for git by now.\n", "E Text Editor is the text editor for windows that i fell in love with. It \"has support\" for GIT via the project pane, because the project pane works with TortoiseGit. Depending on your preferences this may or may not be the optimal setup.\n", "Have you tried vim with this plugin?\nhttp://github.com/motemen/git-vim\n" ]
[ 4, 1, 1, 0 ]
[]
[]
[ "editor", "git", "ide", "python" ]
stackoverflow_0000997485_editor_git_ide_python.txt
Q: KOI8-R: Having trouble translating a string This Python script gets translit for Russian letters: s = u'Код Обмена Информацией, 8 бит'.encode('koi8-r') print ''.join([chr(ord(c) & 0x7F) for c in s]) # kOD oBMENA iNFORMACIEJ, 8 BIT That works. But I want to modify it so as to get user input. Now I'm stuck at this: s = raw_input("Enter a string you want to translit: ") s = unicode(s) s = s.encode('koi8-r') print ''.join([chr(ord(c) & 0x7F) for c in s]) Ending up with this: UnicodeDecodeError: 'ascii' codec can't decode byte 0xef in position 0: ordinal not in range(128) What's wrong? A: s = unicode(s) expects ascii encoding by default. You need to supply it an encoding your input is in, e.g. s = unicode(s, 'utf-8'). A: try unicode(s, encoding) where encoding is whatever your terminal is in. A: Looking at the error messages that you are seeing, it seems to me that your terminal encoding is probably set to KOI8-R, in which case you don't need to perform any decoding on the input data. If this is the case then all you need is: >>> s = raw_input("Enter a string you want to translit: ") >>> print ''.join([chr(ord(c) & 0x7F) for c in s]) kOD oBMENA iNFORMACIEJ, 8 BIT You can double check this by s.decode('koi8-r') which should succeed and return the equivalent unicode string.
KOI8-R: Having trouble translating a string
This Python script gets translit for Russian letters: s = u'Код Обмена Информацией, 8 бит'.encode('koi8-r') print ''.join([chr(ord(c) & 0x7F) for c in s]) # kOD oBMENA iNFORMACIEJ, 8 BIT That works. But I want to modify it so as to get user input. Now I'm stuck at this: s = raw_input("Enter a string you want to translit: ") s = unicode(s) s = s.encode('koi8-r') print ''.join([chr(ord(c) & 0x7F) for c in s]) Ending up with this: UnicodeDecodeError: 'ascii' codec can't decode byte 0xef in position 0: ordinal not in range(128) What's wrong?
[ "s = unicode(s) expects ascii encoding by default. You need to supply it an encoding your input is in, e.g. s = unicode(s, 'utf-8').\n", "try unicode(s, encoding) where encoding is whatever your terminal is in.\n", "Looking at the error messages that you are seeing, it seems to me that your terminal encoding is probably set to KOI8-R, in which case you don't need to perform any decoding on the input data. If this is the case then all you need is:\n>>> s = raw_input(\"Enter a string you want to translit: \")\n>>> print ''.join([chr(ord(c) & 0x7F) for c in s])\nkOD oBMENA iNFORMACIEJ, 8 BIT\n\nYou can double check this by s.decode('koi8-r') which should succeed and return the equivalent unicode string.\n" ]
[ 2, 1, 0 ]
[]
[]
[ "encoding", "python" ]
stackoverflow_0000995531_encoding_python.txt
Q: xml.dom.minidom Document() in Python/django outputting memory location I'm learning Python and django at the same time. I'm trying to create an xml document to return some XML from a view. I'm using the django development server at the moment and I keep getting this information spitting out in my views instead of the document I tried to create. Here's my code from django.http import HttpResponse from mypoject.myapp.models import Username from django.core import serializers from xml.dom.minidom import Document import datetime def authenticate(request, username): if request.method == "GET": #Try to get the username try: checkUser = Username.objects.get(username__exact = username) user = userCheck.get(username__exact = username) userXML = serializers.serialize("xml", checkUser) except Username.DoesNotExist: #return XML with status "Failed" return HttpResponse(xml, mimetype="text/xml") except: #return XML with status "Failed" xmlFailed = Document() meta = xmlFailed.createElement("meta") xmlFailed.appendChild(meta) status = xmlFailed.createElement("status") meta.appendChild(status) statusText = xmlFailed.createTextNode("Failed") status.appendChild(statusText) message = xmlFailed.createElement("message") meta.appendChild(message) totalRecords = xmlFailed.createElement("totalRecords") meta.appendChild(totalRecords) executionTime = xmlFailed.createElement("executionTime") meta.appendChild(executionTime) return HttpResponse(xmlFailed, mimetype="text/xml") else: #return happy XML code with status "Success" And here's what's going to the screen when I view it in my browser... <xml.dom.minidom.Document instance at 0x993192c> If I comment out the Document() creation that goes away. So I'm think I just need it to not spit out the information. I've been searching all over and I can't find a strait answer which leads me to believe I'm missing something blatantly obvious. Thanks for any help! A: You'll need to call xmlFailed.toxml() or the like in order to get XML out of your object -- looks like that's not what you're doing (in the code you didn't show us).
xml.dom.minidom Document() in Python/django outputting memory location
I'm learning Python and django at the same time. I'm trying to create an xml document to return some XML from a view. I'm using the django development server at the moment and I keep getting this information spitting out in my views instead of the document I tried to create. Here's my code from django.http import HttpResponse from mypoject.myapp.models import Username from django.core import serializers from xml.dom.minidom import Document import datetime def authenticate(request, username): if request.method == "GET": #Try to get the username try: checkUser = Username.objects.get(username__exact = username) user = userCheck.get(username__exact = username) userXML = serializers.serialize("xml", checkUser) except Username.DoesNotExist: #return XML with status "Failed" return HttpResponse(xml, mimetype="text/xml") except: #return XML with status "Failed" xmlFailed = Document() meta = xmlFailed.createElement("meta") xmlFailed.appendChild(meta) status = xmlFailed.createElement("status") meta.appendChild(status) statusText = xmlFailed.createTextNode("Failed") status.appendChild(statusText) message = xmlFailed.createElement("message") meta.appendChild(message) totalRecords = xmlFailed.createElement("totalRecords") meta.appendChild(totalRecords) executionTime = xmlFailed.createElement("executionTime") meta.appendChild(executionTime) return HttpResponse(xmlFailed, mimetype="text/xml") else: #return happy XML code with status "Success" And here's what's going to the screen when I view it in my browser... <xml.dom.minidom.Document instance at 0x993192c> If I comment out the Document() creation that goes away. So I'm think I just need it to not spit out the information. I've been searching all over and I can't find a strait answer which leads me to believe I'm missing something blatantly obvious. Thanks for any help!
[ "You'll need to call xmlFailed.toxml() or the like in order to get XML out of your object -- looks like that's not what you're doing (in the code you didn't show us).\n" ]
[ 1 ]
[]
[]
[ "django", "minidom", "python", "xml" ]
stackoverflow_0000999462_django_minidom_python_xml.txt
Q: Problems with SQLAlchemy and VirtualEnv I'm trying to use SQLAlchemy under a virtualenv on OS X 10.5, but cannot seem to get it to load whatsoever. Here's what I've done mkvirtualenv --no-site-packages test easy_install sqlalchemy I try to import sqlalchemy from the interpreter and everything works fine, but if i try to import sqlalchemy from a python script, I get the following error: Here's the tutorial script from IBM from sqlalchemy.ext.declarative import declarative_base from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey Base = declarative_base() class Filesystem(Base): __tablename__ = 'filesystem' path = Column(String, primary_key=True) name = Column(String) def __init__(self, path,name): self.path = path self.name = name def __repr__(self): return "<Metadata('%s','%s')>" % (self.path,self.name) I try running 'python test.py' and this is the result: $ python test.py Traceback (most recent call last): File "test.py", line 4, in <module> from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey File "/Users/grant/Development/Aircraft/sqlalchemy.py", line 3, in <module> from sqlalchemy.ext.declarative import declarative_base ImportError: No module named ext.declarative Here's what's in my sys.path >>> import sys >>> print '\n'.join(sys.path) /Users/grant/Development/Python/test/lib/python2.6/site-packages/setuptools-0.6c9-py2.6.egg /Users/grant/Development/Python/test/lib/python2.6/site-packages/SQLAlchemy-0.5.4p2-py2.6.egg /Users/grant/Development/Python/test/lib/python26.zip /Users/grant/Development/Python/test/lib/python2.6 /Users/grant/Development/Python/test/lib/python2.6/plat-darwin /Users/grant/Development/Python/test/lib/python2.6/plat-mac /Users/grant/Development/Python/test/lib/python2.6/plat-mac/lib-scriptpackages /Users/grant/Development/Python/test/lib/python2.6/lib-tk /Users/grant/Development/Python/test/lib/python2.6/lib-old /Users/grant/Development/Python/test/lib/python2.6/lib-dynload /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6 /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-darwin /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-tk /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-mac /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-mac/lib-scriptpackages /Users/grant/Development/Python/test/lib/python2.6/site-packages Any ideas on what's going on?? A: I fixed my own problem... I had another script named sqlalchemy.py in the same folder i was working in that was mucking everything up.
Problems with SQLAlchemy and VirtualEnv
I'm trying to use SQLAlchemy under a virtualenv on OS X 10.5, but cannot seem to get it to load whatsoever. Here's what I've done mkvirtualenv --no-site-packages test easy_install sqlalchemy I try to import sqlalchemy from the interpreter and everything works fine, but if i try to import sqlalchemy from a python script, I get the following error: Here's the tutorial script from IBM from sqlalchemy.ext.declarative import declarative_base from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey Base = declarative_base() class Filesystem(Base): __tablename__ = 'filesystem' path = Column(String, primary_key=True) name = Column(String) def __init__(self, path,name): self.path = path self.name = name def __repr__(self): return "<Metadata('%s','%s')>" % (self.path,self.name) I try running 'python test.py' and this is the result: $ python test.py Traceback (most recent call last): File "test.py", line 4, in <module> from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey File "/Users/grant/Development/Aircraft/sqlalchemy.py", line 3, in <module> from sqlalchemy.ext.declarative import declarative_base ImportError: No module named ext.declarative Here's what's in my sys.path >>> import sys >>> print '\n'.join(sys.path) /Users/grant/Development/Python/test/lib/python2.6/site-packages/setuptools-0.6c9-py2.6.egg /Users/grant/Development/Python/test/lib/python2.6/site-packages/SQLAlchemy-0.5.4p2-py2.6.egg /Users/grant/Development/Python/test/lib/python26.zip /Users/grant/Development/Python/test/lib/python2.6 /Users/grant/Development/Python/test/lib/python2.6/plat-darwin /Users/grant/Development/Python/test/lib/python2.6/plat-mac /Users/grant/Development/Python/test/lib/python2.6/plat-mac/lib-scriptpackages /Users/grant/Development/Python/test/lib/python2.6/lib-tk /Users/grant/Development/Python/test/lib/python2.6/lib-old /Users/grant/Development/Python/test/lib/python2.6/lib-dynload /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6 /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-darwin /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-tk /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-mac /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-mac/lib-scriptpackages /Users/grant/Development/Python/test/lib/python2.6/site-packages Any ideas on what's going on??
[ "I fixed my own problem... I had another script named sqlalchemy.py in the same folder i was working in that was mucking everything up.\n" ]
[ 8 ]
[]
[]
[ "python", "sqlalchemy", "virtualenv" ]
stackoverflow_0000999677_python_sqlalchemy_virtualenv.txt
Q: Writing tests for Django's admin actions I'm using Django 1.1 beta and hoping to use admin actions. I have to write unit tests for those, but I don't get how to write tests for them. For normal view handler functions, I can use Django's TestClient to simulate http request/response, but how should it be done with admin actions? A: Testing django admin is currently a pain, because of admin's tight coupling. AFAIK, You can still use request/response, but I gave up and use only functional tests (Selenium, but you can use Windmill as well) and unit testing only our admin extensions. There is a GSoC project for covering admin with Windmill tests, and windmill is now featuring a plugin for Django integration. If You're more interested in Selenium, I've written an integration library for it, too (http://devel.almad.net/trac/django-sane-testing/).
Writing tests for Django's admin actions
I'm using Django 1.1 beta and hoping to use admin actions. I have to write unit tests for those, but I don't get how to write tests for them. For normal view handler functions, I can use Django's TestClient to simulate http request/response, but how should it be done with admin actions?
[ "Testing django admin is currently pain, because of admin's tight coupling. AFAIK, You can still use request/response, but I gave up and use only functional tests (Selenium, but you can use Windmill as well) and unit testing only our admin extensions.\nThere is a GSoC project for covering admin with Windmill tests, and windmill is now featuring plugin for Django integration.\nIf You're more interested in Selenium, I've written integration library for it, too (http://devel.almad.net/trac/django-sane-testing/).\n" ]
[ 4 ]
[]
[]
[ "django", "django_admin", "python", "testing", "unit_testing" ]
stackoverflow_0000999452_django_django_admin_python_testing_unit_testing.txt
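Since neither the question nor the answer above shows any code, here is one hedged sketch of driving an admin action through the test client rather than through Selenium or Windmill; the model, action name, fixture and credentials are all assumptions, but the 'action' and '_selected_action' POST fields are what the admin changelist form itself submits:

    from django.test import TestCase
    from myapp.models import Article   # hypothetical model registered in the admin

    class MarkPublishedActionTest(TestCase):
        fixtures = ['admin_user.json']   # assumed fixture containing a superuser

        def setUp(self):
            self.client.login(username='admin', password='secret')
            self.article = Article.objects.create(title='draft', published=False)

        def test_action_runs(self):
            response = self.client.post('/admin/myapp/article/', {
                'action': 'mark_published',                  # name of the action callable
                '_selected_action': [str(self.article.pk)],  # the changelist checkboxes
            })
            self.assertEqual(response.status_code, 302)      # the admin redirects on success
            self.assertTrue(Article.objects.get(pk=self.article.pk).published)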
Q: Question on importing a GPL'ed Python library in commercial code We're evaluating a couple of Python libraries for Graph manipulation. We tried 'networkx' (http://networkx.lanl.gov/) and 'igraph' (http://igraph.sourceforge.net/). While both are excellent modules, igraph is faster due to its nature - it's a Python wrapper over libigraph - a blistering fast graph C library (uses LAPACK etc). Now, the igraph library is GPL licensed. My question is: Can I import igraph and use it in my commercial Python script? (This is a general question, not just limited to igraph. Apologies if the answer is obvious - I'm a license-newb!) Thanks, Raj EDIT: More specifically, does simply importing a GPL Python module make my commercial code liable to be released to the public? A: IANAL, etc etc, but: The Free Software Foundation has consistently claimed that software linked to a library covered by GPL is a derived work, and thus needs to be covered by GPL itself (indeed, that's the main difference of the LGPL license). I don't know how the situation stands in court precedents in various jurisdiction, &c, but if you don't want to risk having to litigate on the issue [which would no doubt bring costs and bad PR even if it were to ultimately succeed], it may be more prudent to avoid linking to GPL libraries (including dynamic linking) if you don't want to distribute the sources to your code. A: IANAL, but: Now, the igraph library is GPL licensed. My question is: Can I import igraph and use it in my commercial Python script? YES. You can write commercial software and distribute it under the GPL. Nothing on GPL prevents commerce. It even explicity says that you can SELL your software at will, More specifically, does simply importing a GPL Python module make my commercial code liable to be released to the public? NO. You don't have to release anything. You don't even have to distribute anything. If you ever distribute your program to someone, you must give (to this person only) the source code, and give full freedom to modify and distribute it under the same license. Distributing something under GPL or using GPL libraries in your code doesn't force you to create a website and put your program for everybody in the world. A: Some suggestions: Seek proper legal advice. Contact the authors of the libraries. Ask them: Their opinion of you using their software in your application; If they'd enter a commerical agreement with you for your application; About other ways that they may be prepared to work with you. A: If your software can be used without any loss of functionality without the use of the GPLed code, then you are in pretty good shape. Many non-free programs make use of the readline library, where available, but do not have it enabled by default, so that they can benefit from it's presence but not run afoul of its license. If those projects had chosen to require the readline library for line editing, then they would fall under the scope of the GPL and would be subject to its terms.
Question on importing a GPL'ed Python library in commercial code
We're evaluating a couple of Python libraries for Graph manipulation. We tried 'networkx' (http://networkx.lanl.gov/) and 'igraph' (http://igraph.sourceforge.net/). While both are excellent modules, igraph is faster due to its nature - it's a Python wrapper over libigraph - a blistering fast graph C library (uses LAPACK etc). Now, the igraph library is GPL licensed. My question is: Can I import igraph and use it in my commercial Python script? (This is a general question, not just limited to igraph. Apologies if the answer is obvious - I'm a license-newb!) Thanks, Raj EDIT: More specifically, does simply importing a GPL Python module make my commercial code liable to be released to the public?
[ "IANAL, etc etc, but:\nThe Free Software Foundation has consistently claimed that software linked to a library covered by GPL is a derived work, and thus needs to be covered by GPL itself (indeed, that's the main difference of the LGPL license). I don't know how the situation stands in court precedents in various jurisdiction, &c, but if you don't want to risk having to litigate on the issue [which would no doubt bring costs and bad PR even if it were to ultimately succeed], it may be more prudent to avoid linking to GPL libraries (including dynamic linking) if you don't want to distribute the sources to your code.\n", "IANAL, but:\n\nNow, the igraph library is GPL licensed. My question is: Can I import igraph and use it in my commercial Python script?\n\nYES. You can write commercial software and distribute it under the GPL. Nothing on GPL prevents commerce. It even explicity says that you can SELL your software at will,\n\nMore specifically, does simply importing a GPL Python module make my commercial code liable to be released to the public?\n\nNO. You don't have to release anything. You don't even have to distribute anything.\nIf you ever distribute your program to someone, you must give (to this person only) the source code, and give full freedom to modify and distribute it under the same license.\nDistributing something under GPL or using GPL libraries in your code doesn't force you to create a website and put your program for everybody in the world.\n", "Some suggestions:\n\nSeek proper legal advice.\nContact the authors of the libraries. Ask them:\n\n\nTheir opinion of you using their software in your application;\nIf they'd enter a commerical agreement with you for your application;\nAbout other ways that they may be prepared to work with you.\n\n\n", "If your software can be used without any loss of functionality without the use of the GPLed code, then you are in pretty good shape. Many non-free programs make use of the readline library, where available, but do not have it enabled by default, so that they can benefit from it's presence but not run afoul of its license. If those projects had chosen to require the readline library for line editing, then they would fall under the scope of the GPL and would be subject to its terms.\n" ]
[ 32, 13, 3, 2 ]
[ "As far as I know the GPL license is free for open sourced projects.\nMost libraries provide the option to buy a commercial license for commercial use.\nContact the library's author.\nThis is taken from Wt's website:\n\nWt may be used using either the GPL or a Commercial License.\nIf you wish to use the library using the GNU General Public License (GPL), you may build a web application with Wt, and deploy it to your own intranet or Internet web server, for any purpose, without the requirement to make the source code freely available.\nNote that if you are passing on your web application in binary form, be it selling or giving away for free, then you must include the source code, as per terms of the GPL. This also applies to redistribution of the Wt library, in original or modified form.\nThe Commercial License has no such limitations. Please visit our Licensing information page for license terms, pricing and ordering.\n\n", "You might want to check HOWTO: Pick an open source license and its second installment. It gives you a decision tree that suggests a license for programmers, and gives details about specific situations. These articles are also quite clear.\n" ]
[ -1, -1 ]
[ "gpl", "licensing", "python" ]
stackoverflow_0000999468_gpl_licensing_python.txt
Q: list() doesn't work in Google App Engine? I am trying to use set function in App Engine, to prepare a list with unique elements. I hit a snag when i wrote a Python code which works fine in the Python Shell but not in App Engine + Django This is what i intend to do(ran this script in IDLE): import re value=' [email protected], dash@ben,, , [email protected] ' value = value.lower() value = list(set(re.split('^\s*|\s*,+\s*|\s*$', value))) if (value[0] == ''): value.remove('') print value The desired output is(got this output in IDLE): ['dash@ben', '[email protected]', '[email protected]'] Now when I do something equivalent in my views.py file in App Engine: import os import re import django from django.http import HttpResponse from django.shortcuts import render_to_response # host of other imports also there def add(request): value=' [email protected], dash@ben,, , [email protected] ' value = value.lower() value = list(set(re.split('^\s*|\s*,+\s*|\s*$', value))) if (value[0] == ''): value.remove('') return render_to_response('sc-actonform.html', { 'value': value, }) I get this error while going to the appropriate page(pasting the traceback): Traceback (most recent call last): File "G:\Dhushyanth\Google\google_appengine\lib\django\django\core\handlers\base.py" in get_response 77. response = callback(request, *callback_args, **callback_kwargs) File "G:\Dhushyanth\development\AppengineProjects\trunk_sanjhachoolha\sandbox\dushyanth\sanjhachoolha\views.py" in add 148. value = list(set(re.split('^\s*|\s*,+\s*|\s*$', value))) File "G:\Dhushyanth\development\AppengineProjects\trunk_sanjhachoolha\sandbox\dushyanth\sanjhachoolha\views.py" in list 208. return respond(request, None, 'sc-base', {'content': responseText}) File "G:\Dhushyanth\development\AppengineProjects\trunk_sanjhachoolha\sandbox\dushyanth\sanjhachoolha\views.py" in respond 115. params['sign_in'] = users.create_login_url(request.path) AttributeError at /sanjhachoolha/acton/add 'set' object has no attribute 'path' on commenting out: #value = list(set(re.split('^\s*|\s*,+\s*|\s*$', value))) I get the desired output in the appropriate webpage: [email protected], dash@ben,, , [email protected] I am sure the list() is the root of my troubles. Can anyone suggest why this is happening. Please also suggest alternatives. The aim is to remove duplicates from the list. Thanks! A: It seems like you implemented your own list() function. Its return statements should be at line 208 of your file (views.py). You should rename your list() function to something else (even list_()). EDIT: Also you can change you regexp, like this: import re value=' [email protected], dash@ben,, , [email protected] ' value = value.lower() #value = list(set(re.split('^\s*|\s*,+\s*|\s*$', value))) #if (value[0] == ''): # value.remove('') value = set(re.findall(r'[\w\d\.\-_]+@[\w\d\.\-_]+', value)) print value re.findall() returns a list of all matched occurences.
list() doesn't work in Google App Engine?
I am trying to use set function in App Engine, to prepare a list with unique elements. I hit a snag when i wrote a Python code which works fine in the Python Shell but not in App Engine + Django This is what i intend to do(ran this script in IDLE): import re value=' [email protected], dash@ben,, , [email protected] ' value = value.lower() value = list(set(re.split('^\s*|\s*,+\s*|\s*$', value))) if (value[0] == ''): value.remove('') print value The desired output is(got this output in IDLE): ['dash@ben', '[email protected]', '[email protected]'] Now when I do something equivalent in my views.py file in App Engine: import os import re import django from django.http import HttpResponse from django.shortcuts import render_to_response # host of other imports also there def add(request): value=' [email protected], dash@ben,, , [email protected] ' value = value.lower() value = list(set(re.split('^\s*|\s*,+\s*|\s*$', value))) if (value[0] == ''): value.remove('') return render_to_response('sc-actonform.html', { 'value': value, }) I get this error while going to the appropriate page(pasting the traceback): Traceback (most recent call last): File "G:\Dhushyanth\Google\google_appengine\lib\django\django\core\handlers\base.py" in get_response 77. response = callback(request, *callback_args, **callback_kwargs) File "G:\Dhushyanth\development\AppengineProjects\trunk_sanjhachoolha\sandbox\dushyanth\sanjhachoolha\views.py" in add 148. value = list(set(re.split('^\s*|\s*,+\s*|\s*$', value))) File "G:\Dhushyanth\development\AppengineProjects\trunk_sanjhachoolha\sandbox\dushyanth\sanjhachoolha\views.py" in list 208. return respond(request, None, 'sc-base', {'content': responseText}) File "G:\Dhushyanth\development\AppengineProjects\trunk_sanjhachoolha\sandbox\dushyanth\sanjhachoolha\views.py" in respond 115. params['sign_in'] = users.create_login_url(request.path) AttributeError at /sanjhachoolha/acton/add 'set' object has no attribute 'path' on commenting out: #value = list(set(re.split('^\s*|\s*,+\s*|\s*$', value))) I get the desired output in the appropriate webpage: [email protected], dash@ben,, , [email protected] I am sure the list() is the root of my troubles. Can anyone suggest why this is happening. Please also suggest alternatives. The aim is to remove duplicates from the list. Thanks!
[ "It seems like you implemented your own list() function. Its return statements should be at line 208 of your file (views.py). You should rename your list() function to something else (even list_()).\nEDIT: Also you can change you regexp, like this:\nimport re\nvalue=' [email protected], dash@ben,, , [email protected] '\nvalue = value.lower()\n\n#value = list(set(re.split('^\\s*|\\s*,+\\s*|\\s*$', value)))\n#if (value[0] == ''):\n# value.remove('')\n\nvalue = set(re.findall(r'[\\w\\d\\.\\-_]+@[\\w\\d\\.\\-_]+', value))\n\nprint value\n\nre.findall() returns a list of all matched occurences.\n" ]
[ 8 ]
[]
[]
[ "django", "google_app_engine", "python" ]
stackoverflow_0001000448_django_google_app_engine_python.txt
Q: How do I load entry-points for a defined set of eggs with Python setuptools? I would like to use the entry point functionality in setuptools. There are a number of occasions where I would like to tightly control the list of eggs that are run, and thence the extensions that contribute to a set of entry points: egg integration testing, where I want to run multiple test suites on different combinations of eggs. scanning a single directory of eggs/plugins so as to run two different instances of the same program, but with different eggs. development time, where I am developing one or more egg, and would like to run the program as part of the normal edit-run cycle. I have looked through the setuptools documentation, and while it doesn't say that this is not possible, I must have missed something saying how to do it. What is the best way to approach deploying plugins differently to the default system-wide discovery? A: We're solving something similar, ability to use setup.py develop if You're mere user without access to global site-packages. So far, we solved it with virtualenv. I'd say it will help for your case too: have minimal system-wide install (or explicitly exclude it), create virtual environment with eggs you want and test there. (Or, for integration tests, create clean environment, install egg and test all dependencies are installed). For 2, I'm not sure, but it should work too, with multiple virtualenvs. For 3, setup.py develop is the way to go.
How do I load entry-points for a defined set of eggs with Python setuptools?
I would like to use the entry point functionality in setuptools. There are a number of occasions where I would like to tightly control the list of eggs that are run, and thence the extensions that contribute to a set of entry points: egg integration testing, where I want to run multiple test suites on different combinations of eggs. scanning a single directory of eggs/plugins so as to run two different instances of the same program, but with different eggs. development time, where I am developing one or more egg, and would like to run the program as part of the normal edit-run cycle. I have looked through the setuptools documentation, and while it doesn't say that this is not possible, I must have missed something saying how to do it. What is the best way to approach deploying plugins differently to the default system-wide discovery?
[ "We're solving something similar, ability to use setup.py develop if You're mere user without access to global site-packages. So far, we solved it with virtualenv.\nI'd say it will help for your case too: have minimal system-wide install (or explicitly exclude it), create virtual environment with eggs you want and test there.\n(Or, for integration tests, create clean environment, install egg and test all dependencies are installed).\nFor 2, I'm not sure, but it should work too, with multiple virtualenvs. For 3, setup.py develop is the way to go.\n" ]
[ 0 ]
[]
[]
[ "distutils", "egg", "python", "setuptools" ]
stackoverflow_0000769766_distutils_egg_python_setuptools.txt
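Besides virtualenv, one way to keep tight control over which eggs contribute entry points is to build a private pkg_resources.WorkingSet from an explicit list of egg paths instead of relying on the global, system-wide one. The paths and group name below are made up, and dependency resolution between the eggs is glossed over:

    import pkg_resources

    plugin_paths = ['/opt/myapp/plugins/foo-1.0.egg',
                    '/opt/myapp/plugins/bar-0.3.egg']       # hypothetical egg files
    working_set = pkg_resources.WorkingSet(plugin_paths)    # only these distributions are scanned

    for entry_point in working_set.iter_entry_points('myapp.plugins'):
        plugin = entry_point.load()    # resolving inter-egg requirements is left out here
        plugin()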
Q: How to get the "python setup.py" submit information on freshmeat? This can submit information about your software on pypi: python setup.py register But there is not a similar command for submitting information to freshmeat. How could I write a distutils.Command that would let me do the following? python setup.py freshmeat-submit A: It should be fairly easy; I'd say freshmeat API will be straightforward. For python site, for setup() function in setup.py, give this argument: entry_points = { 'distutils.commands' : [ 'freshmeat-submit = freshsubmitter.submit:SubmitToFreshMeat', ], }, where freshsubmitter is your new pakcage, submit is module inside it and SubmitToFreshMeat is from distutils.command.config.config subclass. Please be aware that entry_points are global, so you should distribute your command as separate package; bundling it with every package will cause conflicts.
How to get the "python setup.py" submit information on freshmeat?
This can submit information about your software on pypi: python setup.py register But there is not a similar command for submitting information to freshmeat. How could I write a distutils.Command that would let me do the following? python setup.py freshmeat-submit
[ "It should be fairly easy; I'd say freshmeat API will be straightforward.\nFor python site, for setup() function in setup.py, give this argument:\nentry_points = {\n 'distutils.commands' : [\n 'freshmeat-submit = freshsubmitter.submit:SubmitToFreshMeat',\n ],\n},\n\nwhere freshsubmitter is your new pakcage, submit is module inside it and SubmitToFreshMeat is from distutils.command.config.config subclass.\nPlease be aware that entry_points are global, so you should distribute your command as separate package; bundling it with every package will cause conflicts.\n" ]
[ 0 ]
[]
[]
[ "python", "setuptools" ]
stackoverflow_0000422717_python_setuptools.txt
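To round the answer out, a bare-bones sketch of what the command class itself could look like; it subclasses distutils.core.Command directly rather than the config command mentioned above, and the actual freshmeat call is left as a stub because the site's API is not covered here:

    from distutils.core import Command

    class SubmitToFreshMeat(Command):
        description = "submit release information to freshmeat"
        user_options = []              # no command-line options in this sketch

        def initialize_options(self):
            pass

        def finalize_options(self):
            pass

        def run(self):
            meta = self.distribution.metadata
            # stub: whatever HTTP request the freshmeat API expects would go here
            print "would submit %s %s" % (meta.get_name(), meta.get_version())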
Q: Generic Views from the object_id or the parent object I have a model that represents a position at a company: class Position(models.Model): preferred_q = ForeignKey("Qualifications", blank=True, null=True, related_name="pref") base_q = ForeignKey("Qualifications", blank=True, null=True, related_name="base") #[...] It has two "inner objects", which represent minimum qualifications, and "preferred" qualifications for the position. I have a generic view set up to edit/view a Position instance. Within that page, I have a link that goes to another page where the user can edit each type of qualification. The problem is that I can't just pass the primary key of the qualification, because that object may be empty (blank and null being True, which is by design). Instead I'd like to just pass the position primary key and the keyword preferred_qualification or base_qualification in the URL like so: (r'^edit/preferred_qualifications/(?P<parent_id>\d{1,4})/$', some_view), (r'^edit/base_qualifications/(?P<parent_id>\d{1,4})/$', some_view), Is there any way to do this using generic views, or will I have to make my own view? This is simple as cake using regular views, but I'm trying to migrate everything I can over to generic views for the sake of simplicity. A: If you want the edit form to be for one of the related instances of InnerModel, but you want to pass in the PK for ParentModel in the URL (as best I can tell this is what you're asking, though it isn't very clear), you will have to use a wrapper view. Otherwise how is Django's generic view supposed to magically know which relateed object you want to edit? Depending how consistent the related object attributes are for the "many models" you want to edit this way, there's a good chance you could make this work with just one wrapper view rather than many. Hard to say without seeing more of the code.
Generic Views from the object_id or the parent object
I have a model that represents a position at a company: class Position(models.Model): preferred_q = ForeignKey("Qualifications", blank=True, null=True, related_name="pref") base_q = ForeignKey("Qualifications", blank=True, null=True, related_name="base") #[...] It has two "inner objects", which represent minimum qualifications, and "preferred" qualifications for the position. I have a generic view set up to edit/view a Position instance. Within that page, I have a link that goes to another page where the user can edit each type of qualification. The problem is that I can't just pass the primary key of the qualification, because that object may be empty (blank and null being True, which is by design). Instead I'd like to just pass the position primary key and the keyword preferred_qualification or base_qualification in the URL like so: (r'^edit/preferred_qualifications/(?P<parent_id>\d{1,4})/$', some_view), (r'^edit/base_qualifications/(?P<parent_id>\d{1,4})/$', some_view), Is there any way to do this using generic views, or will I have to make my own view? This is simple as cake using regular views, but I'm trying to migrate everything I can over to generic views for the sake of simplicity.
[ "If you want the edit form to be for one of the related instances of InnerModel, but you want to pass in the PK for ParentModel in the URL (as best I can tell this is what you're asking, though it isn't very clear), you will have to use a wrapper view. Otherwise how is Django's generic view supposed to magically know which relateed object you want to edit?\nDepending how consistent the related object attributes are for the \"many models\" you want to edit this way, there's a good chance you could make this work with just one wrapper view rather than many. Hard to say without seeing more of the code.\n" ]
[ 0 ]
[ "As explained in the documentation for the update_object generic view, if you have ParentModel as value for the 'model' key in the options_dict in your URL definition, you should be all set. \n" ]
[ -1 ]
[ "django", "django_generic_views", "python" ]
stackoverflow_0000999291_django_django_generic_views_python.txt
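The wrapper view suggested in the answer could look roughly like this (every name here is an assumption, and it leans on the old django.views.generic.create_update.update_object generic view): resolve which Qualifications instance the URL points at, create it if it is still empty, then delegate. The two URL patterns from the question would both map to this view, passing which='preferred_q' or which='base_q' as an extra argument.

    from django.shortcuts import get_object_or_404
    from django.views.generic.create_update import update_object
    from myapp.models import Position, Qualifications   # hypothetical app path

    def edit_qualification(request, parent_id, which):
        # 'which' comes from the URLconf as 'preferred_q' or 'base_q'
        position = get_object_or_404(Position, pk=parent_id)
        qualification = getattr(position, which)
        if qualification is None:                 # the FK is blank/null by design
            qualification = Qualifications.objects.create()
            setattr(position, which, qualification)
            position.save()
        return update_object(request, model=Qualifications,
                             object_id=qualification.pk,
                             post_save_redirect='/edit/positions/%s/' % parent_id)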
Q: How to flatten a fish eye picture (with python)? I've found programs to turn fish eye pictures into flat ones. I'd like to learn the process behind the scenes. Can someone share their knowledge about the technique? A: My understanding is that fish eye effect is basically a projection on a semi-sphere, right? To reverse that you need to use equations for projecting a semi-sphere into a plane. A quick search revealed those Fisheye Projection equations, reversing them should be easy. I hope that puts you in the right direction.
How to flatten a fish eye picture (with python)?
I've found programs to turn fish eye pictures into flat ones. I'd like to learn the process behind the scenes. Can someone share their knowledge about the technique?
[ "My understanding is that fish eye effect is basically a projection on a semi-sphere, right? To reverse that you need to use equations for projecting a semi-sphere into a plane. A quick search revealed those Fisheye Projection equations, reversing them should be easy. I hope that puts you in the right direction.\n" ]
[ 1 ]
[]
[]
[ "fisheye", "image_processing", "photography", "python" ]
stackoverflow_0001000806_fisheye_image_processing_photography_python.txt
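To make the projection idea concrete, here is a slow, nearest-neighbour sketch that assumes the simple equidistant fisheye model (r = f * theta) and PIL; the focal length in pixels is a guess you would have to calibrate for a real lens:

    import math
    from PIL import Image

    def defish(src, f=300.0):
        w, h = src.size
        cx, cy = w / 2.0, h / 2.0
        dst = Image.new(src.mode, src.size)
        src_px, dst_px = src.load(), dst.load()
        for y in range(h):
            for x in range(w):
                dx, dy = x - cx, y - cy
                r_out = math.hypot(dx, dy)
                if r_out == 0:
                    dst_px[x, y] = src_px[x, y]
                    continue
                theta = math.atan(r_out / f)    # angle off the optical axis for the flat image
                r_in = f * theta                # equidistant fisheye: radius = f * theta
                sx, sy = int(cx + dx * r_in / r_out), int(cy + dy * r_in / r_out)
                if 0 <= sx < w and 0 <= sy < h:
                    dst_px[x, y] = src_px[sx, sy]
        return dst

    defish(Image.open('fisheye.jpg')).save('flat.jpg')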
Q: Python C API: how to get string representation of exception? If I do (e.g.) open("/snafu/fnord") in Python (and the file does not exist), I get a traceback and the message IOError: [Errno 2] No such file or directory: '/snafu/fnord' I would like to get the above string with Python's C API (i.e., a Python interpreter embedded in a C program). I need it as a string, not output to the console. With PyErr_Fetch() I can get the type object of the exception and the value. For the above example, the value is a tuple: (2, 'No such file or directory', '/snafu/fnord') Is there an easy way from the information I get from PyErr_Fetch() to the string the Python interpreter shows? (One that does not involve to construct such strings for each exception type yourself.) A: I think that Python exceptions are printed by running "str()" on the exception instance, which will return the formatted string you're interested in. You can get this from C by calling the PyObject_Str() method described here: https://docs.python.org/c-api/object.html Good luck! Update: I'm a bit confused why the second element being returned to you by PyErr_Fetch() is a string. My guess is that you are receiving an "unnormalized exception" and need to call PyErr_NormalizeException() to turn that tuple into a "real" Exception that can format itself as a string like you want it to.
Python C API: how to get string representation of exception?
If I do (e.g.) open("/snafu/fnord") in Python (and the file does not exist), I get a traceback and the message IOError: [Errno 2] No such file or directory: '/snafu/fnord' I would like to get the above string with Python's C API (i.e., a Python interpreter embedded in a C program). I need it as a string, not output to the console. With PyErr_Fetch() I can get the type object of the exception and the value. For the above example, the value is a tuple: (2, 'No such file or directory', '/snafu/fnord') Is there an easy way from the information I get from PyErr_Fetch() to the string the Python interpreter shows? (One that does not involve to construct such strings for each exception type yourself.)
[ "I think that Python exceptions are printed by running \"str()\" on the exception instance, which will return the formatted string you're interested in. You can get this from C by calling the PyObject_Str() method described here:\nhttps://docs.python.org/c-api/object.html\nGood luck!\nUpdate: I'm a bit confused why the second element being returned to you by PyErr_Fetch() is a string. My guess is that you are receiving an \"unnormalized exception\" and need to call PyErr_NormalizeException() to turn that tuple into a \"real\" Exception that can format itself as a string like you want it to.\n" ]
[ 8 ]
[]
[]
[ "exception", "python", "python_c_api" ]
stackoverflow_0001001216_exception_python_python_c_api.txt
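To make the accepted answer concrete, here is a hedged C sketch (not from the question) of the whole sequence inside an embedding program. It assumes Python 2 (PyString_AsString) and omits error checks; under Python 3 the string extraction would use PyUnicode_AsUTF8 instead.

    /* After a Python call has failed inside the embedding C program: */
    PyObject *ptype, *pvalue, *ptraceback;
    PyErr_Fetch(&ptype, &pvalue, &ptraceback);
    PyErr_NormalizeException(&ptype, &pvalue, &ptraceback);

    /* str(exception_instance) gives the text the interpreter would print,
       e.g. "[Errno 2] No such file or directory: '/snafu/fnord'" */
    PyObject *text = PyObject_Str(pvalue);
    printf("%s: %s\n",
           ((PyTypeObject *)ptype)->tp_name,   /* class name, may carry a module prefix */
           PyString_AsString(text));

    Py_XDECREF(text);
    Py_XDECREF(ptype);
    Py_XDECREF(pvalue);
    Py_XDECREF(ptraceback);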
Q: Why are there extra blank lines in my python program output? I'm not particularly experienced with python, so may be doing something silly below. I have the following program: import os import re import linecache LINENUMBER = 2 angles_file = open("d:/UserData/Robin Wilson/AlteredData/ncaveo/16-June/scan1_high/000/angles.txt") lines = angles_file.readlines() for line in lines: splitted_line = line.split(";") DN = float(linecache.getline(splitted_line[0], LINENUMBER)) Zenith = splitted_line[2] output_file = open("d:/UserData/Robin Wilson/AlteredData/ncaveo/16-June/scan1_high/000/DNandZenith.txt", "a") output_file.write("0\t" + str(DN) + "\t" + Zenith + "\n") #print >> output_file, str(DN) + "\t" + Zenith #print DN, Zenith output_file.close() When I look at the output to the file I get the following: 0 105.5 0.0 0 104.125 18.0 0 104.0 36.0 0 104.625 54.0 0 104.25 72.0 0 104.0 90.0 0 104.75 108.0 0 104.125 126.0 0 104.875 144.0 0 104.375 162.0 0 104.125 180.0 Which is the right numbers, it just has blank lines between each line. I've tried and tried to remove them, but I can't seem to. What am I doing wrong? Robin A: For a GENERAL solution, remove the trailing newline from your INPUT: splitted_line = line.rstrip("\n").split(";") Removing the extraneous newline from your output "works" in this case but it's a kludge. ALSO: (1) it's not a good idea to open your output file in the middle of a loop; do it once, otherwise you are just wasting resources. With a long enough loop, you will run out of file handles and crash (2) It's not a good idea to hard-wire file names like that, especially hidden in the middle of your script; try to make your scripts reusable. A: Change this: output_file.write("0\t" + str(DN) + "\t" + Zenith + "\n") to this: output_file.write("0\t" + str(DN) + "\t" + Zenith) The Zenith string already contains the trailing \n from the original file when you read it in. A: Alternative solution (handy if you are processing lines from file) is to strip the whitespace: Zenith = Zenith.strip(); A: EDIT: See comments for details, but there's definitely a better way. [:-1] isn't the best choice, no matter how cool it looks. Use line.rstrip('\n') instead. The problem is that, unlike file_text.split('\n'), file.readlines() does not remove the \n from the end of each line of input. My default pattern for parsing lines of text goes like this: with open(filename) as f: for line in f.readlines(): parse_line(line[:-1]) # funny face trims the '\n' A: If you want to make sure there's no whitespace on any of your tokens (not just the first and last), try this: splitted_line = map (str.strip, line.split (';'))
Why are there extra blank lines in my python program output?
I'm not particularly experienced with python, so may be doing something silly below. I have the following program: import os import re import linecache LINENUMBER = 2 angles_file = open("d:/UserData/Robin Wilson/AlteredData/ncaveo/16-June/scan1_high/000/angles.txt") lines = angles_file.readlines() for line in lines: splitted_line = line.split(";") DN = float(linecache.getline(splitted_line[0], LINENUMBER)) Zenith = splitted_line[2] output_file = open("d:/UserData/Robin Wilson/AlteredData/ncaveo/16-June/scan1_high/000/DNandZenith.txt", "a") output_file.write("0\t" + str(DN) + "\t" + Zenith + "\n") #print >> output_file, str(DN) + "\t" + Zenith #print DN, Zenith output_file.close() When I look at the output to the file I get the following: 0 105.5 0.0 0 104.125 18.0 0 104.0 36.0 0 104.625 54.0 0 104.25 72.0 0 104.0 90.0 0 104.75 108.0 0 104.125 126.0 0 104.875 144.0 0 104.375 162.0 0 104.125 180.0 Which is the right numbers, it just has blank lines between each line. I've tried and tried to remove them, but I can't seem to. What am I doing wrong? Robin
[ "For a GENERAL solution, remove the trailing newline from your INPUT:\nsplitted_line = line.rstrip(\"\\n\").split(\";\")\n\nRemoving the extraneous newline from your output \"works\" in this case but it's a kludge.\nALSO: (1) it's not a good idea to open your output file in the middle of a loop; do it once, otherwise you are just wasting resources. With a long enough loop, you will run out of file handles and crash (2) It's not a good idea to hard-wire file names like that, especially hidden in the middle of your script; try to make your scripts reusable.\n", "Change this:\noutput_file.write(\"0\\t\" + str(DN) + \"\\t\" + Zenith + \"\\n\")\n\nto this:\noutput_file.write(\"0\\t\" + str(DN) + \"\\t\" + Zenith)\n\nThe Zenith string already contains the trailing \\n from the original file when you read it in.\n", "Alternative solution (handy if you are processing lines from file) is to strip the whitespace: \nZenith = Zenith.strip();\n\n", "EDIT: See comments for details, but there's definitely a better way. [:-1] isn't the best choice, no matter how cool it looks. Use line.rstrip('\\n') instead.\nThe problem is that, unlike file_text.split('\\n'), file.readlines() does not remove the \\n from the end of each line of input. My default pattern for parsing lines of text goes like this:\nwith open(filename) as f:\n for line in f.readlines():\n parse_line(line[:-1]) # funny face trims the '\\n'\n\n", "If you want to make sure there's no whitespace on any of your tokens (not just the first and last), try this:\nsplitted_line = map (str.strip, line.split (';'))\n\n" ]
[ 16, 8, 2, 2, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001001601_python.txt
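Putting the top answers together, a hedged rewrite of the loop from the question could look like the sketch below; the paths are placeholders standing in for the full paths used in the question.

    import linecache

    LINENUMBER = 2
    ANGLES_PATH = "angles.txt"        # stands in for the angles.txt path in the question
    OUTPUT_PATH = "DNandZenith.txt"   # likewise

    output_file = open(OUTPUT_PATH, "a")       # open once, outside the loop
    for line in open(ANGLES_PATH):
        parts = line.rstrip("\n").split(";")   # strip the newline on input
        DN = float(linecache.getline(parts[0], LINENUMBER))
        zenith = parts[2]
        output_file.write("0\t%s\t%s\n" % (DN, zenith))
    output_file.close()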
Q: How To Reversibly Store Password With Python On Linux? First, my question is not about password hashing, but password encryption. I'm building a desktop application that needs to authentificate the user to a third party service. To speed up the login process, I want to give the user the option to save his credentials. Since I need the password to authentificate him to the service, it can't be hashed. I thought of using the pyCrypto module and its Blowfish or AES implementation to encrypt the credentials. The problem is where to store the key. I know some applications store the key directly in the source code, but since I am coding an open source application, this doesn't seem like a very efficient solution. So I was wondering how, on Linux, you would implement user specific or system specific keys to increase password storing security. If you have a better solution to this problem than using pyCrypto and system/user specific keys, don't hesitate to share it. As I said before, hashing is not a solution and I know password encryption is vulnerable, but I want to give the option to the user. Using Gnome-Keyring is not an option either, since a lot of people (including myself) don't use it. A: Try using PAM. You can make a module that automatically un-encrypts the key when the user logs in. This is internally how GNOME-Keyring works (if possible). You can even write PAM modules in Python with pam_python. A: Encrypting the passwords doesn't really buy you a whole lot more protection than storing in plaintext. Anyone capable of accessing the database probably also has full access to your webserver machines. However, if the loss of security is acceptable, and you really need this, I'd generate a new keyfile (from a good source of random data) as part of the installation process and use this. Obviously store this key as securely as possible (locked down file permissions etc). Using a single key embedded in the source is not a good idea - there's no reason why seperate installations should have the same keys. A: Password Safe is designed by Bruce Schneier and open source. It's for Windows, but you should be able to see what they are doing and possibly reuse it. http://www.schneier.com/passsafe.html http://passwordsafe.sourceforge.net/ Read this: If you type A-E-S into your code, you're doing it wrong.
How To Reversibly Store Password With Python On Linux?
First, my question is not about password hashing, but password encryption. I'm building a desktop application that needs to authenticate the user to a third party service. To speed up the login process, I want to give the user the option to save his credentials. Since I need the password to authenticate him to the service, it can't be hashed. I thought of using the pyCrypto module and its Blowfish or AES implementation to encrypt the credentials. The problem is where to store the key. I know some applications store the key directly in the source code, but since I am coding an open source application, this doesn't seem like a very effective solution. So I was wondering how, on Linux, you would implement user-specific or system-specific keys to increase password storing security. If you have a better solution to this problem than using pyCrypto and system/user specific keys, don't hesitate to share it. As I said before, hashing is not a solution and I know password encryption is vulnerable, but I want to give the option to the user. Using Gnome-Keyring is not an option either, since a lot of people (including myself) don't use it.
[ "Try using PAM. You can make a module that automatically un-encrypts the key when the user logs in. This is internally how GNOME-Keyring works (if possible). You can even write PAM modules in Python with pam_python.\n", "Encrypting the passwords doesn't really buy you a whole lot more protection than storing in plaintext. Anyone capable of accessing the database probably also has full access to your webserver machines.\nHowever, if the loss of security is acceptable, and you really need this, I'd generate a new keyfile (from a good source of random data) as part of the installation process and use this. Obviously store this key as securely as possible (locked down file permissions etc). Using a single key embedded in the source is not a good idea - there's no reason why seperate installations should have the same keys.\n", "Password Safe is designed by Bruce Schneier and open source. It's for Windows, but you should be able to see what they are doing and possibly reuse it.\nhttp://www.schneier.com/passsafe.html\nhttp://passwordsafe.sourceforge.net/\nRead this: If you type A-E-S into your code, you're doing it wrong.\n" ]
[ 5, 5, 0 ]
[]
[]
[ "encryption", "linux", "passwords", "python" ]
stackoverflow_0001001744_encryption_linux_passwords_python.txt
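As a rough illustration of the "generate a key per installation" suggestion, the sketch below uses PyCrypto (already mentioned in the question) with a per-user key file. The file location is hypothetical, and as the answers point out this only raises the bar: anyone who can read the user's files can also read the key.

    import os
    from Crypto.Cipher import AES   # PyCrypto

    KEY_PATH = os.path.expanduser("~/.myapp/key")   # hypothetical location

    def _load_or_create_key():
        if not os.path.exists(KEY_PATH):
            if not os.path.isdir(os.path.dirname(KEY_PATH)):
                os.makedirs(os.path.dirname(KEY_PATH))
            f = open(KEY_PATH, "wb")
            f.write(os.urandom(16))      # fresh random key for this installation
            f.close()
            os.chmod(KEY_PATH, 0600)     # owner-only permissions, best-effort protection
        return open(KEY_PATH, "rb").read()

    def encrypt_password(password):
        iv = os.urandom(16)
        return iv + AES.new(_load_or_create_key(), AES.MODE_CFB, iv).encrypt(password)

    def decrypt_password(blob):
        iv, data = blob[:16], blob[16:]
        return AES.new(_load_or_create_key(), AES.MODE_CFB, iv).decrypt(data)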
Q: Python + MySQLdb executemany I'm using Python and its MySQLdb module to import some measurement data into a Mysql database. The amount of data that we have is quite high (currently about ~250 MB of csv files and plenty of more to come). Currently I use cursor.execute(...) to import some metadata. This isn't problematic as there are only a few entries for these. The problem is that when I try to use cursor.executemany() to import larger quantities of the actual measurement data, MySQLdb raises a TypeError: not all arguments converted during string formatting My current code is def __insert_values(self, values): cursor = self.connection.cursor() cursor.executemany(""" insert into values (ensg, value, sampleid) values (%s, %s, %s)""", values) cursor.close() where values is a list of tuples containing three strings each. Any ideas what could be wrong with this? Edit: The values are generated by yield (prefix + row['id'], row['value'], sample_id) and then read into a list one thousand at a time where row is and iterator coming from csv.DictReader. A: In retrospective this was a really stupid but hard to spot mistake. Values is a keyword in sql so the table name values needs quotes around it. def __insert_values(self, values): cursor = self.connection.cursor() cursor.executemany(""" insert into `values` (ensg, value, sampleid) values (%s, %s, %s)""", values) cursor.close() A: The message you get indicates that inside the executemany() method, one of the conversions failed. Check your values list for a tuple longer than 3. For a quick verification: max(map(len, values)) If the result is higher than 3, locate your bad tuple with a filter: [t for t in values if len(t) != 3] or, if you need the index: [(i,t) for i,t in enumerate(values) if len(t) != 3]
Python + MySQLdb executemany
I'm using Python and its MySQLdb module to import some measurement data into a Mysql database. The amount of data that we have is quite high (currently about ~250 MB of csv files and plenty of more to come). Currently I use cursor.execute(...) to import some metadata. This isn't problematic as there are only a few entries for these. The problem is that when I try to use cursor.executemany() to import larger quantities of the actual measurement data, MySQLdb raises a TypeError: not all arguments converted during string formatting My current code is def __insert_values(self, values): cursor = self.connection.cursor() cursor.executemany(""" insert into values (ensg, value, sampleid) values (%s, %s, %s)""", values) cursor.close() where values is a list of tuples containing three strings each. Any ideas what could be wrong with this? Edit: The values are generated by yield (prefix + row['id'], row['value'], sample_id) and then read into a list one thousand at a time where row is and iterator coming from csv.DictReader.
[ "In retrospective this was a really stupid but hard to spot mistake. Values is a keyword in sql so the table name values needs quotes around it.\ndef __insert_values(self, values):\n cursor = self.connection.cursor()\n cursor.executemany(\"\"\"\n insert into `values` (ensg, value, sampleid)\n values (%s, %s, %s)\"\"\", values)\n cursor.close()\n\n", "The message you get indicates that inside the executemany() method, one of the conversions failed. Check your values list for a tuple longer than 3.\nFor a quick verification:\nmax(map(len, values))\n\nIf the result is higher than 3, locate your bad tuple with a filter:\n[t for t in values if len(t) != 3]\n\nor, if you need the index:\n[(i,t) for i,t in enumerate(values) if len(t) != 3]\n\n" ]
[ 8, 3 ]
[]
[]
[ "executemany", "mysql", "python" ]
stackoverflow_0000974702_executemany_mysql_python.txt
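Combining the two answers, a hedged version of the insert helper might look like the sketch below: the back-quotes keep MySQL from reading the table name values as a keyword, and the arity check catches exactly the malformed tuples that trigger the "not all arguments converted" TypeError.

    def insert_values(connection, values):
        # Fail early on rows that do not have exactly three fields.
        bad = [(i, row) for i, row in enumerate(values) if len(row) != 3]
        if bad:
            raise ValueError("rows with the wrong number of fields: %r" % bad[:5])
        cursor = connection.cursor()
        cursor.executemany(
            "insert into `values` (ensg, value, sampleid) values (%s, %s, %s)",
            values)
        cursor.close()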
Q: Showing progress of python's XML parser when loading a huge file Im using Python's built in XML parser to load a 1.5 gig XML file and it takes all day. from xml.dom import minidom xmldoc = minidom.parse('events.xml') I need to know how to get inside that and measure its progress so I can show a progress bar. any ideas? minidom has another method called parseString() that returns a DOM tree assuming the string you pass it is valid XML, If I were to split up the file myself into chunks and pass them to parseString one at a time, could I possibly merge all the DOM trees back together at the end? A: Did you consider to use other means of parsing XML? Building a tree of such big XML files will always be slow and memory intensive. If you don't need the whole tree in memory, stream based parsing will be much faster. It can be a little daunting if you're used to tree based XML manipulation, but it will pay of in form of a huge speed increase (minutes instead of hours). http://docs.python.org/library/xml.sax.html A: you usecase requires that you use sax parser instead of dom, dom loads everything in memory , sax instead will do line by line parsing and you write handlers for events as you need so could be effective and you would be able to write progress indicator also I also recommend trying expat parser sometime it is very useful http://docs.python.org/library/pyexpat.html for progress using sax: as sax reads file incrementally you can wrap the file object you pass with your own and keep track how much have been read. edit: I also don't like idea of splitting file yourselves and joining DOM at end, that way you are better writing your own xml parser, i recommend instead using sax parser I also wonder what your purpose of reading 1.5 gig file in DOM tree? look like sax would be better here A: I have something very similar for PyGTK, not PyQt, using the pulldom api. It gets called a little bit at a time using Gtk idle events (so the GUI doesn't lock up) and Python generators (to save the parsing state). def idle_handler (fn): fh = open (fn) # file handle doc = xml.dom.pulldom.parse (fh) fsize = os.stat (fn)[stat.ST_SIZE] position = 0 for event, node in doc: if position != fh.tell (): position = fh.tell () # update status: position * 100 / fsize if event == .... yield True # idle handler stays until False is returned yield False def main: add_idle_handler (idle_handler, filename) A: Merging the tree at the end would be pretty easy. You could just create a new DOM, and basically append the individual trees to it one by one. This would give you pretty finely tuned control over the progress of the parsing too. You could even parallelize it if you wanted by spawning different processes to parse each section. You just have to make sure you split it intelligently (not splitting in the middle of a tag, etc.).
Showing progress of python's XML parser when loading a huge file
I'm using Python's built-in XML parser to load a 1.5 gig XML file and it takes all day. from xml.dom import minidom xmldoc = minidom.parse('events.xml') I need to know how to get inside that and measure its progress so I can show a progress bar. Any ideas? minidom has another method called parseString() that returns a DOM tree assuming the string you pass it is valid XML. If I were to split up the file myself into chunks and pass them to parseString one at a time, could I possibly merge all the DOM trees back together at the end?
[ "Did you consider to use other means of parsing XML? Building a tree of such big XML files will always be slow and memory intensive. If you don't need the whole tree in memory, stream based parsing will be much faster. It can be a little daunting if you're used to tree based XML manipulation, but it will pay of in form of a huge speed increase (minutes instead of hours).\nhttp://docs.python.org/library/xml.sax.html\n", "you usecase requires that you use sax parser instead of dom, dom loads everything in memory , sax instead will do line by line parsing and you write handlers for events as you need\nso could be effective and you would be able to write progress indicator also\nI also recommend trying expat parser sometime it is very useful\nhttp://docs.python.org/library/pyexpat.html\nfor progress using sax:\nas sax reads file incrementally you can wrap the file object you pass with your own and keep track how much have been read.\nedit:\nI also don't like idea of splitting file yourselves and joining DOM at end, that way you are better writing your own xml parser, i recommend instead using sax parser \nI also wonder what your purpose of reading 1.5 gig file in DOM tree?\nlook like sax would be better here\n", "I have something very similar for PyGTK, not PyQt, using the pulldom api. It gets called a little bit at a time using Gtk idle events (so the GUI doesn't lock up) and Python generators (to save the parsing state).\ndef idle_handler (fn):\n fh = open (fn) # file handle\n doc = xml.dom.pulldom.parse (fh)\n fsize = os.stat (fn)[stat.ST_SIZE]\n position = 0\n\n for event, node in doc:\n if position != fh.tell ():\n position = fh.tell ()\n # update status: position * 100 / fsize\n\n if event == ....\n\n yield True # idle handler stays until False is returned\n\n yield False\n\ndef main:\n add_idle_handler (idle_handler, filename)\n\n", "Merging the tree at the end would be pretty easy. You could just create a new DOM, and basically append the individual trees to it one by one. This would give you pretty finely tuned control over the progress of the parsing too. You could even parallelize it if you wanted by spawning different processes to parse each section. You just have to make sure you split it intelligently (not splitting in the middle of a tag, etc.).\n" ]
[ 5, 5, 3, 2 ]
[]
[]
[ "pyqt", "python", "xml" ]
stackoverflow_0001001871_pyqt_python_xml.txt
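To make the SAX suggestion concrete: feeding the parser fixed-size chunks yourself lets you compute progress from the number of bytes consumed so far. The sketch below is illustrative; the element name "event" is only a guess at what a file called events.xml might contain.

    import os
    import xml.sax

    class EventCounter(xml.sax.ContentHandler):
        def __init__(self):
            xml.sax.ContentHandler.__init__(self)
            self.count = 0
        def startElement(self, name, attrs):
            if name == "event":            # hypothetical element of interest
                self.count += 1

    def parse_with_progress(path, report):
        size = float(os.path.getsize(path))
        parser = xml.sax.make_parser()     # expat-based incremental reader
        handler = EventCounter()
        parser.setContentHandler(handler)
        fh = open(path, "rb")
        while True:
            chunk = fh.read(1024 * 1024)   # 1 MB at a time
            if not chunk:
                break
            parser.feed(chunk)
            report(100.0 * fh.tell() / size)   # update a progress bar here
        parser.close()
        fh.close()
        return handler.count

The feed()/close() calls work because xml.sax.make_parser() returns an incremental reader, so the GUI (PyQt in this case) stays free to repaint between chunks.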
Q: PYTHONPATH ignored Environment: debian 4.0 Python 2.4 My 'project' is installed in: /usr/lib/python2.4/site-packages/project. But I want to use my working copy instead of the installed one which is located in: /home/me/dev/project/src So what I do is: export PYTHONPATH=/home/me/dev/project/src ipython import foo # which is in src foo.__file__ */usr/lib/python2.4/site-packages/project/foo.py* instead of : /home/me/dev/project/src/project/foo.py How come? I try to check the pathes (having done the export above) and what I see is: import sys,os sys.path ['', '/usr/bin', '/usr/lib/python2.4/site-packages', '/home/me/dev/project/src', '/usr/lib/python24.zip', '/usr/lib/python2.4', '/usr/lib/python2.4/plat-linux2', '/usr/lib/python2.4/lib-tk', '/usr/lib/python2.4/lib-dynload', '/usr/local/lib/python2.4/site-packages', '/usr/lib/python2.4/site-packages/PIL', '/var/lib/python-support/python2.4', '/usr/lib/python2.4/site-packages/IPython/Extensions', '/home/me/.ipython'] os.environ['PYTHONPATH'] /home/me/dev/project/src A: According to python documentation, this is expected behavior: https://docs.python.org/2.4/lib/module-sys.html: Notice that the script directory is inserted before the entries inserted as a result of PYTHONPATH. Under python-2.6 it is different: http://docs.python.org/tutorial/modules.html#the-module-search-path A: I found the problem (I've missed early on when somebody pointed me to Where is Python's sys.path initialized from?). It seems that easy_install creates a pth file /usr/lib/python2.4/site-packages/easy-install.pth which is then loaded by site.py. This inserts the site-packages path in the sys path before the PYTHONPATH. Not nice. A: I don't believe you have any control over where the PYTHONPATH gets inserted into the actual path list. But that's not the only way to modify the path - you can update sys.path yourself, before you try to import the project. Edit: In your specific case, you can modify the path with import sys sys.path.insert(2, '/home/me/dev/project/src') A: I see '/usr/lib/python2.4/site-packages' in your path prior to '/home/me/dev/project/src', does that matter? What happens when you switch the two? From the docs: When PYTHONPATH is not set, or when the file is not found there, the search continues in an installation-dependent default path So perhaps you didn't find your working copy on your PYTHONPATH as you had thought? A: Not a direct answer to you question, but you could also use a virtualenv to create a development environment. In that virtualenv you can then install your product in /home/me/dev/project/src as a development package: "python setup.py develop". This way you don't have to change your PYTHONPATH manually. Just activate the virtualenv if you want to use the development code. A: I think you set up PYTHONPATH to /home/me/build/project/src since /home/me/dev/project/src does not appear in sys.path, but /home/me/build/project/src does. A: It sounds like the src directory doesn't have an __init__.py file. It's not a proper package.
PYTHONPATH ignored
Environment: debian 4.0 Python 2.4 My 'project' is installed in: /usr/lib/python2.4/site-packages/project. But I want to use my working copy instead of the installed one which is located in: /home/me/dev/project/src So what I do is: export PYTHONPATH=/home/me/dev/project/src ipython import foo # which is in src foo.__file__ */usr/lib/python2.4/site-packages/project/foo.py* instead of : /home/me/dev/project/src/project/foo.py How come? I try to check the pathes (having done the export above) and what I see is: import sys,os sys.path ['', '/usr/bin', '/usr/lib/python2.4/site-packages', '/home/me/dev/project/src', '/usr/lib/python24.zip', '/usr/lib/python2.4', '/usr/lib/python2.4/plat-linux2', '/usr/lib/python2.4/lib-tk', '/usr/lib/python2.4/lib-dynload', '/usr/local/lib/python2.4/site-packages', '/usr/lib/python2.4/site-packages/PIL', '/var/lib/python-support/python2.4', '/usr/lib/python2.4/site-packages/IPython/Extensions', '/home/me/.ipython'] os.environ['PYTHONPATH'] /home/me/dev/project/src
[ "According to python documentation, this is expected behavior: https://docs.python.org/2.4/lib/module-sys.html:\n\nNotice that the script directory is\n inserted before the entries inserted\n as a result of PYTHONPATH.\n\nUnder python-2.6 it is different: http://docs.python.org/tutorial/modules.html#the-module-search-path\n", "I found the problem (I've missed early on when somebody pointed me to Where is Python's sys.path initialized from?).\nIt seems that easy_install creates a pth file /usr/lib/python2.4/site-packages/easy-install.pth which is then loaded by site.py. This inserts the site-packages path in the sys path before the PYTHONPATH. Not nice.\n", "I don't believe you have any control over where the PYTHONPATH gets inserted into the actual path list. But that's not the only way to modify the path - you can update sys.path yourself, before you try to import the project.\nEdit: In your specific case, you can modify the path with\nimport sys\nsys.path.insert(2, '/home/me/dev/project/src')\n\n", "I see '/usr/lib/python2.4/site-packages' in your path prior to '/home/me/dev/project/src', does that matter? What happens when you switch the two?\nFrom the docs: \n\nWhen PYTHONPATH is not set, or when the file is not found there, the search continues in an installation-dependent default path\n\nSo perhaps you didn't find your working copy on your PYTHONPATH as you had thought?\n", "Not a direct answer to you question, but you could also use a virtualenv to create a development environment. In that virtualenv you can then install your product in /home/me/dev/project/src as a development package: \"python setup.py develop\".\nThis way you don't have to change your PYTHONPATH manually. Just activate the virtualenv if you want to use the development code.\n", "I think you set up PYTHONPATH to /home/me/build/project/src since /home/me/dev/project/src does not appear in sys.path, but /home/me/build/project/src does.\n", "It sounds like the src directory doesn't have an __init__.py file. It's not a proper package.\n" ]
[ 6, 5, 4, 1, 1, 0, 0 ]
[]
[]
[ "debian", "path", "python" ]
stackoverflow_0001001851_debian_path_python.txt
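Given the easy-install.pth behaviour described above, a blunt but reliable workaround is to push the working copy to the front of sys.path before the first import, for example at the top of a driver script. A hedged sketch using the paths from the question:

    import sys

    DEV_PATH = "/home/me/dev/project/src"
    if DEV_PATH in sys.path:
        sys.path.remove(DEV_PATH)
    sys.path.insert(0, DEV_PATH)       # now wins over site-packages

    import foo
    print foo.__file__                 # should point into the working copy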
Q: How to put variables on the stack/context in Python In essence, I want to put a variable on the stack, that will be reachable by all calls below that part on the stack until the block exits. In Java I would solve this using a static thread local with support methods, that then could be accessed from methods. Typical example: you get a request, and open a database connection. Until the request is complete, you want all code to use this database connection. After finishing and closing the request, you close the database connection. What I need this for, is a report generator. Each report consist of multiple parts, each part can rely on different calculations, sometimes different parts relies in part on the same calculation. As I don't want to repeat heavy calculations, I need to cache them. My idea is to decorate methods with a cache decorator. The cache creates an id based on the method name and module, and it's arguments, looks if it has this allready calculated in a stack variable, and executes the method if not. I will try and clearify by showing my current implementation. Want I want to do is to simplify the code for those implementing calculations. First, I have the central cache access object, which I call MathContext: class MathContext(object): def __init__(self, fn): self.fn = fn self.cache = dict() def get(self, calc_config): id = create_id(calc_config) if id not in self.cache: self.cache[id] = calc_config.exec(self) return self.cache[id] The fn argument is the filename the context is created in relation to, from where data can be read to be calculated. Then we have the Calculation class: class CalcBase(object): def exec(self, math_context): raise NotImplementedError And here is a stupid Fibonacci example. Non of the methods are actually recursive, they work on large sets of data instead, but it works to demonstrate how you would depend on other calculations: class Fibonacci(CalcBase): def __init__(self, n): self.n = n def exec(self, math_context): if self.n < 2: return 1 a = math_context.get(Fibonacci(self.n-1)) b = math_context.get(Fibonacci(self.n-2)) return a+b What I want Fibonacci to be instead, is just a decorated method: @cache def fib(n): if n<2: return 1 return fib(n-1)+fib(n-2) With the math_context example, when math_context goes out of scope, so does all it's cached values. I want the same thing for the decorator. Ie. at point X, everything cached by @cache is dereferrenced to be gced. A: I went ahead and made something that might just do what you want. It can be used as both a decorator and a context manager: from __future__ import with_statement try: import cPickle as pickle except ImportError: import pickle class cached(object): """Decorator/context manager for caching function call results. All results are cached in one dictionary that is shared by all cached functions. To use this as a decorator: @cached def function(...): ... The results returned by a decorated function are not cleared from the cache until decorated_function.clear_my_cache() or cached.clear_cache() is called To use this as a context manager: with cached(function) as function: ... function(...) ... 
The function's return values will be cleared from the cache when the with block ends To clear all cached results, call the cached.clear_cache() class method """ _CACHE = {} def __init__(self, fn): self._fn = fn def __call__(self, *args, **kwds): key = self._cache_key(*args, **kwds) function_cache = self._CACHE.setdefault(self._fn, {}) try: return function_cache[key] except KeyError: function_cache[key] = result = self._fn(*args, **kwds) return result def clear_my_cache(self): """Clear the cache for a decorated function """ try: del self._CACHE[self._fn] except KeyError: pass # no cached results def __enter__(self): return self def __exit__(self, type, value, traceback): self.clear_my_cache() def _cache_key(self, *args, **kwds): """Create a cache key for the given positional and keyword arguments. pickle.dumps() is used because there could be unhashable objects in the arguments, but passing them to pickle.dumps() will result in a string, which is always hashable. I used this to make the cached class as generic as possible. Depending on your requirements, other key generating techniques may be more efficient """ return pickle.dumps((args, sorted(kwds.items())), pickle.HIGHEST_PROTOCOL) @classmethod def clear_cache(cls): """Clear everything from all functions from the cache """ cls._CACHE = {} if __name__ == '__main__': # used as decorator @cached def fibonacci(n): print "calculating fibonacci(%d)" % n if n == 0: return 0 if n == 1: return 1 return fibonacci(n - 1) + fibonacci(n - 2) for n in xrange(10): print 'fibonacci(%d) = %d' % (n, fibonacci(n)) def lucas(n): print "calculating lucas(%d)" % n if n == 0: return 2 if n == 1: return 1 return lucas(n - 1) + lucas(n - 2) # used as context manager with cached(lucas) as lucas: for i in xrange(10): print 'lucas(%d) = %d' % (i, lucas(i)) for n in xrange(9, -1, -1): print 'fibonacci(%d) = %d' % (n, fibonacci(n)) cached.clear_cache() for n in xrange(9, -1, -1): print 'fibonacci(%d) = %d' % (n, fibonacci(n)) A: this question seems to be two question a) sharing db connection b) caching/Memoizing b) you have answered yourselves a) I don't seem to understand why you need to put it on stack? you can do one of these you can use a class and connection could be attribute of it you can decorate all your function so that they get a connection from central location each function can explicitly use a global connection method you can create a connection and pass around it, or create a context object and pass around context,connection can be a part of context etc, etc A: You could use a global variable wrapped in a getter function: def getConnection(): global connection if connection: return connection connection=createConnection() return connection A: "you get a request, and open a database connection.... you close the database connection." This is what objects are for. Create the connection object, pass it to other objects, and then close it when you're done. Globals are not appropriate. Simply pass the value around as a parameter to the other objects that are doing the work. "Each report consist of multiple parts, each part can rely on different calculations, sometimes different parts relies in part on the same calculation.... I need to cache them" This is what objects are for. Create a dictionary with useful calculation results and pass that around from report part to report part. You don't need to mess with "stack variables", "static thread local" or anything like that. Just pass ordinary variable arguments to ordinary method functions. 
You'll be a lot happier. class MemoizedCalculation( object ): pass class Fibonacci( MemoizedCalculation ): def __init__( self ): self.cache= { 0: 1, 1: 1 } def __call__( self, arg ): if arg not in self.cache: self.cache[arg]= self(arg-1) + self(arg-2) return self.cache[arg] class MathContext( object ): def __init__( self ): self.fibonacci = Fibonacci() You can use it like this >>> mc= MathContext() >>> mc.fibonacci( 4 ) 5 You can define any number of calculations and fold them all into a single container object. If you want, you can make the MathContext into a formal Context Manager so that it work with the with statement. Add these two methods to MathContext. def __enter__( self ): print "Initialize" return self def __exit__( self, type_, value, traceback ): print "Release" Then you can do this. with MathContext() as mc: print mc.fibonacci( 4 ) At the end of the with statement, you can guaranteed that the __exit__ method was called.
How to put variables on the stack/context in Python
In essence, I want to put a variable on the stack, that will be reachable by all calls below that part on the stack until the block exits. In Java I would solve this using a static thread local with support methods, that then could be accessed from methods. Typical example: you get a request, and open a database connection. Until the request is complete, you want all code to use this database connection. After finishing and closing the request, you close the database connection. What I need this for, is a report generator. Each report consist of multiple parts, each part can rely on different calculations, sometimes different parts relies in part on the same calculation. As I don't want to repeat heavy calculations, I need to cache them. My idea is to decorate methods with a cache decorator. The cache creates an id based on the method name and module, and it's arguments, looks if it has this allready calculated in a stack variable, and executes the method if not. I will try and clearify by showing my current implementation. Want I want to do is to simplify the code for those implementing calculations. First, I have the central cache access object, which I call MathContext: class MathContext(object): def __init__(self, fn): self.fn = fn self.cache = dict() def get(self, calc_config): id = create_id(calc_config) if id not in self.cache: self.cache[id] = calc_config.exec(self) return self.cache[id] The fn argument is the filename the context is created in relation to, from where data can be read to be calculated. Then we have the Calculation class: class CalcBase(object): def exec(self, math_context): raise NotImplementedError And here is a stupid Fibonacci example. Non of the methods are actually recursive, they work on large sets of data instead, but it works to demonstrate how you would depend on other calculations: class Fibonacci(CalcBase): def __init__(self, n): self.n = n def exec(self, math_context): if self.n < 2: return 1 a = math_context.get(Fibonacci(self.n-1)) b = math_context.get(Fibonacci(self.n-2)) return a+b What I want Fibonacci to be instead, is just a decorated method: @cache def fib(n): if n<2: return 1 return fib(n-1)+fib(n-2) With the math_context example, when math_context goes out of scope, so does all it's cached values. I want the same thing for the decorator. Ie. at point X, everything cached by @cache is dereferrenced to be gced.
[ "I went ahead and made something that might just do what you want. It can be used as both a decorator and a context manager:\nfrom __future__ import with_statement\ntry:\n import cPickle as pickle\nexcept ImportError:\n import pickle\n\n\nclass cached(object):\n \"\"\"Decorator/context manager for caching function call results.\n All results are cached in one dictionary that is shared by all cached\n functions.\n\n To use this as a decorator:\n @cached\n def function(...):\n ...\n\n The results returned by a decorated function are not cleared from the\n cache until decorated_function.clear_my_cache() or cached.clear_cache()\n is called\n\n To use this as a context manager:\n\n with cached(function) as function:\n ...\n function(...)\n ...\n\n The function's return values will be cleared from the cache when the\n with block ends\n\n To clear all cached results, call the cached.clear_cache() class method\n \"\"\"\n\n _CACHE = {}\n\n def __init__(self, fn):\n self._fn = fn\n\n def __call__(self, *args, **kwds):\n key = self._cache_key(*args, **kwds)\n function_cache = self._CACHE.setdefault(self._fn, {})\n try:\n return function_cache[key]\n except KeyError:\n function_cache[key] = result = self._fn(*args, **kwds)\n return result\n\n def clear_my_cache(self):\n \"\"\"Clear the cache for a decorated function\n \"\"\"\n try:\n del self._CACHE[self._fn]\n except KeyError:\n pass # no cached results\n\n def __enter__(self):\n return self\n\n def __exit__(self, type, value, traceback):\n self.clear_my_cache()\n\n def _cache_key(self, *args, **kwds):\n \"\"\"Create a cache key for the given positional and keyword\n arguments. pickle.dumps() is used because there could be\n unhashable objects in the arguments, but passing them to \n pickle.dumps() will result in a string, which is always hashable.\n\n I used this to make the cached class as generic as possible. 
Depending\n on your requirements, other key generating techniques may be more\n efficient\n \"\"\"\n return pickle.dumps((args, sorted(kwds.items())), pickle.HIGHEST_PROTOCOL)\n\n @classmethod\n def clear_cache(cls):\n \"\"\"Clear everything from all functions from the cache\n \"\"\"\n cls._CACHE = {}\n\n\nif __name__ == '__main__':\n # used as decorator\n @cached\n def fibonacci(n):\n print \"calculating fibonacci(%d)\" % n\n if n == 0:\n return 0\n if n == 1:\n return 1\n return fibonacci(n - 1) + fibonacci(n - 2)\n\n for n in xrange(10):\n print 'fibonacci(%d) = %d' % (n, fibonacci(n))\n\n\n def lucas(n):\n print \"calculating lucas(%d)\" % n\n if n == 0:\n return 2\n if n == 1:\n return 1\n return lucas(n - 1) + lucas(n - 2)\n\n # used as context manager\n with cached(lucas) as lucas:\n for i in xrange(10):\n print 'lucas(%d) = %d' % (i, lucas(i))\n\n for n in xrange(9, -1, -1):\n print 'fibonacci(%d) = %d' % (n, fibonacci(n))\n\n cached.clear_cache()\n\n for n in xrange(9, -1, -1):\n print 'fibonacci(%d) = %d' % (n, fibonacci(n))\n\n", "this question seems to be two question\n\na) sharing db connection\nb) caching/Memoizing\n\nb) you have answered yourselves\na) I don't seem to understand why you need to put it on stack?\nyou can do one of these\n\nyou can use a class and connection\n could be attribute of it\nyou can decorate all your function\nso that they get a connection from\ncentral location\neach function can explicitly use a\nglobal connection method\nyou can create a connection and pass\naround it, or create a context\nobject and pass around\ncontext,connection can be a part of\ncontext\n\netc, etc\n", "You could use a global variable wrapped in a getter function:\ndef getConnection():\n global connection\n if connection:\n return connection\n connection=createConnection()\n return connection\n\n", "\"you get a request, and open a database connection.... you close the database connection.\"\nThis is what objects are for. Create the connection object, pass it to other objects, and then close it when you're done. Globals are not appropriate. Simply pass the value around as a parameter to the other objects that are doing the work.\n\"Each report consist of multiple parts, each part can rely on different calculations, sometimes different parts relies in part on the same calculation.... I need to cache them\"\nThis is what objects are for. Create a dictionary with useful calculation results and pass that around from report part to report part. \nYou don't need to mess with \"stack variables\", \"static thread local\" or anything like that.\nJust pass ordinary variable arguments to ordinary method functions. You'll be a lot happier.\n\nclass MemoizedCalculation( object ):\n pass\n\nclass Fibonacci( MemoizedCalculation ):\n def __init__( self ):\n self.cache= { 0: 1, 1: 1 }\n def __call__( self, arg ):\n if arg not in self.cache:\n self.cache[arg]= self(arg-1) + self(arg-2)\n return self.cache[arg]\n\nclass MathContext( object ):\n def __init__( self ):\n self.fibonacci = Fibonacci()\n\nYou can use it like this\n>>> mc= MathContext()\n>>> mc.fibonacci( 4 )\n5\n\nYou can define any number of calculations and fold them all into a single container object.\nIf you want, you can make the MathContext into a formal Context Manager so that it work with the with statement. 
Add these two methods to MathContext.\ndef __enter__( self ):\n print \"Initialize\"\n return self\ndef __exit__( self, type_, value, traceback ):\n print \"Release\"\n\nThen you can do this.\nwith MathContext() as mc:\n print mc.fibonacci( 4 )\n\nAt the end of the with statement, you can guaranteed that the __exit__ method was called.\n" ]
[ 6, 2, 0, 0 ]
[]
[]
[ "contextmanager", "python", "thread_local" ]
stackoverflow_0001001784_contextmanager_python_thread_local.txt
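For comparison with the class-based answers, here is a deliberately small decorator sketch (assuming hashable positional arguments and no keyword arguments); clearing the cache at "point X" is an explicit call rather than a scope exit.

    def cache(fn):
        results = {}
        def wrapper(*args):
            if args not in results:
                results[args] = fn(*args)
            return results[args]
        wrapper.clear_cache = results.clear
        return wrapper

    @cache
    def fib(n):
        if n < 2:
            return 1
        return fib(n - 1) + fib(n - 2)   # recursive calls go through the cache too

    print fib(30)       # each n computed only once
    fib.clear_cache()   # "point X": drop every cached value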
Q: Python Video Framework I'm looking for a Python framework that will enable me to play video as well as draw on that video (for labeling purposes). I've tried Pyglet, but this doesn't seem to work particularly well - when drawing on an existing video, there is flicker (even with double buffering and all of that good stuff), and there doesn't seem to be a way to get the frame index in the video during the per-frame callback (only elapsed time since the last frame). A: Qt (PyQt) has Phonon, which might help out. PyQt is available as GPL or payware. (Qt has LGPL too, but the PyQt wrappers don't) A: Try the Python bindings for GStreamer. A: Try a Python wrapper for OpenCV such as ctypes-opencv. The C API reference is here, and the wrapper is very close (see docstrings for any changes). I have used it to draw on video without any flicker, so you should have no problems with that. A rough outline of calls you need: Load video with cvCreateFileCapture, load font with cvFont. Grab frame with cvQueryFrame, increment your frame counter. Draw on the frame with cvPutText, cvEllipse, etc etc. Display to user with cvShowImage.
Python Video Framework
I'm looking for a Python framework that will enable me to play video as well as draw on that video (for labeling purposes). I've tried Pyglet, but this doesn't seem to work particularly well - when drawing on an existing video, there is flicker (even with double buffering and all of that good stuff), and there doesn't seem to be a way to get the frame index in the video during the per-frame callback (only elapsed time since the last frame).
[ "Qt (PyQt) has Phonon, which might help out. PyQt is available as GPL or payware. (Qt has LGPL too, but the PyQt wrappers don't)\n", "Try the Python bindings for GStreamer.\n", "Try a Python wrapper for OpenCV such as ctypes-opencv. The C API reference is here, and the wrapper is very close (see docstrings for any changes).\nI have used it to draw on video without any flicker, so you should have no problems with that.\nA rough outline of calls you need:\n\nLoad video with cvCreateFileCapture, load font with cvFont.\nGrab frame with cvQueryFrame, increment your frame counter.\nDraw on the frame with cvPutText, cvEllipse, etc etc.\nDisplay to user with cvShowImage.\n\n" ]
[ 2, 2, 2 ]
[]
[]
[ "pyglet", "python", "video" ]
stackoverflow_0001003376_pyglet_python_video.txt
Q: Is there a python ftp library for uploading whole directories (including subdirectories)? So I know about ftplib, but that's a bit too low for me as it still requires me to handle uploading files one at a time as well as determining if there are subdirectories, creating the equivalent subdirectories on the server, cd'ing into those subdirectories and then finally uploading the correct files into those subdirectories. It's an annoying task that I'd rather avoid if I can, what with writing tests, setting up test ftp servers etc etc.. Any of you know of a library (or mb some code scrawled on the bathroom wall..) that takes care of this for me or should I just accept my fate and roll my own? Thanks A: The ftputil Python library is a high-level interface to the ftplib module. Looks like this could help. ftputil website A: If wget is installed on your system, you could have your script call it to do the ftp'ing for you. It supports recursive transfers, site mirroring, and many other features.
Is there a python ftp library for uploading whole directories (including subdirectories)?
So I know about ftplib, but that's a bit too low-level for me as it still requires me to handle uploading files one at a time as well as determining if there are subdirectories, creating the equivalent subdirectories on the server, cd'ing into those subdirectories and then finally uploading the correct files into those subdirectories. It's an annoying task that I'd rather avoid if I can, what with writing tests, setting up test ftp servers etc.. Do any of you know of a library (or maybe some code scrawled on the bathroom wall..) that takes care of this for me, or should I just accept my fate and roll my own? Thanks
[ "\nThe ftputil Python library is a high-level interface to the ftplib module.\n\nLooks like this could help. ftputil website\n", "If wget is installed on your system, you could have your script call it to do the ftp'ing for you. It supports recursive transfers, site mirroring, and many other features.\n" ]
[ 11, 3 ]
[]
[]
[ "ftp", "python" ]
stackoverflow_0001003968_ftp_python.txt
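If an extra dependency such as ftputil is unwanted, plain ftplib plus a little recursion already covers the directory-walking part. A hedged sketch (host, credentials and folder names are placeholders):

    import os
    import ftplib

    def upload_tree(ftp, local_dir, remote_dir):
        try:
            ftp.mkd(remote_dir)
        except ftplib.error_perm:
            pass                                   # directory probably exists already
        for name in os.listdir(local_dir):
            local_path = os.path.join(local_dir, name)
            remote_path = remote_dir + "/" + name
            if os.path.isdir(local_path):
                upload_tree(ftp, local_path, remote_path)   # recurse into subdirectories
            else:
                f = open(local_path, "rb")
                ftp.storbinary("STOR " + remote_path, f)
                f.close()

    ftp = ftplib.FTP("ftp.example.com", "user", "secret")   # placeholders
    upload_tree(ftp, "my_local_folder", "/my_remote_folder")
    ftp.quit()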
Q: Getting the name of document that used to launch the application bundle on OS X When writing an OS X Bundle application (.app), how can I get the name of the document that caused the application to be launched? Say I've associated .abcd with MyApp, when I click on foo.abcd MyApp is launched. How can I get the foo.abcd from inside MyApp? (Command line arguments only contain the process id). A: In general, these are handled through Apple Events. Specifically, your application will receive an open document event. How you would handle it depends on what type of application you are writing. If you're writing a document-based app, this is easy: the document controller receives an openDocumentWithContentsOfURL:display:error: message (or openDocumentWithContentsOfFile:display: for pre-Tiger systems), and will handle this accordingly. For Cocoa apps that aren't document based, the application delegate will be sent an application:openFiles: message. If the delegate doesn't respond to that, it will try sending other messages until the delegate responds to one (openTempFile:, openFiles:, and openFile:, in that order). Here's the documentation for handling Open Apple Events in Cocoa. For Carbon apps, I can't really remember the details (been a while since I've written Carbon code), but if I recall correctly, you would install an Apple Event handler for kAEOpenDocuments events with AEInstallEventHandler(). See the documentation for more details. A: It looks like you need a GUI toolkit for that, there is an example in idlelib/macosxSupport.py def doOpenFile(*args): for fn in args: flist.open(fn) # The command below is a hook in aquatk that is called whenever the app # receives a file open event. The callback can have multiple arguments, # one for every file that should be opened. root.createcommand("::tk::mac::OpenDocument", doOpenFile) Qt also has support for that.
Getting the name of document that used to launch the application bundle on OS X
When writing an OS X Bundle application (.app), how can I get the name of the document that caused the application to be launched? Say I've associated .abcd with MyApp, when I click on foo.abcd MyApp is launched. How can I get the foo.abcd from inside MyApp? (Command line arguments only contain the process id).
[ "In general, these are handled through Apple Events. Specifically, your application will receive an open document event. How you would handle it depends on what type of application you are writing.\nIf you're writing a document-based app, this is easy: the document controller receives an openDocumentWithContentsOfURL:display:error: message (or openDocumentWithContentsOfFile:display: for pre-Tiger systems), and will handle this accordingly.\nFor Cocoa apps that aren't document based, the application delegate will be sent an application:openFiles: message. If the delegate doesn't respond to that, it will try sending other messages until the delegate responds to one (openTempFile:, openFiles:, and openFile:, in that order).\nHere's the documentation for handling Open Apple Events in Cocoa.\nFor Carbon apps, I can't really remember the details (been a while since I've written Carbon code), but if I recall correctly, you would install an Apple Event handler for kAEOpenDocuments events with AEInstallEventHandler(). See the documentation for more details.\n", "It looks like you need a GUI toolkit for that, there is an example in idlelib/macosxSupport.py\ndef doOpenFile(*args):\n for fn in args:\n flist.open(fn)\n\n# The command below is a hook in aquatk that is called whenever the app\n# receives a file open event. The callback can have multiple arguments,\n# one for every file that should be opened.\nroot.createcommand(\"::tk::mac::OpenDocument\", doOpenFile)\n\nQt also has support for that.\n" ]
[ 1, 1 ]
[]
[]
[ "macos", "python" ]
stackoverflow_0000849172_macos_python.txt
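For a Python app the same Open Documents Apple Event can be caught with PyObjC. The sketch below is an assumption-laden minimal delegate (method names follow PyObjC's colon-to-underscore convention) rather than code from the answers, and it only receives the event when run from inside the built .app bundle that owns the .abcd association.

    from Foundation import NSObject
    from AppKit import NSApplication
    from PyObjCTools import AppHelper

    class AppDelegate(NSObject):
        # Called for the Open Documents event, i.e. when foo.abcd is double-clicked.
        def application_openFiles_(self, app, filenames):
            for path in filenames:
                print "asked to open:", path

    app = NSApplication.sharedApplication()
    delegate = AppDelegate.alloc().init()
    app.setDelegate_(delegate)
    AppHelper.runEventLoop()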
Q: Know any creative ways to interface Python with Tcl? Here's the situation. The company I work for has quite a bit of existing Tcl code, but some of them want to start using python. It would nice to be able to reuse some of the existing Tcl code, because that's money already spent. Besides, some of the test equipment only has Tcl API's. So, one of the ways I thought of was using the subprocess module to call into some Tcl scripts. Is subprocess my best bet? Has anyone used this fairly new piece of code: Plumage? If so what is your experience (not just for Tk)? Any other possible ways that I have not considered? A: I hope you're ready for this. Standard Python import Tkinter tclsh = Tkinter.Tcl() tclsh.eval(""" proc unknown args {puts "Hello World!"} }"!dlroW olleH" stup{ sgra nwonknu corp """) Edit in Re to comment: Python's tcl interpreter is not aware of other installed tcl components. You can deal with that by adding extensions in the usual way to the tcl python actually uses. Here's a link with some detail How Tkinter can exploit Tcl/Tk extensions A: This can be done. http://wiki.tcl.tk/13312 Specificly look at the typcl extension. Typcl is a bit weird... It's a an extension to use Tcl from Python. It doesn't really require CriTcl and could have been done in standard C. This code demonstrates using Tcl as shared library, and hooking into it at run time (Tcl's stubs architecture makes this delightfully simple). Furthermore, Typcl avoids string conversions where possible (both ways). A: I've not used it myself, but SWIG might help you out: http://www.swig.org/Doc1.1/HTML/Tcl.html
Know any creative ways to interface Python with Tcl?
Here's the situation. The company I work for has quite a bit of existing Tcl code, but some people there want to start using Python. It would be nice to be able to reuse some of the existing Tcl code, because that's money already spent. Besides, some of the test equipment only has Tcl APIs. So, one of the ways I thought of was using the subprocess module to call into some Tcl scripts. Is subprocess my best bet? Has anyone used this fairly new piece of code: Plumage? If so, what is your experience (not just for Tk)? Any other possible ways that I have not considered?
[ "I hope you're ready for this. Standard Python\nimport Tkinter\ntclsh = Tkinter.Tcl()\ntclsh.eval(\"\"\"\n proc unknown args {puts \"Hello World!\"}\n }\"!dlroW olleH\" stup{ sgra nwonknu corp\n\"\"\")\n\nEdit in Re to comment: Python's tcl interpreter is not aware of other installed tcl components. You can deal with that by adding extensions in the usual way to the tcl python actually uses. Here's a link with some detail \n\nHow Tkinter can exploit Tcl/Tk extensions\n\n", "This can be done.\nhttp://wiki.tcl.tk/13312\nSpecificly look at the typcl extension. \n\nTypcl is a bit weird... It's a an extension to use Tcl from Python.\n It doesn't really require CriTcl and could have been done in standard C.\nThis code demonstrates using Tcl as shared library, and hooking into it\n at run time (Tcl's stubs architecture makes this delightfully simple).\n Furthermore, Typcl avoids string conversions where possible (both ways).\n\n", "I've not used it myself, but SWIG might help you out:\nhttp://www.swig.org/Doc1.1/HTML/Tcl.html\n" ]
[ 19, 3, 0 ]
[]
[]
[ "python", "tcl" ]
stackoverflow_0001004434_python_tcl.txt
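Building on the Tkinter.Tcl() trick above, values can be passed both ways without ever starting Tk. A small hedged example; the script and proc names are placeholders for whatever the existing Tcl code defines:

    import Tkinter

    tcl = Tkinter.Tcl()                       # a bare Tcl interpreter, no Tk window
    tcl.eval('source existing_library.tcl')   # load the existing Tcl code
    result = tcl.eval('some_existing_proc arg1 arg2')
    print "Tcl returned:", result             # results come back as Python strings

    tcl.setvar('greeting', 'hello from Python')   # push a value into the Tcl interpreter
    print tcl.eval('set greeting')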
Q: Python class to merge sorted files, how can this be improved? Background: I'm cleaning large (cannot be held in memory) tab-delimited files. As I clean the input file, I build up a list in memory; when it gets to 1,000,000 entries (about 1GB in memory) I sort it (using the default key below) and write the list to a file. This class is for putting the sorted files back together. It works on the files I have encountered thus far. My largest case, so far, is merging 66 sorted files. Questions: Are there holes in my logic (where is it fragile)? Have I implemented the merge-sort algorithm correctly? Are there any obvious improvements that could be made? Example Data: This is an abstraction of a line in one of these files: 'hash_of_SomeStringId\tSome String Id\t\t\twww.somelink.com\t\tOtherData\t\n' The takeaway is that I use 'SomeStringId'.lower().replace(' ', '') as my sort key. Original Code: class SortedFileMerger(): """ A one-time use object that merges any number of smaller sorted files into one large sorted file. ARGS: paths - list of paths to sorted files output_path - string path to desired output file dedup - (boolean) remove lines with duplicate keys, default = True key - use to override sort key, default = "line.split('\t')[1].lower().replace(' ', '')" will be prepended by "lambda line: ". This should be the same key that was used to sort the files being merged! """ def __init__(self, paths, output_path, dedup=True, key="line.split('\t')[1].lower().replace(' ', '')"): self.key = eval("lambda line: %s" % key) self.dedup = dedup self.handles = [open(path, 'r') for path in paths] # holds one line from each file self.lines = [file_handle.readline() for file_handle in self.handles] self.output_file = open(output_path, 'w') self.lines_written = 0 self._mergeSortedFiles() #call the main method def __del__(self): """ Clean-up file handles. """ for handle in self.handles: if not handle.closed: handle.close() if self.output_file and (not self.output_file.closed): self.output_file.close() def _mergeSortedFiles(self): """ Merge the small sorted files to 'self.output_file'. This can and should only be called once. Called from __init__(). """ previous_comparable = '' min_line = self._getNextMin() while min_line: index = self.lines.index(min_line) comparable = self.key(min_line) if not self.dedup: #not removing duplicates self._writeLine(index) elif comparable != previous_comparable: #removing duplicates and this isn't one self._writeLine(index) else: #removing duplicates and this is one self._readNextLine(index) previous_comparable = comparable min_line = self._getNextMin() #finished merging self.output_file.close() def _getNextMin(self): """ Returns the next "smallest" line in sorted order. Returns None when there are no more values to get. """ while '' in self.lines: index = self.lines.index('') if self._isLastLine(index): # file.readline() is returning '' because # it has reached the end of a file. 
self._closeFile(index) else: # an empty line got mixed in self._readNextLine(index) if len(self.lines) == 0: return None return min(self.lines, key=self.key) def _writeLine(self, index): """ Write line to output file and update self.lines """ self.output_file.write(self.lines[index]) self.lines_written += 1 self._readNextLine(index) def _readNextLine(self, index): """ Read the next line from handles[index] into lines[index] """ self.lines[index] = self.handles[index].readline() def _closeFile(self, index): """ If there are no more lines to get in a file, it needs to be closed and removed from 'self.handles'. It's entry in 'self.lines' also need to be removed. """ handle = self.handles.pop(index) if not handle.closed: handle.close() # remove entry from self.lines to preserve order _ = self.lines.pop(index) def _isLastLine(self, index): """ Check that handles[index] is at the eof. """ handle = self.handles[index] if handle.tell() == os.path.getsize(handle.name): return True return False Edit: Implementing the suggestions from Brian I came up with the following solution: Second Edit: Updated the code per John Machin's suggestion: def decorated_file(f, key): """ Yields an easily sortable tuple. """ for line in f: yield (key(line), line) def standard_keyfunc(line): """ The standard key function in my application. """ return line.split('\t', 2)[1].replace(' ', '').lower() def mergeSortedFiles(paths, output_path, dedup=True, keyfunc=standard_keyfunc): """ Does the same thing SortedFileMerger class does. """ files = map(open, paths) #open defaults to mode='r' output_file = open(output_path, 'w') lines_written = 0 previous_comparable = '' for line in heapq26.merge(*[decorated_file(f, keyfunc) for f in files]): comparable = line[0] if previous_comparable != comparable: output_file.write(line[1]) lines_written += 1 previous_comparable = comparable return lines_written Rough Test Using the same input files (2.2 GB of data): SortedFileMerger class took 51 minutes (3068.4 seconds) Brian's solution took 40 minutes (2408.5 seconds) After adding John Machin's suggestions, the solution code took 36 minutes (2214.0 seconds) A: Note that in python2.6, heapq has a new merge function which will do this for you. To handle the custom key function, you can just wrap the file iterator with something that decorates it so that it compares based on the key, and strip it out afterwards: def decorated_file(f, key): for line in f: yield (key(line), line) filenames = ['file1.txt','file2.txt','file3.txt'] files = map(open, filenames) outfile = open('merged.txt') for line in heapq.merge(*[decorated_file(f, keyfunc) for f in files]): outfile.write(line[1]) [Edit] Even in earlier versions of python, it's probably worthwhile simply to take the implementation of merge from the later heapq module. It's pure python, and runs unmodified in python2.5, and since it uses a heap to get the next minimum should be very efficient when merging large numbers of files. You should be able to simply copy the heapq.py from a python2.6 installation, copy it to your source as "heapq26.py" and use "from heapq26 import merge" - there are no 2.6 specific features used in it. Alternatively, you could just copy the merge function (rewriting the heappop etc calls to reference the python2.5 heapq module). 
A: << This "answer" is a comment on the original questioner's resultant code >> Suggestion: using eval() is ummmm and what you are doing restricts the caller to using lambda -- key extraction may require more than a one-liner, and in any case don't you need the same function for the preliminary sort step? So replace this: def mergeSortedFiles(paths, output_path, dedup=True, key="line.split('\t')[1].lower().replace(' ', '')"): keyfunc = eval("lambda line: %s" % key) with this: def my_keyfunc(line): return line.split('\t', 2)[1].replace(' ', '').lower() # minor tweaks may speed it up a little def mergeSortedFiles(paths, output_path, keyfunc, dedup=True):
Python class to merge sorted files, how can this be improved?
Background: I'm cleaning large (cannot be held in memory) tab-delimited files. As I clean the input file, I build up a list in memory; when it gets to 1,000,000 entries (about 1GB in memory) I sort it (using the default key below) and write the list to a file. This class is for putting the sorted files back together. It works on the files I have encountered thus far. My largest case, so far, is merging 66 sorted files. Questions: Are there holes in my logic (where is it fragile)? Have I implemented the merge-sort algorithm correctly? Are there any obvious improvements that could be made? Example Data: This is an abstraction of a line in one of these files: 'hash_of_SomeStringId\tSome String Id\t\t\twww.somelink.com\t\tOtherData\t\n' The takeaway is that I use 'SomeStringId'.lower().replace(' ', '') as my sort key. Original Code: class SortedFileMerger(): """ A one-time use object that merges any number of smaller sorted files into one large sorted file. ARGS: paths - list of paths to sorted files output_path - string path to desired output file dedup - (boolean) remove lines with duplicate keys, default = True key - use to override sort key, default = "line.split('\t')[1].lower().replace(' ', '')" will be prepended by "lambda line: ". This should be the same key that was used to sort the files being merged! """ def __init__(self, paths, output_path, dedup=True, key="line.split('\t')[1].lower().replace(' ', '')"): self.key = eval("lambda line: %s" % key) self.dedup = dedup self.handles = [open(path, 'r') for path in paths] # holds one line from each file self.lines = [file_handle.readline() for file_handle in self.handles] self.output_file = open(output_path, 'w') self.lines_written = 0 self._mergeSortedFiles() #call the main method def __del__(self): """ Clean-up file handles. """ for handle in self.handles: if not handle.closed: handle.close() if self.output_file and (not self.output_file.closed): self.output_file.close() def _mergeSortedFiles(self): """ Merge the small sorted files to 'self.output_file'. This can and should only be called once. Called from __init__(). """ previous_comparable = '' min_line = self._getNextMin() while min_line: index = self.lines.index(min_line) comparable = self.key(min_line) if not self.dedup: #not removing duplicates self._writeLine(index) elif comparable != previous_comparable: #removing duplicates and this isn't one self._writeLine(index) else: #removing duplicates and this is one self._readNextLine(index) previous_comparable = comparable min_line = self._getNextMin() #finished merging self.output_file.close() def _getNextMin(self): """ Returns the next "smallest" line in sorted order. Returns None when there are no more values to get. """ while '' in self.lines: index = self.lines.index('') if self._isLastLine(index): # file.readline() is returning '' because # it has reached the end of a file. self._closeFile(index) else: # an empty line got mixed in self._readNextLine(index) if len(self.lines) == 0: return None return min(self.lines, key=self.key) def _writeLine(self, index): """ Write line to output file and update self.lines """ self.output_file.write(self.lines[index]) self.lines_written += 1 self._readNextLine(index) def _readNextLine(self, index): """ Read the next line from handles[index] into lines[index] """ self.lines[index] = self.handles[index].readline() def _closeFile(self, index): """ If there are no more lines to get in a file, it needs to be closed and removed from 'self.handles'. It's entry in 'self.lines' also need to be removed. 
""" handle = self.handles.pop(index) if not handle.closed: handle.close() # remove entry from self.lines to preserve order _ = self.lines.pop(index) def _isLastLine(self, index): """ Check that handles[index] is at the eof. """ handle = self.handles[index] if handle.tell() == os.path.getsize(handle.name): return True return False Edit: Implementing the suggestions from Brian I came up with the following solution: Second Edit: Updated the code per John Machin's suggestion: def decorated_file(f, key): """ Yields an easily sortable tuple. """ for line in f: yield (key(line), line) def standard_keyfunc(line): """ The standard key function in my application. """ return line.split('\t', 2)[1].replace(' ', '').lower() def mergeSortedFiles(paths, output_path, dedup=True, keyfunc=standard_keyfunc): """ Does the same thing SortedFileMerger class does. """ files = map(open, paths) #open defaults to mode='r' output_file = open(output_path, 'w') lines_written = 0 previous_comparable = '' for line in heapq26.merge(*[decorated_file(f, keyfunc) for f in files]): comparable = line[0] if previous_comparable != comparable: output_file.write(line[1]) lines_written += 1 previous_comparable = comparable return lines_written Rough Test Using the same input files (2.2 GB of data): SortedFileMerger class took 51 minutes (3068.4 seconds) Brian's solution took 40 minutes (2408.5 seconds) After adding John Machin's suggestions, the solution code took 36 minutes (2214.0 seconds)
[ "Note that in python2.6, heapq has a new merge function which will do this for you.\nTo handle the custom key function, you can just wrap the file iterator with something that decorates it so that it compares based on the key, and strip it out afterwards:\ndef decorated_file(f, key):\n for line in f: \n yield (key(line), line)\n\nfilenames = ['file1.txt','file2.txt','file3.txt']\nfiles = map(open, filenames)\noutfile = open('merged.txt')\n\nfor line in heapq.merge(*[decorated_file(f, keyfunc) for f in files]):\n outfile.write(line[1])\n\n[Edit] Even in earlier versions of python, it's probably worthwhile simply to take the implementation of merge from the later heapq module. It's pure python, and runs unmodified in python2.5, and since it uses a heap to get the next minimum should be very efficient when merging large numbers of files.\nYou should be able to simply copy the heapq.py from a python2.6 installation, copy it to your source as \"heapq26.py\" and use \"from heapq26 import merge\" - there are no 2.6 specific features used in it. Alternatively, you could just copy the merge function (rewriting the heappop etc calls to reference the python2.5 heapq module).\n", "<< This \"answer\" is a comment on the original questioner's resultant code >>\nSuggestion: using eval() is ummmm and what you are doing restricts the caller to using lambda -- key extraction may require more than a one-liner, and in any case don't you need the same function for the preliminary sort step?\nSo replace this:\ndef mergeSortedFiles(paths, output_path, dedup=True, key=\"line.split('\\t')[1].lower().replace(' ', '')\"):\n keyfunc = eval(\"lambda line: %s\" % key)\n\nwith this:\ndef my_keyfunc(line):\n return line.split('\\t', 2)[1].replace(' ', '').lower()\n # minor tweaks may speed it up a little\n\ndef mergeSortedFiles(paths, output_path, keyfunc, dedup=True): \n\n" ]
[ 16, 2 ]
[]
[]
[ "large_file_support", "merge", "mergesort", "python" ]
stackoverflow_0001001569_large_file_support_merge_mergesort_python.txt
Q: Is there a Perl or Python library for ID3 metadata? Basically, I've got a bunch of music files yoinked from my brother's iPod that retain their metadata but have those absolutely horrendous four character names the iPod seems to like storing them under. I figured I'd write a nice, quick script to just rename them as I wished, but I'm curious about any good libraries for reading ID3 metadata. I'd prefer either Perl or Python. I'm comfortable with Perl since I use it at work, whereas Python I need more practice in and it'll make my Python evangelist friends happy. Anyway, shortened version: Can you name a good library/module for either Python or Perl that will allow me to easily extract ID3 metadata from a pile of mp3s? A: CPAN Search turns up several Perl modules when you search for ID3. The answer to almost any Perl question that starts with "Is there a library..." is to check CPAN. I tend to like MP3::Tag, but old people like me tend to find something suitable and ignore all advances in technology until we are forced to change. A: here are few python libs http://id3-py.sourceforge.net/ http://nedbatchelder.com/code/modules/id3reader.html http://code.google.com/p/mutagen/ [updated URL] A: http://www.id3.org/Implementations A: I've had good luck with MP3::Info. A: MP3::Tag is a also great. Again, if you are looking for a Perl module, head over to search.cpan.org first. use MP3::Tag; $mp3 = MP3::Tag->new($filename); # get some information about the file in the easiest way ($title, $track, $artist, $album, $comment, $year, $genre) = $mp3->autoinfo(); # Or: $comment = $mp3->comment(); $dedicated_to = $mp3->select_id3v2_frame_by_descr('COMM(fre,fra,eng,#0)[dedicated to]'); $mp3->title_set('New title'); # Edit in-memory copy $mp3->select_id3v2_frame_by_descr('TALB', 'New album name'); # Edit in memory $mp3->select_id3v2_frame_by_descr('RBUF', $n1, $n2, $n3); # Edit in memory $mp3->update_tags(year => 1866); # Edit in-memory, and commit to file $mp3->update_tags(); # Commit to file A: I think this snippet: Rename MP3 files from ID3 tags from python recipies might be helpful. It uses id3reader lib. A: The Python Package Index has a list of python packages matching a search for 'id3' here, the top several of which appear to be related to your needs. As with brian's answer above, the answer to any "is there a python module that..." probably starts with PyPI.
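On the Python side, a hedged sketch of the rename loop using the mutagen library linked above; EasyID3 exposes common tags such as artist and title as dict-style keys holding lists of strings. The directory path and the target filename pattern are made up for illustration, and no sanitising of illegal filename characters is done here.

    import os
    from mutagen.easyid3 import EasyID3    # from the mutagen package listed above

    music_dir = '/path/to/ipod/music'       # hypothetical directory of four-character names
    for name in os.listdir(music_dir):
        if not name.lower().endswith('.mp3'):
            continue
        path = os.path.join(music_dir, name)
        tags = EasyID3(path)                 # raises if the file carries no ID3 tag
        artist = tags.get('artist', ['Unknown'])[0]
        title = tags.get('title', [name])[0]
        new_name = '%s - %s.mp3' % (artist, title)
        os.rename(path, os.path.join(music_dir, new_name))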
Is there a Perl or Python library for ID3 metadata?
Basically, I've got a bunch of music files yoinked from my brother's iPod that retain their metadata but have those absolutely horrendous four character names the iPod seems to like storing them under. I figured I'd write a nice, quick script to just rename them as I wished, but I'm curious about any good libraries for reading ID3 metadata. I'd prefer either Perl or Python. I'm comfortable with Perl since I use it at work, whereas Python I need more practice in and it'll make my Python evangelist friends happy. Anyway, shortened version: Can you name a good library/module for either Python or Perl that will allow me to easily extract ID3 metadata from a pile of mp3s?
[ "CPAN Search turns up several Perl modules when you search for ID3. The answer to almost any Perl question that starts with \"Is there a library...\" is to check CPAN.\nI tend to like MP3::Tag, but old people like me tend to find something suitable and ignore all advances in technology until we are forced to change.\n", "here are few python libs\nhttp://id3-py.sourceforge.net/\nhttp://nedbatchelder.com/code/modules/id3reader.html\nhttp://code.google.com/p/mutagen/ [updated URL]\n", "http://www.id3.org/Implementations\n", "I've had good luck with MP3::Info.\n", "MP3::Tag is a also great. Again, if you are looking for a Perl module, head over to search.cpan.org first.\n use MP3::Tag;\n\n $mp3 = MP3::Tag->new($filename);\n\n # get some information about the file in the easiest way\n ($title, $track, $artist, $album, $comment, $year, $genre) = $mp3->autoinfo();\n # Or:\n $comment = $mp3->comment();\n $dedicated_to\n = $mp3->select_id3v2_frame_by_descr('COMM(fre,fra,eng,#0)[dedicated to]');\n\n $mp3->title_set('New title'); # Edit in-memory copy\n $mp3->select_id3v2_frame_by_descr('TALB', 'New album name'); # Edit in memory\n $mp3->select_id3v2_frame_by_descr('RBUF', $n1, $n2, $n3); # Edit in memory\n $mp3->update_tags(year => 1866); # Edit in-memory, and commit to file\n $mp3->update_tags(); # Commit to file\n\n", "I think this snippet: Rename MP3 files from ID3 tags from python recipies might be helpful. It uses id3reader lib.\n", "The Python Package Index has a list of python packages matching a search for 'id3' here, the top several of which appear to be related to your needs. As with brian's answer above, the answer to any \"is there a python module that...\" probably starts with PyPI.\n" ]
[ 9, 5, 4, 2, 2, 1, 0 ]
[]
[]
[ "id3", "perl", "python" ]
stackoverflow_0001000132_id3_perl_python.txt
Q: Instance methods called in a separate thread than the instantiation thread I'm trying to wrap my head around what is happening in this recipe, because I'm planning on implementing a wx/twisted app similar to this (ie. wx and twisted running in separate threads). I understand that both twisted and wx event-loops need to be accessed in a thread-safe manner (ie. reactor.callFromThread, wx.PostEvent, etc). What I am questioning is the thread-safety of passing in instance methods of objects instantiated in one thread (in the case of this recipe, the GUI thread) as deferred callBack and errBack methods for a reactor running in a separate thread. Is that a good idea? There is a wxreactor available in twisted, but googling reveals that there have been numerous problems with it since it was introduced to the library. Even the person who initially came up with the wxreactor technique, advocates running wx and twisted in separate threads. I haven't been able to find any other examples of this technique, but I'd love to see some. A: The sole act of passing instance methods between threads is safe as long as you properly synchronize eventual destruction of those instances (threads share memory so it really doesn't matter which one did the allocation/initialization of a bit of it). The overall thread safety depends on what those methods actually do, so just make them "play nice" and you should be ok. A: I wouldn't say that it's a "good idea". You should just run the reactor and the GUI in the same thread with wxreactor. The timer-driven event-loop starving approach described by Mr. Schroeder is the worst possible fail-safe way to implement event-loop integration. If you use wxreactor (not wxsupport) Twisted now uses an approach where multiplexing is shunted off to a thread internally so that nothing needs to use a timer. Better yet would be for wxpython to expose wxSocket and have someone base a reactor on it. However, if you're set on using a separate thread to communicate with Twisted, the one thing to keep in mind is that while you can use objects that originate from any thread you like as the value to pass to Deferred.callback, you must call Deferred.callback only in the reactor thread itself. Deferreds are not threadsafe; thanks to some debugging utilities, not even the Deferred class is threadsafe, so you need to be very careful when you are using them to never leave the Twisted main thread. i.e. when you have a result in the UI thread, use reactor.callFromThread(myDeferred.callback, myresult).
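A small sketch of the rule from the second answer: the worker (or GUI) thread may hold the Deferred and compute the result, but the callback must be fired via reactor.callFromThread so it runs in the Twisted thread. The worker function and result value below are placeholders.

    import threading
    from twisted.internet import reactor, defer

    def worker(d):
        # pretend this is the wx/GUI thread producing a result
        result = 'value computed outside the reactor thread'
        # do NOT call d.callback(result) here -- marshal it into the reactor thread:
        reactor.callFromThread(d.callback, result)

    def show(value):
        print 'got:', value
        reactor.stop()

    d = defer.Deferred()
    d.addCallback(show)
    threading.Thread(target=worker, args=(d,)).start()
    reactor.run()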
Instance methods called in a separate thread than the instantiation thread
I'm trying to wrap my head around what is happening in this recipe, because I'm planning on implementing a wx/twisted app similar to this (ie. wx and twisted running in separate threads). I understand that both twisted and wx event-loops need to be accessed in a thread-safe manner (ie. reactor.callFromThread, wx.PostEvent, etc). What I am questioning is the thread-safety of passing in instance methods of objects instantiated in one thread (in the case of this recipe, the GUI thread) as deferred callBack and errBack methods for a reactor running in a separate thread. Is that a good idea? There is a wxreactor available in twisted, but googling reveals that there have been numerous problems with it since it was introduced to the library. Even the person who initially came up with the wxreactor technique, advocates running wx and twisted in separate threads. I haven't been able to find any other examples of this technique, but I'd love to see some.
[ "The sole act of passing instance methods between threads is safe as long as you properly synchronize eventual destruction of those instances (threads share memory so it really doesn't matter which one did the allocation/initialization of a bit of it).\nThe overall thread safety depends on what those methods actually do, so just make them \"play nice\" and you should be ok.\n", "I wouldn't say that it's a \"good idea\". You should just run the reactor and the GUI in the same thread with wxreactor.\nThe timer-driven event-loop starving approach described by Mr. Schroeder is the worst possible fail-safe way to implement event-loop integration. If you use wxreactor (not wxsupport) Twisted now uses an approach where multiplexing is shunted off to a thread internally so that nothing needs to use a timer. Better yet would be for wxpython to expose wxSocket and have someone base a reactor on it.\nHowever, if you're set on using a separate thread to communicate with Twisted, the one thing to keep in mind is that while you can use objects that originate from any thread you like as the value to pass to Deferred.callback, you must call Deferred.callback only in the reactor thread itself. Deferreds are not threadsafe; thanks to some debugging utilities, not even the Deferred class is threadsafe, so you need to be very careful when you are using them to never leave the Twisted main thread. i.e. when you have a result in the UI thread, use reactor.callFromThread(myDeferred.callback, myresult).\n" ]
[ 0, 0 ]
[]
[]
[ "multithreading", "python", "twisted", "wxpython" ]
stackoverflow_0000962323_multithreading_python_twisted_wxpython.txt
Q: pygtk loading a flow of image in only one pixbuf I'm trying to embed a chartdrawer library that can only give me a bmp image in a buffer. I'm loading this image and have to explicitly call delete on the newly created pixbuf and then call the garbage collector. The drawing method is called every 50 ms; calling the garbage collector is really CPU consuming. Is there a way to have only one pixbuf for the whole process and thus not have to call gc? Or am I doing everything wrong? Thx in advance for any help Raphaël code: def draw(self, drawArea): #init bmp loader loader = gtk.gdk.PixbufLoader ('bmp') #get a bmp buffer chart = drawArea.outBMP2() #load this buffer loader.write(chart) loader.close() #get the newly created pixbuf pixbuf = loader.get_pixbuf() #and load it in the gtk.Image self.img.set_from_pixbuf(pixbuf) del pixbuf gc.collect() A: You don't need to call the garbage collector. Python is automatically garbage collected. At the end of your method, pixbuf falls out of scope (you also don't need "del pixbuf") and will be automatically garbage collected. So for starters, delete the last two lines of your method. You might also want to just call your 'draw' method less often, if it's consuming too much CPU. In most applications I imagine the user could deal with updates every 200ms rather than every 50, if every 50ms means there's going to be a CPU problem.
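A hedged sketch of the answer's two suggestions combined: the draw method without the manual del/gc lines, driven by a gobject timer at 200 ms instead of 50 ms. It assumes the questioner's drawArea and self.img objects exist as in the original code; the class name is invented.

    import gtk, gobject

    class ChartView(object):                      # stands in for the questioner's class
        def draw(self):
            loader = gtk.gdk.PixbufLoader('bmp')
            loader.write(self.drawArea.outBMP2()) # BMP buffer from the chart library
            loader.close()
            self.img.set_from_pixbuf(loader.get_pixbuf())
            return True                           # True keeps the gobject timer firing

        def start_updates(self):
            gobject.timeout_add(200, self.draw)   # redraw every 200 ms rather than 50 ms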
pygtk loading a flow of image in only one pixbuf
I'm trying to embed a chartdrawer library that can only give me a bmp image in a buffer. I'm loading this image and have to explicitly call delete on the newly created pixbuf and then call the garbage collector. The drawing method is called every 50 ms; calling the garbage collector is really CPU consuming. Is there a way to have only one pixbuf for the whole process and thus not have to call gc? Or am I doing everything wrong? Thx in advance for any help Raphaël code: def draw(self, drawArea): #init bmp loader loader = gtk.gdk.PixbufLoader ('bmp') #get a bmp buffer chart = drawArea.outBMP2() #load this buffer loader.write(chart) loader.close() #get the newly created pixbuf pixbuf = loader.get_pixbuf() #and load it in the gtk.Image self.img.set_from_pixbuf(pixbuf) del pixbuf gc.collect()
[ "You don't need to call the garbage collector. Python is automatically garbage collected. At the end of your method, pixbuf falls out of scope (you also don't need \"del pixbuf\") and will be automatically garbage collected. So for starters, delete the last two lines of your method.\nYou might also want to just call your 'draw' method less often, if it's consuming too much CPU. In most applications I imagine the user could deal with updates every 200ms rather than every 50, if every 50ms means there's going to be a CPU problem.\n" ]
[ 1 ]
[]
[]
[ "pygtk", "python" ]
stackoverflow_0001002841_pygtk_python.txt
Q: How does one debug a fastcgi application? How does one debug a FastCGI application? I've got an app that's dying but I can't figure out why, even though it's likely throwing a stack trace on stderr. Running it from the commandline results in an error saying: RuntimeError: No FastCGI Environment: 88 - Socket operation on non-socket How do I set up a 'FastCGI Environment' for debugging purposes? It's not my app - it's a 3rd party open source app - so I'd rather avoid adding in a bunch of logging to figure out what's going wrong. If it matters, the app is Python, but FastCGI is FastCGI, right? Is there a shim or something to let you invoke a fastcgi program from the commandline and hook it up to the terminal so you can see its stdout/stderr? A: It does matter that the application is Python; your question is really "how do I debug Python when I'm not starting the script myself". You want to use a remote debugger. The excellent WinPDB has some documentation on embedded debugging which you should be able to use to attach to your FastCGI application and step through it.
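A sketch of the embedded-debugging route the answer points to: WinPDB's engine is the rpdb2 module, so dropping a start call near the top of the FastCGI script lets you attach from the WinPDB GUI (same password) once the web server spawns the process. The password string is arbitrary, and the line should be removed after debugging.

    # near the top of the third-party FastCGI script
    import rpdb2
    rpdb2.start_embedded_debugger('some-password')   # waits (up to a timeout) for WinPDB to attach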
How does one debug a fastcgi application?
How does one debug a FastCGI application? I've got an app that's dying but I can't figure out why, even though it's likely throwing a stack trace on stderr. Running it from the commandline results in an error saying: RuntimeError: No FastCGI Environment: 88 - Socket operation on non-socket How do I set up a 'FastCGI Environment' for debugging purposes? It's not my app - it's a 3rd party open source app - so I'd rather avoid adding in a bunch of logging to figure out what's going wrong. If it matters, the app is Python, but FastCGI is FastCGI, right? Is there a shim or something to let you invoke a fastcgi program from the commandline and hook it up to the terminal so you can see its stdout/stderr?
[ "It does matter that the application is Python; your question is really \"how do I debug Python when I'm not starting the script myself\".\nYou want to use a remote debugger. The excellent WinPDB has some documentation on embedded debugging which you should be able to use to attach to your FastCGI application and step through it.\n" ]
[ 2 ]
[]
[]
[ "debugging", "fastcgi", "python", "web_applications" ]
stackoverflow_0000913817_debugging_fastcgi_python_web_applications.txt
Q: Threading TCP Server as proxy between connected user and unix socket I'm writing web application where I need to push data from server to the connected clients. This data can be send from any other script from web application. For example one user make some changes on the server and other users should be notified about that. So my idea is to use unix socket (path to socket based on user #ID) to send data for corresponding user (web app scripts will connect to this socket and write data). The second part is ThreadingTCPServer which will accept user connections and push data from unix socket to user over TCP socket connection. Here is the workflow: Used connect to the TCP Server Django script open unixsocket and write data to it. TCP Server read data from unix socket and send it to open connection with user. I hope you understand my idea :) So, I have 2 questions: 1.What do you think about my idea in general? Is it good or bad solution? Any recommendations are welcome. 2.Here is my code. import SocketServer import socket import netstring import sys, os, os.path import string import time class MyRequestHandler(SocketServer.BaseRequestHandler): def handle(self): try: print "Connected:", self.client_address user = self.request.recv(1024).strip().split(":")[1] user = int(user) self.request.send('Welcome #%s' % user) self.usocket_path = '/tmp/u%s.sock' % user if os.path.exists(self.usocket_path): os.remove(self.usocket_path) self.usocket = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) self.usocket.bind(self.usocket_path) self.usocket.listen(1) while 1: usocket_conn, addr = self.usocket.accept() while 1: data = usocket_conn.recv(1024) if not data: break self.request.send(data) break usocket_conn.close() time.sleep(0.1) except KeyboardInterrupt: self.request.send('close') self.request.close() myServer = SocketServer.ThreadingTCPServer(('', 8081), MyRequestHandler) myServer.serve_forever() and I got an exception File "server1.py", line 23, in handle self.usocket.listen(1) File "<string>", line 1, in listen error: (102, 'Operation not supported on socket') A: I think You should not use unix sockets. If Your app will (someday) become popular or mission-critical, You won't be able to just add another server to add scalability or to make it redundant and fail-safe. If, on the other hand, You will put the data into f.e. memcached (and user's "dataset number" as the separate key) You'll be able to put data into memcached from multiple servers and read it from multiple servers. If user will disconnect and connect back from some other server, You'll still be able to get the data for him. You could also use a database (to make it more fail-safe), or a mix of database and memcached if You like, but I have seen an app using unix sockets in the way You are trying to, and the programmer regreted it later. The table could have userid, data and timestamp columns and You could remember the last timestamp for the given user. A: This is too complex. Use the socketserver module, please. A: You should use Twisted, or, more specifically, Orbited, to push data to your clients. The sample code you posted has a number of potential problems, and it would be a lot harder to figure out what they all are than for you to just use a pre-built piece of code to do what you need.
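A rough sketch of the memcached-based alternative from the first answer, using the python-memcached client; the key scheme (one entry per user id) and the server address are only illustrative.

    import memcache

    mc = memcache.Client(['127.0.0.1:11211'])

    # writer side, e.g. the Django view that made the change
    def push_update(user_id, payload):
        mc.set('user-updates:%d' % user_id, payload)

    # reader side, e.g. the TCP server handling a connected client
    def pull_update(user_id):
        return mc.get('user-updates:%d' % user_id)   # None if nothing is queued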
Threading TCP Server as proxy between connected user and unix socket
I'm writing web application where I need to push data from server to the connected clients. This data can be send from any other script from web application. For example one user make some changes on the server and other users should be notified about that. So my idea is to use unix socket (path to socket based on user #ID) to send data for corresponding user (web app scripts will connect to this socket and write data). The second part is ThreadingTCPServer which will accept user connections and push data from unix socket to user over TCP socket connection. Here is the workflow: Used connect to the TCP Server Django script open unixsocket and write data to it. TCP Server read data from unix socket and send it to open connection with user. I hope you understand my idea :) So, I have 2 questions: 1.What do you think about my idea in general? Is it good or bad solution? Any recommendations are welcome. 2.Here is my code. import SocketServer import socket import netstring import sys, os, os.path import string import time class MyRequestHandler(SocketServer.BaseRequestHandler): def handle(self): try: print "Connected:", self.client_address user = self.request.recv(1024).strip().split(":")[1] user = int(user) self.request.send('Welcome #%s' % user) self.usocket_path = '/tmp/u%s.sock' % user if os.path.exists(self.usocket_path): os.remove(self.usocket_path) self.usocket = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) self.usocket.bind(self.usocket_path) self.usocket.listen(1) while 1: usocket_conn, addr = self.usocket.accept() while 1: data = usocket_conn.recv(1024) if not data: break self.request.send(data) break usocket_conn.close() time.sleep(0.1) except KeyboardInterrupt: self.request.send('close') self.request.close() myServer = SocketServer.ThreadingTCPServer(('', 8081), MyRequestHandler) myServer.serve_forever() and I got an exception File "server1.py", line 23, in handle self.usocket.listen(1) File "<string>", line 1, in listen error: (102, 'Operation not supported on socket')
[ "I think You should not use unix sockets. If Your app will (someday) become popular or mission-critical, You won't be able to just add another server to add scalability or to make it redundant and fail-safe.\nIf, on the other hand, You will put the data into f.e. memcached (and user's \"dataset number\" as the separate key) You'll be able to put data into memcached from multiple servers and read it from multiple servers. If user will disconnect and connect back from some other server, You'll still be able to get the data for him.\nYou could also use a database (to make it more fail-safe), or a mix of database and memcached if You like, but I have seen an app using unix sockets in the way You are trying to, and the programmer regreted it later. The table could have userid, data and timestamp columns and You could remember the last timestamp for the given user.\n", "This is too complex. Use the socketserver module, please. \n", "You should use Twisted, or, more specifically, Orbited, to push data to your clients. The sample code you posted has a number of potential problems, and it would be a lot harder to figure out what they all are than for you to just use a pre-built piece of code to do what you need.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "multithreading", "python", "sockets", "tcp" ]
stackoverflow_0000833962_multithreading_python_sockets_tcp.txt
Q: Authentication Required - Problems Establishing AIM OSCAR Session using Python I'm writing a simple python script that will interface with the AIM servers using the OSCAR protocol. It includes a somewhat complex handshake protocol. You essentially have to send a GET request to a specific URL, receive XML or JSON encoded reply, extract a special session token and secret key, then generate a response using the token and the key. I tried to follow these steps to a tee, but the process fails in the last one. Here is my code: class simpleOSCAR: def __init__(self, username, password): self.username = username self.password = password self.open_aim_key = 'whatever' self.client_name = 'blah blah blah' self.client_version = 'yadda yadda yadda' def authenticate(self): # STEP 1 url = 'https://api.screenname.aol.com/auth/clientLogin?f=json' data = urllib.urlencode( [ ('k', self.open_aim_key), ('s', self.username), ('pwd', self.password), ('clientVersion', self.client_version), ('clientName', self.client_name)] ) response = urllib2.urlopen(url, data) json_response = simplejson.loads(urllib.unquote(response.read())) session_secret = json_response['response']['data']['sessionSecret'] host_time = json_response['response']['data']['hostTime'] self.token = json_response['response']['data']['token']['a'] # STEP 2 self.session_key = base64.b64encode(hmac.new(self.password, session_secret, sha256).digest()) #STEP 3 uri = "http://api.oscar.aol.com/aim/startOSCARSession?" data = urllib.urlencode([ ('a', self.token), ('clientName', self.client_name), ('clientVersion', self.client_version), ('f', 'json'), ('k', self.open_aim_key), ('ts', host_time), ] ) urldata = uri+data hashdata = "GET&" + urllib.quote("http://api.oscar.aol.com/aim/startOSCARSession?") + data digest = base64.b64encode(hmac.new(self.session_key, hashdata, sha256).digest()) urldata = urldata + "&sig_sha256=" + digest print urldata + "\n" response = urllib2.urlopen(urldata) json_response = urllib.unquote(response.read()) print json_response if __name__ == '__main__': so = simpleOSCAR("aimscreenname", "somepassword") so.authenticate() I get the following response from the server: { "response" : { "statusCode":401, "statusText":"Authentication Required. statusDetailCode 1014", "statusDetailCode":1014, "data":{ "ts":1235878395 } } } I tried troubleshooting it in various ways, but the URL's I generate look the same as the ones shown in the signon flow example. And yet, it fails. Any idea what I'm doing wrong here? Am I hashing the values wrong? Am I encoding something improperly? Is my session timing out? A: Try using Twisted's OSCAR support instead of writing your own? It hasn't seen a lot of maintenance, but I believe it works. A: URI Encode your digest? -moxford
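If the second answer is on the right track, the change is a one-liner: the base64 digest can contain '+', '/' and '=' characters, which should be percent-encoded before being appended to the query string. A hedged tweak to the last concatenation in the question's code:

    import urllib
    # digest comes straight from base64.b64encode(...) and may contain '+', '/' or '='
    urldata = urldata + "&sig_sha256=" + urllib.quote(digest, safe='')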
Authentication Required - Problems Establishing AIM OSCAR Session using Python
I'm writing a simple python script that will interface with the AIM servers using the OSCAR protocol. It includes a somewhat complex handshake protocol. You essentially have to send a GET request to a specific URL, receive XML or JSON encoded reply, extract a special session token and secret key, then generate a response using the token and the key. I tried to follow these steps to a tee, but the process fails in the last one. Here is my code: class simpleOSCAR: def __init__(self, username, password): self.username = username self.password = password self.open_aim_key = 'whatever' self.client_name = 'blah blah blah' self.client_version = 'yadda yadda yadda' def authenticate(self): # STEP 1 url = 'https://api.screenname.aol.com/auth/clientLogin?f=json' data = urllib.urlencode( [ ('k', self.open_aim_key), ('s', self.username), ('pwd', self.password), ('clientVersion', self.client_version), ('clientName', self.client_name)] ) response = urllib2.urlopen(url, data) json_response = simplejson.loads(urllib.unquote(response.read())) session_secret = json_response['response']['data']['sessionSecret'] host_time = json_response['response']['data']['hostTime'] self.token = json_response['response']['data']['token']['a'] # STEP 2 self.session_key = base64.b64encode(hmac.new(self.password, session_secret, sha256).digest()) #STEP 3 uri = "http://api.oscar.aol.com/aim/startOSCARSession?" data = urllib.urlencode([ ('a', self.token), ('clientName', self.client_name), ('clientVersion', self.client_version), ('f', 'json'), ('k', self.open_aim_key), ('ts', host_time), ] ) urldata = uri+data hashdata = "GET&" + urllib.quote("http://api.oscar.aol.com/aim/startOSCARSession?") + data digest = base64.b64encode(hmac.new(self.session_key, hashdata, sha256).digest()) urldata = urldata + "&sig_sha256=" + digest print urldata + "\n" response = urllib2.urlopen(urldata) json_response = urllib.unquote(response.read()) print json_response if __name__ == '__main__': so = simpleOSCAR("aimscreenname", "somepassword") so.authenticate() I get the following response from the server: { "response" : { "statusCode":401, "statusText":"Authentication Required. statusDetailCode 1014", "statusDetailCode":1014, "data":{ "ts":1235878395 } } } I tried troubleshooting it in various ways, but the URL's I generate look the same as the ones shown in the signon flow example. And yet, it fails. Any idea what I'm doing wrong here? Am I hashing the values wrong? Am I encoding something improperly? Is my session timing out?
[ "Try using Twisted's OSCAR support instead of writing your own? It hasn't seen a lot of maintenance, but I believe it works.\n", "URI Encode your digest?\n-moxford\n" ]
[ 1, 0 ]
[]
[]
[ "aim", "json", "python" ]
stackoverflow_0000599218_aim_json_python.txt
Q: How to distinguish field that requires null=True when blank=True is set in Django models? Some model fields such as DateTimeField require the null=True option when the blank=True option is set. I'd like to know which fields require that (maybe dependent on the backend DBMS), and whether there is any way to do this automatically. A: null=True is used to tell the database that the value can be NULL blank=True is only for Django, so Django doesn't raise an error if the field is blank, e.g. in the admin interface so blank=True has nothing to do with the DB The NULL requirement will vary from DB to DB, and it is up to you to decide if you want some column to be NULL or not A: This post should help to understand the difference between blank and null in Django
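A small example of the pattern being described: for a non-string field such as DateTimeField, blank=True is paired with null=True so an empty form value can actually be stored, while for string-based fields Django's convention is to store the empty string and leave null=True off.

    from django.db import models

    class Event(models.Model):
        # string-based field: blank=True alone is fine, empty values are stored as ''
        note = models.CharField(max_length=200, blank=True)
        # non-string field: blank=True needs null=True, because there is no
        # "empty" datetime value to store when the form field is left blank
        starts_at = models.DateTimeField(blank=True, null=True)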
How to distinguish field that requires null=True when blank=True is set in Django models?
Some model fields such as DateTimeField require the null=True option when the blank=True option is set. I'd like to know which fields require that (maybe dependent on the backend DBMS), and whether there is any way to do this automatically.
[ "null=True is used to tell that in DB value can be NULL\nblank=True is only for django , so django doesn't raise error if field is blank e.g. in admin interface\nso blank=True has nothing to do with DB\nNULL requirement will vary from DB to DB, and it is upto you to decide if you want some column NULL or not\n", "This post should help to understand the difference on blank and null in Django\n" ]
[ 2, 2 ]
[]
[]
[ "django", "django_models", "python" ]
stackoverflow_0001005187_django_django_models_python.txt
Q: What are some successful methods for deploying a Django application on the desktop? I have a Django application that I would like to deploy to the desktop. I have read a little on this and see that one way is to use freeze. I have used this with varying success in the past for Python applications, but am not convinced it is the best approach for a Django application. My questions are: what are some successful methods you have used for deploying Django applications? Is there a de facto standard method? Have you hit any dead ends? I need a cross platform solution. A: I did this a couple years ago for a Django app running as a local daemon. It was launched by Twisted and wrapped by py2app for Mac and py2exe for Windows. There was both a browser as well as an Air front-end hitting it. It worked pretty well for the most part but I didn't get to deploy it out in the wild because the larger project got postponed. It's been a while and I'm a bit rusty on the details, but here are a few tips: IIRC, the most problematic thing was Python loading C extensions. I had an Intel assembler module written with C "asm" commands that I needed to load to get low-level system data. That took a while to get working across both platforms. If you can, try to avoid C extensions. You'll definitely need an installer. Most likely the app will end up running in the background, so you'll need to mark it as a Windows service, Unix daemon, or Mac launchd application. In your installer you'll want to provide a way to locate a free local TCP port. You may have to write a little stub routine that the installer runs or use the installer's built-in scripting facility to find a port that hasn't been taken and save it to a config file. You then load the config file inside your settings.py and whatever front-end you're going to deploy. That's the shared port. Or you could just pick a random number and hope no other service on the desktop steps on your toes :-) If your front-end and back-end are separate apps then you'll need to design an API for them to talk to each other. Make sure you provide a flag to return the data in both raw and human-readable form. It really helps in debugging. If you want Django to be able to send notifications to the user, you'll want to integrate with something like Growl or get Python for Windows extensions so you can bring up toaster pop-up notifications. You'll probably want to stick with SQLite for database in which case you'll want to make sure you use semaphores to tackle multiple requests vying for the database (or any other shared resource). If your app is accessed via a browser users can have multiple windows open and hit the app at the same time. If using a custom front-end (native, Air, etc...) then you can control how many instances are running at a given time so it won't be as much of an issue. You'll also want some sort of access to local system logging facilities since the app will be running in the background and make sure you trap all your exceptions and route it into the syslog. A big hassle was debugging Windows service startup issues. It would have been impossible without system logging. Be careful about hardcoded paths if you want to stay cross-platform. You may have to rely on the installer to write a config file entry with the actual installation path which you'll have to load up at startup. Test actual deployment especially across a variety of firewalls. 
Some of the desktop firewalls get pretty aggressive about blocking access to network services that accept incoming requests. That's all I can think of. Hope it helps. A: If you want a good solution, you should give up on making it cross platform. Your code should all be portable, but your deployment - almost by definition - needs to be platform-specific. I would recommend using py2exe on Windows, py2app on MacOS X, and building deb packages for Ubuntu with a .desktop file in the right place in the package for an entry to show up in the user's menu. Unfortunately for the last option there's no convenient 'py2deb' or 'py2xdg', but it's pretty easy to make the relevant text file by hand. And of course, I'd recommend bundling in Twisted as your web server for making the application easily self-contained :).
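One of the installer tips above (locating a free local TCP port) is easy to sketch: bind to port 0, let the OS pick a port, and record it in a config file that both settings.py and the front-end can read. The config filename is made up for illustration.

    import socket

    def find_free_port():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind(('127.0.0.1', 0))          # port 0 means "any free port"
        port = s.getsockname()[1]
        s.close()
        return port

    if __name__ == '__main__':
        port = find_free_port()
        open('app_port.cfg', 'w').write(str(port))   # read back in settings.py and the front-end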
What are some successful methods for deploying a Django application on the desktop?
I have a Django application that I would like to deploy to the desktop. I have read a little on this and see that one way is to use freeze. I have used this with varying success in the past for Python applications, but am not convinced it is the best approach for a Django application. My questions are: what are some successful methods you have used for deploying Django applications? Is there a de facto standard method? Have you hit any dead ends? I need a cross platform solution.
[ "I did this a couple years ago for a Django app running as a local daemon. It was launched by Twisted and wrapped by py2app for Mac and py2exe for Windows. There was both a browser as well as an Air front-end hitting it. It worked pretty well for the most part but I didn't get to deploy it out in the wild because the larger project got postponed. It's been a while and I'm a bit rusty on the details, but here are a few tips:\n\nIIRC, the most problematic thing was Python loading C extensions. I had an Intel assembler module written with C \"asm\" commands that I needed to load to get low-level system data. That took a while to get working across both platforms. If you can, try to avoid C extensions.\nYou'll definitely need an installer. Most likely the app will end up running in the background, so you'll need to mark it as a Windows service, Unix daemon, or Mac launchd application.\nIn your installer you'll want to provide a way to locate a free local TCP port. You may have to write a little stub routine that the installer runs or use the installer's built-in scripting facility to find a port that hasn't been taken and save it to a config file. You then load the config file inside your settings.py and whatever front-end you're going to deploy. That's the shared port. Or you could just pick a random number and hope no other service on the desktop steps on your toes :-)\nIf your front-end and back-end are separate apps then you'll need to design an API for them to talk to each other. Make sure you provide a flag to return the data in both raw and human-readable form. It really helps in debugging.\nIf you want Django to be able to send notifications to the user, you'll want to integrate with something like Growl or get Python for Windows extensions so you can bring up toaster pop-up notifications.\nYou'll probably want to stick with SQLite for database in which case you'll want to make sure you use semaphores to tackle multiple requests vying for the database (or any other shared resource). If your app is accessed via a browser users can have multiple windows open and hit the app at the same time. If using a custom front-end (native, Air, etc...) then you can control how many instances are running at a given time so it won't be as much of an issue.\nYou'll also want some sort of access to local system logging facilities since the app will be running in the background and make sure you trap all your exceptions and route it into the syslog. A big hassle was debugging Windows service startup issues. It would have been impossible without system logging. \nBe careful about hardcoded paths if you want to stay cross-platform. You may have to rely on the installer to write a config file entry with the actual installation path which you'll have to load up at startup.\nTest actual deployment especially across a variety of firewalls. Some of the desktop firewalls get pretty aggressive about blocking access to network services that accept incoming requests.\n\nThat's all I can think of. Hope it helps.\n", "If you want a good solution, you should give up on making it cross platform. Your code should all be portable, but your deployment - almost by definition - needs to be platform-specific.\nI would recommend using py2exe on Windows, py2app on MacOS X, and building deb packages for Ubuntu with a .desktop file in the right place in the package for an entry to show up in the user's menu. 
Unfortunately for the last option there's no convenient 'py2deb' or 'py2xdg', but it's pretty easy to make the relevant text file by hand.\nAnd of course, I'd recommend bundling in Twisted as your web server for making the application easily self-contained :).\n" ]
[ 6, 4 ]
[]
[]
[ "django", "python", "web_applications" ]
stackoverflow_0000789673_django_python_web_applications.txt
Q: ctypes in Python 2.6 help I can't seem to get this code to work, I was under the impression I was doing this correctly. from ctypes import * kernel32 = windll.kernel32 string1 = "test" string2 = "test2" kernel32.MessageBox(None, string1, string2, MB_OK) ** I tried to change it to MessageBoxA as suggested below ** ** Error I get :: ** Traceback (most recent call last): File "C:\<string>", line 6, in <module> File "C:\Python26\Lib\ctypes\__init__.py", line 366, in __getattr__ func = self.__getitem__(name) File "C:\Python26\Lib\ctypes\__init__.py", line 371, in __getitem__ func = self._FuncPtr((name_or_ordinal, self)) AttributeError: function 'MessageBoxA' not found A: MessageBox is defined in user32 not kernel32, you also haven't defined MB_OK so use this instead windll.user32.MessageBoxA(None, string1, string2, 1) Also I recommend using python win32 API isntead of it ,as it has all constant and named functions edit: I mean use this from ctypes import * kernel32 = windll.kernel32 string1 = "test" string2 = "test2" #kernel32.MessageBox(None, string1, string2, MB_OK) windll.user32.MessageBoxA(None, string1, string2, 1) same thing you can do using win32 api as import win32gui win32gui.MessageBox(0, "a", "b", 1) A: The problem is that the function you're trying to call isn't actually named MessageBox(). There are two functions, named MessageBoxA() and MessageBoxW(): the former takes 8-bit ANSI strings, and the latter takes 16-bit Unicode (wide-character) strings. In C, the preprocessor symbol MessageBox is #defined to be either MessageBoxA or MessageBoxW, depending on whether or not Unicode is enabled (specifically, if the symbol _UNICODE is defined). Secondly, according to the MessageBox() documentation, MessageBoxA/W are located in user32.dll, not kernel32.dll. Try this (I can't verify it, since I'm not in front of a Windows box at the moment): user32 = windll.user32 user32.MessageBoxA(None, string1, string2, MB_OK) A: Oh and anytime you are confused about if a call needs kernel32 or user32 or something of the sorts. Don't be afraid to look for the call on MSDN. They have an Alphabetical List and also a list based on categories. Hope you find them helpful .
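Pulling the working pieces of the answers together: MessageBoxA lives in user32.dll, and since ctypes does not ship the Win32 constants you define MB_OK yourself (its value is 0). A minimal sketch:

    from ctypes import windll

    MB_OK = 0                      # ctypes has no Win32 constants; define the ones you need

    user32 = windll.user32
    user32.MessageBoxA(None, "test", "test2", MB_OK)
    # or the wide-character version, which takes unicode strings:
    user32.MessageBoxW(None, u"test", u"test2", MB_OK)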
ctypes in Python 2.6 help
I can't seem to get this code to work, I was under the impression I was doing this correctly. from ctypes import * kernel32 = windll.kernel32 string1 = "test" string2 = "test2" kernel32.MessageBox(None, string1, string2, MB_OK) ** I tried to change it to MessageBoxA as suggested below ** ** Error I get :: ** Traceback (most recent call last): File "C:\<string>", line 6, in <module> File "C:\Python26\Lib\ctypes\__init__.py", line 366, in __getattr__ func = self.__getitem__(name) File "C:\Python26\Lib\ctypes\__init__.py", line 371, in __getitem__ func = self._FuncPtr((name_or_ordinal, self)) AttributeError: function 'MessageBoxA' not found
[ "MessageBox is defined in user32 not kernel32, you also haven't defined MB_OK\nso use this instead\nwindll.user32.MessageBoxA(None, string1, string2, 1)\n\nAlso I recommend using python win32 API isntead of it ,as it has all constant and named functions\nedit: I mean use this\nfrom ctypes import *\n\nkernel32 = windll.kernel32\n\nstring1 = \"test\"\nstring2 = \"test2\"\n\n#kernel32.MessageBox(None, string1, string2, MB_OK)\nwindll.user32.MessageBoxA(None, string1, string2, 1)\n\nsame thing you can do using win32 api as\nimport win32gui\nwin32gui.MessageBox(0, \"a\", \"b\", 1)\n\n", "The problem is that the function you're trying to call isn't actually named MessageBox(). There are two functions, named MessageBoxA() and MessageBoxW(): the former takes 8-bit ANSI strings, and the latter takes 16-bit Unicode (wide-character) strings. In C, the preprocessor symbol MessageBox is #defined to be either MessageBoxA or MessageBoxW, depending on whether or not Unicode is enabled (specifically, if the symbol _UNICODE is defined).\nSecondly, according to the MessageBox() documentation, MessageBoxA/W are located in user32.dll, not kernel32.dll.\nTry this (I can't verify it, since I'm not in front of a Windows box at the moment):\nuser32 = windll.user32\nuser32.MessageBoxA(None, string1, string2, MB_OK)\n\n", "Oh and anytime you are confused about if a call needs kernel32 or user32 or something of the sorts. Don't be afraid to look for the call on MSDN. They have an Alphabetical List and also a list based on categories.\nHope you find them helpful .\n" ]
[ 4, 0, 0 ]
[]
[]
[ "ctypes", "python" ]
stackoverflow_0001005117_ctypes_python.txt
Q: Can I trace all the functions/methods executing in a python script? Is there a way to programmatically trace the execution of all python functions/methods? I would like to see what arguments each of them was called with. I really mean all, I'm not interested in a trace decorator. In Ruby, I could alias the method I wanted and add the extra behaviour there. A: Have a look at the trace module. You can also use it via the command line: python -m trace --help
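Since the question specifically asks to see the arguments each function was called with, here is a hedged sketch using sys.settrace (the hook the trace module is built on); at a 'call' event the frame's locals are exactly the bound arguments.

    import sys

    def tracer(frame, event, arg):
        if event == 'call':
            code = frame.f_code
            print '%s:%s %s(%r)' % (code.co_filename, frame.f_lineno,
                                    code.co_name, frame.f_locals)
        return None        # no per-line tracing; nested calls still reach this function

    def demo(x, y=2):
        return x + y

    sys.settrace(tracer)
    demo(40)
    sys.settrace(None)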
Can I trace all the functions/methods executing in a python script?
Is there a way to programmatically trace the execution of all python functions/methods? I would like to see what arguments each of them was called with. I really mean all, I'm not interested in a trace decorator. In Ruby, I could alias the method I wanted and add the extra behaviour there.
[ "Have a look at the trace module.\nYou can also use it via the command line:\npython -m trace --help\n\n" ]
[ 12 ]
[]
[]
[ "debugging", "python", "trace" ]
stackoverflow_0001005665_debugging_python_trace.txt
Q: App Engine Datastore IN Operator - how to use? Reading: http://code.google.com/appengine/docs/python/datastore/gqlreference.html I want to use: := IN but am unsure how to make it work. Let's assume the following class User(db.Model): name = db.StringProperty() class UniqueListOfSavedItems(db.Model): str = db.StringPropery() datesaved = db.DateTimeProperty() class UserListOfSavedItems(db.Model): name = db.ReferenceProperty(User, collection='user') str = db.ReferenceProperty(UniqueListOfSavedItems, collection='itemlist') How can I do a query which gets me the list of saved items for a user? Obviously I can do: q = db.Gql("SELECT * FROM UserListOfSavedItems WHERE name :=", user[0].name) but that gets me a list of keys. I want to now take that list and get it into a query to get the str field out of UniqueListOfSavedItems. I thought I could do: q2 = db.Gql("SELECT * FROM UniqueListOfSavedItems WHERE := str in q") but something's not right...any ideas? Is it (am at my day job, so can't test this now): q2 = db.Gql("SELECT * FROM UniqueListOfSavedItems __key__ := str in q) side note: what a devilishly difficult problem to search on because all I really care about is the "IN" operator. A: Since you have a list of keys, you don't need to do a second query - you can do a batch fetch, instead. Try this: #and this should get me the items that a user saved useritems = db.get(saveditemkeys) (Note you don't even need the guard clause - a db.get on 0 entities is short-circuited appropritely.) What's the difference, you may ask? Well, a db.get takes about 20-40ms. A query, on the other hand (GQL or not) takes about 160-200ms. But wait, it gets worse! The IN operator is implemented in Python, and translates to multiple queries, which are executed serially. So if you do a query with an IN filter for 10 keys, you're doing 10 separate 160ms-ish query operations, for a total of about 1.6 seconds latency. A single db.get, in contrast, will have the same effect and take a total of about 30ms. A: +1 to Adam for getting me on the right track. Based on his pointer, and doing some searching at Code Search, I have the following solution. usersaveditems = User.Gql(“Select * from UserListOfSavedItems where user =:1”, userkey) saveditemkeys = [] for item in usersaveditems: #this should create a list of keys (references) to the saved item table saveditemkeys.append(item.str()) if len(usersavedsearches > 0): #and this should get me the items that a user saved useritems = db.Gql(“SELECT * FROM UniqueListOfSavedItems WHERE __key__ in :1’, saveditemkeys)
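A hedged sketch of the batch-fetch idea from the first answer applied to the question's models (assumed to be defined as shown in the question); get_value_for_datastore reads the reference's key without dereferencing it, so the only datastore round trips are the initial query and the single db.get.

    from google.appengine.ext import db

    def saved_items_for(user):
        rows = UserListOfSavedItems.all().filter('name =', user)
        keys = [UserListOfSavedItems.str.get_value_for_datastore(row) for row in rows]
        return db.get(keys)      # one batch fetch instead of an IN query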
App Engine Datastore IN Operator - how to use?
Reading: http://code.google.com/appengine/docs/python/datastore/gqlreference.html I want to use: := IN but am unsure how to make it work. Let's assume the following class User(db.Model): name = db.StringProperty() class UniqueListOfSavedItems(db.Model): str = db.StringPropery() datesaved = db.DateTimeProperty() class UserListOfSavedItems(db.Model): name = db.ReferenceProperty(User, collection='user') str = db.ReferenceProperty(UniqueListOfSavedItems, collection='itemlist') How can I do a query which gets me the list of saved items for a user? Obviously I can do: q = db.Gql("SELECT * FROM UserListOfSavedItems WHERE name :=", user[0].name) but that gets me a list of keys. I want to now take that list and get it into a query to get the str field out of UniqueListOfSavedItems. I thought I could do: q2 = db.Gql("SELECT * FROM UniqueListOfSavedItems WHERE := str in q") but something's not right...any ideas? Is it (am at my day job, so can't test this now): q2 = db.Gql("SELECT * FROM UniqueListOfSavedItems __key__ := str in q) side note: what a devilishly difficult problem to search on because all I really care about is the "IN" operator.
[ "Since you have a list of keys, you don't need to do a second query - you can do a batch fetch, instead. Try this:\n#and this should get me the items that a user saved\nuseritems = db.get(saveditemkeys)\n\n(Note you don't even need the guard clause - a db.get on 0 entities is short-circuited appropritely.)\nWhat's the difference, you may ask? Well, a db.get takes about 20-40ms. A query, on the other hand (GQL or not) takes about 160-200ms. But wait, it gets worse! The IN operator is implemented in Python, and translates to multiple queries, which are executed serially. So if you do a query with an IN filter for 10 keys, you're doing 10 separate 160ms-ish query operations, for a total of about 1.6 seconds latency. A single db.get, in contrast, will have the same effect and take a total of about 30ms.\n", "+1 to Adam for getting me on the right track. Based on his pointer, and doing some searching at Code Search, I have the following solution.\nusersaveditems = User.Gql(“Select * from UserListOfSavedItems where user =:1”, userkey)\n\nsaveditemkeys = []\n\nfor item in usersaveditems:\n #this should create a list of keys (references) to the saved item table\n saveditemkeys.append(item.str()) \n\nif len(usersavedsearches > 0):\n #and this should get me the items that a user saved\n useritems = db.Gql(“SELECT * FROM UniqueListOfSavedItems WHERE __key__ in :1’, saveditemkeys)\n\n" ]
[ 10, 0 ]
[]
[]
[ "google_app_engine", "google_cloud_datastore", "gql", "python" ]
stackoverflow_0001003247_google_app_engine_google_cloud_datastore_gql_python.txt
Q: Coding collaboratively for the web In the past I've done the coding-part of my web-projects mostly by myself. Now, as we are a team working on some project, be it python or php or ..., is there some simple versioning system to use? My hoster doesn't seem to support any kind of this sort. On the other hand, I feel it is too early to start renting a whole server in this phase of the project just to be able to install a versioning system. Any simple ideas how to solve this problem? A: Try Mercurial. If your hosting has ssh and python support, you can run Mercurial on it. UPDATE: By the way, you don't need a hosting to run Mercurial - it's distributed and works without any servers. If you still want to have a repository on your hosting server - you can have it if your hoster supports ssh and python UPDATE: http://bitbucket.org - fantastic mercurial hosting, allows 1 private repository and unlimited public repositories for free A: See: https://stackoverflow.com/questions/59791/free-online-private-svn-repositories https://stackoverflow.com/questions/146505/can-someone-recommend-a-reliable-cvs-or-svn-hosting-service https://stackoverflow.com/questions/111292/free-version-control-services A: http://github.com Anonymous Sourcecode Hosting, Repository... A: Github and the others are free for open source projects only. However, Xp-dev is free as in free beer even to commercial ones. http://www.xp-dev.com/
Coding collaboratively for the web
In the past I've done the coding-part of my web-projects mostly by myself. Now, as we are a team working on some project, be it python or php or ..., is there some simple versioning system to use? My hoster doesn't seem to support any kind of this sort. On the other hand, I feel it is too early to start renting a whole server in this phase of the project just to be able to install a versioning system. Any simple ideas how to solve this problem?
[ "Try Mercurial. If your hosting has ssh and python support, you can run Mercurial on it.\nUPDATE: By the way, you don't need a hosting to run Mercurial - it's distributed and works without any servers. If you still want to have a repository on your hosting server - you can have it if your hoster supports ssh and python \nUPDATE: http://bitbucket.org - fantastic mercurial hosting, allows 1 private repository and unlimited public repositories for free\n", "See:\n\nhttps://stackoverflow.com/questions/59791/free-online-private-svn-repositories\nhttps://stackoverflow.com/questions/146505/can-someone-recommend-a-reliable-cvs-or-svn-hosting-service\nhttps://stackoverflow.com/questions/111292/free-version-control-services\n\n", "http://github.com\nAnonymous Sourcecode Hosting, Repository...\n", "Github and the others are free for open source projects only.\nHowever, Xp-dev is free as in free beer even to commercial ones. http://www.xp-dev.com/\n" ]
[ 3, 2, 1, 0 ]
[]
[]
[ "php", "python", "versioning", "web_services" ]
stackoverflow_0001005548_php_python_versioning_web_services.txt
Q: Make my code handle in the background function calls that take a long time to finish Certain functions in my code take a long time to return. I don't need the return value and I'd like to execute the next lines of code in the script before the slow function returns. More precisely, the functions send out commands via USB to another system (via a C++ library with SWIG) and once the other system has completed the task, it returns an "OK" value. I have reproduced the problem in the following example. How can I make "tic" and "toc" print one after the other without any delay? I suppose the solution involves threads, but I am not too familiar with them. Can anyone show me a simple way to solve this problem? from math import sqrt from time import sleep def longcalc(): total = 1e6 for i in range(total): r = sqrt(i) return r def longtime(): #Do stuff here sleep(1) return "sleep done" print "tic" longcalc() print "toc" longtime() print "tic" A: Unless the SWIGged C++ code is specifically set up to release the GIL (Global Interpreter Lock) before long delays and re-acquire it before getting back to Python, multi-threading might not prove very useful in practice. You could try multiprocessing instead: from multiprocessing import Process if __name__ == '__main__': print "tic" Process(target=longcalc).start() print "toc" Process(target=longtime).start() print "tic" multiprocessing is in the standard library in Python 2.6 and later, but can be separately downloaded and installed for versions 2.5 and 2.4. Edit: the asker is of course trying to do something more complicated than this, and in a comment explains: """I get a bunch of errors ending with: "pickle.PicklingError: Can't pickle <type 'PySwigObject'>: it's not found as __builtin__.PySwigObject". Can this be solved without reorganizing all my code? Process was called from inside a method bound to a button to my wxPython interface.""" multiprocessing does need to pickle objects to cross process boundaries; not sure what SWIGged object exactly is involved here, but, unless you can find a way to serialize and deserialize it, and register that with the copy_reg module, you need to avoid passing it across the boundary (make SWIGged objects owned and used by a single process, don't have them as module-global objects particularly in __main__, communicate among processes with Queue.Queue through objects that don't contain SWIGged objects, etc). The early errors (if different than the one you report "ending with") might actually be more significant, but I can't guess without seeing them. A: from threading import Thread # ... your code calcthread = Thread(target=longcalc) timethread = Thread(target=longtime) print "tic" calcthread.start() print "toc" timethread.start() print "tic" Have a look at the python threading docs for more information about multithreading in python. A word of warning about multithreading: it can be hard. Very hard. Debugging multithreaded software can lead to some of the worst experiences you will ever have as a software developer. So before you delve into the world of potential deadlocks and race conditions, be absolutely sure that it makes sense to convert your synchronous USB interactions into ansynchronous ones. Specifically, ensure that any code dependent upon the async code is executed after it has been completed (via a callback method or something similar). 
A: You can use a Future, which is not included in the standard library, but very simple to implement: from threading import Thread, Event class Future(object): def __init__(self, thunk): self._thunk = thunk self._event = Event() self._result = None self._failed = None Thread(target=self._run).start() def _run(self): try: self._result = self._thunk() except Exception, e: self._failed = True self._result = e else: self._failed = False self._event.set() def wait(self): self._event.wait() if self._failed: raise self._result else: return self._result You would use this particular implementation like this: import time def work(): for x in range(3): time.sleep(1) print 'Tick...' print 'Done!' return 'Result!' def main(): print 'Starting up...' f = Future(work) print 'Doing more main thread work...' time.sleep(1.5) print 'Now waiting...' print 'Got result: %s' % f.wait() Unfortunately, when using a system that has no "main" thread, it's hard to tell when to call "wait"; you obviously don't want to stop processing until you absolutely need an answer. With Twisted, you can use deferToThread, which allows you to return to the main loop. The idiomatically equivalent code in Twisted would be something like this: import time from twisted.internet import reactor from twisted.internet.task import deferLater from twisted.internet.threads import deferToThread from twisted.internet.defer import inlineCallbacks def work(): for x in range(3): time.sleep(1) print 'Tick...' print 'Done!' return 'Result!' @inlineCallbacks def main(): print 'Starting up...' d = deferToThread(work) print 'Doing more main thread work...' yield deferLater(reactor, 1.5, lambda : None) print "Now 'waiting'..." print 'Got result: %s' % (yield d) although in order to actually start up the reactor and exit when it's finished, you'd need to do this as well: reactor.callWhenRunning( lambda : main().addCallback(lambda _: reactor.stop())) reactor.run() The main difference with Twisted is that if more "stuff" happens in the main thread - other timed events fire, other network connections get traffic, buttons get clicked in a GUI - that work will happen seamlessly, because the deferLater and the yield d don't actually stop the whole thread, they only pause the "main" inlineCallbacks coroutine.
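Since the question ultimately wants the device's "OK" result back without blocking the calling code, a small callback-style variant of the threading answer may also help; send_usb_command below is only a stand-in for the real SWIG call:

import threading
import time

def send_usb_command():            # stand-in for the slow USB/SWIG call
    time.sleep(1)
    return "OK"

def run_in_background(func, callback):
    """Run func() in a worker thread and pass its result to callback."""
    def worker():
        callback(func())
    t = threading.Thread(target=worker)
    t.setDaemon(True)
    t.start()
    return t

def done(result):
    print "slow call returned:", result

print "tic"
t = run_in_background(send_usb_command, done)
print "toc"                        # printed immediately, before the callback fires
t.join()                           # demo only: keep the script alive until the worker finishes

The GIL caveat from the first answer still applies: this only overlaps useful work if the C++ call releases the GIL while it waits.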
Make my code handle in the background function calls that take a long time to finish
Certain functions in my code take a long time to return. I don't need the return value and I'd like to execute the next lines of code in the script before the slow function returns. More precisely, the functions send out commands via USB to another system (via a C++ library with SWIG) and once the other system has completed the task, it returns an "OK" value. I have reproduced the problem in the following example. How can I make "tic" and "toc" print one after the other without any delay? I suppose the solution involves threads, but I am not too familiar with them. Can anyone show me a simple way to solve this problem? from math import sqrt from time import sleep def longcalc(): total = 1e6 for i in range(total): r = sqrt(i) return r def longtime(): #Do stuff here sleep(1) return "sleep done" print "tic" longcalc() print "toc" longtime() print "tic"
[ "Unless the SWIGged C++ code is specifically set up to release the GIL (Global Interpreter Lock) before long delays and re-acquire it before getting back to Python, multi-threading might not prove very useful in practice. You could try multiprocessing instead:\nfrom multiprocessing import Process\n\nif __name__ == '__main__':\n print \"tic\"\n Process(target=longcalc).start()\n print \"toc\"\n Process(target=longtime).start()\n print \"tic\"\n\nmultiprocessing is in the standard library in Python 2.6 and later, but can be separately downloaded and installed for versions 2.5 and 2.4.\nEdit: the asker is of course trying to do something more complicated than this, and in a comment explains:\n\"\"\"I get a bunch of errors ending with: \"pickle.PicklingError: Can't pickle <type 'PySwigObject'>: it's not found as __builtin__.PySwigObject\". Can this be solved without reorganizing all my code? Process was called from inside a method bound to a button to my wxPython interface.\"\"\"\nmultiprocessing does need to pickle objects to cross process boundaries; not sure what SWIGged object exactly is involved here, but, unless you can find a way to serialize and deserialize it, and register that with the copy_reg module, you need to avoid passing it across the boundary (make SWIGged objects owned and used by a single process, don't have them as module-global objects particularly in __main__, communicate among processes with Queue.Queue through objects that don't contain SWIGged objects, etc).\nThe early errors (if different than the one you report \"ending with\") might actually be more significant, but I can't guess without seeing them.\n", "from threading import Thread\n# ... your code \n\ncalcthread = Thread(target=longcalc)\ntimethread = Thread(target=longtime)\n\nprint \"tic\"\ncalcthread.start()\nprint \"toc\"\ntimethread.start()\nprint \"tic\"\n\nHave a look at the python threading docs for more information about multithreading in python. \nA word of warning about multithreading: it can be hard. Very hard. Debugging multithreaded software can lead to some of the worst experiences you will ever have as a software developer. \nSo before you delve into the world of potential deadlocks and race conditions, be absolutely sure that it makes sense to convert your synchronous USB interactions into ansynchronous ones. 
Specifically, ensure that any code dependent upon the async code is executed after it has been completed (via a callback method or something similar).\n", "You can use a Future, which is not included in the standard library, but very simple to implement:\nfrom threading import Thread, Event\n\nclass Future(object):\n def __init__(self, thunk):\n self._thunk = thunk\n self._event = Event()\n self._result = None\n self._failed = None\n Thread(target=self._run).start()\n\n def _run(self):\n try:\n self._result = self._thunk()\n except Exception, e:\n self._failed = True\n self._result = e\n else:\n self._failed = False\n self._event.set()\n\n def wait(self):\n self._event.wait()\n if self._failed:\n raise self._result\n else:\n return self._result\n\nYou would use this particular implementation like this:\nimport time\n\ndef work():\n for x in range(3):\n time.sleep(1)\n print 'Tick...'\n print 'Done!'\n return 'Result!'\n\ndef main():\n print 'Starting up...'\n f = Future(work)\n print 'Doing more main thread work...'\n time.sleep(1.5)\n print 'Now waiting...'\n print 'Got result: %s' % f.wait()\n\nUnfortunately, when using a system that has no \"main\" thread, it's hard to tell when to call \"wait\"; you obviously don't want to stop processing until you absolutely need an answer.\nWith Twisted, you can use deferToThread, which allows you to return to the main loop. The idiomatically equivalent code in Twisted would be something like this:\nimport time\n\nfrom twisted.internet import reactor\nfrom twisted.internet.task import deferLater\nfrom twisted.internet.threads import deferToThread\nfrom twisted.internet.defer import inlineCallbacks\n\ndef work():\n for x in range(3):\n time.sleep(1)\n print 'Tick...'\n print 'Done!'\n return 'Result!'\n\n@inlineCallbacks\ndef main():\n print 'Starting up...'\n d = deferToThread(work)\n print 'Doing more main thread work...'\n yield deferLater(reactor, 1.5, lambda : None)\n print \"Now 'waiting'...\"\n print 'Got result: %s' % (yield d)\n\nalthough in order to actually start up the reactor and exit when it's finished, you'd need to do this as well:\nreactor.callWhenRunning(\n lambda : main().addCallback(lambda _: reactor.stop()))\nreactor.run()\n\nThe main difference with Twisted is that if more \"stuff\" happens in the main thread - other timed events fire, other network connections get traffic, buttons get clicked in a GUI - that work will happen seamlessly, because the deferLater and the yield d don't actually stop the whole thread, they only pause the \"main\" inlineCallbacks coroutine.\n" ]
[ 4, 1, 0 ]
[]
[]
[ "multithreading", "python" ]
stackoverflow_0000998674_multithreading_python.txt
Q: Capitalizing non-ASCII words in Python How to capitalize words containing non-ASCII characters in Python? Is there a way to tune string's capitalize() method to do that? A: Use Unicode strings: # coding: cp1252 print u"é".capitalize() # Prints É If all you have is an 8-bit string, decode it into Unicode first: # coding: cp1252 print "é".decode('cp1252').capitalize() # Prints É If you then need it as an 8-bit string again, encode it: # coding: cp1252 print "é".decode('cp1252').capitalize().encode('cp1252') # Prints É (assuming your terminal is happy to receive cp1252) A: capitalize() should Just Work™ for Unicode strings.
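To apply the same idea to every word of a byte string, a small sketch along the lines of the accepted approach (it assumes cp1252-encoded input; adjust the encoding to whatever your data really uses):

def capitalize_words(s, encoding='cp1252'):
    # Decode to unicode, capitalize each word, then re-encode.
    words = s.decode(encoding).split()
    return u' '.join(w.capitalize() for w in words).encode(encoding)

print capitalize_words('\xe9cole \xe9l\xe9mentaire')
# prints 'École Élémentaire' on a cp1252/latin-1 terminal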
Capitalizing non-ASCII words in Python
How to capitalize words containing non-ASCII characters in Python? Is there a way to tune string's capitalize() method to do that?
[ "Use Unicode strings:\n# coding: cp1252\nprint u\"é\".capitalize()\n# Prints É\n\nIf all you have is an 8-bit string, decode it into Unicode first:\n# coding: cp1252\nprint \"é\".decode('cp1252').capitalize()\n# Prints É\n\nIf you then need it as an 8-bit string again, encode it:\n# coding: cp1252\nprint \"é\".decode('cp1252').capitalize().encode('cp1252')\n# Prints É (assuming your terminal is happy to receive cp1252)\n\n", "capitalize() should Just Work™ for Unicode strings.\n" ]
[ 10, 1 ]
[]
[]
[ "ascii", "capitalization", "python", "unicode" ]
stackoverflow_0001006450_ascii_capitalization_python_unicode.txt
Q: A Python buffer that you can truncate from the left? Right now, I am buffering bytes using strings, StringIO, or cStringIO. But, I often need to remove bytes from the left side of the buffer. A naive approach would rebuild the entire buffer. Is there an optimal way to do this, if left-truncating is a very common operation? Python's garbage collector should actually GC the truncated bytes. Any sort of algorithm for this (keep the buffer in small pieces?), or an existing implementation, would really help. Edit: I tried to use Python 2.7's memoryview for this, but sadly, the data outside the "view" isn't GCed when the original reference is deleted: # (This will use ~2GB of memory, not 50MB) memoryview # Requires Python 2.7+ smalls = [] for i in xrange(10): big = memoryview('z'*(200*1000*1000)) small = big[195*1000*1000:] del big smalls.append(small) print '.', A: A deque will be efficient if left-removal operations are frequent (Unlike using a list, string or buffer, it's amortised O(1) for either-end removal). It will be more costly memory-wise than a string however, as you'll be storing each character as its own string object, rather than a packed sequence. Alternatively, you could create your own implementation (eg. a linked list of string / buffer objects of fixed size), which may store the data more compactly. A: Build your buffer as a list of characters or lines and slice the list. Only join as string on output. This is pretty efficient for most types of 'mutable string' behaviour. The GC will collect the truncated bytes because they are no longer referenced in the list. UPDATE: For modifying the list head you can simply reverse the list. This sounds like an inefficient thing to do however python's list implementation optimises this internally. from http://effbot.org/zone/python-list.htm : Reversing is fast, so temporarily reversing the list can often speed things up if you need to remove and insert a bunch of items at the beginning of the list: L.reverse() # append/insert/pop/delete at far end L.reverse()
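A rough sketch of the "keep the buffer in small pieces" idea from the question and the first answer: append bytes on the right, consume from the left without rebuilding the whole buffer. It is untested and the class and method names are made up:

from collections import deque

class LeftTruncatableBuffer(object):
    """FIFO byte buffer: append() on the right, read(n) consumes from the left."""
    def __init__(self):
        self._chunks = deque()
        self._length = 0

    def __len__(self):
        return self._length

    def append(self, data):
        if data:
            self._chunks.append(data)
            self._length += len(data)

    def read(self, n):
        """Remove and return up to n bytes from the left of the buffer."""
        out = []
        while n > 0 and self._chunks:
            chunk = self._chunks.popleft()
            if len(chunk) <= n:
                out.append(chunk)
                n -= len(chunk)
            else:
                out.append(chunk[:n])
                self._chunks.appendleft(chunk[n:])   # put the remainder back
                n = 0
        data = ''.join(out)
        self._length -= len(data)
        return data

buf = LeftTruncatableBuffer()
buf.append('hello ')
buf.append('world')
print buf.read(4)      # 'hell'
print buf.read(100)    # 'o world'

Only the first chunk is ever sliced, so a left-truncation never copies the rest of the buffer, and fully consumed chunks become garbage-collectable as soon as they are popped.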
A Python buffer that you can truncate from the left?
Right now, I am buffering bytes using strings, StringIO, or cStringIO. But, I often need to remove bytes from the left side of the buffer. A naive approach would rebuild the entire buffer. Is there an optimal way to do this, if left-truncating is a very common operation? Python's garbage collector should actually GC the truncated bytes. Any sort of algorithm for this (keep the buffer in small pieces?), or an existing implementation, would really help. Edit: I tried to use Python 2.7's memoryview for this, but sadly, the data outside the "view" isn't GCed when the original reference is deleted: # (This will use ~2GB of memory, not 50MB) memoryview # Requires Python 2.7+ smalls = [] for i in xrange(10): big = memoryview('z'*(200*1000*1000)) small = big[195*1000*1000:] del big smalls.append(small) print '.',
[ "A deque will be efficient if left-removal operations are frequent (Unlike using a list, string or buffer, it's amortised O(1) for either-end removal). It will be more costly memory-wise than a string however, as you'll be storing each character as its own string object, rather than a packed sequence.\nAlternatively, you could create your own implementation (eg. a linked list of string / buffer objects of fixed size), which may store the data more compactly.\n", "Build your buffer as a list of characters or lines and slice the list. Only join as string on output. This is pretty efficient for most types of 'mutable string' behaviour.\nThe GC will collect the truncated bytes because they are no longer referenced in the list.\nUPDATE: For modifying the list head you can simply reverse the list. This sounds like an inefficient thing to do however python's list implementation optimises this internally.\nfrom http://effbot.org/zone/python-list.htm :\n\nReversing is fast, so temporarily\n reversing the list can often speed\n things up if you need to remove and\n insert a bunch of items at the\n beginning of the list:\nL.reverse()\n# append/insert/pop/delete at far end\nL.reverse()\n\n\n" ]
[ 3, 1 ]
[]
[]
[ "buffer", "memoryview", "python", "string" ]
stackoverflow_0001006171_buffer_memoryview_python_string.txt
Q: Multiple Django Admin Sites on one Apache... When I log into one I get logged out of the other I have two Django projects and applications running on the same Apache installation. Both projects and both applications have the same name, for example myproject.myapplication. They are each in separately named directories so it looks like .../dir1/myproject/myapplication and .../dir2/myproject/myapplication. Everything about the actual public facing applications works fine. When I log into either of the admin sites it seems ok, but if I switch and do any work on the opposite admin site I get logged out of the first one. In short I can't be logged into both admin sites at once. Any help would be appreciated. A: Set the SESSION_COOKIE_DOMAIN option. You need to set the domain for each of your sites so the cookies don't override each other. You can also use SESSION_COOKIE_NAME to make the cookie names different for each site. A: I ran into a similar issue with a live & staging site hosted on the same Apache server (on CentOS). I added unique SESSION_COOKIE_NAME values to each site's settings (in local_settings.py, create one if you don't have one and import it in your settings.py), set the SESSION_COOKIE_DOMAIN for the live site and set SESSION_COOKIE_DOMAIN = None for staging. I also ran "python manage.py cleanup" to (hopefully) clean any conflicted information out of the database. A: Well, if they have the same project and application names, then the databases and tables will be the same. Your django_session table which holds the session information is the same for both sites. You have to use different project names that will go in different MySQL (or whatever) databases. A: The session information is stored in the database, so if you're sharing the database with both running instances, logging off one location will log you off both. If your circumstance requires you to share the database, the easiest workaround is probably to create a second user account with admin privileges. A: Let me guess, is this running on your localhost? and you have each site assigned to a different port? i.e. localhost:8000, localhost:8001 ..? I've had the same problem! (although I wasn't running Apache per se) When you login to the admin site, you get a cookie in your browser that's associated with the domain "localhost", the cookie stores a pointer of some sort to a session stored in the database on the server. When you visit the other site, the server tries to interpret the cookie, but fails. I'm guessing it deletes the cookie because it's "garbage". What you can do in this case, is change your domain use localhost:8000 for the first site, and 127.0.0.1:8001 for the second site. this way the second site doesn't attempt to read the cookie that was set by the first site I also think you can edit your HOSTS file to add more aliases to 127.0.0.1 if you need to. (but I haven't tried this)
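Concretely, the fix from the first answer is a one-line difference between the two projects' settings files; the names and domains below are placeholders:

# settings.py of the first project
SESSION_COOKIE_NAME = 'site_a_sessionid'
# SESSION_COOKIE_DOMAIN = 'site-a.example.com'   # only needed if the sites use separate domains

# settings.py of the second project
SESSION_COOKIE_NAME = 'site_b_sessionid'
# SESSION_COOKIE_DOMAIN = 'site-b.example.com'

With distinct cookie names the two admin sessions no longer overwrite each other, even when both sites are served from the same host name.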
Multiple Django Admin Sites on one Apache... When I log into one I get logged out of the other
I have two Django projects and applications running on the same Apache installation. Both projects and both applications have the same name, for example myproject.myapplication. They are each in separately named directories so it looks like .../dir1/myproject/myapplication and .../dir2/myproject/myapplication. Everything about the actual public facing applications works fine. When I log into either of the admin sites it seems ok, but if I switch and do any work on the opposite admin site I get logged out of the first one. In short I can't be logged into both admin sites at once. Any help would be appreciated.
[ "Set the SESSION_COOKIE_DOMAIN option. You need to set the domain for each of your sites so the cookies don't override each other.\nYou can also use SESSION_COOKIE_NAME to make the cookie names different for each site.\n", "I ran into a similar issue with a live & staging site hosted on the same Apache server (on CentOS). I added unique SESSION_COOKIE_NAME values to each site's settings (in local_settings.py, create one if you don't have one and import it in your settings.py), set the SESSION_COOKIE_DOMAIN for the live site and set SESSION_COOKIE_DOMAIN = None for staging. I also ran \"python manage.py cleanup\" to (hopefully) clean any conflicted information out of the database.\n", "Well, if they have the same project and application names, then the databases and tables will be the same. Your django_session table which holds the session information is the same for both sites. You have to use different project names that will go in different MySQL (or whatever) databases.\n", "The session information is stored in the database, so if you're sharing the database with both running instances, logging off one location will log you off both. If your circumstance requires you to share the database, the easiest workaround is probably to create a second user account with admin privileges.\n", "Let me guess, is this running on your localhost? and you have each site assigned to a different port? i.e. localhost:8000, localhost:8001 ..?\nI've had the same problem! (although I wasn't running Apache per se)\nWhen you login to the admin site, you get a cookie in your browser that's associated with the domain \"localhost\", the cookie stores a pointer of some sort to a session stored in the database on the server. \nWhen you visit the other site, the server tries to interpret the cookie, but fails. I'm guessing it deletes the cookie because it's \"garbage\".\nWhat you can do in this case, is change your domain\nuse localhost:8000 for the first site, and 127.0.0.1:8001 for the second site. this way the second site doesn't attempt to read the cookie that was set by the first site\nI also think you can edit your HOSTS file to add more aliases to 127.0.0.1 if you need to. (but I haven't tried this)\n" ]
[ 9, 1, 0, 0, 0 ]
[]
[]
[ "admin", "django", "python" ]
stackoverflow_0000327142_admin_django_python.txt
Q: How to tell Buildout to install a egg from a URL (w/o pypi) I have some egg accessible as a URL, say http://myhosting.com/somepkg.egg . Now I don't have this somepkg listed on pypi. How do I tell buildout to fetch and install it for me. I have tried a few recipes but no luck so far. TIA A: You should just be able to add a 'find-links' option to your [buildout] section within the buildout.cfg file. I just tested this internally with the following buildout.cfg. [buildout] find-links = http://buildslave01/eggs/hostapi.core-1.0_r102-py2.4.egg parts = mypython [mypython] recipe = zc.recipe.egg interpreter = mypython eggs = hostapi.core You can just specify the full path to the egg as the value to 'find-links.' Make sure the egg's 'pyx.y' version matches your local Python version. If they don't match, you'll get a not found error which is slightly misleading.
How to tell Buildout to install a egg from a URL (w/o pypi)
I have some egg accessible as a URL, say http://myhosting.com/somepkg.egg . Now I don't have this somepkg listed on pypi. How do I tell buildout to fetch and install it for me. I have tried a few recipes but no luck so far. TIA
[ "You should just be able to add a 'find-links' option to your [buildout] section within the buildout.cfg file. I just tested this internally with the following buildout.cfg.\n[buildout]\nfind-links = http://buildslave01/eggs/hostapi.core-1.0_r102-py2.4.egg\nparts = mypython\n\n[mypython]\nrecipe = zc.recipe.egg\ninterpreter = mypython\neggs = hostapi.core\n\nYou can just specify the full path to the egg as the value to 'find-links.' Make sure the egg's 'pyx.y' version matches your local Python version. If they don't match, you'll get a not found error which is slightly misleading.\n" ]
[ 5 ]
[]
[]
[ "buildout", "egg", "python" ]
stackoverflow_0001007488_buildout_egg_python.txt
Q: Popen and python Working on some code and I'm given the error when running it from the command prompt... NameError: name 'Popen' is not defined but I've imported both import os and import sys. Here's part of the code exepath = os.path.join(EXE File location is here) exepath = '"' + os.path.normpath(exepath) + '"' cmd = [exepath, '-el', str(el), '-n', str(z)] print 'The python program is running this command:' print cmd process = Popen(cmd, stderr=STDOUT, stdout=PIPE) outputstring = process.communicate()[0] Am I missing something elementary? I wouldn't doubt it. Thanks! A: you should do: import subprocess subprocess.Popen(cmd, stderr=subprocess.STDOUT, stdout=subprocess.PIPE) # etc. A: Popen is defined in the subprocess module import subprocess ... subprocess.Popen(...) Or: from subprocess import Popen Popen(...) A: When you import a module, the module's members don't become part of the global namespace: you still have to prefix them with modulename.. So, you have to say import os process = os.popen(command, mode, bufsize) Alternatively, you can use the from module import names syntax to import things into the global namespace: from os import popen # Or, from os import * to import everything process = popen(command, mode, bufsize) A: If your import looks like this: import os Then you need to reference the things included in os like this: os.popen() If you dont want to do that, you can change your import to look like this: from os import * Which is not recommended because it can lead to namespace ambiguities (things in your code conflicting with things imported elsewhere.) You could also just do: from os import popen Which is more explicit and easier to read than from os import * A: This looks like Popen from the subprocess module (python >= 2.4) from subprocess import Popen
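Applying the answers to the snippet in the question, the only change needed is to import Popen, PIPE and STDOUT from subprocess (exepath, el and z are assumed to already be set as in the question):

from subprocess import Popen, PIPE, STDOUT

cmd = [exepath, '-el', str(el), '-n', str(z)]
print 'The python program is running this command:'
print cmd
process = Popen(cmd, stderr=STDOUT, stdout=PIPE)
outputstring = process.communicate()[0]
print outputstring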
Popen and python
Working on some code and I'm given the error when running it from the command prompt... NameError: name 'Popen' is not defined but I've imported both import os and import sys. Here's part of the code exepath = os.path.join(EXE File location is here) exepath = '"' + os.path.normpath(exepath) + '"' cmd = [exepath, '-el', str(el), '-n', str(z)] print 'The python program is running this command:' print cmd process = Popen(cmd, stderr=STDOUT, stdout=PIPE) outputstring = process.communicate()[0] Am I missing something elementary? I wouldn't doubt it. Thanks!
[ "you should do:\nimport subprocess\nsubprocess.Popen(cmd, stderr=subprocess.STDOUT, stdout=subprocess.PIPE)\n# etc.\n\n", "Popen is defined in the subprocess module\nimport subprocess\n...\nsubprocess.Popen(...)\n\nOr:\nfrom subprocess import Popen\nPopen(...)\n\n", "When you import a module, the module's members don't become part of the global namespace: you still have to prefix them with modulename.. So, you have to say\nimport os\nprocess = os.popen(command, mode, bufsize)\n\nAlternatively, you can use the from module import names syntax to import things into the global namespace:\nfrom os import popen # Or, from os import * to import everything\nprocess = popen(command, mode, bufsize)\n\n", "If your import looks like this:\nimport os\n\nThen you need to reference the things included in os like this:\nos.popen()\n\nIf you dont want to do that, you can change your import to look like this:\nfrom os import *\n\nWhich is not recommended because it can lead to namespace ambiguities (things in your code conflicting with things imported elsewhere.) You could also just do:\nfrom os import popen\n\nWhich is more explicit and easier to read than from os import *\n", "This looks like Popen from the subprocess module (python >= 2.4)\nfrom subprocess import Popen\n\n" ]
[ 38, 7, 2, 1, 1 ]
[ "You should be using os.popen() if you simply import os.\n" ]
[ -2 ]
[ "popen", "python" ]
stackoverflow_0001007855_popen_python.txt
Q: How to spawn parallel child processes on a multi-processor system? I have a Python script that I want to use as a controller to another Python script. I have a server with 64 processors, so want to spawn up to 64 child processes of this second Python script. The child script is called: $ python create_graphs.py --name=NAME where NAME is something like XYZ, ABC, NYU etc. In my parent controller script I retrieve the name variable from a list: my_list = [ 'XYZ', 'ABC', 'NYU' ] So my question is, what is the best way to spawn off these processes as children? I want to limit the number of children to 64 at a time, so need to track the status (if the child process has finished or not) so I can efficiently keep the whole generation running. I looked into using the subprocess package, but rejected it because it only spawns one child at a time. I finally found the multiprocessor package, but I admit to being overwhelmed by the whole threads vs. subprocesses documentation. Right now, my script uses subprocess.call to only spawn one child at a time and looks like this: #!/path/to/python import subprocess, multiprocessing, Queue from multiprocessing import Process my_list = [ 'XYZ', 'ABC', 'NYU' ] if __name__ == '__main__': processors = multiprocessing.cpu_count() for i in range(len(my_list)): if( i < processors ): cmd = ["python", "/path/to/create_graphs.py", "--name="+ my_list[i]] child = subprocess.call( cmd, shell=False ) I really want it to spawn up 64 children at a time. In other stackoverflow questions I saw people using Queue, but it seems like that creates a performance hit? A: What you are looking for is the process pool class in multiprocessing. import multiprocessing import subprocess def work(cmd): return subprocess.call(cmd, shell=False) if __name__ == '__main__': count = multiprocessing.cpu_count() pool = multiprocessing.Pool(processes=count) print pool.map(work, ['ls'] * count) And here is a calculation example to make it easier to understand. The following will divide 10000 tasks on N processes where N is the cpu count. Note that I'm passing None as the number of processes. This will cause the Pool class to use cpu_count for the number of processes (reference) import multiprocessing import subprocess def calculate(value): return value * 10 if __name__ == '__main__': pool = multiprocessing.Pool(None) tasks = range(10000) results = [] r = pool.map_async(calculate, tasks, callback=results.append) r.wait() # Wait on the results print results A: Here is the solution I came up, based on Nadia and Jim's comments. I am not sure if it is the best way, but it works. The original child script being called needs to be a shell script because I need to use some 3rd party apps including Matlab. So I had to take it out of Python and code it in bash. 
import sys import os import multiprocessing import subprocess def work(staname): print 'Processing station:',staname print 'Parent process:', os.getppid() print 'Process id:', os.getpid() cmd = [ "/bin/bash" "/path/to/executable/create_graphs.sh","--name=%s" % (staname) ] return subprocess.call(cmd, shell=False) if __name__ == '__main__': my_list = [ 'XYZ', 'ABC', 'NYU' ] my_list.sort() print my_list # Get the number of processors available num_processes = multiprocessing.cpu_count() threads = [] len_stas = len(my_list) print "+++ Number of stations to process: %s" % (len_stas) # run until all the threads are done, and there is no data left for list_item in my_list: # if we aren't using all the processors AND there is still data left to # compute, then spawn another thread if( len(threads) < num_processes ): p = multiprocessing.Process(target=work,args=[list_item]) p.start() print p, p.is_alive() threads.append(p) else: for thread in threads: if not thread.is_alive(): threads.remove(thread) Does this seem like a reasonable solution? I tried to use Jim's while loop format, but my script just returned nothing. I am not sure why that would be. Here is the output when I run the script with Jim's 'while' loop replacing the 'for' loop: hostname{me}2% controller.py ['ABC', 'NYU', 'XYZ'] Number of processes: 64 +++ Number of stations to process: 3 hostname{me}3% When I run it with the 'for' loop, I get something more meaningful: hostname{me}6% controller.py ['ABC', 'NYU', 'XYZ'] Number of processes: 64 +++ Number of stations to process: 3 Processing station: ABC Parent process: 1056 Process id: 1068 Processing station: NYU Parent process: 1056 Process id: 1069 Processing station: XYZ Parent process: 1056 Process id: 1071 hostname{me}7% So this works, and I am happy. However, I still don't get why I can't use Jim's 'while' style loop instead of the 'for' loop I am using. Thanks for all the help - I am impressed with the breadth of knowledge @ stackoverflow. A: I would definitely use multiprocessing rather than rolling my own solution using subprocess. A: I don't think you need queue unless you intend to get data out of the applications (Which if you do want data, I think it may be easier to add it to a database anyway) but try this on for size: put the contents of your create_graphs.py script all into a function called "create_graphs" import threading from create_graphs import create_graphs num_processes = 64 my_list = [ 'XYZ', 'ABC', 'NYU' ] threads = [] # run until all the threads are done, and there is no data left while threads or my_list: # if we aren't using all the processors AND there is still data left to # compute, then spawn another thread if (len(threads) < num_processes) and my_list: t = threading.Thread(target=create_graphs, args=[ my_list.pop() ]) t.setDaemon(True) t.start() threads.append(t) # in the case that we have the maximum number of threads check if any of them # are done. (also do this when we run out of data, until all the threads are done) else: for thread in threads: if not thread.isAlive(): threads.remove(thread) I know that this will result in 1 less threads than processors, which is probably good, it leaves a processor to manage the threads, disk i/o, and other things happening on the computer. If you decide you want to use the last core just add one to it edit: I think I may have misinterpreted the purpose of my_list. You do not need my_list to keep track of the threads at all (as they're all referenced by the items in the threads list). 
But this is a fine way of feeding the processes input - or even better: use a generator function ;) The purpose of my_list and threads my_list holds the data that you need to process in your function threads is just a list of the currently running threads the while loop does two things, start new threads to process the data, and check if any threads are done running. So as long as you have either (a) more data to process, or (b) threads that aren't finished running.... you want to program to continue running. Once both lists are empty they will evaluate to False and the while loop will exit
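For completeness, the Pool answer above applied to the question's actual command line — at most cpu_count() children run at once and map() blocks until every station has been processed (the script path is the question's placeholder):

import multiprocessing
import subprocess

def work(name):
    cmd = ['python', '/path/to/create_graphs.py', '--name=' + name]
    return subprocess.call(cmd, shell=False)

if __name__ == '__main__':
    my_list = ['XYZ', 'ABC', 'NYU']
    pool = multiprocessing.Pool(processes=multiprocessing.cpu_count())
    print pool.map(work, my_list)   # list of the children's return codes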
How to spawn parallel child processes on a multi-processor system?
I have a Python script that I want to use as a controller to another Python script. I have a server with 64 processors, so want to spawn up to 64 child processes of this second Python script. The child script is called: $ python create_graphs.py --name=NAME where NAME is something like XYZ, ABC, NYU etc. In my parent controller script I retrieve the name variable from a list: my_list = [ 'XYZ', 'ABC', 'NYU' ] So my question is, what is the best way to spawn off these processes as children? I want to limit the number of children to 64 at a time, so need to track the status (if the child process has finished or not) so I can efficiently keep the whole generation running. I looked into using the subprocess package, but rejected it because it only spawns one child at a time. I finally found the multiprocessor package, but I admit to being overwhelmed by the whole threads vs. subprocesses documentation. Right now, my script uses subprocess.call to only spawn one child at a time and looks like this: #!/path/to/python import subprocess, multiprocessing, Queue from multiprocessing import Process my_list = [ 'XYZ', 'ABC', 'NYU' ] if __name__ == '__main__': processors = multiprocessing.cpu_count() for i in range(len(my_list)): if( i < processors ): cmd = ["python", "/path/to/create_graphs.py", "--name="+ my_list[i]] child = subprocess.call( cmd, shell=False ) I really want it to spawn up 64 children at a time. In other stackoverflow questions I saw people using Queue, but it seems like that creates a performance hit?
[ "What you are looking for is the process pool class in multiprocessing.\nimport multiprocessing\nimport subprocess\n\ndef work(cmd):\n return subprocess.call(cmd, shell=False)\n\nif __name__ == '__main__':\n count = multiprocessing.cpu_count()\n pool = multiprocessing.Pool(processes=count)\n print pool.map(work, ['ls'] * count)\n\nAnd here is a calculation example to make it easier to understand. The following will divide 10000 tasks on N processes where N is the cpu count. Note that I'm passing None as the number of processes. This will cause the Pool class to use cpu_count for the number of processes (reference)\nimport multiprocessing\nimport subprocess\n\ndef calculate(value):\n return value * 10\n\nif __name__ == '__main__':\n pool = multiprocessing.Pool(None)\n tasks = range(10000)\n results = []\n r = pool.map_async(calculate, tasks, callback=results.append)\n r.wait() # Wait on the results\n print results\n\n", "Here is the solution I came up, based on Nadia and Jim's comments. I am not sure if it is the best way, but it works. The original child script being called needs to be a shell script because I need to use some 3rd party apps including Matlab. So I had to take it out of Python and code it in bash.\nimport sys\nimport os\nimport multiprocessing\nimport subprocess\n\ndef work(staname):\n print 'Processing station:',staname\n print 'Parent process:', os.getppid()\n print 'Process id:', os.getpid()\n cmd = [ \"/bin/bash\" \"/path/to/executable/create_graphs.sh\",\"--name=%s\" % (staname) ]\n return subprocess.call(cmd, shell=False)\n\nif __name__ == '__main__':\n\n my_list = [ 'XYZ', 'ABC', 'NYU' ]\n\n my_list.sort()\n\n print my_list\n\n # Get the number of processors available\n num_processes = multiprocessing.cpu_count()\n\n threads = []\n\n len_stas = len(my_list)\n\n print \"+++ Number of stations to process: %s\" % (len_stas)\n\n # run until all the threads are done, and there is no data left\n\n for list_item in my_list:\n\n # if we aren't using all the processors AND there is still data left to\n # compute, then spawn another thread\n\n if( len(threads) < num_processes ):\n\n p = multiprocessing.Process(target=work,args=[list_item])\n\n p.start()\n\n print p, p.is_alive()\n\n threads.append(p)\n\n else:\n\n for thread in threads:\n\n if not thread.is_alive():\n\n threads.remove(thread)\n\nDoes this seem like a reasonable solution? I tried to use Jim's while loop format, but my script just returned nothing. I am not sure why that would be. Here is the output when I run the script with Jim's 'while' loop replacing the 'for' loop:\nhostname{me}2% controller.py \n['ABC', 'NYU', 'XYZ']\nNumber of processes: 64\n+++ Number of stations to process: 3\nhostname{me}3%\n\nWhen I run it with the 'for' loop, I get something more meaningful:\nhostname{me}6% controller.py \n['ABC', 'NYU', 'XYZ']\nNumber of processes: 64\n+++ Number of stations to process: 3\nProcessing station: ABC\nParent process: 1056\nProcess id: 1068\nProcessing station: NYU\nParent process: 1056\nProcess id: 1069\nProcessing station: XYZ\nParent process: 1056\nProcess id: 1071\nhostname{me}7%\n\nSo this works, and I am happy. However, I still don't get why I can't use Jim's 'while' style loop instead of the 'for' loop I am using. 
Thanks for all the help - I am impressed with the breadth of knowledge @ stackoverflow.\n", "I would definitely use multiprocessing rather than rolling my own solution using subprocess.\n", "I don't think you need queue unless you intend to get data out of the applications (Which if you do want data, I think it may be easier to add it to a database anyway)\nbut try this on for size:\nput the contents of your create_graphs.py script all into a function called \"create_graphs\"\nimport threading\nfrom create_graphs import create_graphs\n\nnum_processes = 64\nmy_list = [ 'XYZ', 'ABC', 'NYU' ]\n\nthreads = []\n\n# run until all the threads are done, and there is no data left\nwhile threads or my_list:\n\n # if we aren't using all the processors AND there is still data left to\n # compute, then spawn another thread\n if (len(threads) < num_processes) and my_list:\n t = threading.Thread(target=create_graphs, args=[ my_list.pop() ])\n t.setDaemon(True)\n t.start()\n threads.append(t)\n\n # in the case that we have the maximum number of threads check if any of them\n # are done. (also do this when we run out of data, until all the threads are done)\n else:\n for thread in threads:\n if not thread.isAlive():\n threads.remove(thread)\n\nI know that this will result in 1 less threads than processors, which is probably good, it leaves a processor to manage the threads, disk i/o, and other things happening on the computer. If you decide you want to use the last core just add one to it\nedit: I think I may have misinterpreted the purpose of my_list. You do not need my_list to keep track of the threads at all (as they're all referenced by the items in the threads list). But this is a fine way of feeding the processes input - or even better: use a generator function ;)\nThe purpose of my_list and threads\nmy_list holds the data that you need to process in your function\nthreads is just a list of the currently running threads\nthe while loop does two things, start new threads to process the data, and check if any threads are done running.\nSo as long as you have either (a) more data to process, or (b) threads that aren't finished running.... you want to program to continue running. Once both lists are empty they will evaluate to False and the while loop will exit\n" ]
[ 70, 3, 1, 1 ]
[]
[]
[ "exec", "multiprocessing", "python", "subprocess" ]
stackoverflow_0000884650_exec_multiprocessing_python_subprocess.txt
Q: Getting distutils to install prebuilt compiled libraries? I manage an open source project (Remix, the source is available there) written in python. We ask users to run python setup.py install to install the software. Recently we added a compiled C++ package (a port of SoundTouch -- go to trunk/externals in the source to see it.) We'd like the setup.py file that installs the base Remix libraries to also install the pysoundtouch14 library. However, we don't want users to have to have a gcc or msvc toolchain on their system. We've precompiled binaries for common platforms (linux-i386, windows, mac 10.5 and 10.6), go to trunk/externals/pysoundtouch14/build to see them. I was hoping that a user who does not have gcc or msvc installed could just run pysoundtouch14's setup.py and it would detect the presence of our prebuilt binaries and just copy them to the right place (/Library/Python/2.5/site-packages, for example.) But that doesn't happen. On a new 10.5 system, for example, setup.py complains about no gcc being installed even though the .so it needs to install is already built in the build/ folder. So I have two direct questions: How can I get setup.py to just "install" prebuilt .so and .pyd files in the right place automatically depending on the platform without requiring a build system? How can one setup.py (our main setup.py file) also run the setup.py of another included package (pysoundtouch14's setup.py?) A: Unfortunately, I'd say that overriding install command is the way to go. This can be done easily, using custom distribution command. For example, see [1] http://svn.zope.org/Zope/branches/2.9/setup.py?rev=69978&view=auto A: Your first question is a tough one given the multiplatform requirement. If it was just windows, you could use the post-installation script option to run a script to handle the libraries or just use a non-python tool such as NSIS. I'm not sure what else can be done other than Almad's suggestion. For your second question, you might want to look into Paver.
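One way the "override a distutils command" suggestion might be fleshed out — a sketch only, with an invented file layout, of a setup.py that skips compilation and copies a prebuilt binary for the current platform into the install directory:

import os
import shutil
import sys
from distutils.core import setup
from distutils.command.install import install

# Hypothetical layout: one prebuilt binary per platform, shipped in the sdist.
PREBUILT = {
    'linux2': os.path.join('prebuilt', 'linux-i386', '_soundtouch.so'),
    'darwin': os.path.join('prebuilt', 'macosx', '_soundtouch.so'),
    'win32': os.path.join('prebuilt', 'win32', '_soundtouch.pyd'),
}

class install_with_prebuilt(install):
    """Run the normal install, then copy the prebuilt extension alongside it."""
    def run(self):
        install.run(self)
        if sys.platform not in PREBUILT:
            raise RuntimeError('no prebuilt binary for %r' % sys.platform)
        shutil.copy(PREBUILT[sys.platform], self.install_lib)

setup(
    name='pysoundtouch14',
    version='1.4.0',
    py_modules=['soundtouch'],              # the pure-Python part, if any
    cmdclass={'install': install_with_prebuilt},
)

For the second part of the question, the outer setup.py could simply invoke this one (for example via subprocess) after its own install step, although a build tool such as Paver makes that kind of chaining less fragile.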
Getting distutils to install prebuilt compiled libraries?
I manage an open source project (Remix, the source is available there) written in python. We ask users to run python setup.py install to install the software. Recently we added a compiled C++ package (a port of SoundTouch -- go to trunk/externals in the source to see it.) We'd like the setup.py file that installs the base Remix libraries to also install the pysoundtouch14 library. However, we don't want users to have to have a gcc or msvc toolchain on their system. We've precompiled binaries for common platforms (linux-i386, windows, mac 10.5 and 10.6), go to trunk/externals/pysoundtouch14/build to see them. I was hoping that a user who does not have gcc or msvc installed could just run pysoundtouch14's setup.py and it would detect the presence of our prebuilt binaries and just copy them to the right place (/Library/Python/2.5/site-packages, for example.) But that doesn't happen. On a new 10.5 system, for example, setup.py complains about no gcc being installed even though the .so it needs to install is already built in the build/ folder. So I have two direct questions: How can I get setup.py to just "install" prebuilt .so and .pyd files in the right place automatically depending on the platform without requiring a build system? How can one setup.py (our main setup.py file) also run the setup.py of another included package (pysoundtouch14's setup.py?)
[ "Unfortunately, I'd say that overriding install command is the way to go.\nThis can be done easily, using custom distribution command. For example, see [1]\nhttp://svn.zope.org/Zope/branches/2.9/setup.py?rev=69978&view=auto\n", "Your first question is a tough one given the multiplatform requirement. If it was just windows, you could use the post-installation script option to run a script to handle the libraries or just use a non-python tool such as NSIS. I'm not sure what else can be done other than Almad's suggestion.\nFor your second question, you might want to look into Paver.\n" ]
[ 1, 1 ]
[]
[]
[ "compiled", "installation", "open_source", "python" ]
stackoverflow_0001002581_compiled_installation_open_source_python.txt
Q: How do I test if a string exists in a Genshi stream? I'm working on a plugin for Trac and am inserting some javascript into the rendered HTML by manipulating the Genshi stream. I need to test if a javascript function is already in the HTML and if it is then overwrite it with a new version, if it isn't then add it to the HTML. How do I perform a search to see if the function is already there? A: Aha!! I have solved this by first attempting to remove the function from the stream: stream = stream | Transformer('.//head/script["functionName()"]').remove() and then adding the updated/new version: stream = stream | Transformer('.//head').append(tag.script(functionNameCode, type="text/javascript"))
How do I test if a string exists in a Genshi stream?
I'm working on a plugin for Trac and am inserting some javascript into the rendered HTML by manipulating the Genshi stream. I need to test if a javascript function is already in the HTML and if it is then overwrite it with a new version, if it isn't then add it to the HTML. How do I perform a search to see if the function is already there?
[ "Aha!! I have solved this by first attempting to remove the function from the stream: \nstream = stream | Transformer('.//head/script[\"functionName()\"]').remove()\n\nand then adding the updated/new version:\nstream = stream | Transformer('.//head').append(tag.script(functionNameCode, type=\"text/javascript\"))\n\n" ]
[ 1 ]
[]
[]
[ "genshi", "python", "stream" ]
stackoverflow_0001008038_genshi_python_stream.txt
Q: Is it possible to launch a Paster shell with some modules pre-imported? Is it possible to run "paster shell blah.ini" (or a variant thereof) and have it automatically load certain libraries? I hate having to always type "from foo.bar import mystuff" as the first command in every paster shell, and would like the computer to do it for me. A: An option to try would be to create a sitecustomize.py script. If you have this in the same folder as your paster shell, the python interpreter should load it up on startup. Let me clarify, sitecustomize.py, if found, is always loaded on startup of the interpreter. So if you put it where it can be found, ideally somewhere that is only found when the paster shell starts, then you should be able to add your imports to it and have them be ready. This is probably your best bet. If the paster shell is a packaged app (a la py2exe) it should still work. See also: http://www.rexx.com/~dkuhlman/pylons_quick_site.html#using-an-ipython-embedded-shell http://pylonshq.com/project/pylonshq/ticket/428 A: If you set the environment variable PYTHONSTARTUP to the name of a file, it will execute that file on opening the interactive prompt. I don't know anything about paster shell, but I assume it works similarly. Alternatively you could look into iPython, which has much more powerful features (particularly when installed with the readline library). For example %run allows you to run a script in the current namespace, or you can use history completion. Edit: Okay. Having looked into it a bit more, I'm fairly certain that paster shell just does a set of useful imports, and could be easily replicated with a short script and ipython and then %run myscript.py Edit: Having looked at the source, it would be very hard to do (I was wrong about the default imports. It parses your config file as well), however if you have Pylons and iPython both installed, then paster shell should use iPython automagically. Double check that both are installed properly, and double check that paster shell isn't using iPython already (it might be looking like normal python prompt).
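A tiny illustration of the sitecustomize.py idea from the first answer; foo.bar is just the placeholder from the question, whether this is picked up depends on the file ending up on sys.path of the interpreter that paster starts, and stuffing names into __builtin__ is a blunt convenience hack rather than a recommended pattern:

# sitecustomize.py -- imported automatically by site.py at interpreter start-up
# when it can be found on sys.path (e.g. dropped into the environment's
# site-packages directory).
import __builtin__
try:
    from foo.bar import mystuff        # the import you are tired of typing
    __builtin__.mystuff = mystuff      # make it visible at the prompt without an import
except ImportError:
    pass                               # don't break interpreters that lack foo.bar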
Is it possible to launch a Paster shell with some modules pre-imported?
Is it possible to run "paster shell blah.ini" (or a variant thereof) and have it automatically load certain libraries? I hate having to always type "from foo.bar import mystuff" as the first command in every paster shell, and would like the computer to do it for me.
[ "An option to try would be to create a sitecustomize.py script. If you have this in the same folder as your paster shell, the python interpreter should load it up on startup. \nLet me clarify, sitecustomize.py, if found, is always loaded on startup of the interpreter. So if you put it where it can be found, ideally somewhere that is only found when the paster shell starts, then you should be able to add your imports to it and have them be ready.\nThis is probably your best bet. If the paster shell is a packaged app (a la py2exe) it should still work.\nSee also:\nhttp://www.rexx.com/~dkuhlman/pylons_quick_site.html#using-an-ipython-embedded-shell\nhttp://pylonshq.com/project/pylonshq/ticket/428\n", "If you set the environment variable PYTHONSTARTUP to the name of a file, it will execute that file on opening the interactive prompt.\nI don't know anything about paster shell, but I assume it works similarly.\nAlternatively you could look into iPython, which has much more powerful features (particularly when installed with the readline library). For example %run allows you to run a script in the current namespace, or you can use history completion.\nEdit:\nOkay. Having looked into it a bit more, I'm fairly certain that paster shell just does a set of useful imports, and could be easily replicated with a short script and ipython and then %run myscript.py\nEdit:\nHaving looked at the source, it would be very hard to do (I was wrong about the default imports. It parses your config file as well), however if you have Pylons and iPython both installed, then paster shell should use iPython automagically. Double check that both are installed properly, and double check that paster shell isn't using iPython already (it might be looking like normal python prompt).\n" ]
[ 2, 0 ]
[]
[]
[ "paster", "pylons", "python" ]
stackoverflow_0000922351_paster_pylons_python.txt
Q: How to do a meaningful code-coverage analysis of my unit-tests?
I manage the testing for a very large financial pricing system. Recently our HQ have insisted that we verify that every single part of our project has a meaningful test in place. At the very least they want a system which guarantees that when we change something we can spot unintentional changes to other sub-systems. Preferably they want something which validates the correctness of every component in our system.
That's obviously going to be quite a lot of work! It could take years, but for this kind of project it's worth it.
I need to find out which parts of our code are not covered by any of our unit-tests. If I knew which parts of my system were untested then I could set about developing new tests which would eventually approach my goal of complete test-coverage.
So how can I go about running this kind of analysis? What tools are available to me?
I use Python 2.4 on Windows 32bit XP
UPDATE0: Just to clarify: We have a very comprehensive unit-test suite (plus a separate and very comprehensive regtest suite which is outside the scope of this exercise). We also have a very stable continuous integration platform (built with Hudson) which is designed to split up and run standard python unit-tests across our test facility: Approx 20 PCs built to the company spec.
The object of this exercise is to plug any gaps in our python unittest suite (only) so that every component has some degree of unittest coverage. Other developers will be taking responsibility for non-Python components of the project (which are also outside of scope).
"Component" is intentionally vague: Sometimes it will be a class, other times an entire module or assembly of modules. It might even refer to a single financial concept (e.g. a single type of financial option or a financial model used by many types of option). This cake can be cut in many ways.
"Meaningful" tests (to me) are ones which validate that the function does what the developer originally intended. We do not want to simply reproduce the regtests in pure python. Often the developer's intent is not immediately obvious, hence the need to research and clarify anything which looks vague and then enshrine this knowledge in a unit-test which makes the original intent quite explicit.
A: For the code coverage alone, you could use coverage.py.
As for coverage.py vs figleaf:
figleaf differs from the gold standard of Python coverage tools ('coverage.py') in several ways. First and foremost, figleaf uses the same criterion for "interesting" lines of code as the sys.settrace function, which obviates some of the complexity in coverage.py (but does mean that your "loc" count goes down). Second, figleaf does not record code executed in the Python standard library, which results in a significant speedup. And third, the format in which the coverage data is saved is very simple and easy to work with.
You might want to use figleaf if you're recording coverage from multiple types of tests and need to aggregate the coverage in interesting ways, and/or control when coverage is recorded. coverage.py is a better choice for command-line execution, and its reporting is a fair bit nicer.
I guess both have their pros and cons.
A: First step would be writing meaningful tests. If you'll be writing tests only meant to reach full coverage, you'll be counter-productive; it will probably mean you'll focus on the unit's implementation details instead of its expectations.
BTW, I'd use nose as unittest framework (http://somethingaboutorange.com/mrl/projects/nose/0.11.1/); it's plugin system is very good and leaves coverage option to you (--with-coverage for Ned's coverage, --with-figleaf for Titus one; support for coverage3 should be coming), and you can write plugisn for your own build system, too. A: FWIW, this is what we do. Since I don't know about your Unit-Test and Regression-Test setup, you have to decide yourself whether this is helpful. Every Python package has UnitTests. We automatically detect unit tests using nose. Nose automagically detects standard Python unit tests (basically everything that looks like a test). Thereby we don't miss unit-tests. Nose also has a plug-in concept so that you can produce, e.g. nice output. We strive for 100% coverage for unit-testing. To this end, we use Coverage to check, because a nose-plugin provides integration. We have set up Eclipse (our IDE) to automatically run nose whenever a file changes so that the unit-tests always get executed, which shows code-coverage as a by-product. A: "every single part of our project has a meaningful test in place" "Part" is undefined. "Meaningful" is undefined. That's okay, however, since it gets better further on. "validates the correctness of every component in our system" "Component" is undefined. But correctness is defined, and we can assign a number of alternatives to component. You only mention Python, so I'll assume the entire project is pure Python. Validates the correctness of every module. Validates the correctness of every class of every module. Validates the correctness of every method of every class of every module. You haven't asked about line of code coverage or logic path coverage, which is a good thing. That way lies madness. "guarantees that when we change something we can spot unintentional changes to other sub-systems" This is regression testing. That's a logical consequence of any unit testing discipline. Here's what you can do. Enumerate every module. Create a unittest for that module that is just a unittest.main(). This should be quick -- a few days at most. Write a nice top-level unittest script that uses a testLoader to all unit tests in your tests directory and runs them through the text runner. At this point, you'll have a lot of files -- one per module -- but no actual test cases. Getting the testloader and the top-level script to work will take a few days. It's important to have this overall harness working. Prioritize your modules. A good rule is "most heavily reused". Another rule is "highest risk from failure". Another rule is "most bugs reported". This takes a few hours. Start at the top of the list. Write a TestCase per class with no real methods or anything. Just a framework. This takes a few days at most. Be sure the docstring for each TestCase positively identifies the Module and Class under test and the status of the test code. You can use these docstrings to determine test coverage. At this point you'll have two parallel tracks. You have to actually design and implement the tests. Depending on the class under test, you may have to build test databases, mock objects, all kinds of supporting material. Testing Rework. Starting with your highest priority untested module, start filling in the TestCases for each class in each module. New Development. For every code change, a unittest.TestCase must be created for the class being changed. The test code follows the same rules as any other code. Everything is checked in at the end of the day. 
It has to run -- even if the tests don't all pass. Give the test script to the product manager (not the QA manager, the actual product manager who is responsible for shipping product to customers) and make sure they run the script every day and find out why it didn't run or why tests are failing. The actual running of the master test script is not a QA job -- it's everyone's job. Every manager at every level of the organization has to be part of the daily build script output. All of their jobs have to depend on "all tests passed last night". Otherwise, the product manager will simply pull resources away from testing and you'll have nothing. A: Assuming you already have a relatively comprehensive test suite, there are tools for the python part. The C part is much more problematic, depending on tools availability. For python unit tests For C code, it is difficult on many platforms because gprof, the Gnu code profiler cannot handle code built with -fPIC. So you have to build every extension statically in this case, which is not supported by many extensions (see my blog post for numpy, for example). On windows, there may be better code coverage tools for compiled code, but that may require you to recompile the extensions with MS compilers. As for the "right" code coverage, I think a good balance it to avoid writing complicated unit tests as much as possible. If a unit test is more complicated than the thing it tests, then it is a probably not a good test, or a broken test.
How to do a meaningful code-coverage analysis of my unit-tests?
I manage the testing for a very large financial pricing system. Recently our HQ have insisted that we verify that every single part of our project has a meaningful test in place. At the very least they want a system which guarantees that when we change something we can spot unintentional changes to other sub-systems. Preferably they want something which validates the correctness of every component in our system. That's obviously going to be quite a lot of work! It could take years, but for this kind of project it's worth it. I need to find out which parts of our code are not covered by any of our unit-tests. If I knew which parts of my system were untested then I could set about developing new tests which would eventually approach towards my goal of complete test-coverage. So how can I go about running this kind of analysis. What tools are available to me? I use Python 2.4 on Windows 32bit XP UPDATE0: Just to clarify: We have a very comprehensive unit-test suite (plus a seperate and very comprehensive regtest suite which is outside the scope of this exercise). We also have a very stable continuous integration platform (built with Hudson) which is designed to split-up and run standard python unit-tests across our test facility: Approx 20 PCs built to the company spec. The object of this exercise is to plug any gaps in our python unittest suite (only) suite so that every component has some degree of unittest coverage. Other developers will be taking responsibility for non Python components of the project (which are also outside of scope). "Component" is intentionally vague: Sometime it will be a class, other time an entire module or assembly of modules. It might even refer to a single financial concept (e.g. a single type of financial option or a financial model used by many types of option). This cake can be cut in many ways. "Meaningful" tests (to me) are ones which validate that the function does what the developer originally intended. We do not want to simply reproduce the regtests in pure python. Often the developer's intent is not immediatly obvious, hence the need to research and clarify anything which looks vague and then enshrine this knowledge in a unit-test which makes the original intent quite explicit.
[ "For the code coverage alone, you could use coverage.py.\nAs for coverage.py vs figleaf:\n\nfigleaf differs from the gold standard\n of Python coverage tools\n ('coverage.py') in several ways. \n First and foremost, figleaf uses the\n same criterion for \"interesting\" lines\n of code as the sys.settrace function,\n which obviates some of the complexity\n in coverage.py (but does mean that\n your \"loc\" count goes down). Second,\n figleaf does not record code executed\n in the Python standard library, which\n results in a significant speedup. And\n third, the format in which the\n coverage format is saved is very\n simple and easy to work with.\nYou might want to use figleaf if\n you're recording coverage from\n multiple types of tests and need to\n aggregate the coverage in interesting\n ways, and/or control when coverage is\n recorded. coverage.py is a better\n choice for command-line execution, and\n its reporting is a fair bit nicer.\n\nI guess both have their pros and cons.\n", "First step would be writing meaningfull tests. If you'll be writing tests only meant to reach full coverage, you'll be counter-productive; it will probably mean you'll focus on unit's implementation details instead of it's expectations.\nBTW, I'd use nose as unittest framework (http://somethingaboutorange.com/mrl/projects/nose/0.11.1/); it's plugin system is very good and leaves coverage option to you (--with-coverage for Ned's coverage, --with-figleaf for Titus one; support for coverage3 should be coming), and you can write plugisn for your own build system, too.\n", "FWIW, this is what we do. Since I don't know about your Unit-Test and Regression-Test setup, you have to decide yourself whether this is helpful.\n\nEvery Python package has\nUnitTests.\nWe automatically detect unit tests using nose. Nose automagically detects standard Python unit tests (basically everything that looks like a test). Thereby we don't miss unit-tests. Nose also has a plug-in concept so that you can produce, e.g. nice output.\nWe strive for 100% coverage for\nunit-testing. To this end, we use\nCoverage\nto check, because a nose-plugin provides integration.\nWe have set up Eclipse (our IDE) to automatically run nose whenever a file changes so that the unit-tests always get executed, which shows code-coverage as a by-product.\n\n", "\"every single part of our project has a meaningful test in place\"\n\"Part\" is undefined. \"Meaningful\" is undefined. That's okay, however, since it gets better further on.\n\"validates the correctness of every component in our system\"\n\"Component\" is undefined. But correctness is defined, and we can assign a number of alternatives to component. You only mention Python, so I'll assume the entire project is pure Python.\n\nValidates the correctness of every module.\nValidates the correctness of every class of every module.\nValidates the correctness of every method of every class of every module.\n\nYou haven't asked about line of code coverage or logic path coverage, which is a good thing. That way lies madness.\n\"guarantees that when we change something we can spot unintentional changes to other sub-systems\"\nThis is regression testing. That's a logical consequence of any unit testing discipline.\nHere's what you can do.\n\nEnumerate every module. Create a unittest for that module that is just a unittest.main(). This should be quick -- a few days at most.\nWrite a nice top-level unittest script that uses a testLoader to all unit tests in your tests directory and runs them through the text runner. At this point, you'll have a lot of files -- one per module -- but no actual test cases. Getting the testloader and the top-level script to work will take a few days. It's important to have this overall harness working.\nPrioritize your modules. A good rule is \"most heavily reused\". Another rule is \"highest risk from failure\". Another rule is \"most bugs reported\". This takes a few hours.\nStart at the top of the list. Write a TestCase per class with no real methods or anything. Just a framework. This takes a few days at most. Be sure the docstring for each TestCase positively identifies the Module and Class under test and the status of the test code. You can use these docstrings to determine test coverage.\n\nAt this point you'll have two parallel tracks. You have to actually design and implement the tests. Depending on the class under test, you may have to build test databases, mock objects, all kinds of supporting material.\n\nTesting Rework. Starting with your highest priority untested module, start filling in the TestCases for each class in each module.\nNew Development. For every code change, a unittest.TestCase must be created for the class being changed.\n\nThe test code follows the same rules as any other code. Everything is checked in at the end of the day. It has to run -- even if the tests don't all pass.\nGive the test script to the product manager (not the QA manager, the actual product manager who is responsible for shipping product to customers) and make sure they run the script every day and find out why it didn't run or why tests are failing.\nThe actual running of the master test script is not a QA job -- it's everyone's job. Every manager at every level of the organization has to be part of the daily build script output. All of their jobs have to depend on \"all tests passed last night\". Otherwise, the product manager will simply pull resources away from testing and you'll have nothing.\n", "Assuming you already have a relatively comprehensive test suite, there are tools for the python part. The C part is much more problematic, depending on tools availability.\n\nFor python unit tests\nFor C code, it is difficult on many platforms because gprof, the Gnu code profiler cannot handle code built with -fPIC. So you have to build every extension statically in this case, which is not supported by many extensions (see my blog post for numpy, for example). On windows, there may be better code coverage tools for compiled code, but that may require you to recompile the extensions with MS compilers.\n\nAs for the \"right\" code coverage, I think a good balance it to avoid writing complicated unit tests as much as possible. If a unit test is more complicated than the thing it tests, then it is a probably not a good test, or a broken test.\n" ]
[ 6, 4, 4, 3, 1 ]
[]
[]
[ "python", "testing" ]
stackoverflow_0001006189_python_testing.txt
Q: Lookup and combine data in Python I have 3 text files many lines of value1<tab>value2 (maybe 600) many more lines of value2<tab>value3 (maybe 1000) many more lines of value2<tab>value4 (maybe 2000) Not all lines match, some will have one or more vals missing. I want to take file 1, read down it and lookup corresponding values in files 2 & 3, and write the output as - for example value1<tab>value2<tab>value3<tab>value4 value1<tab>value2<tab>blank <tab>value4 i.e. indicate that the value is missing by printing a bit of text in awk I can BEGIN by reading the files into arrays up front then END and step through them. But I want to use Python (3) for portability. I do it on a pc using MS Access and linking tables but there is a time penalty for each time I use this method. All efforts to understand this in dictionaries or lists have confused me. I now seem to have every Python book! Many thanks to anyone who can offer advice. (if interested, it's arp, mac and vendor codes) A: Untested: f1 = open("file1.txt") f2 = open("file2.txt") f3 = open("file3.txt") v1 = [line.split() for line in f1] # dict comprehensions following, these need Python 3 v2 = {vals[0]:vals[1] for vals in line.split() for line in f2} v3 = {vals[0]:vals[1] for vals in line.split() for line in f3} for v in v1: print( v[0] + "\t" + v[1] + "\t" + v2.get(v[1],"blank ") + "\t" + v3.get(v[1],"blank ") ) A: Start with this. def loadDictionaryFromAFile( aFile ): dictionary = {} for line in aFile: fields = line.split('\t') dictionary[fields[0]]= fields dict2 = loadDictionaryFromAFile( open("file2","r" ) dict3 = loadDictionaryFromAFile( open("file3","r" ) for line in open("file1","r"): fields = line.split("/t") d2= dict2.get( fields[0], None ) d3= dict3.get( fields[0], None ) print fields, d2, d3 You may want to customize it to change the formatting of the output.
Lookup and combine data in Python
I have 3 text files many lines of value1<tab>value2 (maybe 600) many more lines of value2<tab>value3 (maybe 1000) many more lines of value2<tab>value4 (maybe 2000) Not all lines match, some will have one or more vals missing. I want to take file 1, read down it and lookup corresponding values in files 2 & 3, and write the output as - for example value1<tab>value2<tab>value3<tab>value4 value1<tab>value2<tab>blank <tab>value4 i.e. indicate that the value is missing by printing a bit of text in awk I can BEGIN by reading the files into arrays up front then END and step through them. But I want to use Python (3) for portability. I do it on a pc using MS Access and linking tables but there is a time penalty for each time I use this method. All efforts to understand this in dictionaries or lists have confused me. I now seem to have every Python book! Many thanks to anyone who can offer advice. (if interested, it's arp, mac and vendor codes)
[ "Untested:\nf1 = open(\"file1.txt\")\nf2 = open(\"file2.txt\")\nf3 = open(\"file3.txt\")\n\nv1 = [line.split() for line in f1]\n# dict comprehensions following, these need Python 3\nv2 = {vals[0]:vals[1] for vals in line.split() for line in f2}\nv3 = {vals[0]:vals[1] for vals in line.split() for line in f3}\n\nfor v in v1:\n print( v[0] + \"\\t\" + v[1] + \"\\t\" + v2.get(v[1],\"blank \") + \"\\t\" + v3.get(v[1],\"blank \") )\n\n", "Start with this.\ndef loadDictionaryFromAFile( aFile ):\n dictionary = {}\n for line in aFile:\n fields = line.split('\\t')\n dictionary[fields[0]]= fields\n\ndict2 = loadDictionaryFromAFile( open(\"file2\",\"r\" )\ndict3 = loadDictionaryFromAFile( open(\"file3\",\"r\" )\n\nfor line in open(\"file1\",\"r\"):\n fields = line.split(\"/t\")\n d2= dict2.get( fields[0], None )\n d3= dict3.get( fields[0], None )\n print fields, d2, d3\n\nYou may want to customize it to change the formatting of the output.\n" ]
[ 5, 3 ]
[]
[]
[ "file", "python", "string" ]
stackoverflow_0001008587_file_python_string.txt
Q: Name of file I'm editing I'm editing a file in ~/Documents. However, my working directory is somewhere else, say ~/Desktop. The file I'm editing is a Python script. I'm interested in doing a command like... :!python without needing to do :!python ~/Documents/script.py Is that possible? If so, what would be the command? Thank you. A: Try: !python % A: I quite often map a key to do this for me. I usually use the F5 key as that has no command associated with it by default in vim. The mapping I like to use is: :map <F5> :w<CR>:!python % 2>&1 \| tee /var/tmp/robertw/results<CR> this will also make sure that you've written out your script before running it. It also captures any output, after duplicating stderr onto stdout, in a temp file. If you've done a: :set autoread and a: :sb /var/tmp/robertw/results you will finish up with two buffers being displayed. One containing the script and the other containing the output, incl. errors, from your script. By setting autoread the window displaying the output will be automatically loaded after pressing the v key. A trick to remember is to use cntl-ww to toggle between the windows and that the mapping, because it refers to % (the current file) will only work when the cursor is in the window containing the Python script. I find this really cuts down on my code, test, debug cycle time.
Name of file I'm editing
I'm editing a file in ~/Documents. However, my working directory is somewhere else, say ~/Desktop. The file I'm editing is a Python script. I'm interested in doing a command like... :!python without needing to do :!python ~/Documents/script.py Is that possible? If so, what would be the command? Thank you.
[ "Try: !python %\n", "I quite often map a key to do this for me. I usually use the F5 key as that has no command associated with it by default in vim.\nThe mapping I like to use is:\n:map <F5> :w<CR>:!python % 2>&1 \\| tee /var/tmp/robertw/results<CR>\n\nthis will also make sure that you've written out your script before running it. It also captures any output, after duplicating stderr onto stdout, in a temp file.\nIf you've done a:\n:set autoread\n\nand a:\n:sb /var/tmp/robertw/results\n\nyou will finish up with two buffers being displayed. One containing the script and the other containing the output, incl. errors, from your script. By setting autoread the window displaying the output will be automatically loaded after pressing the v key.\nA trick to remember is to use cntl-ww to toggle between the windows and that the mapping, because it refers to % (the current file) will only work when the cursor is in the window containing the Python script.\nI find this really cuts down on my code, test, debug cycle time.\n" ]
[ 9, 6 ]
[]
[]
[ "python", "vim" ]
stackoverflow_0001008557_python_vim.txt
Q: How do you make this code more pythonic? Could you guys please tell me how I can make the following code more pythonic? The code is correct. Full disclosure - it's problem 1b in Handout #4 of this machine learning course. I'm supposed to use newton's algorithm on the two data sets for fitting a logistic hypothesis. But they use matlab & I'm using scipy Eg one question i have is the matrixes kept rounding to integers until I initialized one value to 0.0. Is there a better way? Thanks import os.path import math from numpy import matrix from scipy.linalg import inv #, det, eig x = matrix( '0.0;0;1' ) y = 11 grad = matrix( '0.0;0;0' ) hess = matrix('0.0,0,0;0,0,0;0,0,0') theta = matrix( '0.0;0;0' ) # run until convergence=6or7 for i in range(1, 6): #reset grad = matrix( '0.0;0;0' ) hess = matrix('0.0,0,0;0,0,0;0,0,0') xfile = open("q1x.dat", "r") yfile = open("q1y.dat", "r") #over whole set=99 items for i in range(1, 100): xline = xfile.readline() s= xline.split(" ") x[0] = float(s[1]) x[1] = float(s[2]) y = float(yfile.readline()) hypoth = 1/ (1+ math.exp(-(theta.transpose() * x))) for j in range(0,3): grad[j] = grad[j] + (y-hypoth)* x[j] for k in range(0,3): hess[j,k] = hess[j,k] - (hypoth *(1-hypoth)*x[j]*x[k]) theta = theta - inv(hess)*grad #update theta after construction xfile.close() yfile.close() print "done" print theta A: One obvious change is to get rid of the "for i in range(1, 100):" and just iterate over the file lines. To iterate over both files (xfile and yfile), zip them. ie replace that block with something like: import itertools for xline, yline in itertools.izip(xfile, yfile): s= xline.split(" ") x[0] = float(s[1]) x[1] = float(s[2]) y = float(yline) ... (This is assuming the file is 100 lines, (ie. you want the whole file). If you're deliberately restricting to the first 100 lines, you could use something like: for i, xline, yline in itertools.izip(range(100), xfile, yfile): However, its also inefficient to iterate over the same file 6 times - better to load it into memory in advance, and loop over it there, ie. outside your loop, have: xfile = open("q1x.dat", "r") yfile = open("q1y.dat", "r") data = zip([line.split(" ")[1:3] for line in xfile], map(float, yfile)) And inside just: for (x1,x2), y in data: x[0] = x1 x[1] = x2 ... A: x = matrix([[0.],[0],[1]]) theta = matrix(zeros([3,1])) for i in range(5): grad = matrix(zeros([3,1])) hess = matrix(zeros([3,3])) [xfile, yfile] = [open('q1'+a+'.dat', 'r') for a in 'xy'] for xline, yline in zip(xfile, yfile): x.transpose()[0,:2] = [map(float, xline.split(" ")[1:3])] y = float(yline) hypoth = 1 / (1 + math.exp(theta.transpose() * x)) grad += (y - hypoth) * x hess -= hypoth * (1 - hypoth) * x * x.transpose() theta += inv(hess) * grad print "done" print theta A: the matrixes kept rounding to integers until I initialized one value to 0.0. Is there a better way? At the top of your code: from __future__ import division In Python 2.6 and earlier, integer division always returns an integer unless there is at least one floating point number within. In Python 3.0 (and in future division in 2.6), division works more how we humans might expect it to. If you want integer division to return an integer, and you've imported from future, use a double //. That is from __future__ import division print 1//2 # prints 0 print 5//2 # prints 2 print 1/2 # prints 0.5 print 5/2 # prints 2.5 A: You could make use of the with statement. A: the code that reads the files into lists could be drastically simpler for line in open("q1x.dat", "r"): x = map(float,line.split(" ")[1:]) y = map(float, open("q1y.dat", "r").readlines())
How do you make this code more pythonic?
Could you guys please tell me how I can make the following code more pythonic? The code is correct. Full disclosure - it's problem 1b in Handout #4 of this machine learning course. I'm supposed to use newton's algorithm on the two data sets for fitting a logistic hypothesis. But they use matlab & I'm using scipy Eg one question i have is the matrixes kept rounding to integers until I initialized one value to 0.0. Is there a better way? Thanks import os.path import math from numpy import matrix from scipy.linalg import inv #, det, eig x = matrix( '0.0;0;1' ) y = 11 grad = matrix( '0.0;0;0' ) hess = matrix('0.0,0,0;0,0,0;0,0,0') theta = matrix( '0.0;0;0' ) # run until convergence=6or7 for i in range(1, 6): #reset grad = matrix( '0.0;0;0' ) hess = matrix('0.0,0,0;0,0,0;0,0,0') xfile = open("q1x.dat", "r") yfile = open("q1y.dat", "r") #over whole set=99 items for i in range(1, 100): xline = xfile.readline() s= xline.split(" ") x[0] = float(s[1]) x[1] = float(s[2]) y = float(yfile.readline()) hypoth = 1/ (1+ math.exp(-(theta.transpose() * x))) for j in range(0,3): grad[j] = grad[j] + (y-hypoth)* x[j] for k in range(0,3): hess[j,k] = hess[j,k] - (hypoth *(1-hypoth)*x[j]*x[k]) theta = theta - inv(hess)*grad #update theta after construction xfile.close() yfile.close() print "done" print theta
[ "One obvious change is to get rid of the \"for i in range(1, 100):\" and just iterate over the file lines. To iterate over both files (xfile and yfile), zip them. ie replace that block with something like:\n import itertools\n\n for xline, yline in itertools.izip(xfile, yfile):\n s= xline.split(\" \")\n x[0] = float(s[1])\n x[1] = float(s[2])\n y = float(yline)\n ...\n\n(This is assuming the file is 100 lines, (ie. you want the whole file). If you're deliberately restricting to the first 100 lines, you could use something like:\n for i, xline, yline in itertools.izip(range(100), xfile, yfile):\n\nHowever, its also inefficient to iterate over the same file 6 times - better to load it into memory in advance, and loop over it there, ie. outside your loop, have:\nxfile = open(\"q1x.dat\", \"r\")\nyfile = open(\"q1y.dat\", \"r\")\ndata = zip([line.split(\" \")[1:3] for line in xfile], map(float, yfile))\n\nAnd inside just:\nfor (x1,x2), y in data:\n x[0] = x1\n x[1] = x2\n ...\n\n", "x = matrix([[0.],[0],[1]])\ntheta = matrix(zeros([3,1]))\nfor i in range(5):\n grad = matrix(zeros([3,1]))\n hess = matrix(zeros([3,3]))\n [xfile, yfile] = [open('q1'+a+'.dat', 'r') for a in 'xy']\n for xline, yline in zip(xfile, yfile):\n x.transpose()[0,:2] = [map(float, xline.split(\" \")[1:3])]\n y = float(yline)\n hypoth = 1 / (1 + math.exp(theta.transpose() * x))\n grad += (y - hypoth) * x\n hess -= hypoth * (1 - hypoth) * x * x.transpose()\n theta += inv(hess) * grad\nprint \"done\"\nprint theta\n\n", "\nthe matrixes kept rounding to integers until I initialized one value\n to 0.0. Is there a better way?\n\nAt the top of your code:\nfrom __future__ import division\n\nIn Python 2.6 and earlier, integer division always returns an integer unless there is at least one floating point number within. In Python 3.0 (and in future division in 2.6), division works more how we humans might expect it to.\nIf you want integer division to return an integer, and you've imported from future, use a double //. That is\nfrom __future__ import division\nprint 1//2 # prints 0\nprint 5//2 # prints 2\nprint 1/2 # prints 0.5\nprint 5/2 # prints 2.5\n\n", "You could make use of the with statement.\n", "the code that reads the files into lists could be drastically simpler\nfor line in open(\"q1x.dat\", \"r\"):\n x = map(float,line.split(\" \")[1:])\ny = map(float, open(\"q1y.dat\", \"r\").readlines())\n\n" ]
[ 9, 4, 3, 0, 0 ]
[]
[]
[ "machine_learning", "python", "scipy" ]
stackoverflow_0001007215_machine_learning_python_scipy.txt
Q: Using Perl, Python, or Ruby, how to write a program to "click" on the screen at scheduled time? Using Perl, Python, or Ruby, can I write a program, probably calling Win32 API, to "click" on the screen at scheduled time, like every 1 hour? Details: This is for experimentation -- and can the clicking be effective on Flash content as well as any element on screen? It can be nice if the program can record where on screen the click needs to happen, or at least draw a red dot on the screen to show where it is clicking on. Can the click be targeted towards a window or is it only a general pixel on the screen? What if some virus scanning program pops up covering up the place where the click should happen? (although if the program clicks on the white space of a window first, then it can bring that window to the foreground first). By the way, can Grease Monkey or any Firefox add-on be used to do this too? A: If you are trying to automate some task in a website you might want to look at WWW::Selenium. It, along with Selenium Remote Control, allows you to remote control a web browser. A: In Python there is ctypes and in Perl there is Win32::API ctypes Example from ctypes import * windll.user32.MessageBoxA(None, "Hey MessageBox", "ctypes", 0); Win32::Api Example use Win32::GUI qw( WM_CLOSE ); my $tray = Win32::GUI::FindWindow("WindowISearchFor","WindowISearchFor"); Win32::GUI::SendMessage($tray,WM_CLOSE,0,0); A: To answer the actual question, in Perl, you would use the SendMouse (and the associated functions) provided by the Win32::GuiTest module. #!/usr/bin/perl use strict; use warnings; use Win32::GuiTest qw( MouseMoveAbsPix SendMouse ); MouseMoveAbsPix(640,400); SendMouse "{LEFTCLICK}"; __END__ UPDATE: What if some virus scanning program pops up covering up the place where the click should happen? In that case, you would use FindWindowLike to find the window and MouseClick to send a click to that specific window. A: If using a different tool is allowed, you should take a look at AutoHotkey or AutoIt. These tools were made for this sort of thing, and I've always been keen on using the right tools for the right jobs. AutoHotkey is based on AutoIt I believe, and it is my personal preference. You only really need 2 functions for what you're trying to achieve, MouseMove and MouseClick. A: I find this is easier to approach in Java or C++. Java has a Robot class that allows you to just pass x, y coordinates and click somewhere. Using C++, you can achieve that same functionality using mouse_event() or SendMessage() with the WM_MOUSE_DOWN flag. SendMessage is more technical but it allows you to use FindWindow() and send mouse clicks to a specific window, even if it's minimized. Using a scripting language like Python or Ruby, I'd guess that you'd end up hooking into one of these Windows API functions anyway.
Using Perl, Python, or Ruby, how to write a program to "click" on the screen at scheduled time?
Using Perl, Python, or Ruby, can I write a program, probably calling Win32 API, to "click" on the screen at scheduled time, like every 1 hour? Details: This is for experimentation -- and can the clicking be effective on Flash content as well as any element on screen? It can be nice if the program can record where on screen the click needs to happen, or at least draw a red dot on the screen to show where it is clicking on. Can the click be targeted towards a window or is it only a general pixel on the screen? What if some virus scanning program pops up covering up the place where the click should happen? (although if the program clicks on the white space of a window first, then it can bring that window to the foreground first). By the way, can Grease Monkey or any Firefox add-on be used to do this too?
[ "If you are trying to automate some task in a website you might want to look at WWW::Selenium. It, along with Selenium Remote Control, allows you to remote control a web browser.\n", "In Python there is ctypes and in Perl there is Win32::API\nctypes Example\nfrom ctypes import *\nwindll.user32.MessageBoxA(None, \"Hey MessageBox\", \"ctypes\", 0);\n\nWin32::Api Example\nuse Win32::GUI qw( WM_CLOSE );\nmy $tray = Win32::GUI::FindWindow(\"WindowISearchFor\",\"WindowISearchFor\");\nWin32::GUI::SendMessage($tray,WM_CLOSE,0,0);\n\n", "To answer the actual question, in Perl, you would use the SendMouse (and the associated functions) provided by the Win32::GuiTest module.\n#!/usr/bin/perl\n\nuse strict;\nuse warnings;\n\nuse Win32::GuiTest qw( MouseMoveAbsPix SendMouse );\n\nMouseMoveAbsPix(640,400);\nSendMouse \"{LEFTCLICK}\";\n\n__END__\n\nUPDATE:\n\nWhat if some virus scanning program pops up covering up the place \n where the click should happen? \n\nIn that case, you would use FindWindowLike to find the window and MouseClick to send a click to that specific window.\n", "If using a different tool is allowed, you should take a look at AutoHotkey or AutoIt. These tools were made for this sort of thing, and I've always been keen on using the right tools for the right jobs.\nAutoHotkey is based on AutoIt I believe, and it is my personal preference. You only really need 2 functions for what you're trying to achieve, MouseMove and MouseClick.\n", "I find this is easier to approach in Java or C++. Java has a Robot class that allows you to just pass x, y coordinates and click somewhere. Using C++, you can achieve that same functionality using mouse_event() or SendMessage() with the WM_MOUSE_DOWN flag. SendMessage is more technical but it allows you to use FindWindow() and send mouse clicks to a specific window, even if it's minimized.\nUsing a scripting language like Python or Ruby, I'd guess that you'd end up hooking into one of these Windows API functions anyway.\n" ]
[ 8, 7, 6, 1, 0 ]
[]
[]
[ "perl", "python", "ruby", "winapi" ]
stackoverflow_0001007391_perl_python_ruby_winapi.txt
Q: Output 2 dim array 'list of lists" to text file in python Simple question - I am creating a two dim array (ddist = [[0]*d for _ in [0]*d]) using lists in the code below. It outputs distance using gis data. I just want a simple way to take the result of my array/list and output to a text file keeping the same N*N structure. I have used output from print statements in the past but not a good solution in this case. I am new to python by way of SAS. def match_bg(): #as the name suggests this function will match the variations of blockgroups with grid travel time. Then output into two arras time and distance. count = -1 countwo = -1 ctime = -1 ddist = [[0]*d for _ in [0]*d] #cratesan N*N array list dtime = -1 while count < 10: count = count +1 #j[count][7] = float(j[count][7]) #j[count][6] = float(j[count][6]) while countwo < d: countwo = countwo+1 if count < 1: #change values in bg file j[countwo][7] = float(j[countwo][7]) j[countwo][6] = float(j[countwo][6]) #print j[count], j[countwo] while ctime < RowsT: #print ctime, lenth, t[ctime][0], count, countwo ctime = ctime + 1 #takes both verations of big zone which should be end of the file and matches to travetime file - note 0 and 1 for t[] should be same for different files if ((j[count][lenth-1] == t[ctime][0]) and (j[countwo][lenth-1] == t[ctime][1])) or ((j[countwo][lenth-1] == t[ctime][0]) and (j[count][lenth-1] == t[ctime][1])): if t[ctime][0] != t[ctime][1]: #jkdljf x1=3963*j[count][7]*(math.pi/180) x2=3963*j[countwo][7]*(math.pi/180) y1=math.cos(j[count][6]*math.pi/180)*3963*j[count][7]*(math.pi/180) y2=math.cos(j[countwo][6]*math.pi/180)*3963*j[countwo][7]*(math.pi/180) dist=math.sqrt(pow(( x1-x2), 2) + pow((y1-y2), 2)) dtime = dist/t[ctime][11] print countwo, count ddist[count-1][countwo-1] = dist/t[ctime][lenth] print dtime, "ajusted time", "not same grid" print elif j[count][5] != j[countwo][5]: #ljdkjfs x1=3963*j[count][7]*(math.pi/180) x2=3963*j[countwo][7]*(math.pi/180) y1=math.cos(j[count][6]*math.pi/180)*3963*j[count][7]*(math.pi/180) y2=math.cos(j[countwo][6]*math.pi/180)*3963*j[countwo][7]*(math.pi/180) dist=math.sqrt(pow(( x1-x2), 2) + pow((y1-y2), 2)) # could change to calculation dtime = (dist/.65)/(t[ctime][10]/60.0) print dtime, dist, "not in the same bg", j[count], j[countwo], t[ctime] elif j[count][5] == j[countwo][5]: if t[count][7] < 3000000: dtime = 3 elif t[count][7] < 20000000: dtime = 8 else: dtime = 12 print dtime, "same bg" print t[ctime][0], t[ctime], 1, j[count], j[countwo] else: print "error is skip logic", j[count], j[countwo], t[ctime] break #elif (j[countwo][lenth-1] == t[ctime][0]) and (j[count][lenth-1] == t[ctime][1]): #print t[ctime][0], t[ctime], 2, j[count], j[countwo] #break ctime = -1 countwo = -1 A: that's what you could to output your 2-d list (or any 2d list for that matter): with open(outfile, 'w') as file: file.writelines('\t'.join(str(j) for j in i) + '\n' for i in top_list)
Output 2 dim array 'list of lists" to text file in python
Simple question - I am creating a two dim array (ddist = [[0]*d for _ in [0]*d]) using lists in the code below. It outputs distance using gis data. I just want a simple way to take the result of my array/list and output to a text file keeping the same N*N structure. I have used output from print statements in the past but not a good solution in this case. I am new to python by way of SAS. def match_bg(): #as the name suggests this function will match the variations of blockgroups with grid travel time. Then output into two arras time and distance. count = -1 countwo = -1 ctime = -1 ddist = [[0]*d for _ in [0]*d] #cratesan N*N array list dtime = -1 while count < 10: count = count +1 #j[count][7] = float(j[count][7]) #j[count][6] = float(j[count][6]) while countwo < d: countwo = countwo+1 if count < 1: #change values in bg file j[countwo][7] = float(j[countwo][7]) j[countwo][6] = float(j[countwo][6]) #print j[count], j[countwo] while ctime < RowsT: #print ctime, lenth, t[ctime][0], count, countwo ctime = ctime + 1 #takes both verations of big zone which should be end of the file and matches to travetime file - note 0 and 1 for t[] should be same for different files if ((j[count][lenth-1] == t[ctime][0]) and (j[countwo][lenth-1] == t[ctime][1])) or ((j[countwo][lenth-1] == t[ctime][0]) and (j[count][lenth-1] == t[ctime][1])): if t[ctime][0] != t[ctime][1]: #jkdljf x1=3963*j[count][7]*(math.pi/180) x2=3963*j[countwo][7]*(math.pi/180) y1=math.cos(j[count][6]*math.pi/180)*3963*j[count][7]*(math.pi/180) y2=math.cos(j[countwo][6]*math.pi/180)*3963*j[countwo][7]*(math.pi/180) dist=math.sqrt(pow(( x1-x2), 2) + pow((y1-y2), 2)) dtime = dist/t[ctime][11] print countwo, count ddist[count-1][countwo-1] = dist/t[ctime][lenth] print dtime, "ajusted time", "not same grid" print elif j[count][5] != j[countwo][5]: #ljdkjfs x1=3963*j[count][7]*(math.pi/180) x2=3963*j[countwo][7]*(math.pi/180) y1=math.cos(j[count][6]*math.pi/180)*3963*j[count][7]*(math.pi/180) y2=math.cos(j[countwo][6]*math.pi/180)*3963*j[countwo][7]*(math.pi/180) dist=math.sqrt(pow(( x1-x2), 2) + pow((y1-y2), 2)) # could change to calculation dtime = (dist/.65)/(t[ctime][10]/60.0) print dtime, dist, "not in the same bg", j[count], j[countwo], t[ctime] elif j[count][5] == j[countwo][5]: if t[count][7] < 3000000: dtime = 3 elif t[count][7] < 20000000: dtime = 8 else: dtime = 12 print dtime, "same bg" print t[ctime][0], t[ctime], 1, j[count], j[countwo] else: print "error is skip logic", j[count], j[countwo], t[ctime] break #elif (j[countwo][lenth-1] == t[ctime][0]) and (j[count][lenth-1] == t[ctime][1]): #print t[ctime][0], t[ctime], 2, j[count], j[countwo] #break ctime = -1 countwo = -1
[ "that's what you could to output your 2-d list (or any 2d list for that matter):\nwith open(outfile, 'w') as file:\n file.writelines('\\t'.join(str(j) for j in i) + '\\n' for i in top_list)\n\n" ]
[ 5 ]
[]
[]
[ "list", "python", "text" ]
stackoverflow_0001009712_list_python_text.txt
Q: Timestamp conversion is off by an hour I'm trying to parse a twitter feed in django, and I'm having a strange problem converting the published time: I've got the time from the feed into a full 9-tuple correctly: >> print tweet_time time.struct_time(tm_year=2009, tm_mon=6, tm_mday=17, tm_hour=14, tm_min=35, tm_sec=28, tm_wday=2, tm_yday=168, tm_isdst=0) But when I call this: tweet_time = datetime.fromtimestamp(time.mktime(tweet_time)) I end up with the a time 1 hour ahead: >> print tweet_time 2009-06-17 15:35:28 What am I missing here? A: try flipping the isdst (is daylight savings flag) to a -1 and see if that fixes it. -1 tells it to use (guess) the local daylight savings setting and roll with that.
Timestamp conversion is off by an hour
I'm trying to parse a twitter feed in django, and I'm having a strange problem converting the published time: I've got the time from the feed into a full 9-tuple correctly: >> print tweet_time time.struct_time(tm_year=2009, tm_mon=6, tm_mday=17, tm_hour=14, tm_min=35, tm_sec=28, tm_wday=2, tm_yday=168, tm_isdst=0) But when I call this: tweet_time = datetime.fromtimestamp(time.mktime(tweet_time)) I end up with the a time 1 hour ahead: >> print tweet_time 2009-06-17 15:35:28 What am I missing here?
[ "try flipping the isdst (is daylight savings flag) to a -1 and see if that fixes it. -1 tells it to use (guess) the local daylight savings setting and roll with that. \n" ]
[ 5 ]
[]
[]
[ "datetime", "django", "python", "time" ]
stackoverflow_0001009812_datetime_django_python_time.txt
Q: Using Django JSON serializer for object that is not a Model Is it possible to use Django serializer without a Model? How it is done? Will it work with google-app-engine? I don't use Django framework, but since it is available, I would want to use its resources here and there. Here is the code I tried: from django.core import serializers obj = {'a':42,'q':'meaning of life'} serialised = serializers.serialize('json', obj) this generates an error ERROR ... __init__.py:385] 'str' object has no attribute '_meta' A: Serializers are only for models. Instead you can use simplejson bundled with Django. from django.utils import simplejson json_str = simplejson.dumps(my_object) Simplejson 2.0.9 docs are here. A: The GQLEncoder class in this library can take a db.Model entity and serialize it. I'm not sure if this is what you're looking for, but it's been useful to me.
Using Django JSON serializer for object that is not a Model
Is it possible to use Django serializer without a Model? How it is done? Will it work with google-app-engine? I don't use Django framework, but since it is available, I would want to use its resources here and there. Here is the code I tried: from django.core import serializers obj = {'a':42,'q':'meaning of life'} serialised = serializers.serialize('json', obj) this generates an error ERROR ... __init__.py:385] 'str' object has no attribute '_meta'
[ "Serializers are only for models. Instead you can use simplejson bundled with Django.\nfrom django.utils import simplejson\njson_str = simplejson.dumps(my_object)\n\nSimplejson 2.0.9 docs are here.\n", "The GQLEncoder class in this library can take a db.Model entity and serialize it. I'm not sure if this is what you're looking for, but it's been useful to me. \n" ]
[ 15, 0 ]
[]
[]
[ "django", "google_app_engine", "json", "python", "serialization" ]
stackoverflow_0001005422_django_google_app_engine_json_python_serialization.txt
Q: Using string as variable name Is there any way for me to use a string to call a method of a class? Here's an example that will hopefully explain better (using the way I think it should be): class helloworld(): def world(self): print "Hello World!" str = "world" hello = helloworld() hello.`str`() Which would output Hello World!. Thanks in advance. A: You can use getattr: >>> class helloworld: ... def world(self): ... print("Hello World!") ... >>> m = "world" >>> hello = helloworld() >>> getattr(hello, m)() Hello World! Note that the parens in class helloworld() as in your example are unnecessary, in this case. And, as SilentGhost points out, str is an unfortunate name for a variable. A: Warning: exec is a dangerous function to use, study it before using it You can also use the built-in function "exec": >>> def foo(): print('foo was called'); ... >>> some_string = 'foo'; >>> exec(some_string + '()'); foo was called >>>
Using string as variable name
Is there any way for me to use a string to call a method of a class? Here's an example that will hopefully explain better (using the way I think it should be): class helloworld(): def world(self): print "Hello World!" str = "world" hello = helloworld() hello.`str`() Which would output Hello World!. Thanks in advance.
[ "You can use getattr:\n>>> class helloworld:\n... def world(self):\n... print(\"Hello World!\")\n... \n>>> m = \"world\"\n>>> hello = helloworld()\n>>> getattr(hello, m)()\nHello World!\n\n\nNote that the parens in class helloworld() as in your example are unnecessary, in this case.\nAnd, as SilentGhost points out, str is an unfortunate name for a variable.\n\n", "Warning: exec is a dangerous function to use, study it before using it\nYou can also use the built-in function \"exec\":\n>>> def foo(): print('foo was called');\n...\n>>> some_string = 'foo';\n>>> exec(some_string + '()');\nfoo was called\n>>>\n\n" ]
[ 16, 2 ]
[ "one way is you can set variables to be equal to functions just like data\ndef thing1():\n print \"stuff\"\n\ndef thing2():\n print \"other stuff\"\n\navariable = thing1\navariable ()\navariable = thing2\navariable ()\n\nAnd the output you'l get is \nstuff\nother stuff\n\nThen you can get more complicated and have\nsomedictionary[\"world\"] = world\nsomedictionary[\"anotherfunction\"] = anotherfunction\n\nand so on. If you want to automatically compile a modules methods into the dictionary use dir()\n", "What you're looking for is exec\nclass helloworld():\n def world(self):\n print \"Hello World!\"\n\nstr = \"world\"\nhello = helloworld()\n\ncompleteString = \"hello.%s()\" % str\n\nexec(completString)\n\n" ]
[ -3, -3 ]
[ "python" ]
stackoverflow_0001009831_python.txt
Q: Obtaining financial data from Google Finance which is outside the scope of the API Google's finance API is incomplete -- many of the figures on a page such as: http://www.google.com/finance?fstype=ii&q=NYSE:GE are not available via the API. I need this data to rank companies on Canadian stock exchanges according to the formula of Greenblatt, available via google search for "greenblatt index scans". My question: what is the most intelligent/clean/efficient way of accessing and processing the data on these webpages. Is the tedious approach really necessary in this case, and if so, what is the best way of going about it? I'm currently learning Python for projects related to this one. A: You could try asking Google to provide the missing APIs. Otherwise, you're stuck with screen scraping, which is never fun, prone to breaking without notice, and likely in violation of Google's terms of service. But, if you still want to write a screen scraper, it's hard to beat a combination of mechanize and BeautifulSoup. BeautifulSoup is an HTML parser and mechanize is a Python-based web browser that will let you log in, store cookies, and generally navigate around like any other web browser. A: BeautifulSoup would be the preferred method of HTML parsing with Python Have you looked into options besides Google (e.g. Yahoo Finance API)? A: Scraping web pages always sucks, but I would recommend converting them to xml (via tidy or some other HTML -> XML program) and then using xpath to walk the nodes that you are interested in.
Obtaining financial data from Google Finance which is outside the scope of the API
Google's finance API is incomplete -- many of the figures on a page such as: http://www.google.com/finance?fstype=ii&q=NYSE:GE are not available via the API. I need this data to rank companies on Canadian stock exchanges according to the formula of Greenblatt, available via google search for "greenblatt index scans". My question: what is the most intelligent/clean/efficient way of accessing and processing the data on these webpages. Is the tedious approach really necessary in this case, and if so, what is the best way of going about it? I'm currently learning Python for projects related to this one.
[ "You could try asking Google to provide the missing APIs. Otherwise, you're stuck with screen scraping, which is never fun, prone to breaking without notice, and likely in violation of Google's terms of service.\nBut, if you still want to write a screen scraper, it's hard to beat a combination of mechanize and BeautifulSoup. BeautifulSoup is an HTML parser and mechanize is a Python-based web browser that will let you log in, store cookies, and generally navigate around like any other web browser. \n", "BeautifulSoup would be the preferred method of HTML parsing with Python\nHave you looked into options besides Google (e.g. Yahoo Finance API)?\n", "Scraping web pages always sucks, but I would recommend converting them to xml (via tidy or some other HTML -> XML program) and then using xpath to walk the nodes that you are interested in.\n" ]
[ 4, 3, 0 ]
[]
[]
[ "api", "data_mining", "google_finance", "python" ]
stackoverflow_0001009524_api_data_mining_google_finance_python.txt
Q: A question on python sorting efficiency Alright so I am making a commandline based implementation of a website search feature. The website has a list of all the links I need in alphabetical order. Usage would be something like ./find.py LinkThatStartsWithB So it would navigate to the webpage associated with the letter B. My questions is what is the most efficient/smartest way to use the input by the user and navigate to the webpage? What I was thinking at first was something along the lines of using a list and then getting the first letter of the word and using the numeric identifier to tell where to go in list index. (A = 1, B = 2...) Example code: #Use base url as starting point then add extension on end. Base_URL = "http://www.website.com/" #Use list index as representation of letter Alphabetic_Urls = [ "/extensionA.html", "/extensionB.html", "/extensionC.html", ] Or would Dictionary be a better bet? Thanks A: How are you getting this list of URLS? If your commandline app is crawling the website for links, and you are only looking for a single item, building a dictionary is pointless. It will take at least as long to build the dict as it would to just check as you go! eg, just search as: for link in mysite.getallLinks(): if link[0] == firstletter: print link If you are going to be doing multiple searches (rather than just a single commandline parameter), then it might be worth building a dictionary using something like: import collections d=collections.defaultdict(list) for link in mysite.getallLinks(): d[link[0]].append(link) # Dict of first letter -> list of links # Print all links starting with firstletter for link in d[firstletter]: print link Though given that there are just 26 buckets, it's not going to make that much of a difference. A: The smartest way here will be whatever makes the code simplest to read. When you've only got 26 items in a list, who cares what algorithm it uses to look through it? You'd have to use something really, really stupid to make it have an impact on performance. If you're really interested in the performance though, you'd need to benchmark different options. Looking at just the complexity doesn't tell the whole story, because it hides the factors involved. For instance, a dictionary lookup will involve computing the hash of the key, looking that up in tables, then checking equality. For short lists, a simple linear search can sometimes be more efficient, depending on how costly the hashing algorithm is. If your example is really accurate though, can't you just take the first letter of the input string and predict the URL from that? ("/extension" + letter + ".html") A: Dictionary! O(1) A: Dictionary would be a good choice if you have (and will always have) a small number of items. If the list of URL's is going to expand in the future you will probably actually want to sort the URL's by their letter and then match the input against that instead of hard-coding the dictionary for each one. A: Since it sounds like you're only talking about 26 total items, you probably don't have to worry too much about efficiency. Anything you come up with should be fast enough. In general, I recommend trying to use the data structure that is the best approximation of your problem domain. For example, it sounds like you are trying to map letters to URLs. E.g., this is the "A" url and this is the "B" url. In that case, a mapping data structure like a dict sounds appropriate: html_files = { 'a': '/extensionA.html', 'b': '/extensionB.html', 'c': '/extensionC.html', } Although in this exact example you could actually cheat it and skip the data structure altogether -- '/extension%s.html' % letter.upper() :)
A question on python sorting efficiency
Alright so I am making a commandline based implementation of a website search feature. The website has a list of all the links I need in alphabetical order. Usage would be something like ./find.py LinkThatStartsWithB So it would navigate to the webpage associated with the letter B. My questions is what is the most efficient/smartest way to use the input by the user and navigate to the webpage? What I was thinking at first was something along the lines of using a list and then getting the first letter of the word and using the numeric identifier to tell where to go in list index. (A = 1, B = 2...) Example code: #Use base url as starting point then add extension on end. Base_URL = "http://www.website.com/" #Use list index as representation of letter Alphabetic_Urls = [ "/extensionA.html", "/extensionB.html", "/extensionC.html", ] Or would Dictionary be a better bet? Thanks
[ "How are you getting this list of URLS?\nIf your commandline app is crawling the website for links, and you are only looking for a single item, building a dictionary is pointless. It will take at least as long to build the dict as it would to just check as you go! eg, just search as:\nfor link in mysite.getallLinks():\n if link[0] == firstletter:\n print link\n\nIf you are going to be doing multiple searches (rather than just a single commandline parameter), then it might be worth building a dictionary using something like:\nimport collections\nd=collections.defaultdict(list)\nfor link in mysite.getallLinks():\n d[link[0]].append(link) # Dict of first letter -> list of links\n\n# Print all links starting with firstletter\nfor link in d[firstletter]:\n print link\n\nThough given that there are just 26 buckets, it's not going to make that much of a difference.\n", "The smartest way here will be whatever makes the code simplest to read. When you've only got 26 items in a list, who cares what algorithm it uses to look through it? You'd have to use something really, really stupid to make it have an impact on performance.\nIf you're really interested in the performance though, you'd need to benchmark different options. Looking at just the complexity doesn't tell the whole story, because it hides the factors involved. For instance, a dictionary lookup will involve computing the hash of the key, looking that up in tables, then checking equality. For short lists, a simple linear search can sometimes be more efficient, depending on how costly the hashing algorithm is.\nIf your example is really accurate though, can't you just take the first letter of the input string and predict the URL from that? (\"/extension\" + letter + \".html\")\n", "Dictionary!\nO(1)\n", "Dictionary would be a good choice if you have (and will always have) a small number of items. If the list of URL's is going to expand in the future you will probably actually want to sort the URL's by their letter and then match the input against that instead of hard-coding the dictionary for each one. \n", "Since it sounds like you're only talking about 26 total items, you probably don't have to worry too much about efficiency. Anything you come up with should be fast enough.\nIn general, I recommend trying to use the data structure that is the best approximation of your problem domain. For example, it sounds like you are trying to map letters to URLs. E.g., this is the \"A\" url and this is the \"B\" url. In that case, a mapping data structure like a dict sounds appropriate:\nhtml_files = {\n 'a': '/extensionA.html',\n 'b': '/extensionB.html',\n 'c': '/extensionC.html',\n}\n\nAlthough in this exact example you could actually cheat it and skip the data structure altogether -- '/extension%s.html' % letter.upper() :)\n" ]
[ 3, 1, 0, 0, 0 ]
[]
[]
[ "python", "sorting" ]
stackoverflow_0001005494_python_sorting.txt
Q: How to distribute proportionally dates on a scale with Python I have a very simple charting component which takes integer on the x/y axis. My problem is that I need to represent date/float on this chart. So I though I could distribute proportionally dates on a scale. In other words, let's say I have the following date : 01/01/2008, 02/01/2008 and 31/12/2008. The algorithm would return 0, 16.667, and 100 (1 month = 16.667%). I tried to play with the datetime and timedelta classes of Python 2.5 and I am unable to achieve this. I thought I could use the number of ticks, but I am not even able to get that info from datetime. Any idea how I could write this algorithm in Python? Otherwise, any other ideas or algorithms? A: If you're dealing with dates, then you can use the method toordinal. import datetime jan1=datetime.datetime(2008,1,1) dec31=datetime.datetime(2008,12,31) feb1=datetime.datetime(2008,02,01) dates=[jan1,dec31,feb1] dates.sort() datesord=[d.toordinal() for d in dates] start,end=datesord[0],datesord[-1] def datetofloat(date,start,end): """date,start,end are ordinal dates ie Jan 1 of the year 1 has ordinal 1 Jan 1 of the year 2008 has ordinal 733042""" return (date-start)*1.0/(end-start) print datetofloat(dates[0],start,end) 0.0 print datetofloat(dates[1],start,end) 0.0849315068493* print datetofloat(dates[2],start,end) 1.0 *16.67% is about two months of a year, so the proportion for Feb 1 is about half of that. A: It's fairly easy to convert a timedelta into a numeric value. Select an epoch time. Calculate deltas for every value relative to the epoch. Convert the delta's into a numeric value. Then map the numeric values as you normally would. Conversion is straight forward. Something like: def f(delta): return delta.seconds + delta.days * 1440 * 60 + (delta.microseconds / 1000000.0) A: I don't know if I fully understand what you are trying to do, but you can just deal with times as number of seconds since the UNIX epoch and then just use plain old subtraction to get a range that you can scale to the size of your plot. In processing, the map function will handle this case for you. http://processing.org/reference/map_.html I'm sure you can adapt this for your purpose
How to distribute proportionally dates on a scale with Python
I have a very simple charting component which takes integers on the x/y axis. My problem is that I need to represent date/float values on this chart. So I thought I could distribute the dates proportionally on a scale. In other words, let's say I have the following dates: 01/01/2008, 02/01/2008 and 31/12/2008. The algorithm would return 0, 16.667, and 100 (1 month = 16.667%). I tried to play with the datetime and timedelta classes of Python 2.5 and I am unable to achieve this. I thought I could use the number of ticks, but I am not even able to get that info from datetime. Any idea how I could write this algorithm in Python? Otherwise, any other ideas or algorithms?
[ "If you're dealing with dates, then you can use the method toordinal.\nimport datetime\n\njan1=datetime.datetime(2008,1,1)\ndec31=datetime.datetime(2008,12,31)\nfeb1=datetime.datetime(2008,02,01)\n\ndates=[jan1,dec31,feb1]\ndates.sort()\n\ndatesord=[d.toordinal() for d in dates]\nstart,end=datesord[0],datesord[-1]\n\ndef datetofloat(date,start,end):\n \"\"\"date,start,end are ordinal dates\n ie Jan 1 of the year 1 has ordinal 1\n Jan 1 of the year 2008 has ordinal 733042\"\"\"\n return (date-start)*1.0/(end-start)\n\nprint datetofloat(dates[0],start,end)\n 0.0\nprint datetofloat(dates[1],start,end)\n 0.0849315068493*\nprint datetofloat(dates[2],start,end)\n 1.0\n\n*16.67% is about two months of a year, so the proportion for Feb 1 is about half of that.\n", "It's fairly easy to convert a timedelta into a numeric value.\nSelect an epoch time. Calculate deltas for every value relative to the epoch. Convert the delta's into a numeric value. Then map the numeric values as you normally would.\nConversion is straight forward. Something like:\ndef f(delta):\n return delta.seconds + delta.days * 1440 * 60 + \n (delta.microseconds / 1000000.0)\n\n", "I don't know if I fully understand what you are trying to do, but you can just deal with times as number of seconds since the UNIX epoch and then just use plain old subtraction to get a range that you can scale to the size of your plot.\nIn processing, the map function will handle this case for you. http://processing.org/reference/map_.html I'm sure you can adapt this for your purpose\n" ]
[ 3, 1, 0 ]
[]
[]
[ "algorithm", "datetime", "python", "timedelta" ]
stackoverflow_0001010139_algorithm_datetime_python_timedelta.txt
Q: Would extracting page metadata be a good use of multiple inheritance? I was wondering if I have a couple of models which both include fields like "meta_keywords" or "slug" which have to do with the web page the model instance will be displayed on, whether it would be advisable to break those page metadata elements out into their own class, say PageMeta, and have my other models subclass those via multiple inheritance? A: General advice for a lightly-specified question: Nontrivial multiple inheritance in Python requires Advanced Techniques to deal with the metaclass/metatype conflict. Look over this recipe from the ActiveState archives and see if it looks like the kind of stuff you like: Extract from linked recipe: The simplest case where a metatype conflict happens is the following. Consider a class A with metaclass M_A and a class B with an independent metaclass M_B; suppose we derive C from A and B. The question is: what is the metaclass of C ? Is it M_A or M_B ? The correct answer (see the book "Putting metaclasses to work" for a thoughtful discussion) is M_C, where M_C is a metaclass that inherits from M_A and M_B. However, Python is not that magic, and it does not automatically create M_C. Instead, it raises a TypeError, warning the programmer of the possible confusion. Consequently, I recommend limiting your use of multiple inheritance in Python to the following cases: You must, because your problem domain requires you to combine two separately-maintained single-inheritance libraries. You have achieved such fluency with metatype and metaclass that you can write recipe 204197 or its equivalent as easily and confidently as you can write a print statement. Edit: Here's Guido van Rossum in An Introduction to Python: It is clear that indiscriminate use of multiple inheritance is a maintenance nightmare, given the reliance in Python on conventions to avoid accidental name conflicts. Here he is again in PEP 253, which describes the ideas which were incorporated into Python, but not the implementation: Metatypes determine various policies for types, such as what happens when a type is called, how dynamic types are (whether a type's dict can be modified after it is created), what the method resolution order is, how instance attributes are looked up, and so on. I'll argue that left-to-right depth-first is not the best solution when you want to get the most use from multiple inheritance. I'll argue that with multiple inheritance, the metatype of the subtype must be a descendant of the metatypes of all base types. This does not mean you shouldn't use multiple inheritance; I'm just warning you so you won't be suprised one day to find yourself slapping your forehead and exclaiming "D'oh! The metatype of one of my subtypes isn't a descendant of the metatypes of all its base types! Don't you hate when that happens?"
Would extracting page metadata be a good use of multiple inheritance?
I was wondering: if I have a couple of models which both include fields like "meta_keywords" or "slug" that have to do with the web page the model instance will be displayed on, would it be advisable to break those page metadata elements out into their own class, say PageMeta, and have my other models subclass it via multiple inheritance?
[ "General advice for a lightly-specified question:\nNontrivial multiple inheritance in Python requires Advanced Techniques to deal with the metaclass/metatype conflict. Look over this recipe from the ActiveState archives and see if it looks like the kind of stuff you like:\nExtract from linked recipe:\n\nThe simplest case where a metatype\n conflict happens is the following.\n Consider a class A with metaclass M_A\n and a class B with an independent\n metaclass M_B; suppose we derive C\n from A and B. The question is: what is\n the metaclass of C ? Is it M_A or M_B\n ?\nThe correct answer (see the book\n \"Putting metaclasses to work\" for a\n thoughtful discussion) is M_C, where\n M_C is a metaclass that inherits from\n M_A and M_B.\nHowever, Python is not that magic, and\n it does not automatically create M_C.\n Instead, it raises a TypeError,\n warning the programmer of the possible\n confusion.\n\nConsequently, I recommend limiting your use of multiple inheritance in Python to the following cases:\n\nYou must, because your problem domain requires you to combine two separately-maintained single-inheritance libraries.\nYou have achieved such fluency with metatype and metaclass that you can write recipe 204197 or its equivalent as easily and confidently as you can write a print statement. \n\nEdit:\nHere's Guido van Rossum in An Introduction to Python:\n\nIt is clear that indiscriminate use of\n multiple inheritance is a maintenance\n nightmare, given the reliance in\n Python on conventions to avoid\n accidental name conflicts.\n\nHere he is again in PEP 253, which describes the ideas which were incorporated into Python, but not the implementation:\n\nMetatypes determine various policies\n for types, such as what\n happens when a type is called, how dynamic types are (whether a\n type's dict can be modified after it is created), what the\n method resolution order is, how instance attributes are looked\n up, and so on.\nI'll argue that left-to-right depth-first is not the best\n solution when you want to get the most use from multiple\n inheritance.\nI'll argue that with multiple inheritance, the metatype of the\n subtype must be a descendant of the metatypes of all base types.\n\nThis does not mean you shouldn't use multiple inheritance; I'm just warning you so you won't be suprised one day to find yourself slapping your forehead and exclaiming \"D'oh! \nThe metatype of one of my subtypes isn't a descendant of the metatypes of all its base types! Don't you hate when that happens?\"\n" ]
[ 0 ]
[]
[]
[ "architecture", "django", "mixins", "multiple_inheritance", "python" ]
stackoverflow_0001010349_architecture_django_mixins_multiple_inheritance_python.txt
Q: How to construct a webob.Request or a WSGI 'environ' dict from raw HTTP request byte stream? Suppose I have a byte stream with the following in it: POST /mum/ble?q=huh Content-Length: 18 Content-Type: application/json; charset="utf-8" Host: localhost:80 ["do", "re", "mi"] Is there a way to produce an WSGI-style 'environ' dict from it? Hopefully, I've overlooked an easy answer, and it is as easy to achieve as the opposite operation. Consider: >>> import json >>> from webob import Request >>> r = Request.blank('/mum/ble?q=huh') >>> r.method = 'POST' >>> r.content_type = 'application/json' >>> r.charset = 'utf-8' >>> r.body = json.dumps(['do', 're', 'mi']) >>> print str(r) # Request's __str__ method gives raw HTTP bytes back! POST /mum/ble?q=huh Content-Length: 18 Content-Type: application/json; charset="utf-8" Host: localhost:80 ["do", "re", "mi"] A: Reusing Python's standard library code for the purpose is a bit tricky (it was not designed to be reused that way!-), but should be doable, e.g: import cStringIO from wsgiref import simple_server, util input_string = """POST /mum/ble?q=huh HTTP/1.0 Content-Length: 18 Content-Type: application/json; charset="utf-8" Host: localhost:80 ["do", "re", "mi"] """ class FakeHandler(simple_server.WSGIRequestHandler): def __init__(self, rfile): self.rfile = rfile self.wfile = cStringIO.StringIO() # for error msgs self.server = self self.base_environ = {} self.client_address = ['?', 80] self.raw_requestline = self.rfile.readline() self.parse_request() def getenv(self): env = self.get_environ() util.setup_testing_defaults(env) env['wsgi.input'] = self.rfile return env handler = FakeHandler(rfile=cStringIO.StringIO(input_string)) wsgi_env = handler.getenv() print wsgi_env Basically, we need to subclass the request handler to fake out the construction process that's normally performed for it by the server (rfile and wfile built from the socket to the client, and so on). This isn't quite complete, I think, but should be close and I hope it proves helpful! Note that I've also fixed your example HTTP request: without an HTTP/1.0 or 1.1 at the end of the raw request line, a POST is considered ill-formed and causes an exception and a resulting error message on handler.wfile.
How to construct a webob.Request or a WSGI 'environ' dict from raw HTTP request byte stream?
Suppose I have a byte stream with the following in it: POST /mum/ble?q=huh Content-Length: 18 Content-Type: application/json; charset="utf-8" Host: localhost:80 ["do", "re", "mi"] Is there a way to produce an WSGI-style 'environ' dict from it? Hopefully, I've overlooked an easy answer, and it is as easy to achieve as the opposite operation. Consider: >>> import json >>> from webob import Request >>> r = Request.blank('/mum/ble?q=huh') >>> r.method = 'POST' >>> r.content_type = 'application/json' >>> r.charset = 'utf-8' >>> r.body = json.dumps(['do', 're', 'mi']) >>> print str(r) # Request's __str__ method gives raw HTTP bytes back! POST /mum/ble?q=huh Content-Length: 18 Content-Type: application/json; charset="utf-8" Host: localhost:80 ["do", "re", "mi"]
[ "Reusing Python's standard library code for the purpose is a bit tricky (it was not designed to be reused that way!-), but should be doable, e.g:\nimport cStringIO\nfrom wsgiref import simple_server, util\n\ninput_string = \"\"\"POST /mum/ble?q=huh HTTP/1.0\nContent-Length: 18\nContent-Type: application/json; charset=\"utf-8\"\nHost: localhost:80\n\n[\"do\", \"re\", \"mi\"]\n\"\"\"\n\nclass FakeHandler(simple_server.WSGIRequestHandler):\n def __init__(self, rfile):\n self.rfile = rfile\n self.wfile = cStringIO.StringIO() # for error msgs\n self.server = self\n self.base_environ = {}\n self.client_address = ['?', 80]\n self.raw_requestline = self.rfile.readline()\n self.parse_request()\n\n def getenv(self):\n env = self.get_environ()\n util.setup_testing_defaults(env)\n env['wsgi.input'] = self.rfile\n return env\n\nhandler = FakeHandler(rfile=cStringIO.StringIO(input_string))\nwsgi_env = handler.getenv()\n\nprint wsgi_env\n\nBasically, we need to subclass the request handler to fake out the construction process that's normally performed for it by the server (rfile and wfile built from the socket to the client, and so on). This isn't quite complete, I think, but should be close and I hope it proves helpful!\nNote that I've also fixed your example HTTP request: without an HTTP/1.0 or 1.1 at the end of the raw request line, a POST is considered ill-formed and causes an exception and a resulting error message on handler.wfile.\n" ]
[ 5 ]
[]
[]
[ "python", "webob", "wsgi" ]
stackoverflow_0001010103_python_webob_wsgi.txt
Q: plot line at particular angle and offset I'm attempting to plot a particular line over an original image (an array) that i have. Basically, I have an angle and offset (measured from the center of the image) that I want to plot the line over. The problem is, I'm not exactly sure how to do this. I can write a really complicated piece of code to do this, but I'm wondering if there's an easier way that I don't know of (maybe with matplotlib). Thanks. A: Assuming that your offset is actually a x, y coordinate of the center of the line, and that the line should be a fixed length, then it's a simple matter of trigonometry with matplotlib: x = [offsetx-linelength*cos(angle), offsetx+linelength*cos(angle)] y = [offsety-linelength*sin(angle), offsety+linelength*sin(angle)] plot(x, y, '-') A: Use PIL and draw line, cricle, or another image over the original image import Image, ImageDraw im = Image.open("my.png") draw = ImageDraw.Draw(im) draw.line((0, 0, 100, 100), fill=128) del draw # write to stdout im.save(sys.stdout, "PNG")
plot line at particular angle and offset
I'm attempting to plot a particular line over an original image (an array) that I have. Basically, I have an angle and offset (measured from the center of the image) that I want to plot the line over. The problem is, I'm not exactly sure how to do this. I can write a really complicated piece of code to do this, but I'm wondering if there's an easier way that I don't know of (maybe with matplotlib). Thanks.
[ "Assuming that your offset is actually a x, y coordinate of the center of the line, and that the line should be a fixed length, then it's a simple matter of trigonometry with matplotlib:\nx = [offsetx-linelength*cos(angle), offsetx+linelength*cos(angle)]\ny = [offsety-linelength*sin(angle), offsety+linelength*sin(angle)]\nplot(x, y, '-')\n\n", "Use PIL and draw line, cricle, or another image over the original image\nimport Image, ImageDraw\n\nim = Image.open(\"my.png\")\n\ndraw = ImageDraw.Draw(im)\ndraw.line((0, 0, 100, 100), fill=128)\ndel draw \n\n# write to stdout\nim.save(sys.stdout, \"PNG\")\n\n" ]
[ 2, 1 ]
[]
[]
[ "angle", "image", "offset", "plot", "python" ]
stackoverflow_0001010423_angle_image_offset_plot_python.txt
Q: Customizing Django auto admin terminology I'm playing around with Django's admin module, but I've seemed to run into a bit of a bump that's more of an annoyance than an error. I have my modules setup using names like UserData and Status, so Django's admin panel likes to try to call each row in UserData a user datas and each status a statuss. Is there any way I can change the terminology so it will say, for example, Profiles instead of User Datas. A: You can define verbose_name and verbose_name_plural in your model's inner Meta class to override the values used there. See http://docs.djangoproject.com/en/dev/ref/models/options/#verbose-name-plural
Customizing Django auto admin terminology
I'm playing around with Django's admin module, but I seem to have run into a bit of a bump that's more of an annoyance than an error. I have my models set up using names like UserData and Status, so Django's admin panel tries to call each row in UserData a "user datas" and each status a "statuss". Is there any way I can change the terminology so it will say, for example, Profiles instead of User Datas?
[ "You can define verbose_name and verbose_name_plural in your model's inner Meta class to override the values used there. See http://docs.djangoproject.com/en/dev/ref/models/options/#verbose-name-plural\n" ]
[ 6 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001010794_django_python.txt
Q: dictionary in python? in how many way i traverse dictionary in python??? A: Many ways! testdict = {"bob" : 0, "joe": 20, "kate" : 73, "sue" : 40} for items in testdict.items(): print (items) for key in testdict.keys(): print (key, testdict[key]) for item in testdict.iteritems(): print item for key in testdict.iterkeys(): print (key, testdict[key]) That's a few, but that begins to departing from these simple ways into something more complex. All code was tested. A: If I am interpreting your question correctly, you can transverse Dictionaries in many ways. A good read for beginners is located here. Also a Dictionary might not be your best bet.More information would be helpful, not to mention it would aid in assisting you. A: http://docs.python.org/tutorial/datastructures.html#looping-techniques >>> knights = {'gallahad': 'the pure', 'robin': 'the brave'} >>> for k, v in knights.iteritems(): ... print k, v ... gallahad the pure robin the brave
dictionary in python?
In how many ways can I traverse a dictionary in Python?
[ "Many ways!\ntestdict = {\"bob\" : 0, \"joe\": 20, \"kate\" : 73, \"sue\" : 40}\n\nfor items in testdict.items():\n print (items)\n\nfor key in testdict.keys():\n print (key, testdict[key])\n\nfor item in testdict.iteritems():\n print item\n\nfor key in testdict.iterkeys():\n print (key, testdict[key])\n\nThat's a few, but that begins to departing from these simple ways into something more complex. All code was tested.\n", "If I am interpreting your question correctly, you can transverse Dictionaries in many ways.\nA good read for beginners is located here.\nAlso a Dictionary might not be your best bet.More information would be helpful, not to mention it would aid in assisting you. \n", "http://docs.python.org/tutorial/datastructures.html#looping-techniques\n>>> knights = {'gallahad': 'the pure', 'robin': 'the brave'}\n>>> for k, v in knights.iteritems():\n... print k, v\n...\ngallahad the pure\nrobin the brave\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "dictionary", "python", "traversal" ]
stackoverflow_0001010788_dictionary_python_traversal.txt
Q: how many places are optimized in Python's bytecode(version 2.5) Can anyone tell me how many places there are optimized in Python's bytecode? I was trying to de-compile Python's bytecode these days,but I found that in Python's version 2.5 there are a lot of optimization.For example: to this code a,b,c=([],[],[])#build list the non-optimized bytecode before version2.5 is like that: BUILD_LIST_0 BUILD_LIST_0 BUILD_LIST_0 BUILD_LIST_4 UNPACK_LIST_ STORE_NAME 'a' STORE_NAME 'b' STORE_NAME 'c' In the version2.5,the optimized bytecode is like this: BUILD_LIST_0 BUILD_LIST_0 BUILD_LIST_0 ROT_THREE ROT_TWO STORE_FAST 'a' STORE_FAST 'b' STORE_FAST 'c' This is only one example,but there are many other places may be optimized. So,does anybode know is there some documentation to clarify these optimization or tell me in which way I can find all of them? A: The Python/peephole.c source file is where basically all such optimizations are performed -- the link I gave is to the current version (2.6 or better), because I'm having trouble getting to the dynamic source browser here, but once it works again it's easy to see specific versions such as the one that was extant for (say) 2.5.2 or whatever other specific version you need this information for. A: I don't think there's any documentation per se, but there's the C code for the Python interpreter. You can find several different versions of it here.
how many places are optimized in Python's bytecode(version 2.5)
Can anyone tell me how many places are optimized in Python's bytecode? I have been trying to decompile Python's bytecode lately, and I found that in Python version 2.5 there are a lot of optimizations. For example, for this code a,b,c=([],[],[]) #build list the non-optimized bytecode before version 2.5 looks like this: BUILD_LIST_0 BUILD_LIST_0 BUILD_LIST_0 BUILD_LIST_4 UNPACK_LIST_ STORE_NAME 'a' STORE_NAME 'b' STORE_NAME 'c' In version 2.5, the optimized bytecode looks like this: BUILD_LIST_0 BUILD_LIST_0 BUILD_LIST_0 ROT_THREE ROT_TWO STORE_FAST 'a' STORE_FAST 'b' STORE_FAST 'c' This is only one example, but there are many other places that may be optimized. So, does anybody know whether there is some documentation describing these optimizations, or can anybody tell me how I can find all of them?
[ "The Python/peephole.c source file is where basically all such optimizations are performed -- the link I gave is to the current version (2.6 or better), because I'm having trouble getting to the dynamic source browser here, but once it works again it's easy to see specific versions such as the one that was extant for (say) 2.5.2 or whatever other specific version you need this information for.\n", "I don't think there's any documentation per se, but there's the C code for the Python interpreter. You can find several different versions of it here.\n" ]
[ 2, 0 ]
[]
[]
[ "bytecode", "python" ]
stackoverflow_0001010914_bytecode_python.txt
Q: Traversing multi-dimensional dictionary in django I'm a PHP guy on my first day in Python-land, trying to convert a php site to python (learning experience), and I'm hurting for advice. I never thought it would be so hard to use multi-dimensional arrays or dictionaries as you pythoners call them. So I can create multi-dimensional arrays using this, but i can't loop it in a django template. this doesnt work but i imagine i cant loop through it if i could get it to work. {% for key,val in dictionary.items %} only works for actual dictionaries it seems, not the custon multi-dimensional dictionary classes. I'm creating my dictionary from a sql query: vid[ video[ 7 ] ][ 'cat_short_name' ] = video[ 2 ] vid[ video[ 7 ] ][ 'cat_name' ] = video[ 1 ] vid[ video[ 7 ] ][ 'cat_id' ] = video[ 7 ] vid[ video[ 7 ] ][ 'companies' ][ video[ 14 ] ][ 'comp_short_name' ] = video[ 5 ] vid[ video[ 7 ] ][ 'companies' ][ video[ 14 ] ][ 'comp_name' ] = video[ 4 ] vid[ video[ 7 ] ][ 'companies' ][ video[ 14 ] ][ 'comp_website' ] = video[ 6 ] vid[ video[ 7 ] ][ 'companies' ][ video[ 14 ] ][ 'videos' ][ video[ 8 ] ][ 'top_video' ] = 0 vid[ video[ 7 ] ][ 'companies' ][ video[ 14 ] ][ 'videos' ][ video[ 8 ] ][ 'vid_id' ] = video[ 8 ] vid[ video[ 7 ] ][ 'companies' ][ video[ 14 ] ][ 'videos' ][ video[ 8 ] ][ 'vid_name' ] = video[ 9 ] vid[ video[ 7 ] ][ 'companies' ][ video[ 14 ] ][ 'videos' ][ video[ 8 ] ][ 'vid_url' ] = video[ 10 ] I basically need to get all companies in a certain category and then get all videos in that company so i can nest them easily in my template. This is how i did it in php, creating one huge deep array. Trying to duplicate in Python has proven difficult. I thought maybe i could do with the backwards lookups in django using set_MODEL but i couldn't figure that out either. Any help on how to accomplish my goal would be appreciated. I hope my question is clear EDIT: When im done looping in my template it looks like this... <h1>Category</h1> <h2>Company</h2> <ul> <li>video</li> </ul> <h2>Company</h2> <ul> <li>video</li> <li>video</li> </ul> <h1>Category</h1> <h2>Company</h2> <ul> <li>video</li> </ul> <h2>Company</h2> <ul> <li>video</li> <li>video</li> </ul> A: You should be using the built in ORM instead of using your own queries (at least for something simple like this), makes things much easier (assuming you've also built your models in your models.py file) In your view: def categories_view(request): categories = Categories.objects.all() #maybe put an order_by or filter here return render_to_response("your_template.html", {'categories':categories}) In your template: {% for category in categories %} <h1>{{ category.name }}</h1> {% for company in category.company_set.all %} <h2>{{ company.name }}</h2> <ul> {% for video in company.video_set.all %} <li>{{ video.name }}</li> {% endfor %} </ul> {% endfor %} {% endfor %} I haven't tested it but it should work. Compare this code to what you would have to write if you weren't using ORM, in either PHP or Python. take a look at the django docs for more info, I'd recommend taking a few hours and doing the tutorial. Update: modified the code to use "_set.all" A: When moving from one language or framework to another, you need to realise that it's not usually a good idea to write your code in exactly the same way, even if you can. For example: I'm creating my dictionary from a sql query Why are you doing this? The way to represent objects from a database in Django is to use a model. 
That will take care of a whole lot of stuff for you, including the SQL but will also help with iterating through related tables. A: I'm also a Django beginner... You should be able to nest the for loops to get something like this: {% for key,val in dictionary.items %} {% for key,val in val.items %} and so on. A: If you built your complicated dictionary in the following way: vid[ video[ 7 ], 'cat_short_name' ] = video[ 2 ] vid[ video[ 7 ], 'cat_name' ] = video[ 1 ] vid[ video[ 7 ], 'cat_id' ] = video[ 7 ] vid[ video[ 7 ], 'companies', video[ 14 ], 'comp_short_name' ] = video[ 5 ] etc, would that help? The key in this case would be a tuple (with 2 items in the first three cases, 4 items in the fourth), and I'm not sure how you mean to treat it, but the loop on items to get key and value, per se, should work fine. A: {% for key, val in vid.items %} <h1>{{ val.cat_name }}</h1> {% for k2, v2 in val.companies.items %} <h2>{{ v2.comp_name }}</h2> <ul> {% for k3, v3 in v2.videos.items %} <li>{{ v3.vid_name }}</li> {% endfor %} </ul> {% endfor %} {% endfor %}
Traversing multi-dimensional dictionary in django
I'm a PHP guy on my first day in Python-land, trying to convert a php site to python (learning experience), and I'm hurting for advice. I never thought it would be so hard to use multi-dimensional arrays or dictionaries as you pythoners call them. So I can create multi-dimensional arrays using this, but i can't loop it in a django template. this doesnt work but i imagine i cant loop through it if i could get it to work. {% for key,val in dictionary.items %} only works for actual dictionaries it seems, not the custon multi-dimensional dictionary classes. I'm creating my dictionary from a sql query: vid[ video[ 7 ] ][ 'cat_short_name' ] = video[ 2 ] vid[ video[ 7 ] ][ 'cat_name' ] = video[ 1 ] vid[ video[ 7 ] ][ 'cat_id' ] = video[ 7 ] vid[ video[ 7 ] ][ 'companies' ][ video[ 14 ] ][ 'comp_short_name' ] = video[ 5 ] vid[ video[ 7 ] ][ 'companies' ][ video[ 14 ] ][ 'comp_name' ] = video[ 4 ] vid[ video[ 7 ] ][ 'companies' ][ video[ 14 ] ][ 'comp_website' ] = video[ 6 ] vid[ video[ 7 ] ][ 'companies' ][ video[ 14 ] ][ 'videos' ][ video[ 8 ] ][ 'top_video' ] = 0 vid[ video[ 7 ] ][ 'companies' ][ video[ 14 ] ][ 'videos' ][ video[ 8 ] ][ 'vid_id' ] = video[ 8 ] vid[ video[ 7 ] ][ 'companies' ][ video[ 14 ] ][ 'videos' ][ video[ 8 ] ][ 'vid_name' ] = video[ 9 ] vid[ video[ 7 ] ][ 'companies' ][ video[ 14 ] ][ 'videos' ][ video[ 8 ] ][ 'vid_url' ] = video[ 10 ] I basically need to get all companies in a certain category and then get all videos in that company so i can nest them easily in my template. This is how i did it in php, creating one huge deep array. Trying to duplicate in Python has proven difficult. I thought maybe i could do with the backwards lookups in django using set_MODEL but i couldn't figure that out either. Any help on how to accomplish my goal would be appreciated. I hope my question is clear EDIT: When im done looping in my template it looks like this... <h1>Category</h1> <h2>Company</h2> <ul> <li>video</li> </ul> <h2>Company</h2> <ul> <li>video</li> <li>video</li> </ul> <h1>Category</h1> <h2>Company</h2> <ul> <li>video</li> </ul> <h2>Company</h2> <ul> <li>video</li> <li>video</li> </ul>
[ "You should be using the built in ORM instead of using your own queries (at least for something simple like this), makes things much easier (assuming you've also built your models in your models.py file)\nIn your view:\ndef categories_view(request):\n categories = Categories.objects.all() #maybe put an order_by or filter here\n return render_to_response(\"your_template.html\", {'categories':categories})\n\nIn your template:\n{% for category in categories %}\n <h1>{{ category.name }}</h1>\n {% for company in category.company_set.all %}\n <h2>{{ company.name }}</h2>\n <ul>\n {% for video in company.video_set.all %}\n <li>{{ video.name }}</li>\n {% endfor %}\n </ul>\n {% endfor %}\n{% endfor %}\n\nI haven't tested it but it should work. Compare this code to what you would have to write if you weren't using ORM, in either PHP or Python.\ntake a look at the django docs for more info, I'd recommend taking a few hours and doing the tutorial.\nUpdate: modified the code to use \"_set.all\"\n", "When moving from one language or framework to another, you need to realise that it's not usually a good idea to write your code in exactly the same way, even if you can.\nFor example:\n\nI'm creating my dictionary from a sql query\n\nWhy are you doing this? The way to represent objects from a database in Django is to use a model. That will take care of a whole lot of stuff for you, including the SQL but will also help with iterating through related tables.\n", "I'm also a Django beginner...\nYou should be able to nest the for loops to get something like this:\n{% for key,val in dictionary.items %}\n {% for key,val in val.items %}\n\nand so on.\n", "If you built your complicated dictionary in the following way:\nvid[ video[ 7 ], 'cat_short_name' ] = video[ 2 ]\nvid[ video[ 7 ], 'cat_name' ] = video[ 1 ]\nvid[ video[ 7 ], 'cat_id' ] = video[ 7 ]\n\nvid[ video[ 7 ], 'companies', video[ 14 ], 'comp_short_name' ] = video[ 5 ]\n\netc, would that help? The key in this case would be a tuple (with 2 items in the first three cases, 4 items in the fourth), and I'm not sure how you mean to treat it, but the loop on items to get key and value, per se, should work fine.\n", "\n{% for key, val in vid.items %}\n <h1>{{ val.cat_name }}</h1>\n {% for k2, v2 in val.companies.items %}\n <h2>{{ v2.comp_name }}</h2>\n <ul>\n {% for k3, v3 in v2.videos.items %}\n <li>{{ v3.vid_name }}</li>\n {% endfor %}\n </ul>\n {% endfor %}\n{% endfor %}\n\n\n" ]
[ 3, 2, 0, 0, 0 ]
[]
[]
[ "dictionary", "django", "django_models", "django_templates", "python" ]
stackoverflow_0001010848_dictionary_django_django_models_django_templates_python.txt
Q: Python factorization I'd just like to know the best way of listing all integer factors of a number, given a dictionary of its prime factors and their exponents. For example if we have {2:3, 3:2, 5:1} (2^3 * 3^2 * 5 = 360) Then I could write: for i in range(4): for j in range(3): for k in range(1): print 2**i * 3**j * 5**k But here I've got 3 horrible for loops. Is it possible to abstract this into a function given any factorization as a dictionary object argument? A: I have blogged about this, and the fastest pure python (without itertools) comes from a post by Tim Peters to the python list, and uses nested recursive generators: def divisors(factors) : """ Generates all divisors, unordered, from the prime factorization. """ ps = sorted(set(factors)) omega = len(ps) def rec_gen(n = 0) : if n == omega : yield 1 else : pows = [1] for j in xrange(factors.count(ps[n])) : pows += [pows[-1] * ps[n]] for q in rec_gen(n + 1) : for p in pows : yield p * q for p in rec_gen() : yield p Note that the way it is written, it takes a list of prime factors, not a dictionary, i.e. [2, 2, 2, 3, 3, 5] instead of {2 : 3, 3 : 2, 5 : 1}. A: Using itertools.product from Python 2.6: #!/usr/bin/env python import itertools, operator def all_factors(prime_dict): series = [[p**e for e in range(maxe+1)] for p, maxe in prime_dict.items()] for multipliers in itertools.product(*series): yield reduce(operator.mul, multipliers) Example: print sorted(all_factors({2:3, 3:2, 5:1})) Output: [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 18, 20, 24, 30, 36, 40, 45, 60, 72, 90, 120, 180, 360] A: Well, not only you have 3 loops, but this approach won't work if you have more than 3 factors :) One possible way: def genfactors(fdict): factors = set([1]) for factor, count in fdict.iteritems(): for ignore in range(count): factors.update([n*factor for n in factors]) # that line could also be: # factors.update(map(lambda e: e*factor, factors)) return factors factors = {2:3, 3:2, 5:1} for factor in genfactors(factors): print factor Also, you can avoid duplicating some work in the inner loop: if your working set is (1,3), and want to apply to 2^3 factors, we were doing: (1,3) U (1,3)*2 = (1,2,3,6) (1,2,3,6) U (1,2,3,6)*2 = (1,2,3,4,6,12) (1,2,3,4,6,12) U (1,2,3,4,6,12)*2 = (1,2,3,4,6,8,12,24) See how many duplicates we have in the second sets? But we can do instead: (1,3) + (1,3)*2 = (1,2,3,6) (1,2,3,6) + ((1,3)*2)*2 = (1,2,3,4,6,12) (1,2,3,4,6,12) + (((1,3)*2)*2)*2 = (1,2,3,4,6,8,12,24) The solution looks even nicer without the sets: def genfactors(fdict): factors = [1] for factor, count in fdict.iteritems(): newfactors = factors for ignore in range(count): newfactors = map(lambda e: e*factor, newfactors) factors += newfactors return factors A: Yes. When you've got an algorithm that needs n nested for loops, you can usually turn it into a recursive function: def print_factors(d, product=1): if len(d) == 0: # Base case: we've dealt with all prime factors, so print product # Just print the product return d2 = dict(d) # Copy the dict because we don't want to modify it k,v = d2.popitem() # Pick any k**v pair from it for i in range(v+1): # For all possible powers i of k from 0 to v (inclusive) # Multiply the product by k**i and recurse. print_factors(d2, product*k**i) d = {2:3, 3:2, 5:1} print_factors(d) A: Basically, what you have here is a set, consisting of each factor of the target number. In your example, the set would be {2 2 2 3 3 5}. 
Each strict subset of that set is the factorization of one of the divisors of your number, so if you can generate all the subsets of that set, you can multiply the elements of each subset together and get all the integer divisors. The code should be pretty obvious from there: generate a list containing the factorization, generate all subsets of that list (bonus points for using a generator; I think there's a relevant function in the standard library). Then multiply and go from there. Not optimally efficient by any means, but nice looking.
Python factorization
I'd just like to know the best way of listing all integer factors of a number, given a dictionary of its prime factors and their exponents. For example, if we have {2:3, 3:2, 5:1} (2^3 * 3^2 * 5 = 360) Then I could write: for i in range(4): for j in range(3): for k in range(2): print 2**i * 3**j * 5**k But here I've got 3 horrible for loops. Is it possible to abstract this into a function given any factorization as a dictionary object argument?
[ "I have blogged about this, and the fastest pure python (without itertools) comes from a post by Tim Peters to the python list, and uses nested recursive generators:\ndef divisors(factors) :\n \"\"\"\n Generates all divisors, unordered, from the prime factorization.\n \"\"\"\n ps = sorted(set(factors))\n omega = len(ps)\n\n def rec_gen(n = 0) :\n if n == omega :\n yield 1\n else :\n pows = [1]\n for j in xrange(factors.count(ps[n])) :\n pows += [pows[-1] * ps[n]]\n for q in rec_gen(n + 1) :\n for p in pows :\n yield p * q\n\n for p in rec_gen() :\n yield p\n\nNote that the way it is written, it takes a list of prime factors, not a dictionary, i.e. [2, 2, 2, 3, 3, 5] instead of {2 : 3, 3 : 2, 5 : 1}.\n", "Using itertools.product from Python 2.6:\n#!/usr/bin/env python\nimport itertools, operator\n\ndef all_factors(prime_dict):\n series = [[p**e for e in range(maxe+1)] for p, maxe in prime_dict.items()]\n for multipliers in itertools.product(*series):\n yield reduce(operator.mul, multipliers)\n\nExample:\nprint sorted(all_factors({2:3, 3:2, 5:1}))\n\nOutput:\n[1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 18, 20, 24, 30, 36, 40, 45, 60,\n 72, 90, 120, 180, 360]\n\n", "Well, not only you have 3 loops, but this approach won't work if you have more than 3 factors :)\nOne possible way:\ndef genfactors(fdict): \n factors = set([1])\n\n for factor, count in fdict.iteritems():\n for ignore in range(count):\n factors.update([n*factor for n in factors])\n # that line could also be:\n # factors.update(map(lambda e: e*factor, factors))\n\n return factors\n\nfactors = {2:3, 3:2, 5:1}\n\nfor factor in genfactors(factors):\n print factor\n\nAlso, you can avoid duplicating some work in the inner loop: if your working set is (1,3), and want to apply to 2^3 factors, we were doing:\n\n(1,3) U (1,3)*2 = (1,2,3,6)\n(1,2,3,6) U (1,2,3,6)*2 = (1,2,3,4,6,12)\n(1,2,3,4,6,12) U (1,2,3,4,6,12)*2 = (1,2,3,4,6,8,12,24)\n\nSee how many duplicates we have in the second sets?\nBut we can do instead:\n\n(1,3) + (1,3)*2 = (1,2,3,6)\n(1,2,3,6) + ((1,3)*2)*2 = (1,2,3,4,6,12)\n(1,2,3,4,6,12) + (((1,3)*2)*2)*2 = (1,2,3,4,6,8,12,24)\n\nThe solution looks even nicer without the sets:\ndef genfactors(fdict):\n factors = [1]\n\n for factor, count in fdict.iteritems():\n newfactors = factors\n for ignore in range(count):\n newfactors = map(lambda e: e*factor, newfactors)\n factors += newfactors\n\n return factors\n\n", "Yes. When you've got an algorithm that needs n nested for loops, you can usually turn it into a recursive function:\ndef print_factors(d, product=1):\n if len(d) == 0: # Base case: we've dealt with all prime factors, so\n print product # Just print the product\n return\n d2 = dict(d) # Copy the dict because we don't want to modify it\n k,v = d2.popitem() # Pick any k**v pair from it\n for i in range(v+1): # For all possible powers i of k from 0 to v (inclusive)\n # Multiply the product by k**i and recurse.\n print_factors(d2, product*k**i)\n\nd = {2:3, 3:2, 5:1}\nprint_factors(d)\n\n", "Basically, what you have here is a set, consisting of each factor of the target number. In your example, the set would be {2 2 2 3 3 5}. 
Each strict subset of that set is the factorization of one of the divisors of your number, so if you can generate all the subsets of that set, you can multiply the elements of each subset together and get all the integer divisors.\nThe code should be pretty obvious from there: generate a list containing the factorization, generate all subsets of that list (bonus points for using a generator; I think there's a relevant function in the standard library). Then multiply and go from there. Not optimally efficient by any means, but nice looking.\n" ]
[ 15, 10, 9, 3, 1 ]
[]
[]
[ "algorithm", "python" ]
stackoverflow_0001010381_algorithm_python.txt
Q: How to get links on a webpage using mechanize and open those links I want to use mechanize with python to get all the links of the page, and then open the links.How can I do it? A: Here is an example from the project's page: import re from mechanize import Browser br = Browser() br.open("http://www.example.com/") # ... # .links() optionally accepts the keyword args of .follow_/.find_link() for link in br.links(url_regex="python.org"): print link br.follow_link(link) # takes EITHER Link instance OR keyword args br.back() A: The Browser object in mechanize has a links method that will retrieve all the links on the page.
How to get links on a webpage using mechanize and open those links
I want to use mechanize with Python to get all the links on the page, and then open those links. How can I do it?
[ "Here is an example from the project's page:\n\nimport re\nfrom mechanize import Browser\n\nbr = Browser()\nbr.open(\"http://www.example.com/\")\n\n# ...\n\n# .links() optionally accepts the keyword args of .follow_/.find_link()\nfor link in br.links(url_regex=\"python.org\"):\n print link\n br.follow_link(link) # takes EITHER Link instance OR keyword args\n br.back()\n\n", "The Browser object in mechanize has a links method that will retrieve all the links on the page.\n" ]
[ 11, 2 ]
[]
[]
[ "mechanize", "python" ]
stackoverflow_0001011975_mechanize_python.txt
Q: Execution order with threads and PyGTK on Windows I'm having issues with threads and PyGTK on Windows. According the the PyGTK FAQ (and my own experimentation), the only way to reliably update the GUI from a child thread is to use the gobject.idle_add function. However, it can't be guaranteed when this function will be called. How can I guarantee that the line following the gobject.idle_add gets called after the function it points to? Very simple and contrived example: import gtk import gobject from threading import Thread class Gui(object): def __init__(self): self.button = gtk.Button("Click") self.button.connect("clicked", self.onButtonClicked) self.textEntry = gtk.Entry() self.content = gtk.HBox() self.content.pack_start(self.button) self.content.pack_start(self.textEntry) self.window = gtk.Window() self.window.connect("destroy", self.quit) self.window.add(self.content) self.window.show_all() def onButtonClicked(self, button): Thread(target=self.startThread).start() def startThread(self): #I want these next 2 lines to run in order gobject.idle_add(self.updateText) print self.textEntry.get_text() def updateText(self): self.textEntry.set_text("Hello!") def quit(self, widget): gtk.main_quit() gobject.threads_init() x = Gui() gtk.main() A: Don't try to update or access your GUI from a thread. You're just asking for trouble. For example, the fact that "get_text" works at all in a thread is almost an accident. You might be able to rely on it in GTK - although I'm not even sure about that - but you won't be able to do so in other GUI toolkits. If you have things that really need doing in threads, you should get the data you need from the GUI before launching the thread, and then update the GUI from the thread by using idle_add, like this: import time import gtk import gobject from threading import Thread w = gtk.Window() h = gtk.HBox() v = gtk.VBox() addend1 = gtk.Entry() h.add(addend1) h.add(gtk.Label(" + ")) addend2 = gtk.Entry() h.add(addend2) h.add(gtk.Label(" = ")) summation = gtk.Entry() summation.set_text("?") summation.set_editable(False) h.add(summation) v.add(h) progress = gtk.ProgressBar() v.add(progress) b = gtk.Button("Do It") v.add(b) w.add(v) status = gtk.Statusbar() v.add(status) w.show_all() def hardWork(a1, a2): messages = ["Doing the hard work to add %s to %s..." % (a1, a2), "Oof, I'm working so hard...", "Almost done..."] for index, message in enumerate(messages): fraction = index / float(len(messages)) gobject.idle_add(progress.set_fraction, fraction) gobject.idle_add(status.push, 4321, message) time.sleep(1) result = a1 + a2 gobject.idle_add(summation.set_text, str(result)) gobject.idle_add(status.push, 4321, "Done!") gobject.idle_add(progress.set_fraction, 1.0) def addthem(*ignored): a1 = int(addend1.get_text()) a2 = int(addend2.get_text()) Thread(target=lambda : hardWork(a1, a2)).start() b.connect("clicked", addthem) gtk.gdk.threads_init() gtk.main() If you really, absolutely need to read data from the GUI in the middle of a thread (this is a really bad idea, don't do it - you can get into really surprising deadlocks, especially when the program is shutting down) there is a utility in Twisted, blockingCallFromThread, which will do the hard work for you. 
You can use it like this: from twisted.internet.gtk2reactor import install install() from twisted.internet import reactor from twisted.internet.threads import blockingCallFromThread from threading import Thread import gtk w = gtk.Window() v = gtk.VBox() e = gtk.Entry() b = gtk.Button("Get Text") v.add(e) v.add(b) w.add(v) def inThread(): print 'Getting value' textValue = blockingCallFromThread(reactor, e.get_text) print 'Got it!', repr(textValue) def kickOffThread(*ignored): Thread(target=inThread).start() b.connect("clicked", kickOffThread) w.show_all() reactor.run() If you want to see how the magic works, you can always read the source. A: You could wrap the two functions into another function and call idle_add on this function: def update_and_print(self): self.updateText() print self.textEntry.get_text() def startThread(self): gobject.idle_add(self.update_and_print)
Execution order with threads and PyGTK on Windows
I'm having issues with threads and PyGTK on Windows. According the the PyGTK FAQ (and my own experimentation), the only way to reliably update the GUI from a child thread is to use the gobject.idle_add function. However, it can't be guaranteed when this function will be called. How can I guarantee that the line following the gobject.idle_add gets called after the function it points to? Very simple and contrived example: import gtk import gobject from threading import Thread class Gui(object): def __init__(self): self.button = gtk.Button("Click") self.button.connect("clicked", self.onButtonClicked) self.textEntry = gtk.Entry() self.content = gtk.HBox() self.content.pack_start(self.button) self.content.pack_start(self.textEntry) self.window = gtk.Window() self.window.connect("destroy", self.quit) self.window.add(self.content) self.window.show_all() def onButtonClicked(self, button): Thread(target=self.startThread).start() def startThread(self): #I want these next 2 lines to run in order gobject.idle_add(self.updateText) print self.textEntry.get_text() def updateText(self): self.textEntry.set_text("Hello!") def quit(self, widget): gtk.main_quit() gobject.threads_init() x = Gui() gtk.main()
[ "Don't try to update or access your GUI from a thread. You're just asking for trouble. For example, the fact that \"get_text\" works at all in a thread is almost an accident. You might be able to rely on it in GTK - although I'm not even sure about that - but you won't be able to do so in other GUI toolkits.\nIf you have things that really need doing in threads, you should get the data you need from the GUI before launching the thread, and then update the GUI from the thread by using idle_add, like this:\nimport time\nimport gtk\nimport gobject\nfrom threading import Thread\n\nw = gtk.Window()\nh = gtk.HBox()\nv = gtk.VBox()\naddend1 = gtk.Entry()\nh.add(addend1)\nh.add(gtk.Label(\" + \"))\naddend2 = gtk.Entry()\nh.add(addend2)\nh.add(gtk.Label(\" = \"))\nsummation = gtk.Entry()\nsummation.set_text(\"?\")\nsummation.set_editable(False)\nh.add(summation)\nv.add(h)\nprogress = gtk.ProgressBar()\nv.add(progress)\nb = gtk.Button(\"Do It\")\nv.add(b)\nw.add(v)\nstatus = gtk.Statusbar()\nv.add(status)\nw.show_all()\n\ndef hardWork(a1, a2):\n messages = [\"Doing the hard work to add %s to %s...\" % (a1, a2),\n \"Oof, I'm working so hard...\",\n \"Almost done...\"]\n for index, message in enumerate(messages):\n fraction = index / float(len(messages))\n gobject.idle_add(progress.set_fraction, fraction)\n gobject.idle_add(status.push, 4321, message)\n time.sleep(1)\n result = a1 + a2\n gobject.idle_add(summation.set_text, str(result))\n gobject.idle_add(status.push, 4321, \"Done!\")\n gobject.idle_add(progress.set_fraction, 1.0)\n\n\ndef addthem(*ignored):\n a1 = int(addend1.get_text())\n a2 = int(addend2.get_text())\n Thread(target=lambda : hardWork(a1, a2)).start()\n\nb.connect(\"clicked\", addthem)\ngtk.gdk.threads_init()\ngtk.main()\n\nIf you really, absolutely need to read data from the GUI in the middle of a thread (this is a really bad idea, don't do it - you can get into really surprising deadlocks, especially when the program is shutting down) there is a utility in Twisted, blockingCallFromThread, which will do the hard work for you. You can use it like this:\nfrom twisted.internet.gtk2reactor import install\ninstall()\nfrom twisted.internet import reactor\n\nfrom twisted.internet.threads import blockingCallFromThread\nfrom threading import Thread\n\nimport gtk\n\nw = gtk.Window()\nv = gtk.VBox()\ne = gtk.Entry()\nb = gtk.Button(\"Get Text\")\n\nv.add(e)\nv.add(b)\nw.add(v)\n\ndef inThread():\n print 'Getting value'\n textValue = blockingCallFromThread(reactor, e.get_text)\n print 'Got it!', repr(textValue)\n\ndef kickOffThread(*ignored):\n Thread(target=inThread).start()\n\nb.connect(\"clicked\", kickOffThread)\nw.show_all()\n\nreactor.run()\n\nIf you want to see how the magic works, you can always read the source.\n", "You could wrap the two functions into another function and call idle_add on this function:\ndef update_and_print(self):\n self.updateText()\n print self.textEntry.get_text()\n\ndef startThread(self):\n gobject.idle_add(self.update_and_print)\n\n" ]
[ 2, 1 ]
[]
[]
[ "multithreading", "pygtk", "python", "windows" ]
stackoverflow_0001008322_multithreading_pygtk_python_windows.txt