Dataset schema (one value per column, per record):
  content             string (85 to 101k chars)
  title               string (0 to 150 chars)
  question            string (15 to 48k chars)
  answers             list of strings
  answers_scores      list of ints
  non_answers         list of strings
  non_answers_scores  list of ints
  tags                list of strings
  name                string (35 to 137 chars)
Q: Operations for Long and Float in Python I'm trying to compute this: from scipy import * 3600**3400 * (exp(-3600)) / factorial(3400) the error: unsupported long and float A: Try using logarithms instead of working with the numbers directly. Since none of your operations are addition or subtraction, you could do the whole thing in logarithm form and convert back at the end. A: Computing with numbers of such magnitude, you just can't use ordinary 64-bit-or-so floats, which is what Python's core runtime supports. Consider gmpy (do not get the sourceforge version, it's aeons out of date) -- with that, math, and some care...: >>> e = gmpy.mpf(math.exp(1)) >>> gmpy.mpz(3600)**3400 * (e**(-3600)) / gmpy.fac(3400) mpf('2.37929475533825366213e-5') (I'm biased about gmpy, of course, since I originated and still participate in that project, but I'd never make strong claims about its floating point abilities... I've been using it mostly for integer stuff... still, it does make this computation possible!-). A: You could try using the Decimal object. Calculations will be slower but you won't have trouble with really small numbers. from decimal import Decimal I don't know how Decimal interacts with the scipy module, however. This numpy discussion might be relevant. A: Well the error is coming about because you are trying to multiply 3600**3400 which is a long with exp(-3600) which is a float. But regardless, the error you are receiving is disguising the true problem. It seems exp(-3600) is too small a number to represent as a float anyway. The Python math library is fickle with large numbers, at best. A: exp(-3600) is too small, factorial(3400) is too large: In [1]: from scipy import exp In [2]: exp(-3600) Out[2]: 0.0 In [3]: from scipy import factorial In [4]: factorial(3400) Out[4]: array(1.#INF) What about calculating it step by step as a workaround (and it makes sense to check the smallest and biggest intermediate results): from math import exp from itertools import izip output = 1 smallest = 1e100 biggest = 0 for i,j in izip(xrange(1, 1701), xrange(3400, 1699, -1)): output = output * 3600 * exp(-3600.0/3400) / i output = output * 3600 * exp(-3600.0/3400) / j smallest = min(smallest, output) biggest = max(biggest, output) print "output: ", output print "smallest: ", smallest print "biggest: ", biggest output is: output: 2.37929475534e-005 smallest: 2.37929475534e-005 biggest: 1.28724174494e+214
Operations for Long and Float in Python
I'm trying to compute this: from scipy import * 3600**3400 * (exp(-3600)) / factorial(3400) the error: unsupported long and float
[ "Try using logarithms instead of working with the numbers directly. Since none of your operations are addition or subtraction, you could do the whole thing in logarithm form and convert back at the end.\n", "Computing with numbers of such magnitude, you just can't use ordinary 64-bit-or-so floats, which is what Python's core runtime supports. Consider gmpy (do not get the sourceforge version, it's aeons out of date) -- with that, math, and some care...:\n>>> e = gmpy.mpf(math.exp(1))\n>>> gmpy.mpz(3600)**3400 * (e**(-3600)) / gmpy.fac(3400)\nmpf('2.37929475533825366213e-5')\n\n(I'm biased about gmpy, of course, since I originated and still participate in that project, but I'd never make strong claims about its floating point abilities... I've been using it mostly for integer stuff... still, it does make this computation possible!-).\n", "You could try using the Decimal object. Calculations will be slower but you won't have trouble with really small numbers.\nfrom decimal import Decimal\n\nI don't know how Decimal interacts with the scipy module, however.\nThis numpy discussion might be relevant.\n", "Well the error is coming about because you are trying to multiply\n3600**3400\n\nwhich is a long with \nexp(-3600)\n\nwhich is a float. \nBut regardless, the error you are receiving is disguising the true problem. It seems exp(-3600) is too big a number to fit in a float anyway. The python math library is fickle with large numbers, at best.\n", "exp(-3600) is too smale, factorial(3400) is too large: \nIn [1]: from scipy import exp\n\nIn [2]: exp(-3600)\nOut[2]: 0.0\nIn [3]: from scipy import factorial\n\nIn [4]: factorial(3400)\nOut[4]: array(1.#INF)\n\nWhat about calculate it step by step as a workaround(and it makes sense\n to check the smallest and biggest intermediate result):\nfrom math import exp\noutput = 1\nsmallest = 1e100\nbiggest = 0\nfor i,j in izip(xrange(1, 1701), xrange(3400, 1699, -1)):\n output = output * 3600 * exp(-3600/3400) / i\n output = output * 3600 * exp(-3600/3400) / j\n smallest = min(smallest, output)\n biggest = max(biggest, output)\nprint \"output: \", output\nprint \"smallest: \", smallest\nprint \"biggest: \", biggest\n\noutput is: \noutput: 2.37929475534e-005\nsmallest: 2.37929475534e-005\nbiggest: 1.28724174494e+214\n\n" ]
[ 3, 2, 1, 0, 0 ]
[]
[]
[ "floating_point", "long_integer", "python", "scipy" ]
stackoverflow_0001526142_floating_point_long_integer_python_scipy.txt
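A minimal sketch (mine, not from the thread) of the logarithm route the first answer suggests: math.lgamma (Python 2.7+/3.x) returns log(k!) directly, so the whole term can be assembled in log space and exponentiated once at the end; the function and argument names are my own.

import math

def poisson_term(lam, k):
    # log( lam**k * exp(-lam) / k! ), computed entirely in log space;
    # math.lgamma(k + 1) == log(k!) without ever forming the huge integer.
    log_p = k * math.log(lam) - lam - math.lgamma(k + 1)
    return math.exp(log_p)

print(poisson_term(3600, 3400))  # ~2.3793e-05, matching the gmpy answer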
Q: Why can't I save an object in Django? thechan = Score.objects.filter(content=44)[0:1] thechan[0].custom_score = 2 thechan[0].save() I do print statements, and it shows everything fine. However, it's not SAVING! I go into my database, and I run a simple SELECT statement..and it's not changed! select custom_score FROM music_score where content_id = 44; A: What's going on here is that Score.objects.filter() doesn't return a regular list, but a QuerySet. QuerySets behave like lists in some ways, but every time you slice one you get a new QuerySet instance, and every time you index into one, you get a new instance of your model class. That means your original code does something like: thechan = Score.objects.filter(content=44)[0:1] thechan[0].custom_score = 2 thechan = Score.objects.filter(content=44)[0:1] thechan[0].save() # saves an unmodified object back to the DB, no effective change If for whatever reason you needed to do this on a QuerySet rather than just using get(), you could write: thechan = Score.objects.filter(content=44)[0] thechan.custom_score = 2 thechan.save() instead. This distinction becomes a bit more important if you are, say, iterating over the elements of a QuerySet instead of dealing with a single record.
Why can't I save an object in Django?
thechan = Score.objects.filter(content=44)[0:1] thechan[0].custom_score = 2 thechan[0].save() I do print statements, and it shows everything fine. However, it's not SAVING! I go into my database, and I run a simple SELECT statement..and it's not changed! select custom_score FROM music_score where content_id = 44;
[ "What's going on here is that Score.objects.filter() doesn't return a regular list, but a QuerySet. QuerySets behave like lists in some ways, but every time you slice one you get a new QuerySet instance, and everytime you index into one, you get a new instance of your model class.\nThat means your original code does something like:\nthechan = Score.objects.filter(content=44)[0:1]\nthechan[0].custom_score = 2\n\nthechan = Score.objects.filter(content=44)[0:1]\nthechan[0].save() # saves an unmodified object back to the DB, no effective change\n\nIf for whatever reason you needed to do this on a QuerySet rather than just using get(), you could write:\nthechan = Score.objects.filter(content=44)[0]\nthechan.custom_score = 2\nthechan.save()\n\ninstead. This distinction becomes a bit more important if you are, say, iterating over the elements of a QuerySet instead of dealing with a single record.\n" ]
[ 13 ]
[ "Fixed.\nthechan = Score.objects.get(content=44)\nthechan.custom_score = 2\nthechan.save()\n\n" ]
[ -1 ]
[ "django", "frameworks", "python" ]
stackoverflow_0001528656_django_frameworks_python.txt
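A small illustration (mine, not from the thread, and assuming the question's Score model) of the accepted answer's point: if, as the answer explains, each index into an unevaluated sliced QuerySet builds a fresh model instance, then mutating one instance and saving another is a no-op.

qs = Score.objects.filter(content=44)[0:1]
a = qs[0]
b = qs[0]
assert a is not b  # two distinct instances of the same row, per the answer

# Working pattern: keep one instance, mutate it, save it.
thechan = Score.objects.filter(content=44)[0]
thechan.custom_score = 2
thechan.save()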
Q: reading lines 2 at a time Is there a better way to read lines two at a time from a file in python than: with open(fn) as f: for line in f: try: line2 = f.next() except StopIteration: line2 = '' print line, line2 # or something more interesting I'm in 2.5.4. Anything different in newer versions? EDIT: a deleted answer noted: in py3k you'd need to do next(f) instead of f.next(). Not to mention the print change A: import itertools with open(fn) as f: for line, line2 in itertools.izip_longest(f, f, fillvalue=''): print line, line2 Alas, izip_longest requires Python 2.6 or better; 2.5 only has izip, which would truncate the last line if f has an odd number of lines. It's quite easy to supply the equivalent functionality as a generator, of course. Here's a more general "N at a time" iterator-wrapper: def natatime(itr, fillvalue=None, n=2): return itertools.izip_longest(*(iter(itr),)*n, fillvalue=fillvalue) itertools is generally the best way to go, but, if you insisted on implementing it by yourself, then: def natatime_no_itertools(itr, fillvalue=None, n=2): x = iter(itr) for item in x: yield (item,) + tuple(next(x, fillvalue) for _ in xrange(n-1)) In 2.5, I think the best approach is actually not a generator, but another itertools-based solution: def natatime_25(itr, fillvalue=None, n=2): x = itertools.chain(iter(itr), (fillvalue,) * (n-1)) return itertools.izip(*(x,)*n) (since 2.5 doesn't have the built-in next, as well as missing izip_longest). A: You could possibly make it more clear with a generator: def read2(f): for line in f: try: line2 = f.next() except StopIteration: line2 = '' yield line, line2 with open(fn) as f: for line1, line2 in read2(f): print line1 print line2 A: for small to medium sized files, >>> data=open("file").readlines() >>> for num,line in enumerate(data[::2]): ... print ''.join(data[num:num+2])
reading lines 2 at a time
Is there a better way to read lines two at a time from a file in python than: with open(fn) as f: for line in f: try: line2 = f.next() except StopIteration: line2 = '' print line, line2 # or something more interesting I'm in 2.5.4. Anything different in newer versions? EDIT: a deleted answer noted: in py3k you'd need to do next(f) instead of f.next(). Not to mention the print change
[ "import itertools\n\nwith open(fn) as f:\n for line, line2 in itertools.izip_longest(f, f, fillvalue=''):\n print line, line2\n\nAlas, izip_longest requires Python 2.6 or better; 2.5 only has izip, which would truncate the last line if f has an odd number of lines. It's quite easy to supply the equivalent functionality as a generator, of course.\nHere's a more general \"N at a time\" iterator-wrapper:\ndef natatime(itr, fillvalue=None, n=2):\n return itertools.izip_longest(*(iter(itr),)*n, fillvalue=fillvalue)\n\nitertools is generally the best way to go, but, if you insisted on implementing it by yourself, then:\ndef natatime_no_itertools(itr, fillvalue=None, n=2):\n x = iter(itr)\n for item in x:\n yield (item,) + tuple(next(x, fillvalue) for _ in xrange(n-1))\n\nIn 2.5, I think the best approach is actually not a generator, but another itertools-based solution:\ndef natatime_25(itr, fillvalue=None, n=2):\n x = itertools.chain(iter(itr), (fillvalue,) * (n-1))\n return itertools.izip(*(x,)*n)\n\n(since 2.5 doesn't have the built-in next, as well as missing izip_longest).\n", "You could possibly make it more clear with a generator:\ndef read2(f):\n for line in f:\n try:\n line2 = f.next()\n except StopIteration:\n line2 = ''\n\n yield line, line2\n\nwith open(fn) as f:\n for line1, line2 in read2(f):\n print line1\n print line2\n\n", "for small to medium sized files, \n>>> data=open(\"file\").readlines()\n>>> for num,line in enumerate(data[::2]):\n... print ''.join(data[num:num+2])\n\n" ]
[ 18, 2, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001528711_python.txt
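Since the question's edit notes the py3k differences, here is a hedged Python 3 translation of the accepted answer's recipe (itertools.izip_longest became itertools.zip_longest and print became a function); the filename is hypothetical.

from itertools import zip_longest

def pairs(iterable, fillvalue=''):
    it = iter(iterable)  # one iterator, consumed twice per loop step
    return zip_longest(it, it, fillvalue=fillvalue)

fn = 'somefile.txt'  # hypothetical path
with open(fn) as f:
    for line, line2 in pairs(f):
        print(line, line2)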
Q: Can a WIN32 program authenticate into Django authentication system, using MYSQL? I have a web service with Django Framework. My friend's project is a WIN32 program and also a MS-sql server. The Win32 program currently has a login system that talks to a MS-sql for authentication. However, we would like to INTEGRATE this login system as one. Please answer the 2 things: I want to scrap the MS-SQL to use only the Django authentication system on the Linux server. Can the WIN32 client talk to Django using a Django API (login)? If not, what is the best way of combining the authentication? A: Either provide a view where your win32 client can post to the django server and get a response that means "good login" or "bad login". This will require you to modify the win32 client and create a very simple django view. Or provide your own Django Authentication backend that authenticates your django logins against the MS-sql server. This alternative will require no modification to your win32 client but probably quite a bit of effort on the authentication backend front. A bit of research might yield someone else's backend that you can re-use. This looks like a promising place to start - they claim that "Both Windows Authentication (Integrated Security) and SQL Server Authentication supported." A: If the only thing the WIN32 app uses the MS-SQL Server for is Authentication/Authorization then you could write a new Authentication/Authorization provider that uses a set of Web Services (that you would have to create) that expose the Django provider.
Can a WIN32 program authenticate into Django authentication system, using MYSQL?
I have a web service with Django Framework. My friend's project is a WIN32 program and also a MS-sql server. The Win32 program currently has a login system that talks to a MS-sql for authentication. However, we would like to INTEGRATE this login system as one. Please answer the 2 things: I want to scrap the MS-SQL to use only the Django authentication system on the Linux server. Can the WIN32 client talk to Django using a Django API (login)? If not, what is the best way of combining the authentication?
[ "Either provide a view where your win32 client can post to the django server and get a response that means \"good login\" or \"bad login\". This will require you to modify the win32 client and create a very simple django view.\nOr provide your own Django Authentication backend that authenticates your django logins against the MS-sql server. This alternative will require no modification to your win32 client but probably quite a bit of effort on the authentication backend front. A bit of research might yield someone else's backend that you can re-use. This looks like a promising place to start - they claim that \"Both Windows Authentication (Integrated Security) and SQL Server\n Authentication supported.\"\n", "If the only thing the WIN32 app uses the MS-SQL Server for is Authentication/Authorization then you could write a new Authentication/Authorization provider that uses a set of Web Services (that you would have to create) that expose the Django provider.\n" ]
[ 2, 0 ]
[]
[]
[ "authentication", "django", "frameworks", "python", "windows" ]
stackoverflow_0001529128_authentication_django_frameworks_python_windows.txt
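A minimal sketch (my own, not from the answers) of the first option: a Django view the Win32 client can POST credentials to, which answers with a plain-text verdict. It assumes a modern Django layout where csrf_exempt lives in django.views.decorators.csrf, and it should only be exposed over HTTPS, since the password travels in the request body.

from django.contrib.auth import authenticate
from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt  # the Win32 client carries no CSRF token
def check_login(request):
    user = authenticate(username=request.POST.get('username'),
                        password=request.POST.get('password'))
    if user is not None and user.is_active:
        return HttpResponse('good login')
    return HttpResponse('bad login')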
Q: why is urllib2 missing table fields which I can see in the Firefox source? The HTML that I am receiving from urllib2 is missing dozens of fields of data that I can see when I view the source of the URL in Firefox. Any advice would be much appreciated. Here is what it looks like: from Firefox view source: # ...<td class=td6>as</td></tr></thead>|ManyFields|<br></div><div id="c1">... from urllib2 return html: # ...<td class=td6>as</td></tr></thead>|</table>|<br></div><div id="c1">... A: It seems from a cursory check that the page you're getting has a lot of Javascript; perhaps that Javascript cooperates in building the information that you see at the end in Firefox (at least some of it is actively altering the page's contents). If you need to scrape JS-rich pages, your best bet is to automate an actual browser via Selenium. A: The extra content you're seeing is generated by JavaScript. It is not part of the raw HTML document, and hence won't be present with a plain HTTP fetcher such as urllib2.
why is urllib2 missing table fields which I can see in the Firefox source?
The HTML that I am receiving from urllib2 is missing dozens of fields of data that I can see when I view the source of the URL in Firefox. Any advice would be much appreciated. Here is what it looks like: from Firefox view source: # ...<td class=td6>as</td></tr></thead>|ManyFields|<br></div><div id="c1">... from urllib2 return html: # ...<td class=td6>as</td></tr></thead>|</table>|<br></div><div id="c1">...
[ "It seems from a cursory check that the page you're getting has a lot of Javascript; perhaps that Javascript cooperates in building the information that you see at the end in Firefox (at least some of it is actively altering the page's contents). If you need to scrape JS-rich pages, your best bet is to automate an actual browser via Selenium.\n", "The extra content you're seeing is generated by JavaScript. It is not part of the raw HTML document, and hence won't be present with a plain HTTP fetcher such as urllib2.\n" ]
[ 2, 0 ]
[]
[]
[ "field", "html", "python", "urllib2" ]
stackoverflow_0001529234_field_html_python_urllib2.txt
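A hedged sketch of the Selenium route the first answer recommends, written against the modern WebDriver API rather than the Selenium RC of the answer's era; the URL is hypothetical. Because a real browser executes the page's JavaScript, the generated rows show up in page_source.

from selenium import webdriver

driver = webdriver.Firefox()  # needs a Firefox/geckodriver install
driver.get('http://example.com/table-page')  # hypothetical URL
html = driver.page_source  # includes the JS-generated fields
driver.quit()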
Q: Help to solve my problems with python LIST? author_A = [['book_x',1,10],['book_y',2,20],['book_z',3,30]] author_B = [['book_s',5,10],['book_t',2,20],['book_z',3,30]] author_A AND author_B = ['book_z',3,30] author_A = [['book_x',1,10],['book_y',2,20]] author_B = [['book_s',5,10],['book_t',2,20]] --------------------------------------------- I want to present the data like this: author quantity Amount($) A&B 3 30 A 3 30 B 7 30 total 13 90 I do NOT want to present the data like this!!! In this case it has ADDED the duplicate ['book_z',3,30]: author quantity Amount($) A 6 60 B 10 60 total 16 120 That is my problem. Anybody, please help me to solve this problem. Thanks, everybody A: author_A = [['book_x',1,10],['book_y',2,20],['book_z',3,30]] author_B = [['book_s',5,10],['book_t',2,20],['book_z',3,30]] def present(A, B): Aset = set(tuple(x) for x in A) Bset = set(tuple(x) for x in B) both = Aset & Bset justA = Aset - both justB = Bset - both totals = [0, 0] print "%-12s %-12s %12s" % ('author', 'quantity', 'Amount($)') for subset, name in zip((both, justA, justB), ('A*B', 'A', 'B')): tq = sum(x[1] for x in subset) ta = sum(x[2] for x in subset) totals[0] += tq totals[1] += ta print ' %-11s %-11d %-11d' % (name, tq, ta) print ' %-11s %-11d %-11d' % ('total', totals[0], totals[1]) present(author_A, author_B) I've tried to reproduce your desired weird format with some numbers left-aligned and totally funky spacing, but I'm sure you'll need to tweak the formatting in the various print statements to get the exact (and totally weird) formatting effect of your examples. However, apart from the spacing and left- vs right- alignment of the output, this should otherwise be exactly what you request. A: You can find the intersections and exclusive ones like this... A_and_B = [a for a in author_A if a in author_B] only_A = [a for a in author_A if a not in author_B] only_B = [b for b in author_B if b not in author_A] then it is only a matter of printing them... print '%s %d %d' % tuple(A_and_B) print '%s %d %d' % tuple(only_A) print '%s %d %d' % tuple(only_B) Hope that helps A: books = { 'A':[['book_x',1,10],['book_y',2,20]], 'B':[['book_s',5,10],['book_t',2,20]], 'A & B':[['book_z',3,30]], } for key in books: quantity = [] amount = [] for item in books[key]: quantity.append(item[1]) amount.append(item[2]) print ("%s\t%s\t%s") % (key,sum(quantity),sum(amount))
Help to solve my problems with python LIST?
author_A = [['book_x',1,10],['book_y',2,20],['book_z',3,30]] author_B = [['book_s',5,10],['book_t',2,20],['book_z',3,30]] author_A AND author_B = ['book_z',3,30] author_A = [['book_x',1,10],['book_y',2,20]] author_B = [['book_s',5,10],['book_t',2,20]] --------------------------------------------- I want to present the data like this: author quantity Amount($) A&B 3 30 A 3 30 B 7 30 total 13 90 I do NOT want to present the data like this!!! In this case it has ADDED the duplicate ['book_z',3,30]: author quantity Amount($) A 6 60 B 10 60 total 16 120 That is my problem. Anybody, please help me to solve this problem. Thanks, everybody
[ "author_A = [['book_x',1,10],['book_y',2,20],['book_z',3,30]]\nauthor_B = [['book_s',5,10],['book_t',2,20],['book_z',3,30]]\n\ndef present(A, B):\n Aset = set(tuple(x) for x in A)\n Bset = set(tuple(x) for x in B)\n both = Aset & Bset\n justA = Aset - both\n justB = Bset - both\n totals = [0, 0]\n print \"%-12s %-12s %12s\" % ('author', 'quantity', 'Amount($)')\n for subset, name in zip((both, justA, justB), ('A*B', 'A', 'B')):\n tq = sum(x[1] for x in subset)\n ta = sum(x[2] for x in subset)\n totals[0] += tq\n totals[1] += ta\n print ' %-11s %-11d %-11d' % (name, tq, ta)\n print ' %-11s %-11d %-11d' % ('total', totals[0], totals[1])\n\npresent(author_A, author_B)\n\nI've tried to reproduce your desired weird format with some numbers left-aligned and totally funky spacing, but I'm sure you'll need to tweak the formatting in the various print statements to get the exact (and totally weird) formatting effect of your examples. However, apart from the spacing and left- vs right- alignment of the output, this should otherwise be exactly what you request.\n", "You can find the intersections and exclusive ones like this...\nA_and_B = [a for a in author_A if a in author_B]\nonly_A = [a for a in author_A if a not in author_B]\nonly_B = [b for b in author_B if b not in author_A]\n\nthen it is only a matter of printing them...\nprint '%s %d %d' % tuple(A_and_B)\nprint '%s %d %d' % tuple(only_A)\nprint '%s %d %d' % tuple(only_B)\n\nHope that helps\n", "books = {\n 'A':[['book_x',1,10],['book_y',2,20]],\n 'B':[['book_s',5,10],['book_t',2,20]],\n 'A & B':[['book_z',3,30]],\n}\nfor key in books:\n quantity = []\n amount = []\n for item in books[key]:\n quantity.append(item[1])\n amount.append(item[2])\n print (\"%s\\t%s\\t%s\") % (key,sum(quantity),sum(amount))\n\n" ]
[ 6, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001529307_python.txt
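One step in the first answer is worth spelling out: the inner lists are converted to tuples before building sets because lists are unhashable. A tiny demonstration of my own:

A = [['book_x', 1, 10], ['book_z', 3, 30]]
B = [['book_s', 5, 10], ['book_z', 3, 30]]
# set(A) would raise TypeError: unhashable type: 'list'
common = set(tuple(x) for x in A) & set(tuple(x) for x in B)
print(common)  # {('book_z', 3, 30)} on Python 3, set([('book_z', 3, 30)]) on 2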
Q: Django Vote Up/Down method I am making a small app that lets users vote items either up or down. I'm using Django (and new to it!). I am just wondering, what is the best way to present the upvote link to the user. As a link, button or something else? I have already done something like this in php with a different framework but I'm not sure if I can do it the same way. Should I have a method for up/down vote and then display a link to the user to click. When they click it, it performs the method and refreshes the page? A: Here's the gist of my solution. I use images with jQuery/AJAX to handle clicks. Strongly influenced by this site. There's some stuff that could use some work (error handling in the client, for example -- and much of it could probably be refactored) but hopefully the code is useful to you. The HTML: <div class="vote-buttons"> {% ifequal thisUserUpVote 0 %} <img class="vote-up" src = "images/vote-up-off.png" title="Vote this thread UP. (click again to undo)" /> {% else %} <img class="vote-up selected" src = "images/vote-up-on.png" title="Vote this thread UP. (click again to undo)" /> {% endifequal %} {% ifequal thisUserDownVote 0 %} <img class="vote-down" src = "images/vote-down-off.png" title="Vote this thread DOWN if it is innapropriate or incorrect. (click again to undo)" /> {% else %} <img class="vote-down selected" src = "images/vote-down-on.png" title="Vote this thread DOWN if it is innapropriate or incorrect. (click again to undo)" /> {% endifequal %} </div> <!-- .votebuttons --> The jQuery: $(document).ready(function() { $('div.vote-buttons img.vote-up').click(function() { var id = {{ thread.id }}; var vote_type = 'up'; if ($(this).hasClass('selected')) { var vote_action = 'recall-vote' $.post('/ajax/thread/vote', {id:id, type:vote_type, action:vote_action}, function(response) { if (isInt(response)) { $('img.vote-up').removeAttr('src') .attr('src', 'images/vote-up-off.png') .removeClass('selected'); $('div.vote-tally span.num').html(response); } }); } else { var vote_action = 'vote' $.post('/ajax/thread/vote', {id:id, type:vote_type, action:vote_action}, function(response) { if (isInt(response)) { $('img.vote-up').removeAttr('src') .attr('src', 'images/vote-up-on.png') .addClass('selected'); $('div.vote-tally span.num').html(response); } }); } }); The Django view that handles the AJAX request: def vote(request): thread_id = int(request.POST.get('id')) vote_type = request.POST.get('type') vote_action = request.POST.get('action') thread = get_object_or_404(Thread, pk=thread_id) thisUserUpVote = thread.userUpVotes.filter(id = request.user.id).count() thisUserDownVote = thread.userDownVotes.filter(id = request.user.id).count() if (vote_action == 'vote'): if (thisUserUpVote == 0) and (thisUserDownVote == 0): if (vote_type == 'up'): thread.userUpVotes.add(request.user) elif (vote_type == 'down'): thread.userDownVotes.add(request.user) else: return HttpResponse('error-unknown vote type') else: return HttpResponse('error - already voted', thisUserUpVote, thisUserDownVote) elif (vote_action == 'recall-vote'): if (vote_type == 'up') and (thisUserUpVote == 1): thread.userUpVotes.remove(request.user) elif (vote_type == 'down') and (thisUserDownVote ==1): thread.userDownVotes.remove(request.user) else: return HttpResponse('error - unknown vote type or no vote to recall') else: return HttpResponse('error - bad action') num_votes = thread.userUpVotes.count() - thread.userDownVotes.count() return HttpResponse(num_votes) And the relevant parts of the Thread model: class 
Thread(models.Model): # ... userUpVotes = models.ManyToManyField(User, blank=True, related_name='threadUpVotes') userDownVotes = models.ManyToManyField(User, blank=True, related_name='threadDownVotes') A: Just plug and play: RedditStyleVoting Implementing reddit style voting for any Model with django-voting http://code.google.com/p/django-voting/wiki/RedditStyleVoting A: Whatever you do, make sure that it's submitted by POST and not GET; GET requests should never alter database information. A: As a link, button or something else? Something else, what about an image? When they click it, it performs the method and refreshes the page? Perhaps you could better use ajax to invoke a method to save the vote, and not refresh anything at all. This is what comes to my mind.
Django Vote Up/Down method
I am making a small app that lets users vote items either up or down. I'm using Django (and new to it!). I am just wondering, what is the best way to present the upvote link to the user. As a link, button or something else? I have already done something like this in php with a different framework but I'm not sure if I can do it the same way. Should I have a method for up/down vote and then display a link to the user to click. When they click it, it performs the method and refreshes the page?
[ "Here's the gist of my solution. I use images with jQuery/AJAX to handle clicks. Strongly influenced by this site. There's some stuff that could use some work (error handling in the client, for example -- and much of it could probably be refactored) but hopefully the code is useful to you.\nThe HTML:\n <div class=\"vote-buttons\">\n {% ifequal thisUserUpVote 0 %}\n <img class=\"vote-up\" src = \"images/vote-up-off.png\" title=\"Vote this thread UP. (click again to undo)\" />\n {% else %}\n <img class=\"vote-up selected\" src = \"images/vote-up-on.png\" title=\"Vote this thread UP. (click again to undo)\" />\n {% endifequal %}\n {% ifequal thisUserDownVote 0 %}\n <img class=\"vote-down\" src = \"images/vote-down-off.png\" title=\"Vote this thread DOWN if it is innapropriate or incorrect. (click again to undo)\" />\n {% else %}\n <img class=\"vote-down selected\" src = \"images/vote-down-on.png\" title=\"Vote this thread DOWN if it is innapropriate or incorrect. (click again to undo)\" />\n {% endifequal %}\n </div> <!-- .votebuttons -->\n\nThe jQuery:\n$(document).ready(function() {\n\n $('div.vote-buttons img.vote-up').click(function() {\n\n var id = {{ thread.id }};\n var vote_type = 'up';\n\n if ($(this).hasClass('selected')) {\n var vote_action = 'recall-vote'\n $.post('/ajax/thread/vote', {id:id, type:vote_type, action:vote_action}, function(response) {\n if (isInt(response)) {\n $('img.vote-up').removeAttr('src')\n .attr('src', 'images/vote-up-off.png')\n .removeClass('selected');\n $('div.vote-tally span.num').html(response);\n }\n });\n } else {\n\n var vote_action = 'vote'\n $.post('/ajax/thread/vote', {id:id, type:vote_type, action:vote_action}, function(response) {\n if (isInt(response)) {\n $('img.vote-up').removeAttr('src')\n .attr('src', 'images/vote-up-on.png')\n .addClass('selected');\n $('div.vote-tally span.num').html(response);\n }\n });\n }\n });\n\nThe Django view that handles the AJAX request:\ndef vote(request):\n thread_id = int(request.POST.get('id'))\n vote_type = request.POST.get('type')\n vote_action = request.POST.get('action')\n\n thread = get_object_or_404(Thread, pk=thread_id)\n\n thisUserUpVote = thread.userUpVotes.filter(id = request.user.id).count()\n thisUserDownVote = thread.userDownVotes.filter(id = request.user.id).count()\n\n if (vote_action == 'vote'):\n if (thisUserUpVote == 0) and (thisUserDownVote == 0):\n if (vote_type == 'up'):\n thread.userUpVotes.add(request.user)\n elif (vote_type == 'down'):\n thread.userDownVotes.add(request.user)\n else:\n return HttpResponse('error-unknown vote type')\n else:\n return HttpResponse('error - already voted', thisUserUpVote, thisUserDownVote)\n elif (vote_action == 'recall-vote'):\n if (vote_type == 'up') and (thisUserUpVote == 1):\n thread.userUpVotes.remove(request.user)\n elif (vote_type == 'down') and (thisUserDownVote ==1):\n thread.userDownVotes.remove(request.user)\n else:\n return HttpResponse('error - unknown vote type or no vote to recall')\n else:\n return HttpResponse('error - bad action')\n\n\n num_votes = thread.userUpVotes.count() - thread.userDownVotes.count()\n\n return HttpResponse(num_votes)\n\nAnd the relevant parts of the Thread model:\nclass Thread(models.Model):\n # ...\n userUpVotes = models.ManyToManyField(User, blank=True, related_name='threadUpVotes')\n userDownVotes = models.ManyToManyField(User, blank=True, related_name='threadDownVotes')\n\n", "Just plug and play: \n\nRedditStyleVoting\n Implementing reddit style voting for any Model with 
django-voting\nhttp://code.google.com/p/django-voting/wiki/RedditStyleVoting\n\n", "Whatever you do, make sure that it's submitted by POST and not GET; GET requests should never alter database information.\n", "\nAs a link, button or something else?\n\nSomething else, what about an image?\n\nWhen they click it, it performs the method and refreshes the page?\n\nPerhaps you could better use ajax to invoke a method to save the vote, and not refresh anything at all.\nThis is what comes to my mind.\n\n" ]
[ 39, 14, 11, 8 ]
[]
[]
[ "django", "python", "voting" ]
stackoverflow_0001528583_django_python_voting.txt
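To enforce the third answer's POST-only rule declaratively, Django ships a require_POST decorator; a small sketch of mine, layered onto the vote view from the first answer:

from django.contrib.auth.decorators import login_required
from django.views.decorators.http import require_POST

@login_required  # anonymous users have no vote to record
@require_POST    # GET requests get a 405 instead of mutating data
def vote(request):
    ...  # body as in the first answer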
Q: how to create new file using python How can I create a new file in the /var/log directory using Python on OS X Leopard? I tried to do it using the os.open function, but I get "permission denied". Thanks in advance. A: Only root can write in /var/log/ on Mac OS X...: $ ls -ld /var/log drwxr-xr-x 60 root wheel 2040 Oct 6 17:00 /var/log Maybe consider using the syslog module in the standard library... A: It probably failed because /var/log has user set to root and group set to wheel. Try running your Python code as root and it will probably work. A: You can create the log file as root and then change the owner to the user your script is run as # touch /var/log/mylogfile # chown myuser /var/log/mylogfile where mylogfile is your logfile and myuser is the user the script will be run as. Also look into logrotate.
how to create new file using python
How can I create a new file in the /var/log directory using Python on OS X Leopard? I tried to do it using the os.open function, but I get "permission denied". Thanks in advance.
[ "Only root can write in /var/log/ on Mac OS X...:\n$ ls -ld /var/log\ndrwxr-xr-x 60 root wheel 2040 Oct 6 17:00 /var/log\n\nMaybe consider using the syslog module in the standard library...\n", "It probably failed because /var/log has user set to root and group set to wheel. Try running your python code as root and it will probably work.\n", "You can create the log file as root and then change the owner to the user your script is run as\n# touch /var/log/mylogfile\n# chown myuser /var/log/mylogfile\n\nwhere mylogfile is your logfile and myuser is the user the script will be run as\nalso look into logrotate\n" ]
[ 6, 1, 1 ]
[]
[]
[ "macos", "osx_leopard", "python" ]
stackoverflow_0001529584_macos_osx_leopard_python.txt
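A minimal sketch of the syslog route the first answer suggests; the ident string and message are arbitrary. Entries land wherever the system's syslog configuration routes them, and no root privileges are needed.

import syslog

syslog.openlog('myscript')
syslog.syslog(syslog.LOG_INFO, 'started without /var/log write access')
syslog.closelog()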
Q: Cannot shuffle list in Python This is my list: biglist = [ {'title':'U2','link':'u2.com'}, {'title':'beatles','link':'beatles.com'} ] print random.shuffle(biglist) That doesn't work! It returns None. A: random.shuffle shuffles the list in place; it does not return a new list. So check biglist, not the result of random.shuffle. Documentation for the random module: http://docs.python.org/library/random.html
Cannot shuffle list in Python
This is my list: biglist = [ {'title':'U2','link':'u2.com'}, {'title':'beatles','link':'beatles.com'} ] print random.shuffle(biglist) That doesn't work! It returns None.
[ "random.shuffle shuffles the list, it does not return a new list. So check biglist, not the result of random.shuffle.\nDocumentation for the random module: http://docs.python.org/library/random.html\n" ]
[ 16 ]
[]
[]
[ "list", "python", "random" ]
stackoverflow_0001530161_list_python_random.txt
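Corrected usage following the answer: shuffle mutates the list in place and returns None, so print the list afterwards; random.sample(biglist, len(biglist)) is the stdlib way to get a new shuffled copy instead.

import random

biglist = [{'title': 'U2', 'link': 'u2.com'},
           {'title': 'beatles', 'link': 'beatles.com'}]
random.shuffle(biglist)  # returns None; biglist is now shuffled
print(biglist)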
Q: In Django, the HTML code is shown instead of the actual text & g t ; Welcome How do I show the actual symbol instead? Is it a template filter? Thanks. A: Bit hard to know without more details. If it's from data that you're passing in from the view, you might want to use mark_safe. from django.utils.safestring import mark_safe def your_view(request): ... foo = '&gt;' mark_safe(foo) ... Otherwise, you want the safe filter: {{ myvar|safe }} Obviously, be careful with this, make sure the variable actually is safe, otherwise you'll open yourself up to cross-site scripting attacks and the like. There's a reason Django escapes this stuff.
In Django, the HTML code is shown instead of the actual text
& g t ; Welcome How do I show the actual symbol instead? Is it a template filter? Thanks.
[ "Bit hard to know without more details. If it's from data that you're passing in from the view, you might want to use mark_safe.\nfrom django.utils.safestring import mark_safe\n\ndef your_view(request):\n ...\n foo = '&gt;'\n mark_safe(foo)\n ...\n\nOtherwise, you want the safe filter:\n{{ myvar|safe }}\n\nObviously, be careful with this, make sure the variable actually is safe, otherwise you'll open yourself up to cross-site scripting attacks and the like. There's a reason Django escapes this stuff.\n" ]
[ 5 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001530178_django_python.txt
Q: Close Python when Parent is closed I have a Python program (PP) that loads another Program(AP) via COM, gets its window handle and sets it to be the PP parent. This works pretty well except that I can't control that AP still has their [X] button available in the top left corner. Since this is a pretty obvious place for the user to close when they are done with the program, I tried this and it left the PP in the Task Manager running, but not visible with no possible way to kill it other than through Task Manager. Any ideas on how to handle this? I expect it to be rather Common that the user closes in this manner. Thanks! A: How's PP's control flow? If it's event-driven it could get appropriate events upon closure of that parent window or termination of that AP process; otherwise it could "poll" to check if the window or process are still around. A: As you said you get AP's handle and pass it to PP, so PP has that handle around, so when AP is closed PP can check if that window handle exists using windows API IsWindow or IsWindowVisible depending on you needs import win32gui win32gui.IsWindow(handle) win32gui.IsWindowVisible(handle)
Close Python when Parent is closed
I have a Python program (PP) that loads another Program(AP) via COM, gets its window handle and sets it to be the PP parent. This works pretty well except that I can't control that AP still has their [X] button available in the top left corner. Since this is a pretty obvious place for the user to close when they are done with the program, I tried this and it left the PP in the Task Manager running, but not visible with no possible way to kill it other than through Task Manager. Any ideas on how to handle this? I expect it to be rather Common that the user closes in this manner. Thanks!
[ "How's PP's control flow? If it's event-driven it could get appropriate events upon closure of that parent window or termination of that AP process; otherwise it could \"poll\" to check if the window or process are still around.\n", "As you said you get AP's handle and pass it to PP, so PP has that handle around, so when AP is closed PP can check if that window handle exists using windows API IsWindow or IsWindowVisible depending on you needs\nimport win32gui\nwin32gui.IsWindow(handle)\nwin32gui.IsWindowVisible(handle)\n\n" ]
[ 1, 0 ]
[]
[]
[ "com", "parent", "python", "wxpython" ]
stackoverflow_0001521670_com_parent_python_wxpython.txt
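A hedged polling sketch combining the two answers: PP periodically asks Windows whether AP's window handle still exists and shuts down once it vanishes. The function name and interval are mine.

import time
import win32gui

def watch_parent(handle, poll_seconds=1.0):
    # Block until AP's window handle disappears, then return so PP can exit.
    while win32gui.IsWindow(handle):
        time.sleep(poll_seconds)

# watch_parent(ap_hwnd)  # afterwards: clean up and sys.exit()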
Q: Sharing widgets between PyQT and Boost.Python I was wondering if it was possible to share widgets between PyQt and Boost.Python. I will be embedding a Python interpreter into an application of mine that uses Qt. I would like users of my application to be able to embed their own UI widgets into UI widgets programmed in C++ and exposed via Boost.Python. Is this possible and how would one go about doing this? A: I've tried to write some proxying for this, but I haven't succeeded completely. Here's a start that tries to solve this, but the dir() won't work. Calling functions directly works somewhat. The idea was to create an additional python object wrapped in SIP and forward any calls/attributes to that object if the original boost.python object does not have any matching attribute. I'm not enough of a Python guru to make this work properly, though. :( (I'm turning this into a wiki, so ppl can edit and update here, as this code is just half-baked boilerplate.) c++: #include "stdafx.h" #include <QtCore/QTimer> class MyWidget : public QTimer { public: MyWidget() {} void foo() { std::cout << "yar\n"; } unsigned long myself() { return reinterpret_cast<unsigned long>(this); } }; #ifdef _DEBUG BOOST_PYTHON_MODULE(PyQtBoostPythonD) #else BOOST_PYTHON_MODULE(PyQtBoostPython) #endif { using namespace boost::python; class_<MyWidget, bases<>, MyWidget*, boost::noncopyable>("MyWidget"). def("foo", &MyWidget::foo). def("myself", &MyWidget::myself); } Python: from PyQt4.Qt import * import sys import sip from PyQtBoostPythonD import * # the module compiled from cpp file above a = QApplication(sys.argv) w = QWidget() f = MyWidget() def _q_getattr(self, attr): if type(self) == type(type(MyWidget)): raise AttributeError else: print "get %s" % attr value = getattr(sip.wrapinstance(self.myself(), QObject), attr) print "get2 %s returned %s" % (attr, value) return value MyWidget.__getattr__ = _q_getattr def _q_dir(self): r = self.__dict__ r.update(self.__class__.__dict__) wrap = sip.wrapinstance(self.myself(), QObject) r.update(wrap.__dict__) r.update(wrap.__class__.__dict__) return r MyWidget.__dir__ = _q_dir f.start() f.foo() print dir(f)
Sharing widgets between PyQT and Boost.Python
I was wondering if it was possible to share widgets between PyQt and Boost.Python. I will be embedding a Python interpreter into an application of mine that uses Qt. I would like users of my application to be able to embed their own UI widgets into UI widgets programmed in C++ and exposed via Boost.Python. Is this possible and how would one go about doing this?
[ "I've tried to write some proxying for this, but I haven't succeeded completely. Here's a start that tries to solve this, but the dir() won't work. Calling functions directly works somewhat.\nThe idea was to create an additional python object wrapped in SIP and forward any calls/attributes to that object if the original boost.python object does not have any matching attribute.\nI'm not enough of a Python guru to make this work properly, though. :(\n(I'm turning this into a wiki, so ppl can edit and update here, as this code is just half-baked boilerplate.)\nc++:\n#include \"stdafx.h\" \n#include <QtCore/QTimer>\n\nclass MyWidget : public QTimer\n{\npublic:\n MyWidget() {}\n void foo() { std::cout << \"yar\\n\"; }\n unsigned long myself() { return reinterpret_cast<unsigned long>(this); }\n};\n\n#ifdef _DEBUG\nBOOST_PYTHON_MODULE(PyQtBoostPythonD)\n#else\nBOOST_PYTHON_MODULE(PyQtBoostPython)\n#endif\n{\n using namespace boost::python;\n\n class_<MyWidget, bases<>, MyWidget*, boost::noncopyable>(\"MyWidget\").\n def(\"foo\", &MyWidget::foo).\n def(\"myself\", &MyWidget::myself);\n} \n\nPython:\nfrom PyQt4.Qt import *\nimport sys\n\nimport sip\nfrom PyQtBoostPythonD import * # the module compiled from cpp file above\n\na = QApplication(sys.argv)\nw = QWidget()\nf = MyWidget()\n\ndef _q_getattr(self, attr):\n if type(self) == type(type(MyWidget)):\n raise AttributeError\n else:\n print \"get %s\" % attr\n value = getattr(sip.wrapinstance(self.myself(), QObject), attr)\n print \"get2 %s returned %s\" % (attr, value)\n return value\n\nMyWidget.__getattr__ = _q_getattr\n\ndef _q_dir(self):\n r = self.__dict__\n r.update(self.__class__.__dict__)\n wrap = sip.wrapinstance(self.myself(), QObject)\n r.update(wrap.__dict__)\n r.update(wrap.__class__.__dict__)\n return r\n\nMyWidget.__dir__ = _q_dir\n\nf.start()\nf.foo()\nprint dir(f)\n\n" ]
[ 2 ]
[]
[]
[ "boost_python", "pyqt", "python", "python_sip", "qt" ]
stackoverflow_0001436514_boost_python_pyqt_python_python_sip_qt.txt
Q: How do I change my float into a two decimal number with a comma as a decimal point separator in python? I have a float: 1.2333333 How do I change it into a two decimal number with a comma as a decimal point separator, eg 1,23? A: To get two decimals, use '%.2f' % 1.2333333 To get a comma, use replace(): ('%.2f' % 1.2333333).replace('.', ',') A second option would be to change the locale to some place which uses a comma and then use locale.format(): locale.setlocale(locale.LC_ALL, 'FR') locale.format('%.2f', 1.2333333) A: The locale module can help you with reading and writing numbers in the locale's format. >>> import locale >>> locale.setlocale(locale.LC_ALL, "") 'sv_SE.UTF-8' >>> locale.format("%f", 2.2) '2,200000' >>> locale.format("%g", 2.2) '2,2' >>> locale.atof("3,1415926") 3.1415926000000001 A: If you don't want to mess with the locale, you can of course do the formatting yourself. This might serve as a starting point: def formatFloat(value, decimals = 2, sep = ","): return "%s%s%0*u" % (int(value), sep, decimals, (10 ** decimals) * (value - int(value))) Note that this will always truncate the fraction part (i.e. 1.04999 will print as 1,04).
How do I change my float into a two decimal number with a comma as a decimal point separator in python?
I have a float: 1.2333333 How do I change it into a two decimal number with a comma as a decimal point separator, eg 1,23?
[ "To get two decimals, use\n'%.2f' % 1.2333333\n\nTo get a comma, use replace():\n('%.2f' % 1.2333333).replace('.', ',')\n\nA second option would be to change the locale to some place which uses a comma and then use locale.format():\nlocale.setlocale(locale.LC_ALL, 'FR')\nlocale.format('%.2f', 1.2333333)\n\n", "The locale module can help you with reading and writing numbers in the locale's format.\n>>> import locale\n>>> locale.setlocale(locale.LC_ALL, \"\")\n'sv_SE.UTF-8'\n>>> locale.format(\"%f\", 2.2)\n'2,200000'\n>>> locale.format(\"%g\", 2.2)\n'2,2'\n>>> locale.atof(\"3,1415926\")\n3.1415926000000001\n\n", "If you don't want to mess with the locale, you can of course do the formatting yourself. This might serve as a starting point:\ndef formatFloat(value, decimals = 2, sep = \",\"):\n return \"%s%s%0*u\" % (int(value), sep, decimals, (10 ** decimals) * (value - int(value)))\n\nNote that this will always truncate the fraction part (i.e. 1.04999 will print as 1,04).\n" ]
[ 16, 8, 2 ]
[]
[]
[ "decimal", "floating_point", "python" ]
stackoverflow_0001530430_decimal_floating_point_python.txt
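For completeness (not in the answers): on Python 2.7+ and 3, the same result falls out of str.format, still swapping the decimal point by hand.

value = 1.2333333
print('{:.2f}'.format(value).replace('.', ','))  # 1,23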
Q: Ruby methods equivalent of "if a in list" in python? In Python I can use this to check if an element is in list a: >>> a = range(10) >>> 5 in a True >>> 16 in a False How can this be done in Ruby?
Ruby methods equivalent of "if a in list" in python?
In Python I can use this to check if an element is in list a: >>> a = range(10) >>> 5 in a True >>> 16 in a False How can this be done in Ruby?
[ "Use the include?() method:\n(1..10).include?(5) #=>true\n(1..10).include?(16) #=>false\n\nEDIT:\n(1..10) is Range in Ruby , in the case you want an Array(list) :\n(1..10).to_a #=> [1,2,3,4,5,6,7,8,9,10]\n\n", "Range has the === method, which checks whether the argument is part of the range.\nYou use it like this:\n(1..10) === 5 #=> true\n(1..10) === 15 #=> false\n\nor as you wrote it:\na= (1..10)\na === 5 #=> true\na === 16 #=> false\n\nYou must be sure the values of the range and the value you are testing are of compatible type, otherwise an Exception will be thrown.\n(2.718..3.141) === 3 #=> true\n(23..42) === \"foo\" # raises exception\n\n\nThis is done in O(1), as Range#===(value) only compares value with Range#first and Range#last.\nIf you first call Range#to_a and then Array#include?, it runs in O(n), as Range#to_a, needs to fill an array with n elements, and Array#include? needs to search through the n elements again.\n\nIf you want to see the difference, open irb and type:\n(1..10**9) === 5 #=> true\n(1..10**9).to_a.include?(5) # wait some time until your computer is out of ram and freezess\n\n" ]
[ 27, 10 ]
[]
[]
[ "python", "ruby", "syntax" ]
stackoverflow_0001529986_python_ruby_syntax.txt
Q: flup/fastcgi cpu usage under no-load conditions I'm running Django as threaded fastcgi via flup, served by lighttpd, communicating via sockets. What is the expected CPU usage for each fastcgi thread under no load? On startup, each thread runs at 3-4% cpu usage for a while, and then backs off to around .5% over the course of a couple of hours. It doesn't sink below this level. Is this much CPU usage normal? Do I have some bug in my code that is causing the idle loop to require more processing than it should? I expected the process to use no measurable CPU when it was completely idle. I'm not doing anything ridiculously complicated with Django, definitely nothing that should require extended processing. I realize that this isn't a lot of load, but if it's a bug I introduced, I would like to fix it. A: I've looked at this on django running as fastcgi on both Slicehost (django 1.1, python 2.6) and Dreamhost (django 1.0, python 2.5), and I can say this: Running the top command shows the processes use a large amount of CPU to start up for ~2-3 seconds, then drop down to 0 almost immediately. Running the ps aux command after starting the django app shows something similar to what you describe, however this is actually misleading. From the Ubuntu man pages for ps: CPU usage is currently expressed as the percentage of time spent running during the entire lifetime of a process. This is not ideal, and it does not conform to the standards that ps otherwise conforms to. CPU usage is unlikely to add up to exactly 100%. Basically, the %CPU column shown by ps is actually an average over the time the process has been running. The decay you see is due to the high initial spike followed by inactivity being averaged over time. A: Your fast-cgi threads must not consume any (noticeable) CPU if there are no requests to process. You should investigate the load you are describing. I use the same architecture and my threads are completely idle.
flup/fastcgi cpu usage under no-load conditions
I'm running Django as threaded fastcgi via flup, served by lighttpd, communicating via sockets. What is the expected CPU usage for each fastcgi thread under no load? On startup, each thread runs at 3-4% cpu usage for a while, and then backs off to around .5% over the course of a couple of hours. It doesn't sink below this level. Is this much CPU usage normal? Do I have some bug in my code that is causing the idle loop to require more processing than it should? I expected the process to use no measurable CPU when it was completely idle. I'm not doing anything ridiculously complicated with Django, definitely nothing that should require extended processing. I realize that this isn't a lot of load, but if it's a bug I introduced, I would like to fix it.
[ "I've looked at this on django running as fastcgi on both Slicehost (django 1.1, python 2.6) and Dreamhost (django 1.0, python 2.5), and I can say this:\nRunning the top command shows the processes use a large amount of CPU to start up for ~2-3 seconds, then drop down to 0 almost immediately.\nRunning the ps aux command after starting the django app shows something similar to what you describe, however this is actually misleading. From the Ubuntu man pages for ps:\n\nCPU usage is currently expressed as\n the percentage of time spent running\n during the entire lifetime of a\n process. This is not ideal, and it\n does not conform to the standards that\n ps otherwise conforms to. CPU usage is\n unlikely to add up to exactly 100%.\n\nBasically, the %CPU column shown by ps is actually an average over the time the process has been running. The decay you see is due to the high initial spike followed by inactivity being averaged over time.\n", "Your fast-cgi threads must not consume any (noticeable) CPU if there are no requests to process.\nYou should investigate the load you are describing. I use the same architecture and my threads are completely idle.\n" ]
[ 2, 0 ]
[]
[]
[ "django", "fastcgi", "flup", "lighttpd", "python" ]
stackoverflow_0001522844_django_fastcgi_flup_lighttpd_python.txt
Q: Using callback function in pyevent I want to detect pressing the "snapshot" button on the top of a webcam in linux. The button has this entry in /dev: /dev/input/by-id/usb-PixArt_Imaging_Inc._USB2.0_UVC_VGA-event-if00 I am using the "rel" wrapper, at the moment, because it handles exceptions better. Before the following code executes, self.s.cam_btn is assigned the /dev entry for the button. rel.override() rel.init() rel.read(self.s.cam_btn, self.snap) rel.dispatch() self.snap() is the callback. It captures a screen shot from mplayer and feeds the image to an OCR program. Everything mostly works until the callback returns. Here is the problem: If self.snap() returns nothing, the program stops and will not service any more button events. If self.snap() returns 1, the program continues servicing the same button event in an infinite loop, rather than waiting for a new event. Documentation for pyevent is a little sparse so any help gratefully received. Clinton A: Never used pyevent, but would try rescheduling the event at the end of the handler: def snap(self): # ... code ... rel.read(self.s.cam_btn, self.snap) return False
Using callback function in pyevent
I want to detect pressing the "snapshot" button on the top of a webcam in linux. The button has this entry in /dev: /dev/input/by-id/usb-PixArt_Imaging_Inc._USB2.0_UVC_VGA-event-if00 I am using the "rel" wrapper, at the moment, because it handles exceptions better. Before the following code executes, self.s.cam_btn is assigned the /dev entry for the button. rel.override() rel.init() rel.read(self.s.cam_btn, self.snap) rel.dispatch() self.snap() is the callback. It captures a screen shot from mplayer and feeds the image to an OCR program. Everything mostly works until the callback returns. Here is the problem: If self.snap() returns nothing, the program stops and will not service any more button events. If self.snap() returns 1, the program continues servicing the same button event in an infinite loop, rather than waiting for a new event. Documentation for pyevent is a little sparse so any help gratefully received. Clinton
[ "Never used pyevent, but would try rescheduling the event at the end of the handler:\ndef snap(self):\n # ... code ...\n rel.read(self.s.cam_btn, self.snap)\n return False\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0001528915_python.txt
Q: Coming from a Visual Studio background, what do you recommend I use to start my VERY FIRST Python project? I'm locked in using C# and I don't like it one bit. I have to start branching out to better myself as a professional and as a person, so I've decided to start making things in my own time using Python. The problem is, I've basically programmed only in C#. What IDE should I use to make programs using Python? My goal is to make a sort of encyclopedic program for a game I'm playing right now, displaying hero information, names, stats, picture, etc. All of this information I'm going to parse from an XML file. My plan is for this application to be able to run under Windows, Linux and Mac (I'm under the impression that any code written in Python works 100% cross-platform, right?) Thanks a lot for your tremendous help brothers of SO. :P Edit: I guess I should clarify that I'm looking for an IDE that supports drag and drop GUI design. I'm used to using VS and I'm not really sure how you can do it any other way. A: How about IronPython As of VS 2010 it will become a first class .Net language Or currently in a VS2008 shell IronPythonStudio Not that I have used any of these In hindsight this may not make for a very good cross platform solution, but it will allow you to leverage your VS experience A: You don't really need an IDE for Python; just a good text editor. An IDE you might like though is Editra. It is actually written in Python itself, so you can use it on Linux, Mac, and Windows! I used Editra as my Python IDE for about 6-10 months. It gives you all you need and nothing more: syntax highlighting, code folding, auto indenting, and optional plugins to integrate a Python shell right into the editing window. You'll definitely want auto indenting when you are coding in Python. As for designing GUIs visually, I suggest you check out Glade. It allows you to design GUIs easily with the GTK+ toolkit. (GTK+ GUIs work on Linux, Mac, and Windows!) It will take a bit more effort to integrate them into your Python programs than it does in Microsoft's Visual languages, but it isn't that bad once you learn it. The nice thing about using GTK+ and Glade is that you design your interface using containers, padding properties, and things like that. It is possible to design them dragging and dropping anywhere on the grid like in Visual Studio, but who wants to do that? Once you learn your way around containers and padding, you'll be very happy with them. It's much easier to make everything even, and to have similar widgets grouped together for hiding/disabling and things like that. Good luck in your Python journey! :) A: I think Wing IDE also deserves to be mentioned. I was a VIM user for years, but am currently thinking about changing to Wing. It costs money, but after evaluating for about a week (you can do a 30-day eval), I feel it will be well worth it. I do not have any experience using the other IDEs (Komodo, Eclipse) mentioned. So they might be even better than Wing. It would be interesting if someone who has experience with all of them could describe some of their differences, strengths and weaknesses. That being said, I recommend learning Python using a basic approach - use a text editor like Notepad++, VIM or emacs to learn the basics. Learn to use the standard Python debugger, pdb, from the command line. And use the interactive shell when learning (use IPython for interactive work). Switch to an IDE when you master the basics. There is also a very basic IDE in the Python distro: IDLE. 
There are a lot of great tutorials and books on Python available. Start with the standard documentation. A lot of people like Dive into Python. I also recommend Python in a nutshell. A: Good IDEs for Python are Komodo or Eclipse with PyDev. But even Notepad++ or any other text editor will be enough to get you started, since you don't need to compile your code, just have a good editor. The benefit of the above IDEs is that you can use them to manage a large-scale project and debug your code. As for the cross-platform issue, as long as you don't use OS-specific libs (such as win32api), you are safe being cross-platform. Seems like a very large project for a first time. Is it going to be web based or desktop? It will greatly change your design and choice of Python libs. A: Python is so simple that IDEs aren't as necessary as they are with C# and VB. A "complaint" is that the Python IDEs don't do very much. This shouldn't be counted as a complaint -- it's a virtue of the language. We use Komodo Edit for professional work. It does much of what we need. A: I'd vote for Eclipse + pydev (especially since the pydev extensions were recently released as open source). You can also use either VIM or emacs for Python development. Also, I'd recommend the great Dive Into Python A: For dynamically typed languages, power editors like Vim and Emacs make excellent IDEs. You can use GUI tools to make your layout and still use Vim/Emacs for development. Because there is no compile step, it is very fast to test your code, e.g. :! python % A: Eclipse + Pydev is currently the gold standard IDE for Python. It's cross-platform, and since it's a general-purpose IDE it has support for just about every other programming activity you might want to consider. Eclipse is not bad for C++, and very mature for Java developers. It's quite amazing when you realize that all this great stuff costs nothing. A: I suspect you'll have a hard time finding an IDE with an integrated GUI designer. But most GUI toolkits have drag-and-drop designers you can use to design dialog boxes and windows, and then use from Python, even if the designer is not integrated with the IDE. You'll learn soon enough. Here is a question asking for GUI designers for Python: Delphi-like GUI designer for Python A: I find SciTE to be a good alternative to Notepad++. It's very lightweight, but has very good support for language highlighting, and in-editor script execution. It also has one of my favorite editing gestures from Visual Studio: Ctrl-F3, which picks the word at the edit cursor, makes it the search text, and searches for the next occurrence. PyScripter is the next step up in IDEs I would suggest, giving a nice class browser window, just like VS. For interactive debugging, I use winpdb (which, despite the name, is not a Windows-only utility). A: In terms of GUI editing, take a look at wxWidgets, and in particular XRCed. XRCed is an application for generating interfaces (not quite drag and drop, but close) which are then saved as XML files. Using wxPython you can then load the XML file and it will rebuild the interface for you. You then just need to obtain references to each of your UI elements (by name) and you can get on with the real work.
Coming from a Visual Studio background, what do you recommend I use to start my VERY FIRST Python project?
I'm locked in using C# and I don't like it one bit. I have to start branching out to better myself as a professional and as a person, so I've decided to start making things in my own time using Python. The problem is, I've basically programmed only in C#. What IDE should I use to make programs using Python? My goal is to make a sort of encyclopedic program for a game I'm playing right now, displaying hero information, names, stats, picture, etc. All of this information I'm going to parse from an XML file. My plan is for this application to be able to run under Windows, Linux and Mac (I'm under the impression that any code written in Python works 100% cross-platform, right?) Thanks a lot for your tremendous help brothers of SO. :P Edit: I guess I should clarify that I'm looking for an IDE that supports drag and drop GUI design. I'm used to using VS and I'm not really sure how you can do it any other way.
[ "How about IronPython\nAs of VS 2010 it will become a first class .Net language\nOr currently in a VS2008 shell IronPythonStudio\nNot that I have used any of these\nIn hindsight this may not make for a very good cross platform solution, but it will allow you to leverage your VS experience\n", "You don't really need an IDE for Python; just a good text editor. An IDE you might like though is Editra. It is actually written in Python itself, so you can use it on Linux, Mac, and Windows! I used Editra as my Python IDE for about 6-10 months. It gives you all you need and nothing more: syntax highlighting, code folding, auto indenting, and optional plugins to integrate a Python shell right into the editing window. You'll definitely want auto indenting when you are coding in Python.\nAs for designing GUIs visually, I suggest you check out Glade. It allows you to design GUIs easily with the GTK+ toolkit. (GTK+ GUIs work on Linux, Mac, and Windows!) It will take a bit more effort to integrate them into your Python programs than it does in Microsoft's Visual languages, but it isn't that bad once you learn it. The nice thing about using GTK+ and Glade is that you design your interface using containers, padding properties, and things like that. It is possible to design them dragging and dropping anywhere on the grid like in Visual Studio, but who wants to do that? Once you learn your way around containers and padding, you'll be very happy with them. It's much easier to make everything even, and to have similar widgets grouped together for hiding/disabling and things like that.\nGood luck in your Python journey! :)\n", "I think Wing IDE also deserves to be mentioned. I was a VIM user for years, but am currently thinking about changing to Wing. It costs money, but after evaluating for about a week (you can do a 30-day eval), I feel it will be well worth it.\nI do not have any experience using the other IDEs (Komodo, Eclipse) mentioned. So they might be even better than Wing. It would be interesting if someone who has experience with all of them could describe some of their differences, strengths and weaknesses.\nThat being said, I recommend learning Python using a basic approach - use a text editor like Notepad++, VIM or emacs to learn the basics. Learn to use the standard Python debugger, pdb, from the command line. And use the interactive shell when learning (use IPython for interactive work).\nSwitch to an IDE when you master the basics.\nThere is also a very basic IDE in the Python distro: IDLE.\nThere are a lot of great tutorials and books on Python available. Start with the standard documentation. A lot of people like Dive into Python. I also recommend Python in a nutshell.\n", "Good IDEs for Python are Komodo or Eclipse with PyDev.\nBut even Notepad++ or any other text editor will be enough to get you started, since you don't need to compile your code, just have a good editor.\nThe benefit of the above IDEs is that you can use them to manage a large-scale project and debug your code.\nAs for the cross-platform issue, as long as you don't use OS-specific libs (such as win32api), you are safe being cross-platform.\nSeems like a very large project for a first time. Is it going to be web based or desktop? It will greatly change your design and choice of Python libs.\n", "Python is so simple that IDEs aren't as necessary as they are with C# and VB.\nA \"complaint\" is that the Python IDEs don't do very much. 
This shouldn't be counted as a complaint -- it's a virtue of the language.\nWe use Komodo Edit for professional work. It does much of what we need.\n", "I'd vote for Eclipse + pydev (especially since the pydev extensions were recently released as open source). You can also use either VIM or emacs for Python development.\nAlso, I'd recommend the great Dive Into Python\n", "For dynamically typed languages, power editors like Vim and Emacs make excellent IDEs. You can use GUI tools to make your layout and still use Vim/Emacs for development. Because there is no compile step, it is very fast to test your code, e.g. :! python % \n\n", "Eclipse + Pydev is currently the gold standard IDE for Python. It's cross-platform, and since it's a general-purpose IDE it has support for just about every other programming activity you might want to consider. \nEclipse is not bad for C++, and very mature for Java developers. It's quite amazing when you realize that all this great stuff costs nothing. \n", "I suspect you'll have a hard time finding an IDE with an integrated GUI designer. But most GUI toolkits have drag-and-drop designers you can use to design dialog boxes and windows, and then use from Python, even if the designer is not integrated with the IDE. You'll learn soon enough. \nHere is a question asking for GUI designers for Python:\nDelphi-like GUI designer for Python\n", "I find SciTE to be a good alternative to Notepad++. It's very lightweight, but has very good support for language highlighting, and in-editor script execution. It also has one of my favorite editing gestures from Visual Studio: Ctrl-F3, which picks the word at the edit cursor, makes it the search text, and searches for the next occurrence.\nPyScripter is the next step up in IDEs I would suggest, giving a nice class browser window, just like VS.\nFor interactive debugging, I use winpdb (which, despite the name, is not a Windows-only utility).\n", "In terms of GUI editing, take a look at wxWidgets, and in particular XRCed. \nXRCed is an application for generating interfaces (not quite drag and drop, but close) which are then saved as XML files. Using wxPython you can then load the XML file and it will rebuild the interface for you.\nYou then just need to obtain references to each of your UI elements (by name) and you can get on with the real work.\n" ]
[ 5, 3, 2, 1, 1, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "c#", "ide", "python", "vim" ]
stackoverflow_0001517428_c#_ide_python_vim.txt
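A minimal sketch of the XRC-loading step mentioned in the last answer above, assuming a resource file named gui.xrc containing a frame called 'MainFrame' and a list control called 'hero_list' (all of these names are hypothetical, not taken from the answers):

    import wx
    from wx import xrc

    class HeroApp(wx.App):
        def OnInit(self):
            # Load the interface that XRCed saved as XML
            self.res = xrc.XmlResource('gui.xrc')
            self.frame = self.res.LoadFrame(None, 'MainFrame')
            # Obtain references to UI elements by the names given in XRCed
            self.hero_list = xrc.XRCCTRL(self.frame, 'hero_list')
            self.frame.Show()
            return True

    HeroApp(False).MainLoop()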
Q: How do I sign data with pyme? I just installed pyme on my Ubuntu system. It was easy (thanks apt-get) and I can reproduce the example code (encrypting using a public key in my keyring). Now I would like to sign some data and I didn't manage to find any example code nor much documentation. This is what I've been doing: >>> plain = pyme.core.Data('this is just some sample text\n') >>> cipher = pyme.core.Data() >>> c = pyme.core.Context() >>> c.set_armor(1) >>> name='[email protected]' >>> c.op_keylist_start(name, 0) >>> r = c.op_keylist_next() >>> c.op_sign(???) I don't know what to give as parameters, the op_sign method tells me >>> help(c.op_sign) Help on function _funcwrap in module pyme.util: _funcwrap(*args, **kwargs) gpgme_op_sign(ctx, plain, sig, mode) -> gpgme_error_t but I do not know how to create such objects. A: You can follow the example from the pyme doc and modify it a bit: import pyme.core import pyme.pygpgme plaintext = pyme.core.Data('this is a test message') ciphertext = pyme.core.Data() ctx = pyme.core.Context() ctx.set_armor(1) name = '[email protected]' ctx.op_keylist_start(name, 0) key = ctx.op_keylist_next() # first argument is message to sign, second argument is buffer where to write # the signature, third argument is signing mode, see # http://www.gnupg.org/documentation/manuals/gpgme/Creating-a-Signature.html#Creating-a-Signature for more details. ctx.op_sign(plaintext, ciphertext, pyme.pygpgme.GPGME_SIG_MODE_CLEAR) ciphertext.seek(0, 0) print ciphertext.read()
How do I sign data with pyme?
I just installed pyme on my Ubuntu system. It was easy (thanks apt-get) and I can reproduce the example code (encrypting using a public key in my keyring). Now I would like to sign some data and I didn't manage to find any example code nor much documentation. This is what I've been doing: >>> plain = pyme.core.Data('this is just some sample text\n') >>> cipher = pyme.core.Data() >>> c = pyme.core.Context() >>> c.set_armor(1) >>> name='[email protected]' >>> c.op_keylist_start(name, 0) >>> r = c.op_keylist_next() >>> c.op_sign(???) I don't know what to give as parameters, the op_sign method tells me >>> help(c.op_sign) Help on function _funcwrap in module pyme.util: _funcwrap(*args, **kwargs) gpgme_op_sign(ctx, plain, sig, mode) -> gpgme_error_t but I do not know how to create such objects.
[ "You can follow the example from the pyme doc and modify it a bit:\nimport pyme.core\nimport pyme.pygpgme\n\nplaintext = pyme.core.Data('this is a test message')\nciphertext = pyme.core.Data()\nctx = pyme.core.Context()\nctx.set_armor(1)\nname = '[email protected]'\nctx.op_keylist_start(name, 0)\nkey = ctx.op_keylist_next()\n# first argument is message to sign, second argument is buffer where to write\n# the signature, third argument is signing mode, see\n# http://www.gnupg.org/documentation/manuals/gpgme/Creating-a-Signature.html#Creating-a-Signature for more details.\nctx.op_sign(plaintext, ciphertext, pyme.pygpgme.GPGME_SIG_MODE_CLEAR)\nciphertext.seek(0, 0)\nprint ciphertext.read()\n\n" ]
[ 0 ]
[]
[]
[ "encryption", "gnupg", "gpgme", "pyme", "python" ]
stackoverflow_0001530797_encryption_gnupg_gpgme_pyme_python.txt
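The example in the answer above produces a clear-signed message (the signature wrapped around the text). gpgme also defines a detached mode; assuming pyme.pygpgme exposes GPGME_SIG_MODE_DETACH the same way it exposes GPGME_SIG_MODE_CLEAR (an assumption, mirroring the underlying C enum), the signature can be kept separate from the message:

    import pyme.core
    import pyme.pygpgme

    plaintext = pyme.core.Data('this is a test message')
    signature = pyme.core.Data()
    ctx = pyme.core.Context()
    ctx.set_armor(1)
    ctx.op_keylist_start('[email protected]', 0)
    key = ctx.op_keylist_next()
    # DETACH writes only the signature; the message itself travels separately
    ctx.op_sign(plaintext, signature, pyme.pygpgme.GPGME_SIG_MODE_DETACH)
    signature.seek(0, 0)
    print signature.read()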
Q: Python HTML Minimizer I have a cherrypy web server that uses large amounts of HTML data. Is there any way in Python to minimize the HTML so that all comments, spaces, etc., are removed? A: Not what you mean, but: Gzip. (Assuming you aren't already serving through a compressing front-end.) Compression will zip away whitespace to almost nothing; unless you have excessively large comments this will be more effective than minification. A: There are bindings to tidy for Python, called mxTidy from eGenix (Marc André Lemburg) A: HTML Tidy's libtidy doesn't seem to have Python bindings (but it does have Perl and C++, etc.), but ought to be easy to run as an exe in a pipe. Or ideally, use it to 'tidy' all static HTML files once so they don't need to be tidied each time they are served.
Python HTML Minimizer
I have a cherrypy web server that uses large amounts of HTML data. Is there any way in Python to minimize the HTML so that all comments, spaces, etc., are removed?
[ "Not what you mean, but: Gzip. (Assuming you aren't already serving through a compressing front-end.) Compression will zip away whitespace to almost nothing; unless you have excessively large comments this will be more effective than minification.\n", "There are bindings to tidy for Python, called mxTidy from eGenix (Marc André Lemburg)\n", "HTML Tidy's libtidy doesn't seem to have Python bindings (but it does have Perl and C++, etc.), but ought to be easy to run as an exe in a pipe.\nOr ideally, use it to 'tidy' all static HTML files once so they don't need to be tidied each time they are served.\n" ]
[ 4, 2, 0 ]
[]
[]
[ "html", "minimize", "python" ]
stackoverflow_0001437357_html_minimize_python.txt
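To see how much the Gzip suggestion in the first answer buys you before reaching for a tidier, a quick check with the standard library (the sample markup here is made up):

    import zlib

    html = '<html>  <!-- a comment -->\n  <body>   lots   of   spaces   </body>\n</html>\n' * 100
    compressed = zlib.compress(html, 9)
    # Whitespace and repeated markup compress to almost nothing
    print len(html), '->', len(compressed)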
Q: BioPython: Skipping over bad GIDs with Entrez.esummary/Entrez.read Sorry about the odd title. I am using eSearch & eSummary to go from Accession Number --> gID --> TaxID Assume that 'accessions' is a list of 20 accession numbers (I do 20 at a time because that's the maximum that NCBI will allow). I do: handle = Entrez.esearch(db="nucleotide", rettype="xml", term=accessions) record = Entrez.read(handle) gids = ",".join(record[u'IdList']) This gives me 20 corresponding GIDs from those 20 accession numbers. Followed by: handle = Entrez.esummary(db="nucleotide", id=gids) record = Entrez.read(handle) Which gives me this error because one of the GIDs in gids has been removed from NCBI: File ".../biopython-1.52/build/lib.macosx-10.6-universal-2.6/Bio/Entrez/Parser.py", line 191, in endElement value = IntegerElement(value) ValueError: invalid literal for int() with base 10: '' I could do try/except, but that would skip the other 19 GIDs which are okay. My question is: How do I read 20 records at a time with Entrez.read and skip over the ones that are missing without sacrificing the other 20? I could do one at a time but that would be incredibly slow (I have 300,000 accession numbers, and NCBI only allows you to do 3 queries per second but in reality it's more like 1 query per second). A: I sent a message out to the BioPython mailing list. Apparently it's a bug & they're working on it. A: I'd have a look at Parser.py and see what is being parsed. It looks like you are getting a result from the NCBI ok, but the format of one record is tripping up the parser. It may be possible to subclass/monkeypatch the parser to get it past the exception.
BioPython: Skipping over bad GIDs with Entrez.esummary/Entrez.read
Sorry about the odd title. I am using eSearch & eSummary to go from Accession Number --> gID --> TaxID Assume that 'accessions' is a list of 20 accession numbers (I do 20 at a time because that's the maximum that NCBI will allow). I do: handle = Entrez.esearch(db="nucleotide", rettype="xml", term=accessions) record = Entrez.read(handle) gids = ",".join(record[u'IdList']) This gives me 20 corresponding GIDs from those 20 accession numbers. Followed by: handle = Entrez.esummary(db="nucleotide", id=gids) record = Entrez.read(handle) Which gives me this error because one of the GIDs in gids has been removed from NCBI: File ".../biopython-1.52/build/lib.macosx-10.6-universal-2.6/Bio/Entrez/Parser.py", line 191, in endElement value = IntegerElement(value) ValueError: invalid literal for int() with base 10: '' I could do try/except, but that would skip the other 19 GIDs which are okay. My question is: How do I read 20 records at a time with Entrez.read and skip over the ones that are missing without sacrificing the other 20? I could do one at a time but that would be incredibly slow (I have 300,000 accession numbers, and NCBI only allows you to do 3 queries per second but in reality it's more like 1 query per second).
[ "I sent a message out to the BioPython mailing list. Apparently it's a bug & they're working on it.\n", "I'd have a look at Parser.py and see what is being parsed. It looks like you are getting a result from the NCBI ok, but the format of one record is tripping up the parser.\nIt may be possible to subclass/monkeypatch the parser to get it past the exception.\n" ]
[ 3, 0 ]
[]
[]
[ "bioinformatics", "biopython", "python" ]
stackoverflow_0001523571_bioinformatics_biopython_python.txt
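Until the parser bug mentioned in the first answer is fixed, one workaround that stays within the rate limits: try each batch of 20 as usual, and only when parsing fails re-fetch that batch one GID at a time. A sketch (the function name is made up; error handling is deliberately coarse):

    from Bio import Entrez

    def fetch_summaries(gids):
        """Try the whole batch; fall back to per-GID queries only if parsing fails."""
        try:
            handle = Entrez.esummary(db="nucleotide", id=",".join(gids))
            return Entrez.read(handle)
        except ValueError:
            records = []
            for gid in gids:  # slow path, hit only for batches containing a bad GID
                try:
                    handle = Entrez.esummary(db="nucleotide", id=gid)
                    records.extend(Entrez.read(handle))
                except ValueError:
                    pass  # this is the GID that was removed from NCBI
            return records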
Q: Double import in grok This is a normal case of mutual import. Suppose you have the following layout ./test.py ./one ./one/__init__.py ./one/two ./one/two/__init__.py ./one/two/m.py ./one/two/three ./one/two/three/__init__.py ./one/two/three/four ./one/two/three/four/__init__.py ./one/two/three/four/e.py ./one/two/u.py And you have test.py from one.two.three.four import e one/two/three/four/e.py from one.two import m one/two/m.py print "m" import u one/two/u.py print "u" import m When you run the test.py program, you expect, of course: python test.py m u Which is the expected behavior. Modules have already been imported, and they are imported only once. In Grok, this does not happen. Suppose you have the following app.py import os; import sys; sys.path.insert(1,os.path.dirname( os.path.realpath( __file__ ) )) import grok from one.two.three.four import e class Sample(grok.Application, grok.Container): pass What you obtain when you run paster is: $ bin/paster serve parts/etc/deploy.ini 2009-10-07 15:26:57,154 WARNING [root] Developer mode is enabled: this is a security risk and should NOT be enabled on production servers. Developer mode can be turned off in etc/zope.conf m u m u What's going on here? From a pdb stack trace, both cases are imported by martian: /Users/sbo/.buildout/eggs/martian-0.11-py2.4.egg/martian/core.py(204)grok_package() -> grok_module(module_info, grokker, **kw) /Users/sbo/.buildout/eggs/martian-0.11-py2.4.egg/martian/core.py(209)grok_module() -> grokker.grok(module_info.dotted_name, module_info.getModule(), /Users/sbo/.buildout/eggs/martian-0.11-py2.4.egg/martian/scan.py(118)getModule() -> self._module = resolve(self.dotted_name) /Users/sbo/.buildout/eggs/martian-0.11-py2.4.egg/martian/scan.py(191)resolve() -> __import__(used) The only difference between the first case and the second one is that the first shows the progressive import of e and then of m. In the second case it directly imports m. Thanks for the help A: This could possibly be a side-effect of the introspection Grok does, I'm not sure. Try to put a pdb.set_trace() in m, and look at the stack trace to see what is importing the modules.
Double import in grok
This is a normal case of mutual import. Suppose you have the following layout ./test.py ./one ./one/__init__.py ./one/two ./one/two/__init__.py ./one/two/m.py ./one/two/three ./one/two/three/__init__.py ./one/two/three/four ./one/two/three/four/__init__.py ./one/two/three/four/e.py ./one/two/u.py And you have test.py from one.two.three.four import e one/two/three/four/e.py from one.two import m one/two/m.py print "m" import u one/two/u.py print "u" import m When you run the test.py program, you expect, of course: python test.py m u Which is the expected behavior. Modules have already been imported, and they are imported only once. In Grok, this does not happen. Suppose you have the following app.py import os; import sys; sys.path.insert(1,os.path.dirname( os.path.realpath( __file__ ) )) import grok from one.two.three.four import e class Sample(grok.Application, grok.Container): pass What you obtain when you run paster is: $ bin/paster serve parts/etc/deploy.ini 2009-10-07 15:26:57,154 WARNING [root] Developer mode is enabled: this is a security risk and should NOT be enabled on production servers. Developer mode can be turned off in etc/zope.conf m u m u What's going on here? From a pdb stack trace, both cases are imported by martian: /Users/sbo/.buildout/eggs/martian-0.11-py2.4.egg/martian/core.py(204)grok_package() -> grok_module(module_info, grokker, **kw) /Users/sbo/.buildout/eggs/martian-0.11-py2.4.egg/martian/core.py(209)grok_module() -> grokker.grok(module_info.dotted_name, module_info.getModule(), /Users/sbo/.buildout/eggs/martian-0.11-py2.4.egg/martian/scan.py(118)getModule() -> self._module = resolve(self.dotted_name) /Users/sbo/.buildout/eggs/martian-0.11-py2.4.egg/martian/scan.py(191)resolve() -> __import__(used) The only difference between the first case and the second one is that the first shows the progressive import of e and then of m. In the second case it directly imports m. Thanks for the help
[ "This could possibly be a side-effect of the introspection Grok does, I'm not sure.\nTry to put a pdb.set_trace() in m, and look at the stack trace to see what is importing the modules.\n" ]
[ 0 ]
[]
[]
[ "grok", "python" ]
stackoverflow_0001531647_grok_python.txt
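A variation on the pdb suggestion in the answer that doesn't stop execution: drop a stack dump at the top of one/two/m.py, and every import of the module then logs who triggered it, so you can see whether it is test.py or martian's resolve() that gets there twice:

    import traceback

    print "m imported via:"
    traceback.print_stack()  # prints the chain of callers responsible for this import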
Q: Django: apply "same parent" constraint to ManyToManyField mapping to self I have a model where tasks are pieces of work that each may depend on some number of other tasks to complete before it can start. Tasks are grouped into jobs, and I want to disallow dependencies between jobs. This is the relevant subset of my model: class Job(models.Model): name = models.CharField(max_length=60, unique=True) class Task(models.Model): job = models.ForeignKey(Job) prerequisites = models.ManyToManyField( 'self', symmetrical=False, related_name="dependents", blank=True) Is there any way I can express the constraint that all prerequisite tasks must have the same job? I could enforce this at the view level, but I would really like to get it to work at the model level so that the admin interface will display appropriate options when choosing prerequisites for a task. I thought I could use "limit_choices_to", but on closer inspection it seems to require a static query, not something dependent on the values in this task object. A: There are two separate issues here. If you want to enforce this constraint at the model level, you might have to define an explicit "through" model and override its save() method (you can't just override Task.save() as that isn't necessarily invoked for adding entries to an M2M). Django 1.2 will have a fuller model validation framework, more like form validation. If you want only certain choices to appear in the admin, that's a form-level issue. You can dynamically set the queryset attribute of the ModelMultipleChoiceField in a form's __init__ method: class TaskForm(forms.ModelForm): class Meta: model = Task def __init__(self, *args, **kwargs): super(TaskForm, self).__init__(*args, **kwargs) self.fields['prerequisites'].queryset = Task.objects.filter(job=self.instance.job) You may need to introduce some additional checking here to handle the case of creating a new Task (in that case "self.instance.job" will likely be None); what set of available prerequisites you want there is not clearly defined, since a new Task doesn't yet have a job.
Django: apply "same parent" constraint to ManyToManyField mapping to self
I have a model where tasks are pieces of work that each may depend on some number of other tasks to complete before it can start. Tasks are grouped into jobs, and I want to disallow dependencies between jobs. This is the relevant subset of my model: class Job(models.Model): name = models.CharField(max_length=60, unique=True) class Task(models.Model): job = models.ForeignKey(Job) prerequisites = models.ManyToManyField( 'self', symmetrical=False, related_name="dependents", blank=True) Is there any way I can express the constraint that all prerequisite tasks must have the same job? I could enforce this at the view level, but I would really like to get it to work at the model level so that the admin interface will display appropriate options when choosing prerequisites for a task. I thought I could use "limit_choices_to", but on closer inspection it seems to require a static query, not something dependent on the values in this task object.
[ "There are two separate issues here.\nIf you want to enforce this constraint at the model level, you might have to define an explicit \"through\" model and override its save() method (you can't just override Task.save() as that isn't necessarily invoked for adding entries to an M2M). Django 1.2 will have a fuller model validation framework, more like form validation.\nIf you want only certain choices to appear in the admin, that's a form-level issue. You can dynamically set the queryset attribute of the ModelMultipleChoiceField in a form's __init__ method:\nclass TaskForm(forms.ModelForm):\n class Meta:\n model = Task\n\n def __init__(self, *args, **kwargs):\n super(TaskForm, self).__init__(*args, **kwargs)\n self.fields['prerequisites'].queryset = Task.objects.filter(job=self.instance.job)\n\nYou may need to introduce some additional checking here to handle the case of creating a new Task (in that case \"self.instance.job\" will likely be None); what set of available prerequisites you want there is not clearly defined, since a new Task doesn't yet have a job.\n" ]
[ 3 ]
[]
[]
[ "constraints", "django", "orm", "python" ]
stackoverflow_0001531065_constraints_django_orm_python.txt
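A sketch of the "through" model idea from the answer, with the same-job check in save() (class and related_name choices here are illustrative, and note that bulk operations can bypass save(), so this is a guard rather than a hard database constraint):

    from django.db import models

    class Prerequisite(models.Model):
        task = models.ForeignKey('Task', related_name='prerequisite_links')
        prerequisite = models.ForeignKey('Task', related_name='dependent_links')

        def save(self, *args, **kwargs):
            # Model-level guard: both ends of the link must belong to the same job
            if self.task.job_id != self.prerequisite.job_id:
                raise ValueError("Prerequisite tasks must belong to the same job")
            super(Prerequisite, self).save(*args, **kwargs)

    # Task would then declare:
    #   prerequisites = models.ManyToManyField('self', symmetrical=False,
    #                                          through='Prerequisite',
    #                                          related_name='dependents', blank=True)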
Q: I heard that Python has automated "garbage collection", but C++ does not. What does that mean? I heard that Python has automated "garbage collection", but C++ does not. What does that mean? A: Try reading up on it. A: That means that a Python user doesn't need to clean up dynamically created objects, as you're obligated to do in C/C++. Example in C++: char *ch = new char[100]; ch[0]='a'; ch[1]='b'; //.... // somewhere else in your program you need to release the allocated memory. delete [] ch; // use delete ch; if you've initialized ch with new char; in Python: def fun(): a=[1, 2] #dynamic allocation a.append(3) return a[0] Python takes care of the "a" object by itself. A: From Wikipedia http://en.wikipedia.org/wiki/Garbage_collection_%28computer_science%29: ... Garbage collection frees the programmer from manually dealing with memory allocation and deallocation. As a result, certain categories of bugs are eliminated or substantially reduced: Dangling pointer bugs, which occur when a piece of memory is freed while there are still pointers to it, and one of those pointers is used. Double free bugs, which occur when the program attempts to free a region of memory that is already free. Certain kinds of memory leaks, in which a program fails to free memory that is no longer referenced by any variable, leading, over time, to memory exhaustion. ... The basic principles of garbage collection are: Find data objects in a program that cannot be accessed in the future Reclaim the resources used by those objects A: Others already answered the main question, but I'd like to add that garbage collection is possible in C++. It's not as automatic as Python's, but it's doable. Smart pointers are probably the simplest form of C++ garbage collecting - std::auto_ptr, boost::scoped_ptr, boost::scoped_array, which release memory after being destroyed. There's an example in one of the earlier answers that could be rewritten as: boost::scoped_array<char> ch(new char[100]); ch[0] = 'a'; ch[1] = 'b'; // ... // boost::scoped_array will be destroyed when out of scope, or during unwind // (i.e. when exception is thrown), releasing the array's memory There are also boost::shared_ptr, boost::shared_array that implement reference counting (like Python). And there are full-blown garbage collectors that are meant to replace standard memory allocators, e.g. Boehm gc. A: It basically means the way they handle memory resources. When you need memory you usually ask the OS for it and then return it back. With Python you don't need to worry about returning it; with C++ you need to track what you asked for and return it back. One is easier, the other more performant; you choose your tool. A: As you have got your answer, now it's better to know the cons of automated garbage collection: it requires large amounts of extra memory and is not suitable for applications with hard real-time deadlines.
I heard that Python has automated "garbage collection", but C++ does not. What does that mean?
I heard that Python has automated "garbage collection", but C++ does not. What does that mean?
[ "Try reading up on it.\n", "That means that a Python user doesn't need to clean up dynamically created objects, as you're obligated to do in C/C++.\nExample in C++:\nchar *ch = new char[100];\nch[0]='a';\nch[1]='b';\n//....\n// somewhere else in your program you need to release the allocated memory.\ndelete [] ch; \n// use delete ch; if you've initialized ch with new char; \n\nin Python:\ndef fun():\n a=[1, 2] #dynamic allocation\n a.append(3)\n return a[0]\n\nPython takes care of the \"a\" object by itself.\n", "From Wikipedia http://en.wikipedia.org/wiki/Garbage_collection_%28computer_science%29:\n...\nGarbage collection frees the programmer from manually dealing with memory allocation and deallocation. As a result, certain categories of bugs are eliminated or substantially reduced:\n\nDangling pointer bugs, which occur\nwhen a piece of memory is freed while\nthere are still pointers to it, and\none of those pointers is used.\nDouble free bugs, which occur when\nthe program attempts to free a\nregion of memory that is already\nfree.\nCertain kinds of memory leaks, in\nwhich a program fails to free\nmemory that is no longer referenced\nby any variable, leading, over time,\nto memory exhaustion.\n\n...\nThe basic principles of garbage collection are:\n\nFind data objects in a program that cannot be accessed in the future\nReclaim the resources used by those objects\n\n", "Others already answered the main question, but I'd like to add that garbage collection is possible in C++. It's not as automatic as Python's, but it's doable.\nSmart pointers are probably the simplest form of C++ garbage collecting - std::auto_ptr, boost::scoped_ptr, boost::scoped_array, which release memory after being destroyed. There's an example in one of the earlier answers that could be rewritten as:\nboost::scoped_array<char> ch(new char[100]);\nch[0] = 'a';\nch[1] = 'b';\n// ...\n// boost::scoped_array will be destroyed when out of scope, or during unwind\n// (i.e. when exception is thrown), releasing the array's memory\n\nThere are also boost::shared_ptr, boost::shared_array that implement reference counting (like Python). And there are full-blown garbage collectors that are meant to replace standard memory allocators, e.g. Boehm gc.\n", "It basically means the way they handle memory resources. When you need memory you usually ask the OS for it and then return it back.\nWith Python you don't need to worry about returning it; with C++ you need to track what you asked for and return it back. One is easier, the other more performant; you choose your tool.\n", "As you have got your answer, now it's better to know the cons of automated garbage collection:\nit requires large amounts of extra memory and is not suitable for applications with hard real-time deadlines.\n" ]
[ 12, 9, 4, 3, 2, 0 ]
[]
[]
[ "c++", "garbage_collection", "python" ]
stackoverflow_0001530245_c++_garbage_collection_python.txt
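A small demonstration of what "automated" means in CPython specifically: reference counting for most objects, plus a cycle detector in the gc module for the cases reference counting alone can't handle (the exact counts printed can vary between interpreter versions):

    import gc
    import sys

    a = [1, 2]
    print sys.getrefcount(a)  # the name 'a' plus the temporary call argument
    b = a
    print sys.getrefcount(a)  # one more reference; the list dies when both are gone

    a.append(a)               # a reference cycle: refcounting alone can't reclaim this
    del a, b
    print gc.collect()        # the cycle collector finds and frees it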
Q: How to read lines from a file into a multidimensional array (or an array of lists) in Python I have a file with a format similar to this: a,3,4,2,1 3,2,1,a,2 I want to read the file and create an array of lists in a way that: array[0] = ['a','3','4','2','1'] array[1] = ['3','2','1','a','2'] How can I do that? So far I am stuck with: f = open('./urls-eu.csv', 'r') for line in f: arr = line.split(',') print arr I am really new to Python. A: Batteries included: >>> import csv >>> array = list( csv.reader( open( r'./urls-eu.csv' ) ) ) >>> array[0] ['a', '3', '4', '2', '1'] >>> array[1] ['3', '2', '1', 'a', '2'] A: You're almost there, you just need to do: arr = [line.split(',') for line in open('./urls-eu.csv')] It iteratively processes the file line by line, splits each line by comma and accumulates the resulting lists into a list of lists. You can drop the opening mode ('r') since it's the default.
How to read lines from a file into a multidimensional array (or an array of lists) in Python
I have a file with a format similar to this: a,3,4,2,1 3,2,1,a,2 I want to read the file and create an array of lists in a way that: array[0] = ['a','3','4','2','1'] array[1] = ['3','2','1','a','2'] How can I do that? So far I am stuck with: f = open('./urls-eu.csv', 'r') for line in f: arr = line.split(',') print arr I am really new to Python.
[ "Batteries included:\n>>> import csv\n>>> array = list( csv.reader( open( r'./urls-eu.csv' ) ) )\n>>> array[0]\n['a', '3', '4', '2', '1']\n>>> array[1]\n['3', '2', '1', 'a', '2']\n\n", "You're almost there, you just need to do:\narr = [line.split(',') for line in open('./urls-eu.csv')]\n\nIt iteratively processes the file line by line, splits each line by comma and accumulates the resulting lists into a list of lists. You can drop the opening mode ('r') since it's the default.\n" ]
[ 16, 7 ]
[]
[]
[ "list", "multidimensional_array", "python" ]
stackoverflow_0001532810_list_multidimensional_array_python.txt
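One wrinkle with the split() version in the second answer: each line still ends with its newline character, so the last field of every row comes back as '1\n' rather than '1' (csv.reader already takes care of this). Stripping first avoids it:

    # strip() removes the trailing newline before splitting
    arr = [line.strip().split(',') for line in open('./urls-eu.csv')]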
Q: Debugging a scripting language like Ruby I am basically from the world of C language programming, now delving into the world of scripting languages like Ruby and Python. I am wondering how to do debugging. At present the steps I follow are: I complete a large script, Comment everything but the portion I want to check Execute the script Though it works, I am not able to debug like how I would do in, say, a VC++ environment or something like that. My question is, is there any better way of debugging? Note: I guess it may be a repeated question, if so, please point me to the answer. A: Your sequence seems entirely backwards to me. Here's how I do it: I write a test for the functionality I want. I start writing the script, executing bits and verifying test results. I review what I'd done to document and publish. Specifically, I execute before I complete. It's way too late by then. There are debuggers, of course, but with good tests and good design, I've almost never needed one. A: Here's a screencast on Ruby debugging with ruby-debug. A: Seems like the problem here is that your environment (Visual Studio) doesn't support these languages, not that these languages don't support debuggers in general. Perl, Python, and Ruby all have fully-featured debuggers; you can find other IDEs that help you, too. For Ruby, there's RubyMine; for Perl, there's Komodo. And that's just off the top of my head. A: There is a nice gentle introduction to the Python debugger here A: If you're working with Python then you can find a list of debugging tools here to which I just want to add Eclipse with the Pydev extension, which makes working with breakpoints etc. also very simple. A: Script languages have no differences compared with other languages in the sense that you still have to break your problems into manageable pieces -- that is, functions. So, instead of testing the whole script after finishing the whole script, I prefer to test those small functions before integrating them. TDD always helps. A: My question is, is there any better way of debugging?" Yes. Your approach, "1. I complete a large script, 2. Comment everything but the portion I want to check, 3. Execute the script" is not really the best way to write any software in any language (sorry, but that's the truth.) Do not write a large anything. Ever. Do this. Decompose your problem into classes of objects. For each class, write the class by 2a. Outline the class, focus on the external interface, not the implementation details. 2b. Write tests to prove that interface works. 2c. Run the tests. They'll fail, since you only outlined the class. 2d. Fix the class until it passes the test. 2e. At some point, you'll realize your class designs aren't optimal. Refactor your design, assuring your tests still pass. Now, write your final script. It should be short. All the classes have already been tested. 3a. Outline the script. Indeed, you can usually write the script. 3b. Write some test cases that prove the script works. 3c. Run the tests. They may pass. You're done. 3d. If the tests don't pass, fix things until they do. Write many small things. It works out much better in the long run than writing a large thing and commenting parts of it out. A: There's a SO question on Ruby IDEs here - and searching for "ruby IDE" offers more. I complete a large script That's what caught my eye: "complete", to me, means "done", "finished", "released". 
Whether or not you write tests before writing the functions that pass them, or whether or not you write tests at all (and I recommend that you do), you should not be writing code that can't be run (which is a test in itself) until it's become large. Ruby and Python offer a multitude of ways to write small, individually-testable (or executable) pieces of code, so that you don't have to wait for (?) days before you can run the thing. I'm building a (Ruby) database translation/transformation script at the moment - it's up to about 1000 lines and still not done. I seldom go more than 5 minutes without running it, or at least running the part on which I'm working. When it breaks (I'm not perfect, it breaks a lot ;-p) I know where the problem must be - in the code I wrote in the last 5 minutes. Progress is pretty fast. I'm not asserting that IDEs/debuggers have no place: some problems don't surface until a large body of code is released: it can be really useful on occasion to drop the whole thing into a debugging environment to find out what is going on. When third-party libraries and frameworks are involved it can be extremely useful to debug into their code to locate problems (which are usually - but not always - related to faulty understanding of the library function). A: You can debug your Python scripts using the included pdb module. If you want a visual debugger, you can download winpdb - don't be put off by that "win" prefix, winpdb is cross-platform. A: The debugging method you described is perfect for a static language like C++, but given that the language is so different, the coding methods are similarly different. One of the really important things in a dynamic language such as Python or Ruby is the interactive toplevel (what you get by typing, say python on the command line). This means that running a part of your program is very easy. Even if you've written a large program before testing (which is a bad idea), it is hopefully separated into many functions. So, open up your interactive toplevel, do an import thing (for whatever thing happens to be) and then you can easily start testing your functions one by one, just calling them on the toplevel. Of course, for a more mature project, you probably want to write out an actual test suite, and most languages have a method to do that (in Python, this is doctest and nose, don't know about other languages). At first, though, when you're writing something not particularly formal, just remember a few simple rules of debugging dynamic languages: Start small. Don't write large programs and test them. Test each function as you write it, at least cursorily. Use the toplevel. Running small pieces of code in a language like Python is extremely lightweight: fire up the toplevel and run it. Compare with writing a complete program and then compiling and running it in, say, C++. Use the fact that you can quickly check the correctness of any function. Debuggers are handy. But often, so are print statements. If you're only running a single function, debugging with print statements isn't that inconvenient, and also frees you from dragging along an IDE. A: There's a lot of good advice here, I recommend going through some best practices: http://github.com/edgecase/ruby_koans http://blog.rubybestpractices.com/ http://on-ruby.blogspot.com/2009/01/ruby-best-practices-mini-interview-2.html (and read Greg Brown's book, it's superb) You talk about large scripts. 
A lot of my workflow is working out logic in irb or the Python shell, then capturing it into a cascade of small, single-task-focused methods, with appropriate tests (not 100% coverage, more focus on edge and corner cases). http://binstock.blogspot.com/2008/04/perfecting-oos-small-classes-and-short.html
Debugging a scripting language like Ruby
I am basically from the world of C language programming, now delving into the world of scripting languages like Ruby and Python. I am wondering how to do debugging. At present the steps I follow are: I complete a large script, Comment everything but the portion I want to check Execute the script Though it works, I am not able to debug like how I would do in, say, a VC++ environment or something like that. My question is, is there any better way of debugging? Note: I guess it may be a repeated question, if so, please point me to the answer.
[ "Your sequence seems entirely backwards to me. Here's how I do it:\n\nI write a test for the functionality I want.\nI start writing the script, executing bits and verifying test results.\nI review what I'd done to document and publish.\n\nSpecifically, I execute before I complete. It's way too late by then.\nThere are debuggers, of course, but with good tests and good design, I've almost never needed one.\n", "Here's a screencast on Ruby debugging with ruby-debug.\n", "Seems like the problem here is that your environment (Visual Studio) doesn't support these languages, not that these languages don't support debuggers in general.\nPerl, Python, and Ruby all have fully-featured debuggers; you can find other IDEs that help you, too. For Ruby, there's RubyMine; for Perl, there's Komodo. And that's just off the top of my head.\n", "There is a nice gentle introduction to the Python debugger here\n", "If you're working with Python then you can find a list of debugging tools here to which I just want to add Eclipse with the Pydev extension, which makes working with breakpoints etc. also very simple.\n", "Script languages have no differences compared with other languages in the sense that you still have to break your problems into manageable pieces -- that is, functions. So, instead of testing the whole script after finishing the whole script, I prefer to test those small functions before integrating them. TDD always helps.\n", "My question is, is there any better way of debugging?\"\nYes.\nYour approach, \"1. I complete a large script, 2. Comment everything but the portion I want to check, 3. Execute the script\" is not really the best way to write any software in any language (sorry, but that's the truth.)\nDo not write a large anything. Ever.\nDo this.\n\nDecompose your problem into classes of objects.\nFor each class, write the class by\n2a. Outline the class, focus on the external interface, not the implementation details.\n2b. Write tests to prove that interface works.\n2c. Run the tests. They'll fail, since you only outlined the class.\n2d. Fix the class until it passes the test.\n2e. At some point, you'll realize your class designs aren't optimal. Refactor your design, assuring your tests still pass.\nNow, write your final script. It should be short. All the classes have already been tested.\n3a. Outline the script. Indeed, you can usually write the script.\n3b. Write some test cases that prove the script works.\n3c. Run the tests. They may pass. You're done.\n3d. If the tests don't pass, fix things until they do.\n\nWrite many small things. It works out much better in the long run than writing a large thing and commenting parts of it out.\n", "There's a SO question on Ruby IDEs here - and searching for \"ruby IDE\" offers more.\n\nI complete a large script\n\nThat's what caught my eye: \"complete\", to me, means \"done\", \"finished\", \"released\". Whether or not you write tests before writing the functions that pass them, or whether or not you write tests at all (and I recommend that you do), you should not be writing code that can't be run (which is a test in itself) until it's become large. Ruby and Python offer a multitude of ways to write small, individually-testable (or executable) pieces of code, so that you don't have to wait for (?) days before you can run the thing.\nI'm building a (Ruby) database translation/transformation script at the moment - it's up to about 1000 lines and still not done. 
I seldom go more than 5 minutes without running it, or at least running the part on which I'm working. When it breaks (I'm not perfect, it breaks a lot ;-p) I know where the problem must be - in the code I wrote in the last 5 minutes. Progress is pretty fast.\nI'm not asserting that IDEs/debuggers have no place: some problems don't surface until a large body of code is released: it can be really useful on occasion to drop the whole thing into a debugging environment to find out what is going on. When third-party libraries and frameworks are involved it can be extremely useful to debug into their code to locate problems (which are usually - but not always - related to faulty understanding of the library function).\n", "You can debug your Python scripts using the included pdb module. If you want a visual debugger, you can download winpdb - don't be put off by that \"win\" prefix, winpdb is cross-platform.\n", "The debugging method you described is perfect for a static language like C++, but given that the language is so different, the coding methods are similarly different. One of the really important things in a dynamic language such as Python or Ruby is the interactive toplevel (what you get by typing, say python on the command line). This means that running a part of your program is very easy.\nEven if you've written a large program before testing (which is a bad idea), it is hopefully separated into many functions. So, open up your interactive toplevel, do an import thing (for whatever thing happens to be) and then you can easily start testing your functions one by one, just calling them on the toplevel.\nOf course, for a more mature project, you probably want to write out an actual test suite, and most languages have a method to do that (in Python, this is doctest and nose, don't know about other languages). At first, though, when you're writing something not particularly formal, just remember a few simple rules of debugging dynamic languages:\n\nStart small. Don't write large programs and test them. Test each function as you write it, at least cursorily.\nUse the toplevel. Running small pieces of code in a language like Python is extremely lightweight: fire up the toplevel and run it. Compare with writing a complete program and then compiling and running it in, say, C++. Use the fact that you can quickly check the correctness of any function.\nDebuggers are handy. But often, so are print statements. If you're only running a single function, debugging with print statements isn't that inconvenient, and also frees you from dragging along an IDE.\n\n", "There's a lot of good advice here, I recommend going through some best practices:\nhttp://github.com/edgecase/ruby_koans\nhttp://blog.rubybestpractices.com/\nhttp://on-ruby.blogspot.com/2009/01/ruby-best-practices-mini-interview-2.html\n(and read Greg Brown's book, it's superb)\n\nYou talk about large scripts. A lot of my workflow is working out logic in irb or the Python shell, then capturing it into a cascade of small, single-task-focused methods, with appropriate tests (not 100% coverage, more focus on edge and corner cases).\nhttp://binstock.blogspot.com/2008/04/perfecting-oos-small-classes-and-short.html\n" ]
[ 10, 6, 4, 3, 2, 2, 2, 0, 0, 0, 0 ]
[]
[]
[ "python", "ruby", "scripting_language" ]
stackoverflow_0001529896_python_ruby_scripting_language.txt
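Since several answers above point at pdb, here are the two ways in, for concreteness (the function is just a toy):

    import pdb

    def divide(a, b):
        pdb.set_trace()  # execution stops here: 'n' steps, 'p a' prints, 'c' continues
        return a / b

    divide(4, 2)

You can also run a whole script under the debugger from the start with: python -m pdb myscript.py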
Q: Access is Denied loading a DLL with ctypes on Vista I'm having issues with using ctypes. I'm trying to get the following project running on Vista. http://sourceforge.net/projects/fractalfrost/ I've used the project before on Vista and had no problems. I don't see anything changed in svn that would cause this, so I'm thinking it's something local to this machine. In fact I'm not able to load DLLs with ctypes at all. Bobby@Teresa-PC ~/fr0st-exe/fr0st/pyflam3/win32_dlls $ ls Flam4CUDA_LIB.dll cudart.dll glew32.dll libflam3.dll pthreadVC2.dll Bobby@Teresa-PC ~/fr0st-exe/fr0st/pyflam3/win32_dlls $ python Python 2.6.3 (r263rc1:75186, Oct 2 2009, 20:40:30) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> from ctypes import * >>> flam3_dll = CDLL('libflam3.dll') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "c:\Python26\lib\ctypes\__init__.py", line 353, in __init__ self._handle = _dlopen(self._name, mode) WindowsError: [Error 5] Access is denied >>> flam3_dll = CDLL('.\\libflam3.dll') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "c:\Python26\lib\ctypes\__init__.py", line 353, in __init__ self._handle = _dlopen(self._name, mode) WindowsError: [Error 5] Access is denied >>> import os >>> flam3_dll = CDLL(os.path.abspath('libflam3.dll')) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "c:\Python26\lib\ctypes\__init__.py", line 353, in __init__ self._handle = _dlopen(self._name, mode) WindowsError: [Error 5] Access is denied >>> Any ideas what could be causing this and, better yet, some way around it? A: I know it sounds like a silly thing, but since you didn't explicitly mention it: Did you check the permissions on the file you're trying to access? Perhaps you, you know, don't have read or execute access to the file.
Access is Denied loading a DLL with ctypes on Vista
I'm having issues with using ctypes. I'm trying to get the following project running on Vista. http://sourceforge.net/projects/fractalfrost/ I've used the project before on Vista and had no problems. I don't see anything changed in svn that would cause this, so I'm thinking it's something local to this machine. In fact I'm not able to load DLLs with ctypes at all. Bobby@Teresa-PC ~/fr0st-exe/fr0st/pyflam3/win32_dlls $ ls Flam4CUDA_LIB.dll cudart.dll glew32.dll libflam3.dll pthreadVC2.dll Bobby@Teresa-PC ~/fr0st-exe/fr0st/pyflam3/win32_dlls $ python Python 2.6.3 (r263rc1:75186, Oct 2 2009, 20:40:30) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> from ctypes import * >>> flam3_dll = CDLL('libflam3.dll') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "c:\Python26\lib\ctypes\__init__.py", line 353, in __init__ self._handle = _dlopen(self._name, mode) WindowsError: [Error 5] Access is denied >>> flam3_dll = CDLL('.\\libflam3.dll') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "c:\Python26\lib\ctypes\__init__.py", line 353, in __init__ self._handle = _dlopen(self._name, mode) WindowsError: [Error 5] Access is denied >>> import os >>> flam3_dll = CDLL(os.path.abspath('libflam3.dll')) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "c:\Python26\lib\ctypes\__init__.py", line 353, in __init__ self._handle = _dlopen(self._name, mode) WindowsError: [Error 5] Access is denied >>> Any ideas what could be causing this and, better yet, some way around it?
[ "I know it sounds like a silly thing, but since you didn't explicitly mention it:\nDid you check the permissions on the file you're trying to access? Perhaps you, you know, don't have read or execute access to the file.\n" ]
[ 2 ]
[]
[]
[ "ctypes", "python", "windows_vista" ]
stackoverflow_0001533466_ctypes_python_windows_vista.txt
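A quick way to test the permissions idea from the answer in Python itself, run from the same directory as the DLLs (note that os.access can't see NTFS ACLs on Windows, which are the usual Vista culprit, so an actual read attempt is also worth trying):

    import os

    path = os.path.abspath('libflam3.dll')
    print os.path.exists(path), os.access(path, os.R_OK)
    try:
        open(path, 'rb').read(16)  # can we at least read the file's bytes?
        print 'file is readable; the block is likely on execute/load rights'
    except IOError, e:
        print 'blocked at read time:', e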
Q: What is the best way to distribute a Python program extended with custom C modules? I've explored Python for several years, but now I'm slowly learning how to work with C. Using the Python documentation, I learned how to extend my Python programs with some C, since this seemed like the logical way to start playing with it. My question now is how to distribute a program like this. I suppose the heart of my question is how to compile things. I can do this easily on my own machine (Gentoo), but a binary distribution like Ubuntu probably doesn't have a compiler available by default. Plus, I have a few friends who are Mac users. My instinct says that I can't just compile on my own machine and then run it on another. Anyone know what I can do, or some online resources for learning things like this? A: Please read up on distutils. Specifically, the section on Extension Modules. Making assumptions about compilers is bad policy; your instinct may not have all the facts. You could do some marketplace survey -- ask what they can handle regarding source distribution of extension modules. It's relatively easy to create the proper distutils setup.py and see who can run it and who can't. Built binary distributions are pretty common. Perhaps you can sign up some users who will help create binary distributions -- with OS-native installers -- for some consideration. A: S. Lott is right. You should first look at distutils. After you've learned what you need from distutils, look at setuptools. Setuptools is built on top of distutils and makes installation easy for your users. Have you ever used easy_install or Python Eggs? That's what's next.
What is the best way to distribute a Python program extended with custom C modules?
I've explored Python for several years, but now I'm slowly learning how to work with C. Using the Python documentation, I learned how to extend my Python programs with some C, since this seemed like the logical way to start playing with it. My question now is how to distribute a program like this. I suppose the heart of my question is how to compile things. I can do this easily on my own machine (Gentoo), but a binary distribution like Ubuntu probably doesn't have a compiler available by default. Plus, I have a few friends who are Mac users. My instinct says that I can't just compile on my own machine and then run it on another. Anyone know what I can do, or some online resources for learning things like this?
[ "Please read up on distutils. Specifically, the section on Extension Modules.\nMaking assumptions about compilers is bad policy; your instinct may not have all the facts. You could do some marketplace survey -- ask what they can handle regarding source distribution of extension modules.\nIt's relatively easy to create the proper distutils setup.py and see who can run it and who can't.\nBuilt binary distributions are pretty common. Perhaps you can sign up some users who will help create binary distributions -- with OS-native installers -- for some consideration. \n", "S. Lott is right. You should first look at distutils. After you've learned what you need from distutils, look at setuptools. Setuptools is built on top of distutils and makes installation easy for your users. Have you ever used easy_install or Python Eggs? That's what's next.\n" ]
[ 6, 1 ]
[]
[]
[ "c", "linux", "macos", "python", "software_distribution" ]
stackoverflow_0000294766_c_linux_macos_python_software_distribution.txt
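For reference, the distutils setup.py the answers above point to can be as small as this — a minimal sketch, where the module and file names are hypothetical:

from distutils.core import setup, Extension

setup(
    name="mymodule",
    version="0.1",
    ext_modules=[Extension("mymodule", sources=["mymodule.c"])],
)

Running python setup.py sdist produces a source tarball that users with a compiler can install via python setup.py install, while python setup.py bdist builds a binary distribution for the platform it runs on — which is why binary builds have to be produced per-OS, as the answers note.
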
Q: Why does my function "hang" def retCursor(): host = "localhost" user = "disappearedng" db = "gupan_crawling3" conn = MySQLdb.connect( host=host, user=user, passwd=passwd, db=db) cursor = conn.cursor() return cursor singleCur = retCursor() def checkTemplateBuilt(netlocH): """Used by crawler specifically, this check directly whether template has been built""" singleCur.execute( """SELECT templateBuilt FROM templateEnough WHERE netloc=%s""", [ netlocH]) r = singleCur.fetchone() if r: if bool( r[0]): return True return False Hi everyone, I am currently using MySQLdb. For some reason, after perhaps 30 mins of running, my app comes to a complete halt. It appears that this function is blocking me. (Don't know for what reason) Traceback (most recent call last): File "/usr/lib/python2.6/multiprocessing/process.py", line 231, in _bootstrap self.run() File "/mount/950gb/gupan5/disappearedng_temp/code_temp_NEWBRANCH/gupan5-yahoo/crawling/templateCrawling/TemplateCrawler/crawler/crawler.py", line 117, in run self.get_check_put() File "/mount/950gb/gupan5/disappearedng_temp/code_temp_NEWBRANCH/gupan5-yahoo/crawling/templateCrawling/TemplateCrawler/crawler/crawler.py", line 66, in get_check_put if not self.checkLinkCrawlability(linkS, priority): File "/mount/950gb/gupan5/disappearedng_temp/code_temp_NEWBRANCH/gupan5-yahoo/crawling/templateCrawling/TemplateCrawler/crawler/crawler.py", line 53, in checkLinkCrawlability if checkTemplateBuilt( getNetLoc( link)): File "/mount/950gb/gupan5/disappearedng_temp/code_temp_NEWBRANCH/gupan5-yahoo/crawling/templateCrawling/TemplateCrawler/publicapi/publicfunc.py", line 71, in checkTemplateBuilt singleCur.execute( """SELECT templateBuilt FROM templateEnough WHERE netloc=%s""", [ netlocH]) File "/var/lib/python-support/python2.6/MySQLdb/cursors.py", line 153, in execute r = self._query(query) KeyboardInterrupt Btw this is the table: CREATE TABLE templateEnough( `netloc` INT(32) unsigned NOT NULL, `count` SMALLINT(32) unsigned NOT NULL, `templateBuilt` TINYINT(1) unsigned DEFAULT 0 NOT NULL, PRIMARY KEY ( netloc ) ) ENGINE=MEMORY DEFAULT CHARSET=utf8 ; Any ideas? A: There might be a lock on the table preventing the query from completing. A: Try logging the query string to a file right before you execute it. Then when you think it is hung, you can look at the query and see if it works manually A: According to your traceback, you interrupted the script during the execution of checkTemplateBuilt, not enoughPassedForTemplate. I think the problem lies in a different part of the code; maybe there is an infinite loop somewhere? Maybe in the run function?
Why does my function "hang"
def retCursor(): host = "localhost" user = "disappearedng" db = "gupan_crawling3" conn = MySQLdb.connect( host=host, user=user, passwd=passwd, db=db) cursor = conn.cursor() return cursor singleCur = retCursor() def checkTemplateBuilt(netlocH): """Used by crawler specifically, this check directly whether template has been built""" singleCur.execute( """SELECT templateBuilt FROM templateEnough WHERE netloc=%s""", [ netlocH]) r = singleCur.fetchone() if r: if bool( r[0]): return True return False Hi everyone, I am currently using MySQLdb. For some reason, after perhaps 30 mins of running, my app comes to a complete halt. It appears that this function is blocking me. (Don't know for what reason) Traceback (most recent call last): File "/usr/lib/python2.6/multiprocessing/process.py", line 231, in _bootstrap self.run() File "/mount/950gb/gupan5/disappearedng_temp/code_temp_NEWBRANCH/gupan5-yahoo/crawling/templateCrawling/TemplateCrawler/crawler/crawler.py", line 117, in run self.get_check_put() File "/mount/950gb/gupan5/disappearedng_temp/code_temp_NEWBRANCH/gupan5-yahoo/crawling/templateCrawling/TemplateCrawler/crawler/crawler.py", line 66, in get_check_put if not self.checkLinkCrawlability(linkS, priority): File "/mount/950gb/gupan5/disappearedng_temp/code_temp_NEWBRANCH/gupan5-yahoo/crawling/templateCrawling/TemplateCrawler/crawler/crawler.py", line 53, in checkLinkCrawlability if checkTemplateBuilt( getNetLoc( link)): File "/mount/950gb/gupan5/disappearedng_temp/code_temp_NEWBRANCH/gupan5-yahoo/crawling/templateCrawling/TemplateCrawler/publicapi/publicfunc.py", line 71, in checkTemplateBuilt singleCur.execute( """SELECT templateBuilt FROM templateEnough WHERE netloc=%s""", [ netlocH]) File "/var/lib/python-support/python2.6/MySQLdb/cursors.py", line 153, in execute r = self._query(query) KeyboardInterrupt Btw this is the table: CREATE TABLE templateEnough( `netloc` INT(32) unsigned NOT NULL, `count` SMALLINT(32) unsigned NOT NULL, `templateBuilt` TINYINT(1) unsigned DEFAULT 0 NOT NULL, PRIMARY KEY ( netloc ) ) ENGINE=MEMORY DEFAULT CHARSET=utf8 ; Any ideas?
[ "There might be a lock on the table preventing the query from completing.\n", "Try logging the query string to a file right before you execute it.\nThen when you think it is hung, you can look at the query and see if it works manually\n", "According to your traceback, you interrupted the script during the execution of checkTemplateBuilt, not enoughPassedForTemplate.\nI think the problem lies in a different part of the code; maybe there is an infinite loop somewhere? Maybe in the run function?\n" ]
[ 1, 1, 0 ]
[]
[]
[ "mysql", "python" ]
stackoverflow_0001532474_mysql_python.txt
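A minimal sketch of the query-logging suggestion from the second answer above — wrap the cursor call so the last statement is on disk when the process wedges (the log file path is arbitrary):

import time

def logged_execute(cursor, sql, params=None):
    f = open("/tmp/query.log", "a")
    f.write("%s %r %r\n" % (time.asctime(), sql, params))
    f.close()
    return cursor.execute(sql, params)

If the log shows the same SELECT hanging while it runs fine by hand, that points at a table lock or a connection shared across processes rather than at the SQL itself.
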
Q: Good high-level python ftp/http lib? I'm looking for a good, high-level python ftp client/server library. I'm working on a project that has "evolved" a small http/ftp library on top of ftplib/urllib/urllib2 from what was originally one function, and almost none of it was designed to be built upon. So now it's time to refactor kind of seriously, and I'd like to just switch to a library. The thing I'd most like to not deal with is robust-retry logic (like, keep retrying 15 times, or keep retrying until 12pm). The problem that we've got right now is that we have about 10 separate grab() and put() functions. Aesthetically speaking, I'd rather have one of each with optional arguments along the lines of try_until=datetime(2009, 10, 7, 19) or retrys=15. We work with both binary and text data, so the functions would have to be reasonably smart about that. And we do way more grabbing than putting, so I can deal without the puts. urlgrabber looks like exactly what I want, but there doesn't seem to have been any development for the last couple years and I'm not sure how compatible it is with 2.6. Anybody got much experience with this? Or opinions? A: URLgrabber appears to be very mature, and since it's used by yum (and thus many Unix systems), I would expect it to be very stable. Python 2.x is largely backward compatible. You might encounter some warnings, but I would expect it to work suitably under Python 2.6. A: Depending on the sort of application you are writing, you might want to consider twisted python, as it has http server and client code built in. However, it is a rather large departure from standard procedural python programming. The big advantage of twisted for you is that it can handle your client requests in the background, handles retries and is very scalable. Update For a quick script that interacts with servers, see this serverfault answer: https://serverfault.com/questions/66336/script-automation-login-enter-password-run-commands-save-output-locally It recommends the tool expect Expect is a tool for automating interactive applications such as telnet, ftp, passwd, fsck, rlogin, tip, etc. Expect really makes this stuff trivial. Expect is also useful for testing these same applications. And by adding Tk, you can also wrap interactive applications in X11 GUIs. Expect can make easy all sorts of tasks that are prohibitively difficult with anything else. You will find that Expect is an absolutely invaluable tool - using it, you will be able to automate tasks that you've never even thought of before - and you'll be able to do this automation quickly and easily. Sounds good to me!
Good high-level python ftp/http lib?
I'm looking for a good, high-level python ftp client/server library. I'm working on a project that has "evolved" a small http/ftp library on top of ftplib/urllib/urllib2 from what was originally one function, and almost none of it was designed to be built upon. So now it's time to refactor kind of seriously, and I'd like to just switch to a library. The thing I'd most like to not deal with is robust-retry logic (like, keep retrying 15 times, or keep retrying until 12pm). The problem that we've got right now is that we have about 10 separate grab() and put() functions. Aesthetically speaking, I'd rather have one of each with optional arguments along the lines of try_until=datetime(2009, 10, 7, 19) or retrys=15. We work with both binary and text data, so the functions would have to be reasonably smart about that. And we do way more grabbing than putting, so I can deal without the puts. urlgrabber looks like exactly what I want, but there doesn't seem to have been any development for the last couple years and I'm not sure how compatible it is with 2.6. Anybody got much experience with this? Or opinions?
[ "URLgrabber appears to be very mature, and since it's used by yum (and thus many Unix systems), I would expect it to be very stable. Python 2.x is largely backward compatible. You might encounter some warnings, but I would expect it to work suitably under Python 2.6.\n", "Depending on the sort of application you are writing, you might want to consider twisted python, as it has http server and client code built in. However, it is a rather large departure from standard procedural python programming. \nThe big advantage of twisted for you is that it can handle your client requests in the background, handles retries and is very scalable.\nUpdate\nFor a quick script that interacts with servers, see this serverfault answer:\nhttps://serverfault.com/questions/66336/script-automation-login-enter-password-run-commands-save-output-locally\nIt recommends the tool expect\n\nExpect is a tool for automating\n interactive applications such as\n telnet, ftp, passwd, fsck, rlogin,\n tip, etc. Expect really makes this\n stuff trivial. Expect is also useful\n for testing these same applications.\n And by adding Tk, you can also wrap\n interactive applications in X11 GUIs.\nExpect can make easy all sorts of\n tasks that are prohibitively difficult\n with anything else. You will find that\n Expect is an absolutely invaluable\n tool - using it, you will be able to\n automate tasks that you've never even\n thought of before - and you'll be able\n to do this automation quickly and\n easily.\n\nSounds good to me!\n" ]
[ 4, 0 ]
[]
[]
[ "ftp", "http", "python" ]
stackoverflow_0001532760_ftp_http_python.txt
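The retry interface the question sketches (retrys=15, try_until=...) is easy to prototype as a wrapper, independent of which transport library ends up underneath — this illustrates the asker's proposed signature, not any library's real API:

import time
from datetime import datetime

def grab(fetch, retrys=15, try_until=None, delay=5):
    attempt = 0
    while True:
        try:
            return fetch()  # e.g. lambda: urllib2.urlopen(url).read()
        except IOError:     # placeholder -- catch whatever your transport actually raises
            attempt += 1
            expired = try_until is not None and datetime.now() >= try_until
            if attempt >= retrys or expired:
                raise
            time.sleep(delay)
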
Q: Python object initialization bug. Or am I misunderstanding how objects work? 1 import sys 2 3 class dummy(object): 4 def __init__(self, val): 5 self.val = val 6 7 class myobj(object): 8 def __init__(self, resources): 9 self._resources = resources 10 11 class ext(myobj): 12 def __init__(self, resources=[]): 13 #myobj.__init__(self, resources) 14 self._resources = resources 15 16 one = ext() 17 one._resources.append(1) 18 two = ext() 19 20 print one._resources 21 print two._resources 22 23 sys.exit(0) This will print the reference to the object assigned to one._resources for both one and two objects. I would think that two would be an empty array as it is clearly setting it as such if it's not defined when creating the object. Uncommenting myobj.__init__(self, resources) does the same thing. Using super(ext, self).__init__(resources) also does the same thing. The only way I can get it to work is if I use the following: two = ext(dummy(2)) I shouldn't have to manually set the default value when creating the object to make this work. Or maybe I do. Any thoughts? I tried this using Python 2.5 and 2.6. A: You should change def __init__(self, resources=[]): self._resources = resources to def __init__(self, resources=None): if resources is None: resources = [] self._resources = resources and all will be better. This is a detail in the way default arguments are handled if they're mutable. There's some more information in the discussion section of this page. A: Your problem is that the default value is evaluated at function definition time. This means that the same list object is shared between instances. See the answer to this question for more discussion. A: Please read this answer for a discussion of how to setup a class from __init__(). You have encountered a well-known quirk of Python: you are trying to set up a mutable, and your mutable is being evaluated once when __init__() is compiled. The standard workaround is: class ext(myobj): def __init__(self, resources=None): if resources is None: resources = [] #myobj.__init__(self, resources) self._resources = resources A: From http://docs.python.org/3.1/tutorial/controlflow.html: The default value is evaluated only once. This makes a difference when the default is a mutable object such as a list, dictionary, or instances of most classes. A: This is a known Python gotcha. You have to avoid using a mutable object on the call of a function/method. The objects that provide the default values are not created at the time that the function/method is called. They are created at the time that the statement that defines the function is executed. (See the discussion at Default arguments in Python: two easy blunders: "Expressions in default arguments are calculated when the function is defined, not when it’s called.") This behavior is not a wart in the Python language. It really is a feature, not a bug. There are times when you really do want to use mutable default arguments. One thing they can do (for example) is retain a list of results from previous invocations, something that might be very handy. But for most programmers — especially beginning Pythonistas — this behavior is a gotcha. So for most cases we adopt the following rules. Never use a mutable object — that is: a list, a dictionary, or a class instance — as the default value of an argument. Ignore rule 1 only if you really, really, REALLY know what you're doing. So... we plan always to follow rule #1. Now, the question is how to do it... 
how to code functionF in order to get the behavior that we want. Fortunately, the solution is straightforward. The mutable objects used as defaults are replaced by None, and then the arguments are tested for None. So how can one do it correctly? One solution is to avoid using mutable default values for arguments. But this is hardly satisfactory, as from time to time a new list is a useful default. There are some complex solutions like defining a decorator for functions that deep-copies all arguments. This is overkill, and the problem can be solved easily as follows: class ext(myobj): def __init__(self, resources=None): if not resources: resources = [] self._resources = resources
Python object initialization bug. Or am I misunderstanding how objects work?
1 import sys 2 3 class dummy(object): 4 def __init__(self, val): 5 self.val = val 6 7 class myobj(object): 8 def __init__(self, resources): 9 self._resources = resources 10 11 class ext(myobj): 12 def __init__(self, resources=[]): 13 #myobj.__init__(self, resources) 14 self._resources = resources 15 16 one = ext() 17 one._resources.append(1) 18 two = ext() 19 20 print one._resources 21 print two._resources 22 23 sys.exit(0) This will print the reference to the object assigned to one._resources for both one and two objects. I would think that two would be an empty array as it is clearly setting it as such if it's not defined when creating the object. Uncommenting myobj.__init__(self, resources) does the same thing. Using super(ext, self).__init__(resources) also does the same thing. The only way I can get it to work is if I use the following: two = ext(dummy(2)) I shouldn't have to manually set the default value when creating the object to make this work. Or maybe I do. Any thoughts? I tried this using Python 2.5 and 2.6.
[ "You should change\ndef __init__(self, resources=[]):\n self._resources = resources\n\nto\ndef __init__(self, resources=None):\n if resources is None:\n resources = []\n self._resources = resources\n\nand all will be better. This is a detail in the way default arguments are handled if they're mutable. There's some more information in the discussion section of this page.\n", "Your problem is that the default value is evaluated at function definition time. This means that the same list object is shared between instances. See the answer to this question for more discussion.\n", "Please read this answer for a discussion of how to setup a class from __init__(). You have encountered a well-known quirk of Python: you are trying to set up a mutable, and your mutable is being evaluated once when __init__() is compiled. The standard workaround is:\nclass ext(myobj):\n def __init__(self, resources=None):\n if resources is None:\n resources = []\n #myobj.__init__(self, resources)\n self._resources = resources\n\n", "From http://docs.python.org/3.1/tutorial/controlflow.html:\n\nThe default value is evaluated only\n once. This makes a difference when the\n default is a mutable object such as a\n list, dictionary, or instances of most\n classes.\n\n", "This is a known Python gotcha.\nYou have to avoid using a mutable object on the call of a function/method.\n\nThe objects that provide the default values are not created at the time that the function/method is called. They are created at the time that the statement that defines the function is executed. (See the discussion at Default arguments in Python: two easy blunders: \"Expressions in default arguments are calculated when the function is defined, not when it’s called.\")\nThis behavior is not a wart in the Python language. It really is a feature, not a bug. There are times when you really do want to use mutable default arguments. One thing they can do (for example) is retain a list of results from previous invocations, something that might be very handy.\nBut for most programmers — especially beginning Pythonistas — this behavior is a gotcha. So for most cases we adopt the following rules.\n\nNever use a mutable object — that is: a list, a dictionary, or a class instance — as the default value of an argument.\nIgnore rule 1 only if you really, really, REALLY know what you're doing.\n\nSo... we plan always to follow rule #1. Now, the question is how to do it... how to code functionF in order to get the behavior that we want.\nFortunately, the solution is straightforward. The mutable objects used as defaults are replaced by None, and then the arguments are tested for None.\n\n\n\nSo how can one do it correctly? One solution is avoid using mutable default values for arguments. But this is hardly satisfactory, as from time to time a new list is a useful default. There are some complex solutions like defining a decorator for functions that deep-copies all arguments. This is an overkill, and the problem can be solved easily as follows:\n\nclass ext(myobj):\n def __init__(self, resources=None):\n if not resources:\n resources = []\n self._resources = resources\n\n" ]
[ 8, 6, 2, 1, 0 ]
[]
[]
[ "arguments", "mutable", "python" ]
stackoverflow_0001534407_arguments_mutable_python.txt
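The shared-default behavior the answers above describe is easy to see directly, because the default object is stored on the function itself:

def f(resources=[]):
    resources.append(1)
    return resources

print f()              # [1]
print f()              # [1, 1] -- the same list object on both calls
print f.func_defaults  # ([1, 1],) in Python 2; use f.__defaults__ in Python 3
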
Q: Does a Python 3 SOAP client module exist? Possible Duplicate: What’s the best SOAP library for Python 3.x? I couldn't find one that works with Python 3.1. Any suggestions for a WSDL-consuming Python 3 SOAP client module/library? A: You could port an existing library that you like and provide your changes to the author of the package.
Does a Python 3 SOAP client module exist?
Possible Duplicate: What’s the best SOAP library for Python 3.x? I couldn't find one that works with Python 3.1. Any suggestions for a WSDL-consuming Python 3 SOAP client module/library?
[ "You could port an existing library that you like and provide your changes to the author of the package.\n" ]
[ 3 ]
[]
[]
[ "python", "python_3.x", "soap" ]
stackoverflow_0001534554_python_python_3.x_soap.txt
Q: Calling an RPC function in a running Windows service (process) using Python I have a Windows service (acts as a server) that I want to test using Python scripts. This service is written in C++ and exposes several RPC functions that other services consume. I want to mock those other services using my Python program and call those RPC functions from the script. This is the first stage. The second stage happens when the server service responds to its caller service by another RPC call. How can this be done in Python? If Python (or any of its extensions) can't call/receive the RPCs, can this be done if I change the main server service code and add whatever code is necessary, which will end up calling the same functionality the RPCs used to execute but will be callable from Python? Note: The server service implements the RPC functions using raw Windows RPC implemented with IDL files. Other services, which are written in C++ too, interested in consuming those RPCs are using the IDL file to generate the needed interface to do the communication. Using XML-RPC or other technologies isn't an option. A: What kind of RPC are you thinking of? If it is XML-RPC, then Python comes with the SimpleXMLRPCServer module, which, well, allows you to write RPC servers in Python. If the remote server uses DCOM, you can use PythonCOM.
Calling an RPC function in a running Windows service (process) using Python
I have a Windows service (acts as a server) that I want to test using Python scripts. This service is written in C++ and exposes several RPC functions that other services consume. I want to mock those other services using my Python program and call those RPC functions from the script. This is the first stage. The second stage happens when the server service responds to its caller service by another RPC call. How can this be done in Python? If Python (or any of its extensions) can't call/receive the RPCs, can this be done if I change the main server service code and add whatever code is necessary, which will end up calling the same functionality the RPCs used to execute but will be callable from Python? Note: The server service implements the RPC functions using raw Windows RPC implemented with IDL files. Other services, which are written in C++ too, interested in consuming those RPCs are using the IDL file to generate the needed interface to do the communication. Using XML-RPC or other technologies isn't an option.
[ "What kind of RPC are you thinking of? If it is XML-RPC, then Python comes with the SimpleXMLRPCServer module, which, well, allows you to write RPC servers in Python.\nIf the remote server uses DCOM, you can use PythonCOM.\n" ]
[ 1 ]
[]
[]
[ "interprocess", "python", "rpc" ]
stackoverflow_0001534686_interprocess_python_rpc.txt
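For completeness, the SimpleXMLRPCServer the answer above mentions looks like the sketch below. Note the asker ruled XML-RPC out — the real service speaks raw MS-RPC through IDL-generated stubs — so this only illustrates the suggested fallback; the function and port are hypothetical:

from SimpleXMLRPCServer import SimpleXMLRPCServer  # Python 2 module name

def status():
    return "ok"

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_function(status, "Status")
server.serve_forever()
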
Q: C++ or Python for C# programmer? I am a corporate C# programmer. I found some time to invest into myself and stumbed upon a dilemma. Where to go from now? C#/.NET is easy to learn, develop for, etc. In future I would want to apply to Microsoft or Google, and want to invest spare time wisely, so what I will learn will flourish in future. So: Python or C++ for a C# programmer? I am a little scared of C++ because developing anything in it takes ages. Python is easy, but I take it as a child-play language, which still need lots of patching to be some mature development tool/language. Any C# developers having same dilemma? A: I Am a little scared of C++ because developing anything in it takes ages. I'm not sure how you can say that when you say yourself that you have no experience in the language. C++ is a good tool for some things, Python is good for other things. What you want to do should be driving this decision, not the technology in and of itself. C# programmer or not, I would assume that you can pick up any language, but a language is just a tool, so your question is difficult to answer. A: Python may be easier to get started with, but a dynamically typed scripting language is a very different language from C# or C++. You will learn more about programming learning it than you will by hopping to a close cousin of a language you already know. Really, solid familiarity with at least one scripting language (Python, Perl and Ruby are the favorites) should be a requirement for all programmers. A: If you want to apply to Google then Python might be the one to go for, surely MS would like the C# already. If nothing else the competition would not be as fierce as there are much more folk out there with multi years of C++ experience. Also Python gives you a broader language skill and would be a good path to more languages and scripting. But as said and will be said again, choose your tool wisely and see whether it's a nail or a screw you're trying to secure. A: C# is a little closer to Java and C++ than it is to Python, so learn Python first out of the two. However, my advice would be: Stick with your current language and learn more techniques, such as a wider range of algorithms, functional programming, design by contract, unit testing, OOAD, etc. learn C (focus on figuring out pointers, multi-dimensional arrays, data structures like linked lists, and resource management like memory allocation/deallocation, file handles, etc) learn Assembly (on a modern platform with a flat memory architecture, but doing low-level stuff like talking to hardware or drawing on a canvas) learn Python or Ruby. Chances are, you'll stick with one of these for a while, knowing all of the above, unless some hot new language has come along by then. A: Why not learn some of each. Studying a language for a week or so won't make you an expert, but it will answer a lot of questions in your head and plant a seed for the future. It's important to not just read through exercises. Find some simple problems that can be programmed in a page or two at most and solve them with each language. That will help you to learn the strengths and weaknesses in the context of the way you think and how you solve problems. A: C++ is usually used when speed, and low-level OS access is involved. It's a good skill to have if you want to expand. Python allows you to do thing quickly, and it's quite easy to learn, and provides more power than you'd expect from a scripting language, and probably one of the fastest ones out there. 
C++ isn't exactly slow to develop, if you've got an IDE, it's not hard to write per-se, but the syntax is going to get you. A: If you want to apply to Google and/ or Microsoft then I'd say that of the two you need both! Given more choice, probably C++ and one other language - either dynamic, functional, or both (Scala might be a good choice too). It's not necessarily about whether you'd use the languages themselves but more about the different approaches they require and encourage. If you continue to be "scared" by C++ you're probably going to struggle applying as a dev at either of those organisations - unless you are highly specialised elsewhere. A: I think you just asked wrong question. It is not about the tool itself. It should be about what kind of software do you really find enjoying to create. C++ is used in creating different types of applications that are written in C# or Python. Please mind, that C# or .NET itself is not easy to learn. It may be quite easy to develop something that works somehow, but if you just delve into the details... Anyway, my point is: if you're interested in developing web solutions: go for Python. There is a lot of hype about Python at the moment, and even Microsoft realized the power of this language (you may use your knowledge of .NET and Python programming using IronPython). C++ is at the moment used in some specific areas. Business apps are written mostly in Java or .NET, and C++ is still great for more low level programming, in areas where performance is the crucial thing (and I mean 'performance' as performance of language/platform itself). The good example is game industry: Java and C# are definitely easier to learn than C++, but... how many 'big games' have been created entirely in C#/Java? I have another advice for you: if you want to work for Microsoft or Google, do not focus on language itselft. It is NOT the most important thing. Focus on problem solving, algorithms and other stuff (Stevie Yegge's post about how to prepare for an interview at Google). Oh, and of course as a fan of C++ (and C# too) I must admit that it is not true, that developing anything in C++ takes ages. You probably think of C++ as of "C with clasees" - take a look at STL, templates, advanced templates, Boost... Somehow all those people working in games industry manage to create better and better games in not so looooong time that takes others to create 'boring and easy' business app in Java/C#. A: As someone familiar with C# and .NET you should consider IronPython. Python for .NET. This would be a good way to leverage what you know and learn a new dynamic language at the same time. A: You might be interested in looking at Windows Powershell. It's the latest scripting technology from Microsoft, built on .NET, and can be extended via C#. Granted, it's not as portable as C++ or Python, but it would leverage your C#/.NET experience more readily. Otherwise, I would suggest C++ (and possibly C). Microsoft builds a lot more of its products with C/C++ than with Python.
C++ or Python for C# programmer?
I am a corporate C# programmer. I found some time to invest into myself and stumbled upon a dilemma. Where to go from now? C#/.NET is easy to learn, develop for, etc. In future I would want to apply to Microsoft or Google, and want to invest spare time wisely, so what I will learn will flourish in future. So: Python or C++ for a C# programmer? I am a little scared of C++ because developing anything in it takes ages. Python is easy, but I take it as a child-play language, which still needs lots of patching to be some mature development tool/language. Any C# developers having the same dilemma?
[ "\nI Am a little scared of C++ because developing anything in it takes ages.\n\nI'm not sure how you can say that when you say yourself that you have no experience in the language. C++ is a good tool for some things, Python is good for other things. What you want to do should be driving this decision, not the technology in and of itself.\nC# programmer or not, I would assume that you can pick up any language, but a language is just a tool, so your question is difficult to answer.\n", "Python may be easier to get started with, but a dynamically typed scripting language is a very different language from C# or C++. You will learn more about programming learning it than you will by hopping to a close cousin of a language you already know. Really, solid familiarity with at least one scripting language (Python, Perl and Ruby are the favorites) should be a requirement for all programmers.\n", "If you want to apply to Google then Python might be the one to go for, surely MS would like the C# already. If nothing else the competition would not be as fierce as there are much more folk out there with multi years of C++ experience. Also Python gives you a broader language skill and would be a good path to more languages and scripting.\nBut as said and will be said again, choose your tool wisely and see whether it's a nail or a screw you're trying to secure.\n", "C# is a little closer to Java and C++ than it is to Python, so learn Python first out of the two.\nHowever, my advice would be:\n\nStick with your current language and learn more techniques, such as a wider range of algorithms, functional programming, design by contract, unit testing, OOAD, etc.\nlearn C (focus on figuring out pointers, multi-dimensional arrays, data structures like linked lists, and resource management like memory allocation/deallocation, file handles, etc)\nlearn Assembly (on a modern platform with a flat memory architecture, but doing low-level stuff like talking to hardware or drawing on a canvas)\nlearn Python or Ruby. Chances are, you'll stick with one of these for a while, knowing all of the above, unless some hot new language has come along by then.\n\n", "Why not learn some of each. Studying a language for a week or so won't make you an expert, but it will answer a lot of questions in your head and plant a seed for the future.\nIt's important to not just read through exercises. Find some simple problems that can be programmed in a page or two at most and solve them with each language. 
That will help you to learn the strengths and weaknesses in the context of the way you think and how you solve problems.\n", "C++ is usually used when speed, and low-level OS access is involved.\nIt's a good skill to have if you want to expand.\nPython allows you to do thing quickly, and it's quite easy to learn, and provides more power than you'd expect from a scripting language, and probably one of the fastest ones out there.\nC++ isn't exactly slow to develop, if you've got an IDE, it's not hard to write per-se, but the syntax is going to get you.\n", "If you want to apply to Google and/ or Microsoft then I'd say that of the two you need both!\nGiven more choice, probably C++ and one other language - either dynamic, functional, or both (Scala might be a good choice too).\nIt's not necessarily about whether you'd use the languages themselves but more about the different approaches they require and encourage.\nIf you continue to be \"scared\" by C++ you're probably going to struggle applying as a dev at either of those organisations - unless you are highly specialised elsewhere.\n", "I think you just asked wrong question. It is not about the tool itself. It should be about what kind of software do you really find enjoying to create. C++ is used in creating different types of applications that are written in C# or Python. Please mind, that C# or .NET itself is not easy to learn. It may be quite easy to develop something that works somehow, but if you just delve into the details...\nAnyway, my point is: if you're interested in developing web solutions: go for Python. There is a lot of hype about Python at the moment, and even Microsoft realized the power of this language (you may use your knowledge of .NET and Python programming using IronPython).\nC++ is at the moment used in some specific areas. Business apps are written mostly in Java or .NET, and C++ is still great for more low level programming, in areas where performance is the crucial thing (and I mean 'performance' as performance of language/platform itself). The good example is game industry: Java and C# are definitely easier to learn than C++, but... how many 'big games' have been created entirely in C#/Java?\nI have another advice for you: if you want to work for Microsoft or Google, do not focus on language itselft. It is NOT the most important thing. Focus on problem solving, algorithms and other stuff (Stevie Yegge's post about how to prepare for an interview at Google).\nOh, and of course as a fan of C++ (and C# too) I must admit that it is not true, that developing anything in C++ takes ages. You probably think of C++ as of \"C with clasees\" - take a look at STL, templates, advanced templates, Boost... Somehow all those people working in games industry manage to create better and better games in not so looooong time that takes others to create 'boring and easy' business app in Java/C#.\n", "As someone familiar with C# and .NET you should consider IronPython. Python for .NET. This would be a good way to leverage what you know and learn a new dynamic language at the same time.\n", "You might be interested in looking at Windows Powershell. It's the latest scripting technology from Microsoft, built on .NET, and can be extended via C#.\nGranted, it's not as portable as C++ or Python, but it would leverage your C#/.NET experience more readily. Otherwise, I would suggest C++ (and possibly C). Microsoft builds a lot more of its products with C/C++ than with Python.\n" ]
[ 7, 5, 3, 2, 1, 1, 1, 1, 0, 0 ]
[]
[]
[ "c#", "c++", "python" ]
stackoverflow_0001534450_c#_c++_python.txt
Q: Need a workaround: Python's select.select() doesn't work with subprocess' stdout? From within my master python program, I am spawning a child program with this code: child = subprocess.Popen(..., stdout=subprocess.PIPE, stdin=subprocess.PIPE) FWIW, the child is a PHP script which needs to communicate back and forth with the python program. The master python program actually needs to listen for communication from several other channels - other PHP scripts spawned using the same code, or socket objects coming from socket.accept(), and I would like to use select.select() as that is the most efficient way to wait for input from a variety of sources. The problem I have is that select.select() under Windows does not work with the subprocess' stdout file descriptor (this is documented), and it looks like I will be forced to: A) Poll the PHP scripts to see if they have written anything to stdout. (This system needs to be very responsive, I would need to poll at least 1,000 times per second!) B) Have the PHP scripts connect to the master process and communicate via sockets instead of stdout/stdin. I will probably go with solution (B), because I can't bring myself to make the system poll at such a high frequency, but it seems a sad waste of resources to reconnect with sockets when stdout/stdin would have done just fine. Is there some alternative solution which would allow me to use stdout and select.select()? A: Unfortunately, many uses of pipes on Windows don't work as nicely as they do on Unix, and this is one of them. On Windows, the better solution is probably to have your master program spawn threads to listen to each of its subprocesses. If you know the granularity of data that you expect back from your subprocess, you can do blocking reads in each of your threads, and then the thread will come alive when the IO unblocks. Alternatively, (I have no idea if this is viable for your project), you could look into using a Unix-like system, or a Unix-like layer on top of Windows (e.g. Cygwin), where select.select() will work on subprocess pipes.
Need a workaround: Python's select.select() doesn't work with subprocess' stdout?
From within my master python program, I am spawning a child program with this code: child = subprocess.Popen(..., stdout=subprocess.PIPE, stdin=subprocess.PIPE) FWIW, the child is a PHP script which needs to communicate back and forth with the python program. The master python program actually needs to listen for communication from several other channels - other PHP scripts spawned using the same code, or socket objects coming from socket.accept(), and I would like to use select.select() as that is the most efficient way to wait for input from a variety of sources. The problem I have is that select.select() under Windows does not work with the subprocess' stdout file descriptor (this is documented), and it looks like I will be forced to: A) Poll the PHP scripts to see if they have written anything to stdout. (This system needs to be very responsive, I would need to poll at least 1,000 times per second!) B) Have the PHP scripts connect to the master process and communicate via sockets instead of stdout/stdin. I will probably go with solution (B), because I can't bring myself to make the system poll at such a high frequency, but it seems a sad waste of resources to reconnect with sockets when stdout/stdin would have done just fine. Is there some alternative solution which would allow me to use stdout and select.select()?
[ "Unfortunately, many uses of pipes on Windows don't work as nicely as they do on Unix, and this is one of them. On Windows, the better solution is probably to have your master program spawn threads to listen to each of its subprocesses. If you know the granularity of data that you expect back from your subprocess, you can do blocking reads in each of your threads, and then the thread will come alive when the IO unblocks.\nAlternatively, (I have no idea if this is viable for your project), you could look into using a Unix-like system, or a Unix-like layer on top of Windows (e.g. Cygwin), where select.select() will work on subprocess pipes.\n" ]
[ 4 ]
[]
[]
[ "python", "select", "sockets", "subprocess" ]
stackoverflow_0001534825_python_select_sockets_subprocess.txt
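A minimal sketch of the thread-per-pipe approach from the answer above: each thread does blocking reads on one child's stdout and pushes lines onto a single queue, so the main loop gets the select()-like "wake on any source" behavior (the command line and names are hypothetical):

import subprocess
import threading
import Queue  # 'queue' in Python 3

events = Queue.Queue()

def pump(name, pipe):
    for line in iter(pipe.readline, ""):  # blocking read; "" means EOF
        events.put((name, line))

child = subprocess.Popen(["php", "worker.php"],
                         stdout=subprocess.PIPE, stdin=subprocess.PIPE)
t = threading.Thread(target=pump, args=("worker", child.stdout))
t.setDaemon(True)
t.start()

name, line = events.get()  # blocks until any child produces output
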
Q: Turbogears: Can not start paster after updating to mac osx 10.6 After updating to mac osx 10.6 I had to switch back to python 2.5 in order to make virtual env work. But still I can not start my turbogears project. Paster is giving this: Traceback (most recent call last): File ".../tg2env/bin/paster", line 5, in <module> from pkg_resources import load_entry_point File ".../tg2env/lib/python2.5/site-packages/setuptools-0.6c9-py2.5.egg/pkg_resources.py", line 657, in <module> File ".../tg2env/lib/python2.5/site-packages/setuptools-0.6c9-py2.5.egg/pkg_resources.py", line 660, in Environment File ".../tg2env/lib/python2.5/site-packages/setuptools-0.6c9-py2.5.egg/pkg_resources.py", line 55, in get_supported_platform File ".../tg2env/lib/python2.5/site-packages/setuptools-0.6c9-py2.5.egg/pkg_resources.py", line 186, in get_build_platform File ".../tg2env/lib/python2.5/distutils/__init__.py", line 14, in <module> exec open(os.path.join(distutils_path, '__init__.py')).read() IOError: [Errno 2] No such file or directory: '/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/__init__.py' Any ideas? Thanks. A: Why did you need to switch back to 2.5 to make virtualenv work? I have upgraded to 10.6 and am happily using virtualenv in Python 2.6. A: Probably eggs are installed for 2.6 distro. Please run in your terminal: defaults write com.apple.versioner.python Version 2.5 export VERSIONER_PYTHON_VERSION=2.5 sudo easy_install virtualenv Check out the second line, it should change python version for current terminal session. dgl@dgl:~/ > python Python 2.6.1 (r261:67515, Jul 7 2009, 23:51:51) ... dgl@dgl:~/ > export VERSIONER_PYTHON_VERSION=2.5 dgl@dgl:~/ > python Python 2.5.4 (r254:67916, Jul 7 2009, 23:51:24) ... A: As you've seen, in Snow Leopard 10.6, Apple supplies both a Python 2.6.2 (the default for /usr/bin/python) and a legacy Python 2.5.4 (/usr/bin/python2.5). The heart of both live in the /System/Library/Frameworks/Python.framework. In general, everything under /System is supplied and managed by Apple; it should not be modified by anyone else. If that message is to be believed, your 10.6 installation is faulty. $ cd /System/Library/Frameworks/Python.framework/Versions $ ls -l total 8 drwxr-xr-x 5 root wheel 272 Sep 5 10:18 2.3/ drwxr-xr-x 9 root wheel 408 Sep 5 10:43 2.5/ drwxr-xr-x 9 root wheel 408 Sep 5 10:43 2.6/ lrwxr-xr-x 1 root wheel 3 Sep 5 10:18 Current@ -> 2.6 $ ls -l 2.5/lib/python2.5/distutils/__init__.py -rw-r--r-- 1 root wheel 635 Jul 7 23:55 2.5/lib/python2.5/distutils/__init__.py $ /usr/bin/python2.5 Python 2.5.4 (r254:67916, Jul 7 2009, 23:51:24) [GCC 4.2.1 (Apple Inc. build 5646)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import distutils >>> Check to see that file exists and with the correct permissions. If not, you should figure out what else is wrong with your /System and consider restoring from a backup or just reinstalling Snow Leopard.
Turbogears: Can not start paster after updating to mac osx 10.6
After updating to mac osx 10.6 I had to switch back to python 2.5 in order to make virtual env work. But still I can not start my turbogears project. Paster is giving this: Traceback (most recent call last): File ".../tg2env/bin/paster", line 5, in <module> from pkg_resources import load_entry_point File ".../tg2env/lib/python2.5/site-packages/setuptools-0.6c9-py2.5.egg/pkg_resources.py", line 657, in <module> File ".../tg2env/lib/python2.5/site-packages/setuptools-0.6c9-py2.5.egg/pkg_resources.py", line 660, in Environment File ".../tg2env/lib/python2.5/site-packages/setuptools-0.6c9-py2.5.egg/pkg_resources.py", line 55, in get_supported_platform File ".../tg2env/lib/python2.5/site-packages/setuptools-0.6c9-py2.5.egg/pkg_resources.py", line 186, in get_build_platform File ".../tg2env/lib/python2.5/distutils/__init__.py", line 14, in <module> exec open(os.path.join(distutils_path, '__init__.py')).read() IOError: [Errno 2] No such file or directory: '/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/__init__.py' Any ideas? Thanks.
[ "Why did you need to switch back to 2.5 to make virtualenv work? I have upgraded to 10.6 and am happily using virtualenv in Python 2.6.\n", "Probably eggs are installed for 2.6 distro. Please run in your terminal:\ndefaults write com.apple.versioner.python Version 2.5\nexport VERSIONER_PYTHON_VERSION=2.5\nsudo easy_install virtualenv\n\nCheck out the second line, it should change python version for current terminal session.\ndgl@dgl:~/ > python\nPython 2.6.1 (r261:67515, Jul 7 2009, 23:51:51) \n...\ndgl@dgl:~/ > export VERSIONER_PYTHON_VERSION=2.5\ndgl@dgl:~/ > python\nPython 2.5.4 (r254:67916, Jul 7 2009, 23:51:24) \n...\n\n", "As you've seen, in Snow Leopard 10.6, Apple supplies both a Python 2.6.2 (the default for /usr/bin/python) and a legacy Python 2.5.4 (/usr/bin/python2.5). The heart of both live in the /System/Library/Frameworks/Python.framework. In general, everything under /System is supplied and managed by Apple; it should not be modified by anyone else.\nIf that message is to be believed, your 10.6 installation is faulty.\n$ cd /System/Library/Frameworks/Python.framework/Versions\n$ ls -l\ntotal 8\ndrwxr-xr-x 5 root wheel 272 Sep 5 10:18 2.3/\ndrwxr-xr-x 9 root wheel 408 Sep 5 10:43 2.5/\ndrwxr-xr-x 9 root wheel 408 Sep 5 10:43 2.6/\nlrwxr-xr-x 1 root wheel 3 Sep 5 10:18 Current@ -> 2.6\n$ ls -l 2.5/lib/python2.5/distutils/__init__.py\n-rw-r--r-- 1 root wheel 635 Jul 7 23:55 2.5/lib/python2.5/distutils/__init__.py\n\n$ /usr/bin/python2.5\nPython 2.5.4 (r254:67916, Jul 7 2009, 23:51:24) \n[GCC 4.2.1 (Apple Inc. build 5646)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import distutils\n>>>\n\nCheck to see that file exists and with the correct permissions. If not, you should figure out what else is wrong with your /System and consider restoring from a backup or just reinstalling Snow Leopard.\n" ]
[ 0, 0, 0 ]
[]
[]
[ "macos", "python", "turbogears2" ]
stackoverflow_0001533388_macos_python_turbogears2.txt
Q: Is there any way to affect locals at runtime? I actually want to create a new local. I know it sounds dubious, but I think I have a nice use case for this. Essentially my problem is that this code throws "NameError: global name 'eggs' is not defined" when I try to print eggs: def f(): import inspect frame_who_called = inspect.stack()[1][0] frame_who_called.f_locals['eggs'] = 123 def g(): f() print(eggs) g() I found this old thing: http://mail.python.org/pipermail/python-dev/2005-January/051018.html Which would mean I might be able to do it using ctypes and calling some secret function, though they only talked about updating a value. But maybe there's an easier way? A: I am highly curious as to your use case. Why on Earth are you trying to poke a new local into the caller's frame, rather than simply doing something like this: def f(): return 123 def g(): eggs = f() print(eggs) After all, you can return a tuple with as many values as you like: def f(): return 123, 456, 789 def g(): eggs, ham, bacon = f() print(eggs, ham, bacon) A: As Greg Hewgill mentioned in a comment on the question, I answered another question about modifying locals in Python 3, and I'll give a bit of a recap here. There is a post on the Python 3 bug list about this issue -- it's somewhat poorly documented in the Python 3 manuals. Python 3 uses an array for locals instead of a dictionary like in Python 2 -- the advantage is a faster lookup time for local variables (Lua does this too). Basically, the array is defined at "bytecode-compile-time" and cannot be modified at runtime. See specifically the last paragraph in Georg Brandl's post on the bug list for finer details about why this cannot (and probably never will) work in Python 3. A: In Python 2.*, you can get such code to work by defeating the normal optimization of locals: >>> def g(): ... exec 'pass' ... f() ... print(eggs) The presence of an exec statement causes Python 2 to compile g in a totally non-optimized fashion, so locals are in a dict instead of in an array as they normally would be. (The performance hit can be considerable). This "de-optimization" does not exist in Python 3, where exec is not a statement any more (not even a keyword, just a function) -- even putting parentheses after it doesn't help...: >>> def x(): ... exec('a=23') ... print(a) ... >>> x() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 3, in x NameError: global name 'a' is not defined >>> i.e., not even exec can now "create locals" that were not know at def-time (i.e., when the compiler did its pass to turn the function body into bytecode). Your best bet would be to give up. Second best would be to have your f function inject new names into the caller's globals -- those are still a dict, after all.
Is there any way to affect locals at runtime?
I actually want to create a new local. I know it sounds dubious, but I think I have a nice use case for this. Essentially my problem is that this code throws "NameError: global name 'eggs' is not defined" when I try to print eggs: def f(): import inspect frame_who_called = inspect.stack()[1][0] frame_who_called.f_locals['eggs'] = 123 def g(): f() print(eggs) g() I found this old thing: http://mail.python.org/pipermail/python-dev/2005-January/051018.html Which would mean I might be able to do it using ctypes and calling some secret function, though they only talked about updating a value. But maybe there's an easier way?
[ "I am highly curious as to your use case. Why on Earth are you trying to poke a new local into the caller's frame, rather than simply doing something like this:\ndef f():\n return 123\n\ndef g():\n eggs = f()\n print(eggs)\n\nAfter all, you can return a tuple with as many values as you like:\ndef f():\n return 123, 456, 789\n\ndef g():\n eggs, ham, bacon = f()\n print(eggs, ham, bacon)\n\n", "As Greg Hewgill mentioned in a comment on the question, I answered another question about modifying locals in Python 3, and I'll give a bit of a recap here.\nThere is a post on the Python 3 bug list about this issue -- it's somewhat poorly documented in the Python 3 manuals. Python 3 uses an array for locals instead of a dictionary like in Python 2 -- the advantage is a faster lookup time for local variables (Lua does this too). Basically, the array is defined at \"bytecode-compile-time\" and cannot be modified at runtime.\nSee specifically the last paragraph in Georg Brandl's post on the bug list for finer details about why this cannot (and probably never will) work in Python 3.\n", "In Python 2.*, you can get such code to work by defeating the normal optimization of locals:\n>>> def g():\n... exec 'pass'\n... f()\n... print(eggs)\n\nThe presence of an exec statement causes Python 2 to compile g in a totally non-optimized fashion, so locals are in a dict instead of in an array as they normally would be. (The performance hit can be considerable).\nThis \"de-optimization\" does not exist in Python 3, where exec is not a statement any more (not even a keyword, just a function) -- even putting parentheses after it doesn't help...:\n>>> def x():\n... exec('a=23')\n... print(a)\n... \n>>> x()\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"<stdin>\", line 3, in x\nNameError: global name 'a' is not defined\n>>> \n\ni.e., not even exec can now \"create locals\" that were not know at def-time (i.e., when the compiler did its pass to turn the function body into bytecode).\nYour best bet would be to give up. Second best would be to have your f function inject new names into the caller's globals -- those are still a dict, after all.\n" ]
[ 3, 2, 1 ]
[]
[]
[ "locals", "python", "python_3.x" ]
stackoverflow_0001534368_locals_python_python_3.x.txt
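The "secret function" alluded to in the question is CPython's PyFrame_LocalsToFast, reachable via ctypes. As the linked mailing-list thread says, it pushes an updated value back into the caller's fast-locals array — but it still cannot create a name the compiler never allocated a slot for. A CPython-only sketch:

import ctypes
import inspect

def set_local(name, value):
    frame = inspect.stack()[1][0]
    frame.f_locals[name] = value
    # copy f_locals back into the frame's fast-locals array (existing names only)
    ctypes.pythonapi.PyFrame_LocalsToFast(ctypes.py_object(frame),
                                          ctypes.c_int(0))

So set_local('eggs', 123) would work from g() only if eggs is already assigned somewhere in g's body; it cannot conjure a brand-new local.
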
Q: How to remove these duplicates in a list (python) biglist = [ {'title':'U2 Band','link':'u2.com'}, {'title':'ABC Station','link':'abc.com'}, {'title':'Live Concert by U2','link':'u2.com'} ] I would like to remove the THIRD element inside the list...because it has "u2.com" as a duplicate. I don't want duplicate "link" element. What is the most efficient code to do this so that it results in this: biglist = [ {'title':'U2','link':'u2.com'}, {'title':'ABC','link':'abc.com'} ] I have tried many ways, including using many nested "for ...in ...." but this is very inefficient and too long. A: Probably the fastest approach, for a really big list, if you want to preserve the exact order of the items that remain, is the following...: biglist = [ {'title':'U2 Band','link':'u2.com'}, {'title':'ABC Station','link':'abc.com'}, {'title':'Live Concert by U2','link':'u2.com'} ] known_links = set() newlist = [] for d in biglist: link = d['link'] if link in known_links: continue newlist.append(d) known_links.add(link) biglist[:] = newlist A: Make a new dictionary, with 'u2.com' and 'abc.com' as the keys, and your list elements as the values. The dictionary will enforce uniqueness. Something like this: uniquelist = dict((element['link'], element) for element in reversed(biglist)) (The reversed is there so that the first elements in the list will be the ones that remain in the dictionary. If you take that out, then you will get the last element instead). Then you can get elements back into a list like this: biglist = uniquelist.values() A: You can sort the list, using the link field of each dictionary as the sort key, then iterate through the list once and remove duplicates (or rather, create a new list with duplicates removed, as is the Python idiom), like so: # sort the list using the 'link' item as the sort key biglist.sort(key=lambda elt: elt['link']) newbiglist = [] for item in biglist: if newbiglist == [] or item['link'] != newbiglist[-1]['link']: newbiglist.append(item) This code will give you the first element (relative ordering in the original biglist) for any group of "duplicates". This is true because the .sort() algorithm used by Python is guaranteed to be a stable sort -- it does not change the order of elements determined to be equal to one another (in this case, elements with the same link). A: biglist = \ [ {'title':'U2 Band','link':'u2.com'}, {'title':'ABC Station','link':'abc.com'}, {'title':'Live Concert by U2','link':'u2.com'} ] def dedupe(lst): d = {} for x in lst: link = x["link"] if link in d: continue d[link] = x return d.values() lst = dedupe(biglist) dedupe() keeps the first of any duplicates. A: You can use defaultdict to group items by link, then removed duplicates if you want to. from collections import defaultdict nodupes = defaultdict(list) for d in biglist: nodupes[d['url']].append(d['title'] This will give you: defaultdict(<type 'list'>, {'abc.com': ['ABC Station'], 'u2.com': ['U2 Band', 'Live Concert by U2']})
How to remove these duplicates in a list (python)
biglist = [ {'title':'U2 Band','link':'u2.com'}, {'title':'ABC Station','link':'abc.com'}, {'title':'Live Concert by U2','link':'u2.com'} ] I would like to remove the THIRD element inside the list...because it has "u2.com" as a duplicate. I don't want duplicate "link" element. What is the most efficient code to do this so that it results in this: biglist = [ {'title':'U2','link':'u2.com'}, {'title':'ABC','link':'abc.com'} ] I have tried many ways, including using many nested "for ...in ...." but this is very inefficient and too long.
[ "Probably the fastest approach, for a really big list, if you want to preserve the exact order of the items that remain, is the following...:\nbiglist = [ \n {'title':'U2 Band','link':'u2.com'}, \n {'title':'ABC Station','link':'abc.com'}, \n {'title':'Live Concert by U2','link':'u2.com'} \n]\n\nknown_links = set()\nnewlist = []\n\nfor d in biglist:\n link = d['link']\n if link in known_links: continue\n newlist.append(d)\n known_links.add(link)\n\nbiglist[:] = newlist\n\n", "Make a new dictionary, with 'u2.com' and 'abc.com' as the keys, and your list elements as the values. The dictionary will enforce uniqueness. Something like this:\nuniquelist = dict((element['link'], element) for element in reversed(biglist))\n\n(The reversed is there so that the first elements in the list will be the ones that remain in the dictionary. If you take that out, then you will get the last element instead).\nThen you can get elements back into a list like this:\nbiglist = uniquelist.values()\n\n", "You can sort the list, using the link field of each dictionary as the sort key, then iterate through the list once and remove duplicates (or rather, create a new list with duplicates removed, as is the Python idiom), like so:\n# sort the list using the 'link' item as the sort key\nbiglist.sort(key=lambda elt: elt['link'])\n\nnewbiglist = []\nfor item in biglist:\n if newbiglist == [] or item['link'] != newbiglist[-1]['link']:\n newbiglist.append(item)\n\nThis code will give you the first element (relative ordering in the original biglist) for any group of \"duplicates\". This is true because the .sort() algorithm used by Python is guaranteed to be a stable sort -- it does not change the order of elements determined to be equal to one another (in this case, elements with the same link).\n", "biglist = \\\n[ \n {'title':'U2 Band','link':'u2.com'}, \n {'title':'ABC Station','link':'abc.com'}, \n {'title':'Live Concert by U2','link':'u2.com'} \n]\n\ndef dedupe(lst):\n d = {}\n for x in lst:\n link = x[\"link\"]\n if link in d:\n continue\n d[link] = x\n return d.values()\n\nlst = dedupe(biglist)\n\ndedupe() keeps the first of any duplicates.\n", "You can use defaultdict to group items by link, then removed duplicates if you want to.\nfrom collections import defaultdict\n\nnodupes = defaultdict(list)\nfor d in biglist:\n nodupes[d['url']].append(d['title']\n\nThis will give you:\ndefaultdict(<type 'list'>, {'abc.com': ['ABC Station'], 'u2.com': ['U2 Band', \n'Live Concert by U2']})\n\n" ]
[ 8, 3, 2, 1, 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0001534736_list_python.txt
Q: Python instance method making multiple instance method calls Here is a code snippet. I have tested the methods listed and they work correctly, yet when I run and test this method (countLOC) it only seems to initialize the first variable that has an instance method call (i = self.countBlankLines()). Anyone know the obvious reason I'm obviously missing? def countLOC(self): i = self.countBlankLines() j = self.countDocStringLines() k = self.countLines() p = self.countCommentLines() return k-i-j-p This returns -3 because countBlankLines() returns 3 (correctly). However, it should return 37 as countDocStringLines() = 6 and countCommentLines() = 4 while countLines() = 50. Thanks. A: If local variables were not initialized (impossible given your code!) they wouldn't be 0 -- rather, you'd get a NameError exception when you try to use them. It's 100% certain that those other method calls (except the first one) are returning 0 (or numbers totaling to 0 in the expression). Hard to guess, not being shown their code, but from your comment my crystal ball tells me you have an iterator as an instance variable: the first method to iterate on it exhausts it, the other methods therefore find it empty.
Python instance method making multiple instance method calls
Here is a code snippet. I have tested the methods listed and they work correctly, yet when I run and test this method (countLOC) it only seems to initialize the first variable that has an instance method call (i = self.countBlankLines()). Anyone know the obvious reason I'm obviously missing? def countLOC(self): i = self.countBlankLines() j = self.countDocStringLines() k = self.countLines() p = self.countCommentLines() return k-i-j-p This returns -3 because countBlankLines() returns 3 (correctly). However, it should return 37 as countDocStringLines() = 6 and countCommentLines() = 4 while countLines() = 50. Thanks.
[ "If local variables were not initialized (impossible given your code!) they wouldn't be 0 -- rather, you'd get a NameError exception when you try to use them. It's 100% certain that those other method calls (except the first one) are returning 0 (or numbers totaling to 0 in the expression).\nHard to guess, not being shown their code, but from your comment my crystal ball tells me you have an iterator as an instance variable: the first method to iterate on it exhausts it, the other methods therefore find it empty.\n" ]
[ 5 ]
[]
[]
[ "method_call", "methods", "python" ]
stackoverflow_0001535313_method_call_methods_python.txt
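A minimal sketch (hypothetical class and data, not the asker's actual code) that reproduces the exhausted-iterator behaviour the answer describes: the first count method consumes the shared iterator, so every later method sees nothing and returns 0.

class LineCounter(object):
    def __init__(self, lines):
        self.lines = iter(lines)           # the bug: an iterator, not a list

    def countBlankLines(self):
        # This loop consumes the iterator completely...
        return sum(1 for line in self.lines if not line.strip())

    def countLines(self):
        # ...so this one finds it already empty and returns 0.
        return sum(1 for line in self.lines)

lc = LineCounter(["x = 1", "", "y = 2"])
print lc.countBlankLines()   # 1
print lc.countLines()        # 0, not 3

Storing a list instead (self.lines = list(lines)) lets every method see the full data and fixes the symptom.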
Q: Need help in designing a phone book application on python running on google app engine Hi I want some help in building a Phone book application on python and put it on google app engine. I am running a huge db of 2 million user lists and their contacts in phonebook. I want to upload all that data from my servers directly onto the google servers and then use a UI to retrieve the phone book contacts of each user based on his name. I am using MS SQL Server 2005 as my DB. Please help in putting together this application. Your inputs are much appreciated. A: For building your UI, AppEngine has its own web framework called webapp that is pretty easy to get working. I've also had a good experience using the Jinja2 templating engine, which you can include in your source, or package as a zip file (example shows Django, you can do the same type of thing for Jinja). As for loading all of your data into the DataStore, you should take a look at the bulk uploader documentation. A: I think you're going to need to be more specific as to what problem you're having. As far as bulk loading goes, there's lots of bulkloader documentation around; or are you asking about model design? If so, we need to know more about how you plan to search for users. Do you need partial string matches? Sorting? Fuzzy matching?
Need help in designing a phone book application on python running on google app engine
Hi I want some help in building a Phone book application on python and put it on google app engine. I am running a huge db of 2 million user lists and their contacts in phonebook. I want to upload all that data from my servers directly onto the google servers and then use a UI to retrieve the phone book contacts of each user based on his name. I am using MS SQL Server 2005 as my DB. Please help in putting together this application. Your inputs are much appreciated.
[ "For building your UI, AppEngine has it's own web framework called webapp that is pretty easy to get working. I've also had a good experience using the Jinja2 templating engine, which you can include in your source, or package as a zip file (example shows Django, you can do the same type of thing for Jinja).\nAs for loading all of your data into the DataStore, you should take a look at the bulk uploader documentation.\n", "I think you're going to need to be more specific as to what problem you're having. As far as bulk loading goes, there's lots of bulkloader documentation around; or are you asking about model design? If so, we need to know more about how you plan to search for users. Do you need partial string matches? Sorting? Fuzzy matching?\n" ]
[ 1, 0 ]
[]
[]
[ "bulk_load", "google_app_engine", "python" ]
stackoverflow_0001518725_bulk_load_google_app_engine_python.txt
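To make the retrieval side concrete, here is a hedged sketch of a lookup handler using App Engine's old webapp framework and datastore; the Contact model, its fields, and the URL are assumptions for illustration, not part of the question.

from google.appengine.ext import db, webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class Contact(db.Model):                   # hypothetical model
    owner_name = db.StringProperty()
    contact_name = db.StringProperty()
    phone = db.StringProperty()

class LookupPage(webapp.RequestHandler):
    def get(self):
        name = self.request.get('name')    # ?name=... from the UI's form
        contacts = Contact.all().filter('owner_name =', name).fetch(100)
        self.response.headers['Content-Type'] = 'text/plain'
        for c in contacts:
            self.response.out.write('%s: %s\n' % (c.contact_name, c.phone))

application = webapp.WSGIApplication([('/lookup', LookupPage)], debug=True)

def main():
    run_wsgi_app(application)

if __name__ == '__main__':
    main()

The 2 million rows themselves would still go in through the bulk uploader mentioned above, not through code like this.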
Q: Jump to Model/View/Controller in emacs Most rails modes for emacs have this kind of functionality. You are in a controller file over a function "kaboosh" in "app/controller/bla.rb" and with a keyboard shortcut you switch to "app/views/kaboosh.erb" or to app/models/bla.rb". A similar functionality exists for .c and .h files using ff-find-other-file. I checked jump.el and findr.el but all seems a little bit complicated. I tried searching this for django projects (it would have helped a lot) but no luck. Does anyone know a simple way (etags?) to define the connection between the current file/function and the target file and then bind it to a keyboard shortcut? A: Tags is set up well to jump you to the definition of a function. M-. will take you to the first occurrence of a function definition, C-u M-. will take you to the next (and one after that, and after that...). Perhaps the C-u M-. solves some of your problem. Regarding associations between files, and wanting a rails like interface, it looks like you could pull out the appropriate chunks of code and customize them for python. Specifically, you'll need the jump package, make a similar jump schema like the rinari-jump-schema (found in rinari.el). The jump schema uses 'ruby-add-log-current-method (found in ruby-mode.el), and you'd just need to customize that to return the current method name for Python syntax. It doesn't look like anyone has done this for Python yet, you could be the first. I believe those are the only pieces you'll need.
Jump to Model/View/Controller in emacs
Most rails modes for emacs have this kind of functionality. You are in a controller file over a function "kaboosh" in "app/controller/bla.rb" and with a keyboard shortcut you switch to "app/views/kaboosh.erb" or to app/models/bla.rb". A similar functionality exists for .c and .h files using ff-find-other-file. I checked jump.el and findr.el but all seems a little bit complicated. I tried searching this for django projects (it would have helped a lot) but no luck. Does anyone know a simple way (etags?) to define the connection between the current file/function and the target file and then bind it to a keyboard shortcut?
[ "Tags is set up well to jump you to the definition of a function. M-. will take you to the first occurrence of a function definition, C-u M-. will take you to the next (and one after that, and after that...). Perhaps the C-u M-. solves some of your problem.\nRegarding associations between files, and wanting a rails like interface, it looks like you could pull out the appropriate chunks of code and customize them for python.\nSpecifically, you'll need the jump package, make a similar jump schema like the rinari-jump-schema (found in rinari.el). The jump schema uses 'ruby-add-log-current-method (found in ruby-mode.el), and you'd just need to customize that to return the current method name for Python syntax.\nIt doesn't look like anyone has done this for Python yet, you could be the first. I believe those are the only pieces you'll need.\n" ]
[ 1 ]
[]
[]
[ "emacs", "python" ]
stackoverflow_0001534969_emacs_python.txt
Q: How to do email-confirmation after registration in Django? I am using Django's authentication system. Is there an easy way to do this? A: Have a look at James Bennett's django-registration project.
How to do email-confirmation after registration in Django?
I am using Django's authentication system. Is there an easy way to do this?
[ "Have a look at James Bennett's django-registration project.\n" ]
[ 8 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001536766_django_python.txt
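For reference, wiring django-registration into a project looks roughly like this; the exact URLconf include varies between versions of the app, so treat it as a sketch rather than documented API.

# settings.py (illustrative values)
INSTALLED_APPS += ('registration',)
ACCOUNT_ACTIVATION_DAYS = 7    # how long the emailed activation link stays valid

# urls.py
from django.conf.urls.defaults import patterns, include
urlpatterns = patterns('',
    (r'^accounts/', include('registration.urls')),
)

After that, the registration/activation templates and a working outgoing mail setup (EMAIL_HOST and friends) are all that is left to configure.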
Q: Can't Delete Function Call This question is just out of general curiosity. I've just noticed it when working on my current project (surprisingly I haven't come across it before today). Take this code: List = ["hi","stack","over","flow","how","you","doing"] del List(len(List)-1) Error: SyntaxError: can't delete function call I don't understand why you aren't allowed to delete an index of a list by referencing a call to a function? Do I just shut up and accept you can't do it or am I doing something fundamentally wrong? I apologise if there is an easy answer to this but either Google is getting less helpful or this is so blatantly obvious I need help. A: You meant to delete the last element of the list, not somehow call List as a function:
del List[len(List)-1]

Python's del statement must take specific forms like deleting a variable, list[element], or object.property. These forms can be nested deeply, but must be followed. It parallels the assignment statement -- you'll get a similar syntax error if you try to assign to a function call.
Of course, what you really want in this case is
del List[-1]

which means the last element of the list, and is way more Pythonic. A: You are calling a function List() when you should be indexing a list, List[]. In Python, round parentheses, (), are used to call functions, while square brackets, [], are used to index lists and other sequences. Try:
del List[len(List) - 1]

or even better, use the fact that Python allows negative indexes, which count from the end:
del List[-1]

Also, you might want to make the list's name not so close to the built-in list type name, for clarity. A: You are allowed. However, you are using the wrong syntax. Correct syntax is:
del List[-1]

Notice that the len(List) part is useless.
Can't Delete Function Call
This question is just out of general curiosity. I've just noticed it when working on my current project (surprisingly I haven't come across it before today). Take this code: List = ["hi","stack","over","flow","how","you","doing"] del List(len(List)-1) Error: SyntaxError: can't delete function call I don't understand why you aren't allowed to delete an index of a list by referencing a call to a function? Do I just shut up and accept you can't do it or am I doing something fundamentally wrong? I apologise if there is an easy answer to this but either Google is getting less helpful or this is so blatantly obvious I need help.
[ "You meant to delete the last element of the list, not somehow call List as a function:\ndel List[len(List)-1]\n\nPython's del statement must take specific forms like deleting a variable, list[element], or object.property. These forms can be nested deeply, but must be followed. It parallels the assignment statement -- you'll get a similar syntax error if you try to assign to a function call.\nOf course, what you really want in this case is\ndel List[-1]\n\nwhich means the last element of the list, and is way more Pythonic.\n", "You are calling a function List() when you should be indexing a list, List[].\nIn Python, Round parenthesis, (), are used to call functions, while square brackets, [] are used to index lists and other sequences.\nTry:\ndel List[len(List) - 1]\n\nor even better, use the fact that Python allows negative indexes, which count from the end:\ndel List[-1]\n\nAlso, you might want to make the list's name not so close to the built-in list type name, for clarity.\n", "You are allowed. However, you are using the wrong syntax. Correct syntax is:\ndel List[-1]\n\nNotice that the \"len(List) part is useless.\n" ]
[ 13, 5, 1 ]
[]
[]
[ "python", "syntax_error" ]
stackoverflow_0001536890_python_syntax_error.txt
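A quick interactive session, for reference, showing the accepted fix plus the fact that del also accepts slices:

>>> words = ["hi", "stack", "over", "flow"]
>>> del words[-1]      # negative index: drop the last element
>>> words
['hi', 'stack', 'over']
>>> del words[0:2]     # del works on slices too
>>> words
['over']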
Q: Python not a standardized language? I stumbled upon this 'list of programming' languages and found that popular languages like Python are not standardized? Why is that, and what does 'Standardized' mean anyway? A: "Standardized" means that the language has a formal, approved standard, generally written by ISO or ANSI or ECMA. Many modern open-source languages, like Python and Perl, are not formally standardized by an external body, and instead have a de-facto standard: whatever the original working implementation does. The benefits of standardizing a language are a) you know the language won't randomly change on you, b) if you want to write your own compiler/interpreter for the language, you have a very clear document that tells you what behavior everything should do, rather than having to test that behavior yourself in the original implementation. Because of this, standardized languages change slowly, and often have multiple major implementations. A language doesn't really have to be standardized to be useful. Most nonstandard languages won't just make random backwards-incompatible changes for no reason (and if they do, they take ten years to decide how to *cough*Perl6*cough*), and nonstandard languages can add cool new experimental features much faster (and more portably) than standardized languages. A few standardized languages:
C (ISO/IEC)
C++ (ISO/IEC)
Common Lisp (ANSI)
Scheme (IEEE)
JavaScript (ECMA-262)
C# (ECMA-334)
Ruby (ISO/IEC)
Non-standardized languages:
Perl
Python
PHP
Objective-C
A full list is on Wikipedia. A: There are "standards" and there are "standards". Most folks mostly mean that a standard is passed by a standards-writing organization: ISO, ECMA, EIA, etc. Lawyers call these De Jure Standards. There's the force of law. Further, there are "De Facto Standards". Some folks, also, corrupt the word by adding "Industry Standard", or "vendorname Standard". This is just meaningless marketing noise. A De Facto Standard is something that's standardized in practice (because everyone does it and agrees they're doing it that way) but it is not backed by some standards organization. Python has a De Facto Standard, not a De Jure Standard. A: Standardized means there exists a specification for the language (a "standard"). Java, for example, has a specification. Perl 5 does not (the source code is the "standard") but Perl 6 will. See Is there a Python language specification?
Python not a standardized language?
I stumbled upon this 'list of programming' languages and found that popular languages like Python are not standardized? Why is that, and what does 'Standardized' mean anyway?
[ "\"Standardized\" means that the language has a formal, approved standard, generally written by ISO or ANSI or ECMA. Many modern open-source languages, like Python, Perl, are not formally standardized by an external body, and instead have a de-facto standard: whatever the original working implementation does.\nThe benefits of standardizing a language are a) you know the language won't randomly change on you, b) if you want to write your own compiler/interpreter for the language, you have a very clear document that tells you what behavior everything should do, rather than having to test that behavior yourself in the original implementation. Because of this, standardized languages change slowly, and often have multiple major implementations.\nA language doesn't really have to be standardized to be useful. Most nonstandard languages won't just make random backwards-incompatable changes for no reason (and if they do, they take ten years to decide how to *cough*Perl6*cough*), and nonstandard languages can add cool new experimental features much faster (and more portably) than standardized languages.\nA few standardized languages:\n\nC (ISO/IEC)\nC++ (ISO/IEC)\nCommon Lisp (ISO/IEC)\nScheme (IEEE)\nJavaScript (ECMA-262)\nC# (ECMA-334)\nRuby (ISO/IEC)\n\nNon-standardized languages:\n\nPerl\nPython\nPHP\nObjective-C\n\nA full list is on Wikipedia.\n", "There are \"standards\" and there are \"standards\".\nMost folks mostly mean that a standard is passed by a standards-writing organization: ISO, ECMA, EIA, etc. Lawyers call these De Jure Standards. There's the force of law.\nFurther, there are \"De Facto Standards\".\nSome folks, also, corrupt the word by adding \"Industry Standard\", or \"vendorname Standard\". This is just meaningless marketing noise. \nA De Facto Standard is something that's standardized in practice (because everyone does it and agrees they're doing it that way) but it is not backed by some standards organization.\nPython has a De Facto Standard, not a De Jure Standard.\n", "Standardized means there exists a specification for the language (a \"standard\"). Java, for example, has a specification. Perl 5 does not (the source code is the \"standard\") but Perl 6 will.\nSee Is there a Python language specification?\n" ]
[ 48, 7, 4 ]
[]
[]
[ "python", "standardized" ]
stackoverflow_0001535702_python_standardized.txt
Q: Python: how to show results on a web page? Most likely it's a dumb question for those who knows the answer, but I'm a beginner, and here it goes: I have a Python script which I run in a command-line with some parameter, and it prints me some results. Let's say results are some HTML code. I never done any Python programming for web, and couldn't figure it out... I need to have a page (OK, I know how to upload files to a server, Apache is running, Python is installed on the server...) with an edit field, which will accept that parameter, and Submit button, and I need it to "print" the results on a web page after the user submitted a proper parameter, or show any output that in a command-line situation are printed. I've read Dive Into Python's chapters about "HTML Processing" and "HTTP Web Services", but they do not describe what I'm looking for. If the answer isn't short, I would very much appreciate links to the more relevant stuff to read or maybe the key words to google for it. A: For such a simple task, you probably don't need more than CGI. Luckily Python has a built-in cgi module which should do what you want. Or you could look into some of the minimal web frameworks, such as web.py. A: Sounds like you just need to enable CGI on apache, which pretty much will redirect your output to the webserver output. Python does have a CGI library; you may take a look at it. Here's an essay by Guido (a bit dated) And an interactive instruction that looks promising. p.s. Perhaps you may also want to see Google's offering for this Google app engine (it may not be what you want though) A: Whose web server? If it is a web server provided by a web hosting company or someone else and you don't have control over it, you need to ascertain in what way they support the use of Python for writing web applications. It isn't enough to know just that they have Python available. As pointed out by others, it is likely that at least CGI scripts which use Python will be supported. CGI however is not really practical for running large Python web frameworks such as Django. It is possible though that the web server might also support FASTCGI. If that is the case it becomes possible to use such larger Python web frameworks as FASTCGI uses persistent processes whereas CGI creates a process for each request, where the overhead of large Python web frameworks generally makes the use of CGI impractical as a result. If the Apache server is controlled by others, using a standalone Python web server such as wsgiref and proxying to it from Apache isn't going to be doable either, as you can't set up Apache to do it. So, find out how use of Python for web applications is supported and work out whether you are just a user of the Apache instance, or whether you have some measure of control of changing its global configuration files and restarting it. This will dictate what you can do and use. A: Sorry to say it, but HTTP web services are what you're looking for. Think of it this way - you put a form on a webpage, and a submit button. That goes to a web service that accepts the request and it chews on that request, and returns a response - which is a page of html. You never have to actually save the html, it is always dynamically generated, but it's a valid page of html nonetheless. Look at django http://www.djangoproject.com/ for a high quality python based web service toolkit.
The first step to understanding the request / response idiom is to think of the request as a specifically-formatted (defined by CGI - that is, it's a GET or POST HTML action) set of command line parameters, and the response as specifically-formatted output (HTML) that gets sent, not to stdout, but across "the wire" to some browser. Sending the request reaches out across the wire and executes the script with some parameters, and you receive back the output - your formatted HTML page. A: I agree completely with Matt -- Django is what you want. Django provides a complete solution, and really wonderful documentation that will help you get something going. If your application grows and becomes more complicated, Django can still work; Django works even for very large and very complicated web sites. Take a look at the free book, The Django Book. Another plus with Django: if you do your web app in Django, you can port it to Google App Engine. A: I've not read DIVE INTO PYTHON, so maybe what I'm saying is redundant. As Daniel said, CGI may work for you. These days this would only be the case for simple stuff with a fairly low number of hits. The Python CGI module is documented here. I have always used that module for form processing and just done prints for the output. As best as I've been able to figure out, that's the usual way of doing things, but I have not done a lot of CGI with Python. You don't say what you're doing, so I'll state what may be obvious: Make sure you're telling the server that you're outputting a web page with your very first output being the content type followed by a blank line. Typically this is done with:
print "Content-Type: text/html"
print

(For Python 2, for 3 you'll need to make your prints into function calls.) After this, you print your html, header, body, etc. tags and the actual content. A: I think Django is overkill. If you are wanting to learn about this stuff, start from something very simple like this: http://www.islascruz.org/html/index.php/blog/show/Python:-Simple-HTTP-Server-on-python..html You can put your code where the do_GET is. A: A current method of creating a simple (one-off) Python web page server is the wsgiref module. wsgiref is a reference implementation of the WSGI specification that can be used to add WSGI support to a web server or framework. See some SO questions (https://stackoverflow.com/search?q=[python]+wsgiref) for some code examples and more suggestions. The wsgiref example is a good place to start:
from wsgiref.simple_server import make_server

def hello_world_app(environ, start_response):
    status = '200 OK' # HTTP Status
    headers = [('Content-type', 'text/plain')] # HTTP Headers
    start_response(status, headers)

    # The returned object is going to be printed
    return ["Hello World"]

httpd = make_server('', 8000, hello_world_app)
print "Serving on port 8000..."

# Serve until process is killed
httpd.serve_forever()
Python: how to show results on a web page?
Most likely it's a dumb question for those who knows the answer, but I'm a beginner, and here it goes: I have a Python script which I run in a command-line with some parameter, and it prints me some results. Let's say results are some HTML code. I never done any Python programming for web, and couldn't figure it out... I need to have a page (OK, I know how to upload files to a server, Apache is running, Python is installed on the server...) with an edit field, which will accept that parameter, and Submit button, and I need it to "print" the results on a web page after the user submitted a proper parameter, or show any output that in a command-line situation are printed. I've read Dive Into Python's chapters about "HTML Processing" and "HTTP Web Services", but they do not describe what I'm looking for. If the answer isn't short, I would very much appreciate links to the more relevant stuff to read or maybe the key words to google for it.
[ "For such a simple task, you probably don't need more than CGI. Luckily Python has a built-in cgi module which should do what you want.\nOr you could look into some of the minimal web frameworks, such as web.py.\n", "Sounds like you just need to enable CGI on apache, which pretty much will redirect your output to the webserver output.\nPython does have CGI library you may take a look at it.\nHere's an essay by Guido ( a bit dated ) \nAnd an interactive instruction that looks promising.\np.s. \nPerhaps you may also want to see Google's offering for this Google app engine ( it may not be what you want though ) \n", "Whose web server? If it is a web server provided by a web hosting company or someone else and you don't have control over it, you need to ascertain in what way they support the use of Python for writing web applications. It isn't enough to know just that they have Python available.\nAs pointed out by others, is likely that at least CGI scripts which use Python will be supported. CGI however is not really practical for running large Python web frameworks such as Django.\nIt is possible though that the web server might also support FASTCGI. If that is the case it becomes possible to use such larger Python web frameworks as FASTCGI uses persistent processes where as CGI creates a process for each request, where the overhead of large Python web frameworks generally makes the use of CGI impractical as a result.\nIf the Apache server is controlled by others using a standalone Python web server such as wsgiref and proxying to it from Apache isn't going to be doable either as you can't setup Apache to do it.\nSo, find out how use of Python for web applications is supported and work out whether you are just a user of the Apache instance, or whether you have some measure of control of changing its global configuration files and restarting it. This will dictate what you can do and use.\n", "Sorry to say it, but HTTP web services are what you're looking for. Think of it this way - you put a form on a webpage, and a submit button. That goes to a web service that accepts the request and it chews on that request, and returns a response - which is a page of html. You never have to actually save the html, it is always dynamically generated, but it's a valid page of html nonetheless.\nLook at django http://www.djangoproject.com/ for a high quality python based web service toolkit.\nThe first step to understanding the request / response idiom is to think of the request as a specifically-formatted (defined by CGI - that is, it's a GET or POST HTML action) set of command line parameters, and the response as specifically-formatted output (HTML) that gets sent, not to stdout, but across \"the wire\" to some browser. Sending the request reaches out across the wire and executes the script with some parameters, and you recieve back the output - your formatted HTML page.\n", "I agree completely with Matt -- Django is what you want. Django provides a complete solution, and really wonderful documentation that will help you get something going. If your application grows and becomes more complicated, Django can still work; Django works even for very large and very complicated web sites.\nTake a look at the free book, The Django Book.\nAnother plus with Django: if you do your web app in Django, you can port it to Google App Engine.\n", "I've not read DIVE INTO PYTHON, so maybe what I'm saying is redundant. As Daniel said, CGI may work for you. 
These days this would only the case for simple stuff with a fairly low number of hits. The Python CGI module is documented here. I have always used that module for form processing and just done prints for the output. As best as I've been able to figure out, that's the usual way of doing things, but I have not done a lot of CGI with Python.\nYou don't say what you're doing, so I'll state what may be obvious: Make sure you're telling the server that you're outputting a web page with your very first output being the content type followed by a blank line. Typically this is done with:\nprint \"Content-Type: text/html\"\nprint\n\n(For Python 2, for 3 you'll need to make your prints into function calls.) After this, you print your html, header, body, etc. tags and the actual content.\n", "I think Django is overkill. If you are wanting to learn about this stuff, start from something very simple like this:\nhttp://www.islascruz.org/html/index.php/blog/show/Python:-Simple-HTTP-Server-on-python..html\nYou can put your code where the do_GET is.\n", "A current method of creating simple (one-of )Python web page server is the wsgiref module.\n\nwsgiref is a reference implementation of the WSGI specification that can be used to add WSGI support to a web server or framework.\n\nSee some SO qusetions (https://stackoverflow.com/search?q=[python]+wsgiref) for some code examples and more suggestions.\nThe wsgiref example is a good place to start:\nfrom wsgiref.simple_server import make_server\n\ndef hello_world_app(environ, start_response):\n status = '200 OK' # HTTP Status\n headers = [('Content-type', 'text/plain')] # HTTP Headers\n start_response(status, headers)\n\n # The returned object is going to be printed\n return [\"Hello World\"]\n\nhttpd = make_server('', 8000, hello_world_app)\nprint \"Serving on port 8000...\"\n\n# Serve until process is killed\nhttpd.serve_forever()\n\n" ]
[ 4, 4, 2, 1, 0, 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001534070_python.txt
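To make the CGI suggestions concrete, a minimal self-contained sketch of the form-plus-results page the question asks for; the field name and markup are made up for illustration.

#!/usr/bin/env python
import cgi

form = cgi.FieldStorage()
param = form.getfirst("param", "")   # the edit field's value, if submitted

print "Content-Type: text/html"
print
print "<html><body>"
print "<form method='get'>"
print "<input type='text' name='param' value='%s'>" % cgi.escape(param, True)
print "<input type='submit' value='Submit'>"
print "</form>"
if param:
    # Your script's real work goes here; this just echoes the input.
    print "<p>You submitted: %s</p>" % cgi.escape(param)
print "</body></html>"

Dropped into a cgi-bin directory (and made executable), Apache runs it per request and sends whatever it prints back to the browser.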
Q: Reading from sockets and download speed Just for fun, I develop a download manager and I'd like to know if reading a large stack of data (i.e. 80 or 100KB) from a socket over the net makes the download speed higher, instead of reading 4KB for each loop iteration? (My average download speed is 200KBPS when I download a file with firefox for example) Thanks, Nir Tayeb. A: The answer is NO. Your network transfer rate (200kbps) indicates that buffering 4k or 8k or 200k will hardly make a difference. The time spent between reads is too small. The bottleneck seems to be your transfer rate anyway. Let's try with a stackoverflow 30.9MB mp3 podcast: NOTE: This is an unreliable hack whose results can be affected by a lot of factors - useful for demonstration purposes only
import urllib2
import time

def report(start, size, text):
    total = time.time() - start
    print "%s reading took %d seconds, transfer rate %.2f KBPS" % (
        text, total, (size / 1024.0) / total)

url = ('http://itc.conversationsnetwork.org/audio/download/'
    'ITC.SO-Episode69-2009.09.29.mp3')
f = urllib2.urlopen(url)
start = time.time()
data = f.read() # read all data in a single, blocking operation
report(start, len(data), 'All data')
f.close()

f = urllib2.urlopen(url)
start = time.time()
size = 0
while True:
    chunk = f.read(4096) # read a chunk
    if not chunk:
        break
    size += len(chunk)
report(start, size, 'Chunked')
f.close()

The results in my system: All data reading took 137 seconds, transfer rate 230.46 KBPS Chunked reading took 137 seconds, transfer rate 230.49 KBPS So for my system, 2 megabit connection, file size, server chosen, there is not much of a difference whether I use chunked reading or not.
Reading from sockets and download speed
Just for fun, I develop a download manager and I'd like to know if reading a large stack of data (i.e. 80 or 100KB) from a socket over the net makes the download speed higher, instead of reading 4KB for each loop iteration? (My average download speed is 200KBPS when I download a file with firefox for example) Thanks, Nir Tayeb.
[ "The answer is NO.\nyour network transfer rate (200kbps) indicates that buffering 4k or 8k or 200k will hardly make a difference. The time spent between reads is too small. The bottleneck seems to be your transfer rate anyway.\nLet's try with a stackoverflow 30.9MB mp3 podcast:\n\nNOTE: This is a unreliable hack whose results can be affected by a lot\n of factors - useful for demonstration\n purposes only)\n\nimport urllib2\nimport time\n\ndef report(start, size, text):\n total = time.time() - start\n print \"%s reading took %d seconds, transfer rate %.2f KBPS\" % (\n text, total, (size / 1024.0) / total)\n\nstart = time.time()\nurl = ('http://itc.conversationsnetwork.org/audio/download/'\n 'ITC.SO-Episode69-2009.09.29.mp3')\nf = urllib2.urlopen(url)\nstart = time.time()\ndata = f.read() # read all data in a single, blocking operation\nreport(start, len(data), 'All data')\nf.close()\n\nf = urllib2.urlopen(url)\nstart = time.time()\nwhile True:\n chunk = f.read(4096) # read a chunk\n if not chunk:\n break\nreport(start, len(data), 'Chunked')\nf.close()\n\nThe results in my system:\nAll data reading took 137 seconds, transfer rate 230.46 KBPS\nChunked reading took 137 seconds, transfer rate 230.49 KBPS\n\nSo for my system, 2 megabit connection, file size, server chosen, it is not much of a difference if I used chunked reading or not.\n" ]
[ 2 ]
[]
[]
[ "python", "sockets" ]
stackoverflow_0001537161_python_sockets.txt
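For anyone who wants to measure the buffer-size question directly rather than through urllib2, a rough sketch using a raw socket; the host and path are placeholders, and this is no more rigorous a benchmark than the one above.

import socket
import time

def fetch(bufsize, host='example.com', path='/'):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, 80))
    s.sendall("GET %s HTTP/1.0\r\nHost: %s\r\n\r\n" % (path, host))
    start, total = time.time(), 0
    while True:
        chunk = s.recv(bufsize)      # the read size being tested
        if not chunk:
            break
        total += len(chunk)
    s.close()
    return total, time.time() - start

for size in (4096, 102400):
    total, elapsed = fetch(size)
    print "recv(%d): %d bytes in %.2fs" % (size, total, elapsed)

On a typical consumer connection the two runs come out about the same, for the reason the answer gives: the wire, not the buffer size, is the bottleneck.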
Q: Customizing time zones for Django users using geography I am working on a Django app that needs to report the local time relative to the user. I would prefer not to ask the user to input the time zone directly because I have his address stored in the database. I am only considering American users. Since most states in the USA are in only one time zone it is possible to calculate the time zone based on the state information for most cases. I want to give a function the name of a state/geographic information and have it return the time offset from UTC of that state considering daylight savings time. Is there a library that can do this for python? A: I am not sure such a library exists. Every state in the USA doesn't have only one time zone. Have a look here: List of U.S. states by time zone Many states have more than one. I guess you could still use that list and pick the timezone that the majority of the state uses and then allow the users to customize theirs if it differs. A: I'm not sure how reliable it is, but the HTTP request - encapsulated in Django in the request object - has a TZ header which shows the client's time zone. >>> request.META['TZ'] 'Europe/London' A: Like Andre Miller pointed out, time zones are not tied to state. You're probably better off using the zipcode, because a few cities cross timezone lines. There are a few databases you can purchase that map zipcodes to time zones. Here's one I found that also includes the UTC shift and Daylight Savings: USA 5-digit Zipcode DB with time zone info. I haven't done the time zone thing before, but I've used this sort of database for lat/long information. The data does change, so plan on updating your database a few times a year. BTW, if anybody knows of a free database like this, please post: I think it would be really handy for the community.
Customizing time zones for Django users using geography
I am working on a Django app that needs to report the local time relative to the user. I would prefer not to ask the user to input the time zone directly because I have his address stored in the database. I am only considering American users. Since most states in the USA are in only one time zone it is possible to calculate the time zone based on the state information for most cases. I want to give a function the name of a state/geographic information and have it return the time offset from UTC of that state considering daylight savings time. Is there a library that can do this for python?
[ "I am not sure such a library exists. Every state in the USA doesn't have only one time zone.\nHave a look here: List of U.S. states by time zone\nMany states have more than one.\nI guess you could still use that list and pick the timezone that the majority of the state uses and then allow the users to customize theirs if it differs.\n", "I'm not sure how reliable it is, but the HTTP request - encapsulated in Django in the request object - has a TZ header which shows the client's time zone. \n>>> request.META['TZ']\n'Europe/London'\n\n", "Like Andre Miller pointed out, time zones are not tied to state. You're probably better off using the zipcode, because a few cities cross timezone lines. \nThere are a few databases you can purchase that that map zipcodes to time zones. Here's one I found that also includes the UTC shift and Daylight Savings:\nUSA 5-digit Zipcode DB with time zone info.\nI haven't done the time zone thing before, but I've used this sort of database for lat/log information. The data does change, so plan on updating your database a few times a year. \nBTW, if anybody knows of a free database like this, please post: I think it would be really handy for the community.\n" ]
[ 3, 0, 0 ]
[]
[]
[ "django", "python", "timezone" ]
stackoverflow_0001536392_django_python_timezone.txt
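If a one-zone-per-state approximation is acceptable, a hedged sketch with the third-party pytz library; the mapping is deliberately incomplete, and multi-zone states would need zipcode-level data as the answers point out.

from datetime import datetime
import pytz   # third-party: the Olson tz database for Python

STATE_ZONES = {               # illustrative, not exhaustive
    'CA': 'America/Los_Angeles',
    'NY': 'America/New_York',
    'TX': 'America/Chicago',
}

def utc_offset_hours(state):
    """Current UTC offset for the state's main zone, DST included."""
    tz = pytz.timezone(STATE_ZONES[state])
    offset = datetime.now(tz).utcoffset()
    return offset.days * 24 + offset.seconds / 3600.0

print utc_offset_hours('NY')  # e.g. -4.0 while daylight saving is in effect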
Q: Creating Title / Slug based on PK ID What would be the generic way to create record title and slug based on the ID? I am working with django-photologue here. I want to save a record with title and slug based on the PK. The generic problem is that I can't get the PK until the record is saved into database. On the other hand, I can't save it without title and slug. What is the common solution to that kind of problem? A: Normally you don't use the primary key at all. If your concern is just to automatically generate unique slugs (which is the only reason I can see to do what you're trying to do), then you want an AutoSlugField, which creates a unique slug by increasing an appended number on the slug until it is unique. There's an implementation of AutoSlugField which is part of django-command-extensions. A: If your URIs are supposed to look like "example.com/${obj.id}-${sluggify( obj.title )}" then generate these URIs when you use them. This URI contains no data that's not in the DB already, so don't add it again. The slug's sole purpose is making URLs look nicer to people and search engines. Take Stackoverflow as an example: Creating Title / Slug based on PK ID If you want to select by the slug only, it has to be a Primary Key, unique and immutable. You should be aware that having another PK, the usual id column, would be unneeded. I don't say slugs are bad, nor that saving slugs is always bad. There are many valid reasons to save them in the database, but then you need to think about what you're doing. Selecting data by their PK (and ignoring the slug) on the other hand requires no thinking, so that should be the default way to go. A: To name the file based on record ID you have several options: a) Try to predict ID:
from django.db.models import Max

max_pk = self.__class__.objects.aggregate(max_pk=Max('pk'))['max_pk'] or 0
predicted_id = max_pk+1

b) Rename file in post_save when ID is known. You can also use md5 hash or random strings for generating unique file names. Btw, there is a separate django-autoslug app.
Creating Title / Slug based on PK ID
What would be the generic way to create record title and slug based on the ID? I am working with django-photologue here. I want to save a record with title and slug based on the PK. The generic problem is that I can't get the PK until the record is saved into database. On the other hand, I can't save it without title and slug. What is the common solution to that kind of problem?
[ "Normally you don't use the primary key at all. If your concern is just to automatically generate unique slugs (which is the only reason I can see to do what you're trying to do), then you want an AutoSlugField, which creates a unique slug by increasing an appended number on the slug until it is unique.\nThere's an implementation of AutoSlugField which is part of django-command-extensions.\n", "If you URIs are supposed to look like \"example.com/${obj.id}-${sluggify( obj.title )}\" then generate these uri when you use them. This uri contains no data that's not in the DB already, so don't add it again. The slug's sole purpose is making url's look nicer to people and search engines.\nTake Stackoverflow as an example: Creating Title / Slug based on PK ID \nIf you want to select by the slug only, it has to be a Primary Key, unique and immutable. You should be aware that having another PK, the usual id column, would be unneeded.\nI don't say slugs are bad, nor that saving slugs is always bad. There are many valid reasons to save them in the database, but then you need to think about what you're doing. \nSelecting data by their PK (and ignoring the slug) on the other hand requires no thinking, so that should be the default way to go.\n", "To name the file based on record ID you have several options:\na) Try to predict ID:\nmax_pk = self.__class__.objects.aggregate(max_pk=Max('pk'))['max_pk'] or 0\npredicted_id = max_pk+1\n\nb) Rename file in post_save when ID is known.\nYou can also use md5 hash or random strings for generating unique file names.\nBtw, there is separate django-autoslug app.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "django", "django_profiles", "python" ]
stackoverflow_0001537149_django_django_profiles_python.txt
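A sketch of the append-a-number idea behind AutoSlugField, written as a plain helper rather than a field; the model and field names are placeholders, and this is not the django-command-extensions implementation itself.

from django.template.defaultfilters import slugify

def unique_slug(model_class, title):
    base = slugify(title)
    slug, n = base, 2
    while model_class.objects.filter(slug=slug).count():
        slug = '%s-%d' % (base, n)   # 'my-title', 'my-title-2', ...
        n += 1
    return slug

Called from a model's save() before the first write (for example, if not self.slug: self.slug = unique_slug(Photo, self.title)), it sidesteps the chicken-and-egg problem with the PK entirely.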
Q: What do I need to know/learn for automated python deployment? I'm starting a new webapp project in Python to get into the Agile mind-set and I'd like to do things "properly" with regards to deployment. However, I'm finding the whole virtualenv/fabric/zc.buildout/etc stuff a little confusing - I'm used to just FTP'ing PHP files to a server and pointing a webserver at it. After deployment the server set-up would look something like: Nginx --proxy-to--> WSGI Webserver (Spawning) --> WSGI Middleware --> WSGI App (probably MNML or similar) with the python webserver being managed by supervisord. What sort of deployment set-up/packages/apps should I be looking into? And is there a specific directory structure I need to stick to with my app to ease deployment? A: Your deployment story depends on your app. Are you using Django? Then the Apache + mod_wsgi deployment docs make for a good starting point. Then you can google around for more detail, such as this 2-part series using pip, virtualenv, git, and fabric. Really, fabric, virtualenv, and all those other tools are designed to make it easier to maintain and automate your deployment. Initially, the steps from the documentation are probably enough. After you get a feel for how things work, you can revisit to improve your process. A: I've heard good things about Fabric: Fabric is a Python library and command-line tool designed to streamline deploying applications or performing system administration tasks via the SSH protocol. It provides tools for running arbitrary shell commands (either as a normal login user, or via sudo), uploading and downloading files, and so forth. A: You already mentioned buildout, and it's all you need. Google for example buildouts for the different parts. Takes a while to set it up the first time, but then you can reuse the setup between different projects too. Let supervisord start everything, not just the python server. Then start supervisord at reboot either from cron or init.d.
What do I need to know/learn for automated python deployment?
I'm starting a new webapp project in Python to get into the Agile mind-set and I'd like to do things "properly" with regards to deployment. However, I'm finding the whole virtualenv/fabric/zc.buildout/etc stuff a little confusing - I'm used to just FTP'ing PHP files to a server and pointing a webserver at it. After deployment the server set-up would look something like: Nginx --proxy-to--> WSGI Webserver (Spawning) --> WSGI Middleware --> WSGI App (probably MNML or similar) with the python webserver being managed by supervisord. What sort of deployment set-up/packages/apps should I be looking into? And is there a specific directory structure I need to stick to with my app to ease deployment?
[ "Your deployment story depends on your app. Are you using Django? Then the Apache + mod_wsgi deployment docs make for a good starting point. Then you can google around for more detail, such as this 2-part series using pip, virtualenv, git, and fabric.\nReally, fabric, virtualenv, and all those other tools are designed to make it easier to maintain and automate your deployment. Initially, the steps from the documentation are probably enough. After you get a feel for how things work, you can revisit to improve your process.\n", "I've heard good things about Fabric:\n\nFabric is a Python library and\n command-line tool designed to\n streamline deploying applications or\n performing system administration tasks\n via the SSH protocol. It provides\n tools for running arbitrary shell\n commands (either as a normal login\n user, or via sudo), uploading and\n downloading files, and so forth.\n\n", "You already mentioned buildout, and it's all you need. Google for example buildouts for the different parts. Takes a while to set it up the first time, but then you can reuse the setup between different projects too.\nLet supervisord start everything, not just the python server. Then start supervisord at reboot either fron cron or init.d.\n" ]
[ 4, 2, 2 ]
[]
[]
[ "deployment", "python", "virtualenv" ]
stackoverflow_0001537298_deployment_python_virtualenv.txt
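As a taste of what the Fabric suggestion looks like in practice, a hedged fabfile sketch; the host, paths and supervisord program name are placeholders, and the exact API depends on your Fabric version.

# fabfile.py
from fabric.api import env, run, cd

env.hosts = ['deploy@example.com']

def deploy():
    with cd('/srv/mywebapp'):
        run('git pull')                          # fetch the new code
        run('pip install -r requirements.txt')   # update dependencies
        run('supervisorctl restart mywebapp')    # bounce the WSGI server

Running fab deploy then performs those steps over SSH, which replaces the manual FTP routine the question describes.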
Q: Lua as a general-purpose scripting language? When I see Lua, the only thing I ever read is "great for embedding", "fast", "lightweight" and more often than anything else: "World of Warcraft" or in short "WoW". Why is it limited to embedding the whole thing into another application? Why not write general-purpose scripts like you do with Python or Perl? Lua seems to be doing great in aspects like speed and memory-usage (The fastest scripting language afaik) so why is it that I never see Lua being used as a "Desktop scripting-language" to automate tasks? For example: Renaming a bunch of files Download some files from the web Webscraping Is it the lack of the standard library? A: Lua is a cool language, light-weight and extremely fast! But the point is: Is performance so important for those tasks you mentioned? Renaming a bunch of files Download some files from the web Webscraping You write those programs once, and run them once, too maybe. Why do you care about performance so much for a run-once program? For example: Cost 3 hours to write a C/C++ program, to handle data once, the program will take 1 hour to run. Cost 30 Minute to write a Python program to handle data once, the program will take 10 hours to run. If you choose the first, you save the time to run the program, but you cost your time to develop the program. On the other hand, if you choose the second, you waste time to run the program, but you can do other things when the program is running. How about play World of Warcraft, kill monsters with your warlock? Eat my D.O.T! :P That's it! Although Lua is not so difficult to write, everything about Lua is designed to be efficient. And what's more, there are few modules for Lua, but there are so many modules for Python. You don't want to port a C library for Lua just for a run-once program, do you? Instead, choosing Python and using those modules to achieve your task easily might be a better idea. FYI: Actually, I have tried to use Lua to do webscraping, but finally, I realized I do not have to care so much about language performance. The bottleneck of webscraping is not on the performance of the language. The bottleneck is on network I/O, HTML parsing and multitasking. All I have to do is make sure the program works and find the bottleneck. Finally, I chose Python rather than Lua. There are so many excellent Python modules; I have no reason to build my own. According to my experience about webscraping, I chose Twisted for network I/O and lxml for html parsing as the backend of my webscraping program. I have written an article introducing this technology. The best choice to grab data from websites: Python + Twisted + lxml Hope this is helpful. A: Lua has fewer libraries than Python. But be sure to have a look at LuaForge. It has a lot of interesting libs, like LuaCURL, wxLua or getopt. Then, visit LuaRocks, the package management system for Lua. With it, you can search and install most mature Lua modules with dependencies. It feels like RubyGems or aptitude. The site lua-users.org has a lot of interesting resources too, like tutorials or the Lua Wiki. What I like about Lua is not its speed, it's its minimal core language, flexibility and extensibility. That said, I would probably use Python for the tasks you mentioned because of the larger community doing such things in Python. A: Just because it is "marketed" (in some general sense) as a special-purpose language for embedded script engines, does not mean that it is limited to that. In fact, WoW could probably just as well have chosen Python as their embedded scripting language. A: It's probably because Lua was designed as a scripting and extension language. On the official site it's described as a powerful, fast, light-weight, embeddable scripting language. There's nothing stopping you from writing general purpose programs for it (if I recall correctly it ships with an interpreter and compiler), but the language designers intended it to be used mainly as an embedded language (hence being light-weight and all) A: This is a sociological question, not a programming question. I use Lua for general-purpose scripting almost exclusively. But I had to write a few hundred lines of code so that Lua would play better with the shell. This included such tricks as Quoting a string so it is seen as one word by the shell Writing a function to capture the output of a command as in shell $(command) Writing a function to crawl the filesystem using the Lua posix library and expand shell globbing patterns (For those who may be interested, I've left the code in my Lua drop box, which also contains some other stuff. The interesting stuff is probably in osutil in os.quote, os.runf, os.capture, and maybe os.execve. The globbing is in posixutil.lua. They both use Luiz Henrique de Figuereido's Lua Posix library.) To me, the extra effort is worth it because I can deal with simple syntax and great data structures. To others, a more direct connection with the shell might be preferred. A: Definitely a lack of standard libraries. It's also lesser known than Python, Perl or Ruby. A: There has been a recent push to create a batteries included installation for Lua on Windows. The result can be found at the Lua for Windows project at LuaForge. It includes the interpreter and a large collection of extra modules allowing useful scripts and applications to be written and used out of the box. I know that various Linux distros are including Lua and some modules now, and more to come. There are also a couple of proposed module libraries under discussion in the mailing list, but the community hasn't yet settled on one as the "official" mechanism. I use Lua both as a scripting language and as the "main" loop of my typical application, supported by one or more DLLs containing code that was better implemented in C, or wrapping existing libraries or API functions that are needed by a particular project. Used with a GUI toolkit such as IUP or wxLua (a Lua binding for wxWindows), Lua makes writing small to mid-sized GUI applications quite pleasant. A: I think the answer about it being a "marketing" thing is probably correct, along with the lack of a large set of libraries to choose from. I would like to point out another case of this: Ruby. Ruby is meant to be a general purpose scripting language. The problem is that since Ruby on Rails has risen to be so popular, it is becoming hard to find something that is unrelated to Rails. I'm afraid Lua will suffer this as well, being popular because of a few major things using it, but never able to break free of that stigma. A: Lack of standard library. Period. Even listing all the files in a directory requires a non-standard module. There are good reasons for that (keeping strict ANSI portability, not requiring POSIX) but the result is that, for general programming, I prefer Python. A: Lua is used in LuaTeX, a TeX extension, as an embedded language, and has gained popularity rapidly among TeX developers because of that. It is used as a scripting language for some utilities in the TeX Live distribution, be it only because now there is a luatex binary, available on all platforms, that can also be used as a Lua interpreter (with some vital modules added – slnunicode, luafilesystem, etc.) That's very important for Windows installations, that relied on additional Unix scripting tools earlier (ActivePerl, etc.) The ConTeXt macro language uses Lua scripts extensively nowadays. That's admittedly a very special field :-) But completely unrelated to games! A: In order for Lua to be easy to embed it has to have few dependencies and be small. That makes it poorly suited as a general purpose scripting language. Because using it as a general purpose script language would require a lot of standard libraries. But if Lua had a lot of standard libraries it would be harder to embed (due to dependencies and memory footprint.)
Lua as a general-purpose scripting language?
When I see Lua, the only thing I ever read is "great for embedding", "fast", "lightweight" and more often than anything else: "World of Warcraft" or in short "WoW". Why is it limited to embedding the whole thing into another application? Why not write general-purpose scripts like you do with Python or Perl? Lua seems to be doing great in aspects like speed and memory-usage (The fastest scripting language afaik) so why is it that I never see Lua being used as a "Desktop scripting-language" to automate tasks? For example: Renaming a bunch of files Download some files from the web Webscraping Is it the lack of the standard library?
[ "Lua is a cool language, light-weight and extremely fast!\nBut the point is: Is performance so important for those\ntasks you mentioned?\n\nRenaming a bunch of files\nDownload some files from the web\nWebscraping\n\nYou write those programs once, and run them once, too maybe.\nWhy do you care about performance so much for a run-once program?\nFor example:\n\nCost 3 hours to write a C/C++ program, to handle data once, the program will take 1 hour to run.\nCost 30 Minute to write a Python program to handle data once, the program will take 10 hours to run.\n\nIf you choose the first, you save the time to run the program,\nbut you cost your time to develop the program.\nOn the other hand, if you choose the second, you waste time to run\nthe program, but you can do other things when the program is\nrunning. How about play World of Warcraft, kill monsters\nwith your warlock? Eat my D.O.T! :P\nThat's it! Although Lua is not so difficult to write, everything about Lua is designed to be efficient.And what's more, there are little modules for Lua, but there are so many modules for Python. You don't want to port a C library for Lua just for a run-once program, do you? Instead, choose Python and use those module to achieve your task easily might be a better idea.\nFYI: Actually, I have tried to use Lua to do webscraping,\nbut finally, I realized I do not have to care so much about language performance. The bottleneck of webscraping is\nnot on the performance of the language. The bottleneck is on\nnetwork I/O, HTML parsing and multitasking. All I have to do\nis make sure the program works and find the bottleneck.\nFinally, I chose Python rather than Lua. There is so\nmany excellent Python modules; I have no reason to build my\nown.\nAccording to my experience about webscraping, I chose\nTwisted for network I/O and lxml for html parsing as the backend\nof my webscraping program. I have wrote an article for an introduction to this technology.\nThe best choice to grab data from websites: Python + Twisted + lxml\nHope this is helpful.\n", "Lua has fewer libraries than Python. But be sure to have a look at LuaForge. It has a lot of interesting libs, like LuaCURL, wxLua or getopt.\nThen, visit LuaRocks, the package management system for Lua. With it, you can search and install most mature Lua modules with dependencies. It feels like RubyGems or aptitude.\nThe site lua-users.org has a lot of interesting resources too, like tutorials or the Lua Wiki.\nWhat I like about Lua is not its speed, it's its minimal core language, flexibility and extensibility.\nThat said, I would probably use Python for the tasks you mentionned because of the larger community doing such things in Python.\n", "Just because it is \"marketed\" (in some general sense) as a special-purpose language for embedded script engines, does not mean that it is limited to that. In fact, WoW could probably just as well have chosen Python as their embedded scripting language.\n", "It's probably because Lua was designed as a scripting and extension language. On the official site it's described as a powerful, fast, light-weight, embeddable scripting language. There's nothing stopping you from writing general purpose programs for it (if I recall correctly it ships with an interpreter and compiler), but the language designers intended it to be used mainly as an embedded language (hence being light-weight and all)\n", "This is a sociological question, not a programming question.\nI use Lua for general-purpose scripting almost exclusively. 
But I had to write a few hundred lines of code so that Lua would play better with the shell. This included such tricks as\n\nQuoting a string so it is seen as one word by the shell\nWriting a function to capture the output of a command as in shell $(command)\nWriting a function to crawl the filesystem using the Lua posix library and expand shell globbing patterns\n\n(For those who may be interested, I've left the code in my Lua drop box, which also contains some other stuff. The interesting stuff is probably in osutil in os.quote, os.runf, os.capture, and maybe os.execve. The globbing is in posixutil.lua. They both use Luiz Henrique de Figueiredo's Lua Posix library.)\nTo me, the extra effort is worth it because I can deal with simple syntax and great data structures. To others, a more direct connection with the shell might be preferable.\n", "Definitely a lack of standard libraries. It's also less well known than Python, Perl or Ruby.\n", "There has been a recent push to create a batteries-included installation for Lua on Windows. The result can be found at the Lua for Windows project at LuaForge. It includes the interpreter and a large collection of extra modules, allowing useful scripts and applications to be written and used out of the box.\nI know that various Linux distros are including Lua and some modules now, with more to come.\nThere are also a couple of proposed module libraries under discussion on the mailing list, but the community hasn't yet settled on one as the \"official\" mechanism.\nI use Lua both as a scripting language and as the \"main\" loop of my typical application, supported by one or more DLLs containing code that was better implemented in C, or wrapping existing libraries or API functions that are needed by a particular project. Used with a GUI toolkit such as IUP or wxLua (a Lua binding for wxWindows), Lua makes writing small to mid-sized GUI applications quite pleasant.\n", "I think the answer about it being a \"marketing\" thing is probably correct, along with the lack of a large set of libraries to choose from. I would like to point out another case of this: Ruby. Ruby is meant to be a general-purpose scripting language. The problem is that since Ruby on Rails has risen to be so popular, it is becoming hard to find something that is unrelated to Rails. I'm afraid Lua will suffer from this as well, being popular because of a few major things using it, but never able to break free of that stigma.\n", "Lack of a standard library. Period. Even listing all the files in a directory requires a non-standard module.\nThere are good reasons for that (keeping strict ANSI portability, not requiring POSIX), but the result is that, for general programming, I prefer Python.\n", "Lua is used in LuaTeX, a TeX extension, as an embedded language, and has gained popularity rapidly among TeX developers because of that. It is used as a scripting language for some utilities in the TeX Live distribution, if only because there is now a luatex binary, available on all platforms, that can also be used as a Lua interpreter (with some vital modules added – slnunicode, luafilesystem, etc.). That's very important for Windows installations, which relied on additional Unix scripting tools earlier (ActivePerl, etc.). The ConTeXt macro language uses Lua scripts extensively nowadays.\nThat's admittedly a very special field :-) But completely unrelated to games!\n", "In order for Lua to be easy to embed, it has to have few dependencies and be small. 
That makes it poorly suited as a general-purpose scripting language, because using it as a general-purpose scripting language would require a lot of standard libraries. But if Lua had a lot of standard libraries, it would be harder to embed (due to dependencies and memory footprint).\n" ]
[ 40, 23, 12, 10, 9, 6, 5, 4, 4, 4, 3 ]
[]
[]
[ "lua", "python", "scripting" ]
stackoverflow_0000250151_lua_python_scripting.txt
Q: Identifying twitter user's longitude and latitude As per my requirement, I need to search twitter and display each user's location on a map. Can anyone help me identify longitude and latitude from the search API result? A: Last time I looked, it didn't include it officially, unless the specific client (such as an iPhone client) specifically updated your location to lat+long coordinates. When I did something similar I passed the Twitter RSS output through an RSS geotagging service, then used the output from that to map. It worked, but 99% of the tweets were just city centres, so it was pretty pointless :-( A: Twitter hasn't released their new geocoding API yet. Twitter's API documentation is available here. http://apiwiki.twitter.com/Twitter-API-Documentation As you can see, most of the current API methods return an empty "geo" tag. Once Twitter has rolled out their geo implementation, this tag will be populated. A: You'd probably be best to see what "location" the user has set, then use a geocoding API to try and convert that location into lat/lng. e.g. use http://apiwiki.twitter.com/Twitter-REST-API-Method%3A-users%C2%A0show to find out the user's location, then use Google's geocoding API (http://code.google.com/apis/maps/documentation/geocoding/) to do the rest. Also you would probably want to do some pre-processing on the location to see if it's already a lat/lng pair (as some users update their location in this manner).
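A rough sketch of that last approach: look up the user's free-text location, then geocode it. Note that both endpoints reflect the era's APIs discussed above and may since have been retired, and the screen name is a placeholder:

import urllib
import json

# Fetch the user's self-reported location string from users/show,
# then geocode it with the (legacy) Google Maps CSV geocoder.
user = json.load(urllib.urlopen('http://twitter.com/users/show/example_user.json'))
location = user.get('location', '')
geo = urllib.urlopen('http://maps.google.com/maps/geo?output=csv&q='
                     + urllib.quote(location)).read()
status, accuracy, lat, lng = geo.split(',')
print lat, lng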
Identifying twitter user's longitude and latitude
As per my requirement, I need to search twitter and display each user's location on a map. Can anyone help me identify longitude and latitude from the search API result?
[ "Last time I looked it didn't include it officially, unless the specific client (such as an iphone client) specifically updated your location to lat+long coordinates.\nWhen I did something similar I passed the Twitter RSS output through an rss geotagging service, then used the output from that to map. It worked, but 99% of the tweets were just city centres, so it was pretty pointless :-(\n", "Twitter hasn't released their new geocoding API yet. Twitter's API Documentation is available here.\nhttp://apiwiki.twitter.com/Twitter-API-Documentation\nAs you can see, most of the current API methods return an empty \"geo\" tag. Once Twitter has rolled out their geo implementation, this tag will be populated.\n", "You'd probably be best to see what \"location\" the user has set then use a geo-coding api to try and convert that location into lat/lng.\ne.g. use http://apiwiki.twitter.com/Twitter-REST-API-Method%3A-users%C2%A0show to find out the user's location, then use google's geocoding API (http://code.google.com/apis/maps/documentation/geocoding/) to do the rest.\nAlso you would probably want to do some pre-processing on the location to see if it's already a lat/lng pair (as some users update their location in this manner).\n" ]
[ 1, 1, 0 ]
[]
[]
[ "asp.net", "c#", "python", "twitter" ]
stackoverflow_0001537303_asp.net_c#_python_twitter.txt
Q: Read bytes from string as floats I've got a Python webserver where small binary files are POSTed. The posted data is represented as strings. I want to examine the contents of these strings. But to do that, I need to convert each 4 bytes to floats (little endian). How do you do that? A: You use the struct module ("<f" is a little-endian float): >>> import struct >>> struct.unpack("<f", "\x00\x00\x80\x3f") (1.0,) A: While struct is best for unpacking collections of "scalar" binary values, when what you have is a sequence of 4-byte binary floats in a string one after the other, the array module is ideal. Specifically, it's as simple as: import array thefloats = array.array('f', thestring) If only part of thestring contains the sequence of 4-byte binary floats, you can build the array from that part by using the appropriate slice of the string instead of the entire string. The array instance offers most of the functionality of list (plus handy methods to convert to/from strings of bytes and swap between little-endian and big-endian forms if needed), but it's less flexible (only floats can be in the array) and enormously more compact (can take up 3-4 times less memory than a list with the same items). A: The construct module might also be a handy way to do this. It should be easy to adapt this example to your needs: # [U]nsigned, [L]ittle endian, 16 bit wide integer (parsing) >>> ULInt16("foo").parse("\x01\x02") 513
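Building on the struct answer, the whole POST body can be unpacked in one call by putting a repeat count in the format string; a self-contained sketch, with two little-endian floats standing in for the posted data:

import struct

# 1.0 and 2.0 encoded as 4-byte little-endian floats.
data = "\x00\x00\x80\x3f\x00\x00\x00\x40"
count = len(data) // 4
floats = struct.unpack("<%df" % count, data)
print floats  # prints (1.0, 2.0)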
Read bytes from string as floats
I've got a Python webserver where small binary files are POSTed. The posted data is represented as strings. I want to examine the contents of these strings. But to do that, I need to convert each 4 bytes to floats (little endian). How do you do that?
[ "You use the struct module:\n>>> import struct\n>>> struct.unpack_from(\"f\", \"\\43\\a3\\12\\32\")\n(8.6198787687447256e-33,)\n\n", "While struct is best for unpacking collection of \"scalar\" binary values, when you what you have is a sequence of 4-byte binary floats in a string one after the other, the array module is ideal. Specifically, it's as simple as:\nimport array\nthefloats = array.array('f', thestring)\n\nIf only part of thestring contains the sequence of 4-byte binary floats, you can build the array from that part by using the appropriate slice of the string instead of the entire string. The array instance offers most of the functionality of list (plus handy methods to convert to/from strings of bytes and swap between little-endian and big-endian forms if needed), but it's less flexible (only floats can be in the array) and enormously more compact (can take up 3-4 times less memory than a list with the same items).\n", "The construct module might also be a handy way to do this. It should be easy to adapt this example to your needs:\n# [U]nsigned, [L]ittle endian, 16 bit wide integer (parsing)\n>>> ULInt16(\"foo\").parse(\"\\x01\\x02\")\n513\n\n" ]
[ 7, 5, 1 ]
[]
[]
[ "byte", "python", "string" ]
stackoverflow_0001537862_byte_python_string.txt
Q: QGraphicsView with automatic items placing I would like to write an asset browser using QGraphicsView. It's a little different from examples using QGraphicsView and QGraphicsItems, because I want only one scrollbar and I want items to move automatically when the viewport size changes. For example, when the viewport width is large enough to display 4 assets, they should be displayed like this: aaaa aaaa aa but when the viewport is shrunk and can only contain 3 in a row, it should display them like this: aaa aaa aaa a I wouldn't like to have to move those assets myself; I want the graphics view to manage them all. Is it somehow possible? I have written such a thing once, but using QWidget and paintEvent, drawing all assets myself and keeping track of how many assets can be displayed in a row. Can it be done more simply with QGraphicsView? A: QGraphicsView supports layouts. What you have to do is implement your own layout manager, inheriting from QGraphicsLayout. For the layout you require, take a look at the Flow Layout example of Qt. Converting that example will give you a QGraphicsFlowLayout. Add your QGraphicsItems to this layout and set your QGraphicsView's layout to that layout, and that would do the trick. A: It sounds to me like you want a list, not a graphics view. A list can be set to display things wrapping around like you desire. See the puzzle example, paying attention to the list of puzzle pieces on the left. It looks pretty simple to set up for the case presented. Of course, if you really want it in a graphics view, I suppose you could add a list to the view and use it there.
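A minimal sketch of the list-based suggestion, assuming PyQt4: a QListWidget in icon mode reflows its items automatically when the viewport is resized, which is exactly the wrap-around behaviour described above.

import sys
from PyQt4 import QtGui

# Icon mode plus the Adjust resize mode makes the view re-lay out its
# items whenever the viewport changes size, wrapping rows as needed.
app = QtGui.QApplication(sys.argv)
view = QtGui.QListWidget()
view.setViewMode(QtGui.QListView.IconMode)
view.setResizeMode(QtGui.QListView.Adjust)
view.setWrapping(True)
for i in range(10):
    view.addItem(QtGui.QListWidgetItem("asset %d" % i))
view.show()
sys.exit(app.exec_())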
QGraphicsView with automatic items placing
I would like to write an asset browser using QGraphicsView. It's a little different from examples using QGraphicsView and QGraphicsItems, because I want only one scrollbar and I want items to move automatically when the viewport size changes. For example, when the viewport width is large enough to display 4 assets, they should be displayed like this: aaaa aaaa aa but when the viewport is shrunk and can only contain 3 in a row, it should display them like this: aaa aaa aaa a I wouldn't like to have to move those assets myself; I want the graphics view to manage them all. Is it somehow possible? I have written such a thing once, but using QWidget and paintEvent, drawing all assets myself and keeping track of how many assets can be displayed in a row. Can it be done more simply with QGraphicsView?
[ "QGraphicsView supports layouts. What you have to do is implement your own layout manager, inheriting from QGraphicsLayout.\nFor the layout you require, take a look at the Flow Layout example of Qt. Converting that example will give you a QGraphicsFlowLayout. Add your QGraphicsItems to this layout and set your QGraphicsView's layout to that layout, and that would do the trick.\n", "It sounds to me you want a list, not a graphics view. A list can be set to display things wrapping around like you desire. See the puzzle example, paying attention to the list of puzzle pieces on the left. It looks pretty simple to set up for the case presented.\nOf course, if you really want it in a graphics view, I suppose you could add a list to the view and use it there.\n" ]
[ 5, 1 ]
[ "I would use a custom layout to do this. Try to create your custom Layout class that inherits from QGraphicsLayout and manage the way it is placing items.\n" ]
[ -1 ]
[ "pyqt", "python", "qgraphicsview", "qt" ]
stackoverflow_0001538308_pyqt_python_qgraphicsview_qt.txt
Q: Is the single underscore "_" a built-in variable in Python? I don't understand what this single underscore means. Is it a magic variable? I can't see it in locals() and globals(). >>> 'abc' 'abc' >>> len(_) 3 >>> A: In the standard Python REPL, _ represents the last returned value -- at the point where you called len(_), _ was the value 'abc'. For example: >>> 10 10 >>> _ 10 >>> _ + 5 15 >>> _ + 5 20 This is handled by sys.displayhook, and the _ variable goes in the builtins namespace with things like int and sum, which is why you couldn't find it in globals(). Note that there is no such functionality within Python scripts. In a script, _ has no special meaning and will not be automatically set to the value produced by the previous statement. Also, beware of reassigning _ in the REPL if you want to use it like above! >>> _ = "underscore" >>> 10 10 >>> _ + 5 Traceback (most recent call last): File "<pyshell#6>", line 1, in <module> _ + 5 TypeError: cannot concatenate 'str' and 'int' objects This creates a global variable that hides the _ variable in the built-ins. To undo the assignment (and remove the _ from globals), you'll have to: >>> del _ then the functionality will be back to normal (the builtins._ will be visible again). A: Why can't you see it? It is in __builtins__ >>> __builtins__._ is _ True So it's neither global nor local. 1 And where does this assignment happen? sys.displayhook: >>> import sys >>> help(sys.displayhook) Help on built-in function displayhook in module sys: displayhook(...) displayhook(object) -> None Print an object to sys.stdout and also save it in __builtin__. 1 2012 Edit: I'd call it "superglobal" since __builtin__'s members are available everywhere, in any module. A: Usually, we use _ in Python to bind the ugettext function.
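To see concretely where _ comes from, here is a minimal reimplementation of the hook described above (Python 2; the real default displayhook does essentially this):

import sys
import __builtin__

# Mimic the default sys.displayhook: print each non-None REPL result
# and store it in the builtins namespace under the name _.
def my_displayhook(value):
    if value is not None:
        __builtin__._ = value
        print repr(value)

sys.displayhook = my_displayhook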
Is the single underscore "_" a built-in variable in Python?
I don't understand what this single underscore means. Is it a magic variable? I can't see it in locals() and globals(). >>> 'abc' 'abc' >>> len(_) 3 >>>
[ "In the standard Python REPL, _ represents the last returned value -- at the point where you called len(_), _ was the value 'abc'.\nFor example:\n>>> 10\n10\n>>> _\n10\n>>> _ + 5\n15\n>>> _ + 5\n20\n\nThis is handled by sys.displayhook, and the _ variable goes in the builtins namespace with things like int and sum, which is why you couldn't find it in globals().\nNote that there is no such functionality within Python scripts. In a script, _ has no special meaning and will not be automatically set to the value produced by the previous statement.\nAlso, beware of reassigning _ in the REPL if you want to use it like above!\n>>> _ = \"underscore\"\n>>> 10\n10\n>>> _ + 5\n\nTraceback (most recent call last):\n File \"<pyshell#6>\", line 1, in <module>\n _ + 5\nTypeError: cannot concatenate 'str' and 'int' objects\n\nThis creates a global variable that hides the _ variable in the built-ins. To undo the assignment (and remove the _ from globals), you'll have to:\n>>> del _\n\nthen the functionality will be back to normal (the builtins._ will be visible again).\n", "Why you can't see it? It is in __builtins__\n>>> __builtins__._ is _\nTrue\n\nSo it's neither global nor local. 1\nAnd where does this assignment happen? sys.displayhook:\n>>> import sys\n>>> help(sys.displayhook)\nHelp on built-in function displayhook in module sys:\n\ndisplayhook(...)\n displayhook(object) -> None\n\n Print an object to sys.stdout and also save it in __builtin__.\n\n1 2012 Edit : I'd call it \"superglobal\" since __builtin__'s members are available everywhere, in any module.\n", "Usually, we are using _ in Python to bind a ugettext function.\n" ]
[ 53, 18, 2 ]
[]
[]
[ "python" ]
stackoverflow_0001538832_python.txt
Q: Is there a way around coding in Python without the tab, indent & whitespace criteria? I want to start using Python for small projects, but the fact that a misplaced tab or indent can throw a compile error is really getting on my nerves. Is there some type of setting to turn this off? I'm currently using NotePad++. Is there maybe an IDE that would take care of the tabs and indenting? A: The answer is no. At least, not until something like the following is implemented: from __future__ import braces A: No. Indentation-as-grammar is an integral part of the Python language, for better and worse. A: Emacs! Seriously, its use of "tab is a command, not a character" is absolutely perfect for Python development. A: All of the whitespace issues I had when I was starting Python were the result of mixing tabs and spaces. Once I configured everything to just use one or the other, I stopped having problems. In my case I configured UltraEdit & vim to use spaces in place of tabs. A: It's possible to write a pre-processor which takes randomly-indented code with pseudo-Python keywords like "endif" and "endwhile" and properly indents things. I had to do this when using Python as an "ASP-like" language, because the whole notion of "indentation" gets a bit fuzzy in such an environment. Of course, even with such a thing you really ought to indent sanely, at which point the converter becomes superfluous. A: I find it hard to understand when people flag this as a problem with Python. I took to it immediately and actually find it's one of my favourite 'features' of the language :) In other languages I have two jobs: 1. Fix the braces so the computer can parse my code 2. Fix the indentation so I can parse my code. So in Python I have half as much to worry about ;-) (NB: the only time I ever have problems with indentation is when Python code is in a blog or a forum that messes with the whitespace, but this is happening less and less as the apps get smarter) A: I'm currently using NotePad++. Is there maybe an IDE that would take care of the tabs and indenting? I liked the pydev extensions of eclipse for that. A: I do not believe so, as Python is a whitespace-delimited language. Perhaps a text editor or IDE with auto-indentation would be of help. What are you currently using? A: No, there isn't. Indentation is syntax for Python. You can: Use tabnanny.py to check your code Use a syntax-aware editor that highlights such mistakes (vi does that, emacs I bet it does, and then, most IDEs do too) (far-fetched) write a preprocessor of your own to convert braces (or whatever block delimiters you love) into indentation A: You should disable tab characters in your editor when you're working with Python (always, actually, IMHO, but especially when you're working with Python). Look for an option like "Use spaces for tabs": any decent editor should have one. A: Not really. There are a few ways to modify whitespace rules for a given line of code, but you will still need indent levels to determine scope. You can terminate statements with ; and then begin a new statement on the same line. (Which people often do when golfing.) If you want to break up a single line into multiple lines you can finish a line with the \ character, which means the current line effectively continues from the first non-whitespace character of the next line. This visually appears to violate the usual whitespace rules but is legal. My advice: don't use tabs if you are having tab/space confusion. Use spaces, and choose either 2 or 3 spaces as your indent level. 
A good editor will make it so you don't have to worry about this. (python-mode for emacs, for example: you can just use the tab key and it will keep you honest). A: I agree with justin and others -- pick a good editor and use spaces rather than tabs for indentation, and the whitespace thing becomes a non-issue. I only recently started using Python, and while I thought the whitespace issue would be a real annoyance it turns out not to be the case. For the record I'm using emacs, though I'm sure there are other editors out there that do an equally fine job. If you're really dead-set against it, you can always pass your scripts through a pre-processor, but that's a bad idea on many levels. If you're going to learn a language, embrace the features of that language rather than try to work around them. Otherwise, what's the point of learning a new language? A: Getting your indentation to work correctly is going to be important in any language you use. Even though it won't affect the execution of the program in most other languages, incorrect indentation can be very confusing for anyone trying to read your program, so you need to invest the time in figuring out how to configure your editor to align things correctly. Python is pretty liberal in how it lets you indent. You can pick between tabs and spaces (but you really should use spaces) and can pick how many spaces. The only thing it requires is that you are consistent, which ultimately is important no matter what language you use. A: Tab-and-space confusion can be fixed by setting your editor to use spaces instead of tabs. To make whitespace completely intuitive, you can use a stronger code editor or an IDE (though you don't need a full-blown IDE if all you need is proper automatic code indenting). A list of editors can be found in the Python wiki, though that one is a bit too exhaustive: - http://wiki.python.org/moin/PythonEditors There's already a question in here which tries to slim that down a bit: https://stackoverflow.com/questions/60784/poll-which-python-ideeditor-is-the-best Maybe you should add a more specific question on that: "Which Python editor or IDE do you prefer on Windows - and why?" A: I was a bit reluctant to learn Python because of tabbing. However, I almost didn't notice it when I used Vim. A: If you don't want to use an IDE/text editor with automatic indenting, you can use the pindent.py script that comes in the Tools\Scripts directory. It's a preprocessor that can convert code like: def foobar(a, b): if a == b: a = a+1 elif a < b: b = b-1 if b > a: a = a-1 end if else: print 'oops!' end if end def foobar into: def foobar(a, b): if a == b: a = a+1 elif a < b: b = b-1 if b > a: a = a-1 # end if else: print 'oops!' # end if # end def foobar Which is valid Python. A: Nope, there's no way around it, and it's by design: >>> from __future__ import braces File "<stdin>", line 1 SyntaxError: not a chance Most Python programmers simply don't use tabs, but use spaces to indent instead; that way there's no editor-to-editor inconsistency.
A: Many Python IDEs and generally-capable text/source editors can handle the whitespace for you. However, it is best to just "let go" and enjoy the whitespace rules of Python. With some practice, they won't get into your way at all, and you will find they have many merits, the most important of which are: Because of the forced whitespace, Python code is simpler to understand. You will find that as you read code written by others, it is easier to grok than code in, say, Perl or PHP. Whitespace saves you quite a few keystrokes of control characters like { and }, which litter code written in C-like languages. Less {s and }s means, among other things, less RSI and wrist pain. This is not a matter to take lightly. A: In Python, indentation is a semantic element as well as providing visual grouping for readability. Both space and tab can indicate indentation. This is unfortunate, because: The interpretation(s) of a tab varies among editors and IDEs and is often configurable (and often configured). OTOH, some editors are not configurable but apply their own rules for indentation. Different sequences of spaces and tabs may be visually indistinguishable. Cut and pastes can alter whitespace. So, unless you know that a given piece of code will only be modified by yourself with a single tool and an unvarying config, you must avoid tabs for indentation (configure your IDE) and make sure that you are warned if they are introduced (search for tabs in leading whitespace). And you can still expect to be bitten now and then, as long as arbitrary semantics are applied to control characters. A: Check the options of your editor or find an editor/IDE that allows you to convert TABs to spaces. I usually set the options of my editor to substitute the TAB character with 4 spaces, and I never run into any problems. A: Yes, there is a way. I hate these "no way" answers, there is no way until you discover one. And in that case, whatever it is worth, there is one. I read once about a guy who designed a way to code so that a simple script could re-indent the code properly. I didn't managed to find any links today, though, but I swear I read it. The main tricks are to always use return at the end of a function, always use pass at the end of an if or at the end of a class definition, and always use continue at the end of a while. Of course, any other no-effect instruction would fit the purpose. Then, a simple awk script can take your code and detect the end of block by reading pass/continue/return instructions, and the start of code with if/def/while/... instructions. Of course, because you'll develop your indenting script, you'll see that you don't have to use continue after a return inside the if, because the return will trigger the indent-back mechanism. The same applies for other situations. Just get use to it. If you are diligent, you'll be able to cut/paste and add/remove if and correct the indentations automagically. And incidentally, pasting code from the web will require you to understand a bit of it so that you can adapt it to that "non-classical" setting.
Is there a way around coding in Python without the tab, indent & whitespace criteria?
I want to start using Python for small projects, but the fact that a misplaced tab or indent can throw a compile error is really getting on my nerves. Is there some type of setting to turn this off? I'm currently using NotePad++. Is there maybe an IDE that would take care of the tabs and indenting?
[ "The answer is no.\nAt least, not until something like the following is implemented:\nfrom __future__ import braces\n\n", "No. Indentation-as-grammar is an integral part of the Python language, for better and worse.\n", "Emacs! Seriously, its use of \"tab is a command, not a character\", is absolutely perfect for python development.\n", "All of the whitespace issues I had when I was starting Python were the result mixing tabs and spaces. Once I configured everything to just use one or the other, I stopped having problems.\nIn my case I configured UltraEdit & vim to use spaces in place of tabs.\n", "It's possible to write a pre-processor which takes randomly-indented code with pseudo-python keywords like \"endif\" and \"endwhile\" and properly indents things. I had to do this when using python as an \"ASP-like\" language, because the whole notion of \"indentation\" gets a bit fuzzy in such an environment.\nOf course, even with such a thing you really ought to indent sanely, at which point the conveter becomes superfluous.\n", "I find it hard to understand when people flag this as a problem with Python. I took to it immediately and actually find it's one of my favourite 'features' of the language :)\nIn other languages I have two jobs:\n1. Fix the braces so the computer can parse my code\n2. Fix the indentation so I can parse my code.\nSo in Python I have half as much to worry about ;-)\n(nb the only time I ever have problem with indendation is when Python code is in a blog and a forum that messes with the white-space but this is happening less and less as the apps get smarter)\n", "\nI'm currently using NotePad++. Is\n there maybe an IDE that would take\n care of the tabs and indenting?\n\nI liked pydev extensions of eclipse for that.\n", "I do not believe so, as Python is a whitespace-delimited language. Perhaps a text editor or IDE with auto-indentation would be of help. What are you currently using?\n", "No, there isn't. Indentation is syntax for Python. You can:\n\nUse tabnanny.py to check your code\nUse a syntax-aware editor that highlights such mistakes (vi does that, emacs I bet it does, and then, most IDEs do too)\n(far-fetched) write a preprocessor of your own to convert braces (or whatever block delimiters you love) into indentation\n\n", "You should disable tab characters in your editor when you're working with Python (always, actually, IMHO, but especially when you're working with Python). Look for an option like \"Use spaces for tabs\": any decent editor should have one.\n", "Not really. There are a few ways to modify whitespace rules for a given line of code, but you will still need indent levels to determine scope.\nYou can terminate statements with ; and then begin a new statement on the same line. (Which people often do when golfing.)\nIf you want to break up a single line into multiple lines you can finish a line with the \\ character which means the current line effectively continues from the first non-whitespace character of the next line. This visually appears violate the usual whitespace rules but is legal.\nMy advice: don't use tabs if you are having tab/space confusion. Use spaces, and choose either 2 or 3 spaces as your indent level. \nA good editor will make it so you don't have to worry about this. (python-mode for emacs, for example, you can just use the tab key and it will keep you honest).\n", "I agree with justin and others -- pick a good editor and use spaces rather than tabs for indentation and the whitespace thing becomes a non-issue. 
I only recently started using Python, and while I thought the whitespace issue would be a real annoyance it turns out not to be the case. For the record I'm using emacs, though I'm sure there are other editors out there that do an equally fine job.\nIf you're really dead-set against it, you can always pass your scripts through a pre-processor, but that's a bad idea on many levels. If you're going to learn a language, embrace the features of that language rather than try to work around them. Otherwise, what's the point of learning a new language?\n", "Getting your indentation to work correctly is going to be important in any language you use. \nEven though it won't affect the execution of the program in most other languages, incorrect indentation can be very confusing for anyone trying to read your program, so you need to invest the time in figuring out how to configure your editor to align things correctly.\nPython is pretty liberal in how it lets you indent. You can pick between tabs and spaces (but you really should use spaces) and can pick how many spaces. The only thing it requires is that you are consistent, which ultimately is important no matter what language you use.\n", "Tab-and-space confusion can be fixed by setting your editor to use spaces instead of tabs. \nTo make whitespace completely intuitive, you can use a stronger code editor or an IDE (though you don't need a full-blown IDE if all you need is proper automatic code indenting). \nA list of editors can be found in the Python wiki, though that one is a bit too exhaustive: \n- http://wiki.python.org/moin/PythonEditors\nThere's already a question in here which tries to slim that down a bit: \n\nhttps://stackoverflow.com/questions/60784/poll-which-python-ideeditor-is-the-best\n\nMaybe you should add a more specific question on that: \"Which Python editor or IDE do you prefer on Windows - and why?\"\n", "I was a bit reluctant to learn Python because of tabbing. However, I almost didn't notice it when I used Vim.\n", "If you don't want to use an IDE/text editor with automatic indenting, you can use the pindent.py script that comes in the Tools\\Scripts directory. It's a preprocessor that can convert code like:\ndef foobar(a, b):\nif a == b:\na = a+1\nelif a < b:\nb = b-1\nif b > a: a = a-1\nend if\nelse:\nprint 'oops!'\nend if\nend def foobar\n\ninto:\ndef foobar(a, b):\n if a == b:\n a = a+1\n elif a < b:\n b = b-1\n if b > a: a = a-1\n # end if\n else:\n print 'oops!'\n # end if\n# end def foobar\n\nWhich is valid Python.\n", "Nope, there's no way around it, and it's by design:\n>>> from __future__ import braces\n File \"<stdin>\", line 1\nSyntaxError: not a chance\n\nMost Python programmers simply don't use tabs, but use spaces to indent instead; that way there's no editor-to-editor inconsistency.\n", "I'm surprised no one has mentioned IDLE as a good default Python editor. Nice syntax colors, handles indents, has intellisense, easy to adjust fonts, and it comes with the default download of Python. Heck, I write mostly IronPython, but it's so nice & easy to edit in IDLE and run ipy from a command prompt.\nOh, and what is the big deal about whitespace? Most easy-to-read C or C# is well indented, too; Python just enforces a really simple formatting rule.\n", "Many Python IDEs and generally-capable text/source editors can handle the whitespace for you.\nHowever, it is best to just \"let go\" and enjoy the whitespace rules of Python. 
With some practice, they won't get in your way at all, and you will find they have many merits, the most important of which are:\n\nBecause of the forced whitespace, Python code is simpler to understand. You will find that as you read code written by others, it is easier to grok than code in, say, Perl or PHP.\nWhitespace saves you quite a few keystrokes of control characters like { and }, which litter code written in C-like languages. Fewer {s and }s means, among other things, less RSI and wrist pain. This is not a matter to take lightly.\n\n", "In Python, indentation is a semantic element as well as providing visual grouping for readability.\nBoth space and tab can indicate indentation. This is unfortunate, because:\n\nThe interpretation(s) of a tab varies\namong editors and IDEs and is often\nconfigurable (and often configured).\nOTOH, some editors are not\nconfigurable but apply their own\nrules for indentation.\nDifferent sequences of\nspaces and tabs may be visually\nindistinguishable.\nCut-and-pastes can alter whitespace.\n\nSo, unless you know that a given piece of code will only be modified by yourself with a single tool and an unvarying config, you must avoid tabs for indentation (configure your IDE) and make sure that you are warned if they are introduced (search for tabs in leading whitespace).\nAnd you can still expect to be bitten now and then, as long as arbitrary semantics are applied to control characters.\n", "Check the options of your editor or find an editor/IDE that allows you to convert TABs to spaces. I usually set the options of my editor to substitute the TAB character with 4 spaces, and I never run into any problems.\n", "Yes, there is a way. I hate these \"no way\" answers; there is no way until you discover one.\nAnd in that case, whatever it is worth, there is one.\nI read once about a guy who designed a way to code so that a simple script could re-indent the code properly. I didn't manage to find any links today, but I swear I read it.\nThe main tricks are to always use return at the end of a function, always use pass at the end of an if or at the end of a class definition, and always use continue at the end of a while. Of course, any other no-effect instruction would fit the purpose.\nThen, a simple awk script can take your code and detect the end of a block by reading pass/continue/return instructions, and the start of code with if/def/while/... instructions.\nOf course, because you'll develop your indenting script, you'll see that you don't have to use continue after a return inside the if, because the return will trigger the indent-back mechanism. The same applies to other situations. Just get used to it.\nIf you are diligent, you'll be able to cut/paste and add/remove ifs and correct the indentation automagically. And incidentally, pasting code from the web will require you to understand a bit of it so that you can adapt it to that \"non-classical\" setting.\n" ]
[ 45, 37, 10, 6, 5, 5, 4, 3, 3, 3, 2, 2, 2, 2, 1, 1, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0000063086_python.txt
Q: Using Python's xml.dom.minidom I'm trying to use Python's xml.dom.minidom, and I'm getting the following error: >>> from xml.dom import minidom >>> xdocument = minidom.Document() >>> xrss = minidom.Element("rss") >>> xdocument.appendChild(xrss) <DOM Element: rss at 0xc1d0f8> >>> xchannel = minidom.Element("channel") >>> xrss.appendChild(xchannel) Traceback (most recent call last): File "C:\Program Files\Wing IDE 3.2\src\debug\tserver\_sandbox.py", line 1, in ? # Used internally for debug sandbox under external interpreter File "c:\Python24\Lib\xml\dom\minidom.py", line 123, in appendChild _clear_id_cache(self) File "c:\Python24\Lib\xml\dom\minidom.py", line 1468, in _clear_id_cache node.ownerDocument._id_cache.clear() AttributeError: 'NoneType' object has no attribute '_id_cache' >>> Does anyone have any idea why? A: Use xdocument.createElement('name') to create new elements. This is the standard way to do that in DOM. A: Replace xdocument.appendChild(xrss) with xrss = xdocument.appendChild(xrss). From the docs: Node.appendChild(newChild) Add a new child node to this node at the end of the list of children, returning newChild. If the node was already in the tree, it is removed first. So you need to assign xrss to the returned element from appendChild.
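Putting the first answer into practice, here is a corrected version of the session above: elements created through the owning document get their ownerDocument set, so appendChild no longer blows up.

from xml.dom import minidom

# Create elements via the Document instead of minidom.Element();
# appendChild returns the appended node, so the calls can be chained.
doc = minidom.Document()
rss = doc.appendChild(doc.createElement('rss'))
channel = rss.appendChild(doc.createElement('channel'))
print doc.toprettyxml(indent='  ')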
Using Python's xml.dom.minidom
I'm trying to use Python's xml.dom.minidom, and I'm getting the following error: >>> from xml.dom import minidom >>> xdocument = minidom.Document() >>> xrss = minidom.Element("rss") >>> xdocument.appendChild(xrss) <DOM Element: rss at 0xc1d0f8> >>> xchannel = minidom.Element("channel") >>> xrss.appendChild(xchannel) Traceback (most recent call last): File "C:\Program Files\Wing IDE 3.2\src\debug\tserver\_sandbox.py", line 1, in ? # Used internally for debug sandbox under external interpreter File "c:\Python24\Lib\xml\dom\minidom.py", line 123, in appendChild _clear_id_cache(self) File "c:\Python24\Lib\xml\dom\minidom.py", line 1468, in _clear_id_cache node.ownerDocument._id_cache.clear() AttributeError: 'NoneType' object has no attribute '_id_cache' >>> Does anyone have any idea why?
[ "Use xdocument.createElement('name') to create new elements. This is the standard way to do that in DOM.\n", "Replace xdocument.appendChild(xrss) with xrss = xdocument.appendChild(xrss). From the docs:\n\nNode.appendChild(newChild) Add a new\n child node to this node at the end of\n the list of children, returning\n newChild. If the node was already in\n in the tree, it is removed first.\n\nSo you need to assign xrss to the returned element from appendChild.\n" ]
[ 3, 0 ]
[]
[]
[ "python", "xml" ]
stackoverflow_0001539023_python_xml.txt
Q: How to sort all possible words out of a string? I'm wondering how to proceed with this task; take this string for example: "thingsandstuff". How could I generate all possible strings out of this string so as to look them up individually against an English dictionary? The goal is to find valid English words in a string that does not contain spaces. Thanks A: Another possibility is going the other way around: instead of generating substrings from a string, grab all your candidate words and match them against your string. You can store (start, end) pairs of indexes of the words in the original string as a result. This could be easily done in regex, or, if not performant enough, with str.find(), or, if even that is not performant enough, with more complex dictionary index schemes or smarts about what can and can't match (see Gregg's answer for ideas). Here is a sample of what I mean: candidate = "thingsandstuffmydarlingpretty" words = file('/usr/share/dict/words').read() #This generator calls find twice, it should be rewritten as a normal loop generate_matches = ((candidate.find(word),word) for word in words.split('\n') if candidate.find(word) != -1 and word != '') for match in generate_matches: print "Found %s at (%d,%d)" % (match[1],match[0],match[0] + len(match[1])) A: People talk about this as though the order of the problem is the number of possible substrings. This is incorrect. The correct order of this problem is: O(min(number-of-words-in-dict, number-of-substring-combinations) * comparison_cost) So, another approach to the problem, to build on Vinko, is to index the heck out of the dictionary (e.g., for each word in the dict, determine the letters in that word, the length of the word, etc). This can speed things up dramatically. As an example, we know that target "queen" can't match "zebra" (no z's!) (or any word containing z, r, b, a...), and the like. Also, store each word in the dict as a sorted string ('zebra' -> 'aberz') and do "string in string" (longest common substring) matching. 'eenqu' vs 'aberz' (no match). (Note: I am assuming that the order of the letters in the original word doesn't matter -- it's a 'bag of letters'; if it does, then adjust accordingly.) If you have lots of words to compare at once, the comparison cost can be lowered further using something like KMP. (Also, I dove right in, and made some assumptions that Alex didn't, so if they're wrong, then shut my mouth!) A: The brute force approach, i.e. checking every substring, is computationally unfeasible even for strings of middling lengths (a string of length N has O(N**2) substrings). Unless there is a pretty tight bound on the length of strings you care about, that doesn't scale well. To make things more feasible, more knowledge is required -- are you interested in overlapping words (e.g. 'things' and 'sand' in your example) and/or words which would leave unaccounted-for characters (e.g. 'thing' and 'and' in your example, leaving the intermediate 's' stranded), or do you want a strict partition of the string into juxtaposed (not overlapping) words with no residue? The latter would be the simplest problem, because the degrees of freedom drop sharply -- essentially to trying to determine a sequence of "breaking points", each between two adjacent characters, that would split the string into words. If that's the case, do you need every possible valid split (i.e. do you need both "thing sand" and "things and"), or will any single valid split do, or are there criteria that your split must optimize? 
If you clarify all of these issues, it may be possible to give you more help! A: Norvig wrote a great article on how to write a spell checker in Python. http://norvig.com/spell-correct.html It will give you a good idea on how to detect words. (i.e. just go testing each group of chars until you get a valid word... beware that to be deterministic you would need to do the reverse. Test the whole string, and then go removing chars at the end. That way you get composite words as they are intended... or not intended, who knows. Spaces have a reason :) After that, it's basic CS 101. A: This will find whether or not a candidate can be formed out of the letters in a given word; it's assumed that word (but not candidate) is sorted prior to the call. >>> def match(candidate, word): def next_char(w): for ch in sorted(w): yield ch g = next_char(word) for cl in sorted(candidate): try: wl = g.next() except StopIteration: return False if wl > cl: return False while wl < cl: try: wl = g.next() except StopIteration: return False if wl > cl: return False return True >>> word = sorted("supernatural") >>> dictionary = ["super", "natural", "perturb", "rant", "arrant"] >>> for candidate in dictionary: print candidate, match(candidate, word) super True natural True perturb False rant True arrant True When I load the BSD words file (235,000+ words) and run this using plenipotentiary as my word, I get about 2500 hits in under a second and a half. If you're going to run many searches, it's a good idea to remove the sort from next_char and build a dictionary keyed on the sorted version of each word (joined back into a string, so the key is hashable) - d = dict([(''.join(sorted(word)), word) for word in dictionary]) and produce results via logic like this: result = [d[k] for k in d.keys() if match(k, word)] so that you don't have to perform 250,000+ sorts over and over again. A: Well here is my idea: Find all possible strings containing 1 character from the original Find all possible strings containing 2 characters from the original ... Same thing up to the length of the original string Then add them all up and match them against your dictionary A: What if you break it up into syllables and then use those to construct words to compare to your dictionary? It's still a brute force method, but it would surely speed things up a bit. A: I looked at a powerset implementation. Too many possibilities. Try encoding your string and all candidates from your dictionary and see if the candidate from the dictionary could be made from the candidate string. That is, do the letters in the dictionary word appear no more frequently than in your candidate string? from __future__ import with_statement import collections def word_dict(word): d = collections.defaultdict(int) for c in word: d[c] += 1 return d def compare_word_dict(dict_cand, cand): return all(dict_cand[k] <= cand[k] for k in dict_cand) def try_word(candidate): s = word_dict(candidate) dictionary_file = r"h:\words\WORDs(3).txt" i = 0 with open(dictionary_file) as f: for line in f: line = line.strip() dc = word_dict(line) if compare_word_dict(dc,s): print line i += 1 return i print try_word("thingsandstuff") I get 670 words with my dictionary. Seems a bit small. Takes about 3 seconds on 200k words in the dictionary. This works for Python 2.5 and above because of the addition of collections.defaultdict. In Python 3.1, collections.Counter was added, which works like collections.defaultdict(int). 
A: Take a look at this post, it addresses the same problem, both in Python and OCaml, with a solution based on normalizing the strings first instead of doing brute-force search. By the way, the automatic translation removes the indenting, so to get the working Python code you should look at the untranslated Spanish version (which in fact is much more proper than the crappy English generated by Google translator)... Edit: Re-reading your question, I understand now that you could want only those words that are unscrambled, right? If so, you don't need to do all the stuff described in the post, just: maxwordlength = max(map(len, english_words)) for i in range(len(word)): for j in range(i+1, min(maxwordlength + i, len(word)) + 1): if word[i:j] in english_words: print word[i:j] The complexity should be O(N) now, given that the size of the largest word in English is finite. A: Code: def all_substrings(val): return [val[start:end] for start in range(len(val)) for end in range(start + 1, len(val) + 1)] val = "thingsandstuff" for result in all_substrings(val): print result Output: t th thi thin thing [ ... ] tu tuf u uf f A: If you know the full dictionary well in advance, and it doesn't change between searches, you might try the following... Index the dictionary. Each word (e.g. "hello") becomes a (key, data) tuple such as ("ehllo", "hello"). In the key, the letters are sorted alphabetically. Good index data structures would include a trie (aka digital tree) or a ternary tree. A conventional binary tree could be made to work. A hash table wouldn't work. I'm going to assume a trie or a ternary tree. Note - the data structure must act as a multimap (you probably need a linked list of matched data items at each key-matched leaf). Before evaluating for a particular string, sort the letters in the string. Then do a key search in the data structure. BUT a simple key search will only find words that use all letters from the original string. Basically, a trie search matches one letter at a time, choosing a child node based on the next letter of the input. However, at each step, we have an extra option - skip a letter of the sorted input string and remain at the same node (i.e., don't use that letter in the output). The obvious thing to do is a depth-first backtracking search. Note that both our keys and our input have the letters sorted, so we can probably optimise the search a bit. A ternary tree version follows similar principles to a trie, but instead of multiple children per node, you basically have next-letter binary tree logic built into the structure. The search can be easily adapted - the options for each next-letter search being to match the next input letter or to discard it. When you get runs of the same letter in the sorted input string, the 'skip a letter' option in the search should be 'skip to the next different letter'. Otherwise, you end up doing duplicate searches (during backtracking) - e.g. there are 3 different ways to use two out of three duplicate letters - you could ignore the first, the second, or the third duplicate - and you only need to check one case. Optimisations might keep extra details in the data structure nodes in order to help prune the search tree. E.g. keeping the maximum length of word tails in the subtree allows you to check whether the remaining part of your search string contains enough letters to bother continuing the search. Time complexity isn't immediately obvious due to the backtracking.
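Condensing the bounded-substring idea from the answers above into one runnable sketch (it assumes a newline-separated word list at /usr/share/dict/words and skips words shorter than three letters):

# Look up every substring of bounded length in a dictionary set.
words = set(w.strip() for w in open('/usr/share/dict/words') if len(w.strip()) > 2)
maxlen = max(len(w) for w in words)
s = 'thingsandstuff'
found = set()
for i in range(len(s)):
    for j in range(i + 3, min(i + maxlen, len(s)) + 1):
        if s[i:j] in words:
            found.add(s[i:j])
print sorted(found)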
How to sort all possible words out of a string?
I'm wondering how to proceed with this task; take this string for example: "thingsandstuff". How could I generate all possible strings out of this string so as to look them up individually against an English dictionary? The goal is to find valid English words in a string that does not contain spaces. Thanks
[ "Another possibility is going the other way around, instead of generating substrings from a string, grab all your candidate words and match them against your string. \nYou can store as a result (start,end) pairs of indexes of the words in the original string.\nThis could be easily done in regex, or, if not performant enough, with str.find(), or if even not performant enough with more complex dictionary index schemes or smarts about what can and can't match (see Gregg's answer for ideas)\nHere you have a sample of what I mean\ncandidate = \"thingsandstuffmydarlingpretty\"\nwords = file('/usr/share/dict/words').read()\n#This generator calls find twice, it should be rewritten as a normal loop\ngenerate_matches = ((candidate.find(word),word) for word in words.split('\\n')\n if candidate.find(word) != -1 and word != '')\n\nfor match in generate_matches:\n print \"Found %s at (%d,%d)\" % (match[1],match[0],match[0] + len(match[1]))\n\n", "People talk about this as though the Order of the problem is the number of possible substrings. This is incorrect. The correct order of this problem is:\nO( min ( number-of-words-in-dict, number-of-substring-combinations) * comparison_cost)\nSo, another approach to the problem, to build on Vinko, is to index the heck out of the dictionary (e.g., for each work in the dict, determine the letters in that word, the length of the word, etc). This can speed things up dramatically. As an example, we know that target \"queen\" can't match \"zebra\" (no z's!) ( or any word containing z,r,b,a...), and the like. Also, store each word in the dict as a sorted string ('zebra' -> 'aberz') and do \"string in string\" (longest common substring) matching. 'eenuq' vs 'abarz' (no match). \n(Note: I am assuming that the order of the letters in the original word don't matter -- it's a 'bag of letters', if they do, then adjust accordingly) \nIf you have lots of words to compare at once, the comparison cost can be lowered further using something like KMP. \n(Also, I dove right in, and made some assumptions that Alex didn't, so if they're wrong, then shut my mouth!)\n", "The brute force approach, i.e. checking every substring, is computationally unfeasible even for strings of middling lengths (a string of length N has O(N**2) substrings). Unless there is a pretty tight bound on the length of strings you care about, that doesn't scale well.\nTo make things more feasible, more knowledge is required -- are you interested in overlapping words (eg 'things' and 'sand' in your example) and/or words which would leave unaccounted for characters (eg 'thing' and 'and' in your example, leaving the intermediate 's' stranded), or you do you want a strict partition of the string into juxtaposed (not overlapping) words with no residue?\nThe latter would be the simplest problem, because the degrees of freedom drop sharply -- essentially to trying to determine a sequence of \"breaking points\", each between two adjacent characters, that would split the string into words. If that's the case, do you need every possible valid split (i.e. do you need both \"thing sand\" and \"things and\"), or will any single valid split do, or are there criteria that your split must optimize?\nIf you clarify all of these issues it may be possible to give you more help!\n", "norving wrote a great article on how to write a spell checker in python.\nhttp://norvig.com/spell-correct.html\nit will give you an good idea on how to detect words. (i.e. just go testing each group of chars until you get a valid word... 
beware that to be deterministic you would need to do the reverse. Test the whole string, and then go removing chars at the end. That way you get composite words as they are intended... or not intended, who knows. Spaces have a reason :)\nAfter that, it's basic CS 101.\n", "This will find whether or not a candidate can be formed out of the letters in a given word; it's assumed that word (but not candidate) is sorted prior to the call.\n>>> def match(candidate, word):\n\n def next_char(w):\n for ch in sorted(w):\n yield ch\n\n g = next_char(word)\n for cl in sorted(candidate):\n try:\n wl = g.next()\n except StopIteration:\n return False\n if wl > cl:\n return False\n while wl < cl:\n try:\n wl = g.next()\n except StopIteration:\n return False\n if wl > cl:\n return False\n return True\n\n>>> word = sorted(\"supernatural\")\n>>> dictionary = [\"super\", \"natural\", \"perturb\", \"rant\", \"arrant\"]\n>>> for candidate in dictionary:\n print candidate, match(candidate, word)\n\nsuper True\nnatural True\nperturb False\nrant True\narrant True\n\nWhen I load the BSD words file (235,000+ words) and run this using plenipotentiary as my word, I get about 2500 hits in under a second and a half. \nIf you're going to run many searches, it's a good idea to remove the sort from next_char and build a dictionary keyed on the sorted version of each word (joined back into a string, so the key is hashable) -\nd = dict([(''.join(sorted(word)), word) for word in dictionary])\n\nand produce results via logic like this:\nresult = [d[k] for k in d.keys() if match(k, word)]\n\nso that you don't have to perform 250,000+ sorts over and over again.\n", "Well here is my idea: \n\nFind all possible strings containing 1 character from the original\nFind all possible strings containing 2 characters from the original\n... Same thing up to the length of the original string\n\nThen add them all up and match them against your dictionary\n", "What if you break it up into syllables and then use those to construct words to compare to your dictionary? It's still a brute force method, but it would surely speed things up a bit.\n", "I looked at a powerset implementation. Too many possibilities. \nTry encoding your string and all candidates from your dictionary and see if the candidate from the dictionary could be made from the candidate string. That is, do the letters in the dictionary word appear no more frequently than in your candidate string?\nfrom __future__ import with_statement\nimport collections\n\ndef word_dict(word):\n d = collections.defaultdict(int)\n for c in word:\n d[c] += 1\n return d\n\ndef compare_word_dict(dict_cand, cand):\n return all(dict_cand[k] <= cand[k] for k in dict_cand)\n\n\ndef try_word(candidate):\n s = word_dict(candidate)\n dictionary_file = r\"h:\\words\\WORDs(3).txt\"\n i = 0\n with open(dictionary_file) as f:\n for line in f:\n line = line.strip()\n dc = word_dict(line)\n if compare_word_dict(dc,s):\n print line\n i += 1\n return i\n\n\nprint try_word(\"thingsandstuff\")\n\nI get 670 words with my dictionary. Seems a bit small. Takes about 3 seconds on 200k words in the dictionary.\nThis works for Python 2.5 and above because of the addition of collections.defaultdict. 
In python 3.1, collections.Counter was added that works like collections.defaultdict(int).\n", "Take a look at this post, it addresses the same problem, both in Python and OCaml, with a solution based on normalizing the strings first instead of doing brute-force search.\nBy the way, the automatic translation removes the indenting, so to get the working Python code you should look at the untranslated Spanish version (which in fact is much more proper than the crappy English generated by Google translator)...\nEdit:\nRe-reading your question, I understand now that you could want only those words that are unscrambled, right? If so, you don't need to do all the stuff described in the post, just:\nmaxwordlength = max(map(len, english_words))\nfor i in range(len(word)):\n for j in range(i+1, min(maxwordlength+i, len(word)) + 1):\n if word[i:j] in english_words:\n print word[i:j]\n\nThe complexity should be O(N) now, given that the size of the largest word in English is finite.\n", "Code:\ndef all_substrings(val):\n return [val[start:end] for start in range(len(val)) for end in range(start + 1, len(val) + 1)]\n\nval = \"thingsandstuff\"\nfor result in all_substrings(val):\n print result\n\nOutput:\nt\nth\nthi\nthin\nthing\n\n[ ... ]\ntu\ntuf\nu\nuf\nf\n\n", "If you know the full dictionary well in advance, and it doesn't change between searches, you might try the following...\nIndex the dictionary. Each word (e.g. \"hello\") becomes a (key, data) tuple such as (\"ehllo\", \"hello\"). In the key, the letters are sorted alphabetically.\nGood index data structures would include a trie (aka digital tree) or a ternary tree. A conventional binary tree could be made to work. A hash table wouldn't work. I'm going to assume a trie or a ternary tree. Note - the data structure must act as a multimap (you probably need a linked list of matched data items at each key-matched leaf).\nBefore evaluating for a particular string, sort the letters in the string. Then do a key search in the data structure. BUT a simple key search will only find words that use all letters from the original string.\nBasically, a trie search matches one letter at a time, choosing a child node based on the next letter of the input. However, at each step, we have an extra option - skip a letter of the sorted input string and remain at the same node (ie, don't use that letter in the output). The obvious thing to do is a depth-first backtracking search. Note that both our keys and our input have the letters sorted, so we can probably optimise the search a bit.\nA ternary tree version follows similar principles to a trie, but instead of multiple children per node, you basically have next-letter binary tree logic built into the structure. The search can be easily adapted - the options for each next-letter search being match the next input letter or discard it.\nWhen you get runs of the same letter in the sorted input string, the 'skip a letter' option in the search should be 'skip to the next different letter'. Otherwise, you end up doing duplicate searches (during backtracking) - e.g. there are 3 different ways to use two out of three duplicate letters - you could ignore the first, the second, or the third duplicate - and you only need to check one case.\nOptimisations might have extra details in the data structure nodes in order to help prune the search tree. E.g. 
keeping the maximum length of word tails in the subtree allows you to check whether the remaining part of your search string contains enough letters to bother continuing the search.\nTime complexity isn't immediately obvious due to the backtracking.\n" ]
[ 5, 5, 3, 2, 1, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001538589_python.txt
Q: Changing document's attributes in Python's xml.dom.minidom I created a xml.dom.minidom.Document. How do I give it attributes so that when I do .toprettyxml() it will show like this: <?xml version="1.0" encoding="iso-8859-2"?> A: .toprettyxml() has an encoding keyword argument: Document.toprettyxml(self, indent='\t', newl='\n', encoding=None)
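For illustration, here is a minimal sketch of how that keyword argument can be used (the element name "root" is just a placeholder, not from the original question); passing encoding is what makes toprettyxml() emit the declaration shown above:
from xml.dom.minidom import Document

doc = Document()
doc.appendChild(doc.createElement("root"))  # placeholder element

# The encoding argument adds <?xml version="1.0" encoding="iso-8859-2"?>
print doc.toprettyxml(indent="  ", encoding="iso-8859-2")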
Changing document's attributes in Python's xml.dom.minidom
I created a xml.dom.minidom.Document. How do I give it attributes so that when I do .toprettyxml() it will show like this: <?xml version="1.0" encoding="iso-8859-2"?>
[ ".toprettyxml() has an encoding keyword argument:\nDocument.toprettyxml(self, indent='\\t', newl='\\n', encoding=None)\n\n" ]
[ 1 ]
[]
[]
[ "python", "xml" ]
stackoverflow_0001539555_python_xml.txt
Q: Weird behaviour with two Trac instances under Apache + mod_wsgi I am trying to configure two Trac instances in order to access them via browser, each one with a different url: http://trac.domain.com/trac1 http://trac.domain.com/trac2 First time I access them Apache response is fine, I get the first Trac with /trac1, then the second one in /trac2. But when I access /trac1 again, it keeps giving me the contents of the second Trac (/trac2). If I touch the .wsgi config file for the first one (call it trac1.wsgi), then request /trac1 again with the browser, I get the expected contents again. The opposite case works the same: access /trac2, then /trac1, then /trac2 keeps giving the contents of /trac1 until I touch trac2.wsgi... So it seems Python, mod_wsgi and/or Apache are caching results or something. I am not a sysadmin and can't get further on this issue. The .wsgi files and http.conf for Apache: trac1.wsgi: import os os.environ['TRAC_ENV'] = '/home/myuser/trac/trac1' os.environ['PYTHON_EGG_CACHE'] = '/tmp/' import trac.web.main application = trac.web.main.dispatch_request trac2.wsgi: import os os.environ['TRAC_ENV'] = '/home/myuser/trac/trac2' os.environ['PYTHON_EGG_CACHE'] = '/tmp/' import trac.web.main application = trac.web.main.dispatch_request http.conf: <VirtualHost trac.domain.com:8080> WSGIScriptAlias /trac1 /home/myuser/public_html/trac1/apache/trac1.wsgi WSGIScriptAlias /trac2 /home/myuser/public_html/trac2/apache/trac2.wsgi <Directory /home/myuser/public_html/trac1/apache> WSGIApplicationGroup %{GLOBAL} Order deny,allow Allow from all </Directory> <Location "/trac1"> AuthType Basic AuthName "Trac1 Trac Auth" AuthUserFile /home/myuser/public_html/trac1/apache/trac1.htpasswd Require valid-user </Location> <Directory /home/myuser/public_html/trac2/apache> WSGIApplicationGroup %{GLOBAL} Order deny,allow Allow from all </Directory> <Location "/trac2"> AuthType Basic AuthName "Trac2 Trac Auth" AuthUserFile /home/myuser/public_html/trac2/apache/trac2.htpasswd Require valid-user </Location> </VirtualHost> If anybody suggests an alternative configuration or whatever, it will be welcome as well. thanks! Hector A: I found the solution myself, it was on the Trac documentation ("important note" section), and I did not even take a look, fool of me :P http://trac.edgewall.org/wiki/TracModWSGI A: Move your egg cache to separate dirs trac1.wsgi: import os os.environ['TRAC_ENV'] = '/home/myuser/trac/trac1' os.environ['PYTHON_EGG_CACHE'] = '/tmp/trac1' import trac.web.main application = trac.web.main.dispatch_request trac2.wsgi: import os os.environ['TRAC_ENV'] = '/home/myuser/trac/trac2' os.environ['PYTHON_EGG_CACHE'] = '/tmp/trac2' import trac.web.main application = trac.web.main.dispatch_request
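For reference, the "important note" on that Trac page boils down to not putting the environment path in os.environ, which both scripts share once they run in the same interpreter (hence the cross-talk), but setting it per request instead. A rough, untested sketch of what each .wsgi file then looks like, based on that documentation:
import os
os.environ['PYTHON_EGG_CACHE'] = '/tmp/trac1'

import trac.web.main

def application(environ, start_response):
    # Set the env path on the request rather than process-wide,
    # so the two instances can no longer overwrite each other.
    environ['trac.env_path'] = '/home/myuser/trac/trac1'
    return trac.web.main.dispatch_request(environ, start_response)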
Weird behaviour with two Trac instances under Apache + mod_wsgi
I am trying to configure two Trac instances in order to access them via browser, each one with a different url: http://trac.domain.com/trac1 http://trac.domain.com/trac2 First time I access them Apache response is fine, I get the first Trac with /trac1, then the second one in /trac2. But when I access /trac1 again, it keeps giving me the contents of the second Trac (/trac2). If I touch the .wsgi config file for the first one (call it trac1.wsgi), then request /trac1 again with the browser, I get the expected contents again. The opposite case works the same: access /trac2, then /trac1, then /trac2 keeps giving the contents of /trac1 until I touch trac2.wsgi... So it seems Python, mod_wsgi and/or Apache are caching results or something. I am not a sysadmin and can't get further on this issue. The .wsgi files and http.conf for Apache: trac1.wsgi: import os os.environ['TRAC_ENV'] = '/home/myuser/trac/trac1' os.environ['PYTHON_EGG_CACHE'] = '/tmp/' import trac.web.main application = trac.web.main.dispatch_request trac2.wsgi: import os os.environ['TRAC_ENV'] = '/home/myuser/trac/trac2' os.environ['PYTHON_EGG_CACHE'] = '/tmp/' import trac.web.main application = trac.web.main.dispatch_request http.conf: <VirtualHost trac.domain.com:8080> WSGIScriptAlias /trac1 /home/myuser/public_html/trac1/apache/trac1.wsgi WSGIScriptAlias /trac2 /home/myuser/public_html/trac2/apache/trac2.wsgi <Directory /home/myuser/public_html/trac1/apache> WSGIApplicationGroup %{GLOBAL} Order deny,allow Allow from all </Directory> <Location "/trac1"> AuthType Basic AuthName "Trac1 Trac Auth" AuthUserFile /home/myuser/public_html/trac1/apache/trac1.htpasswd Require valid-user </Location> <Directory /home/myuser/public_html/trac2/apache> WSGIApplicationGroup %{GLOBAL} Order deny,allow Allow from all </Directory> <Location "/trac2"> AuthType Basic AuthName "Trac2 Trac Auth" AuthUserFile /home/myuser/public_html/trac2/apache/trac2.htpasswd Require valid-user </Location> </VirtualHost> If anybody suggests an alternative configuration or whatever, it will be welcome as well. thanks! Hector
[ "I found the solution myself, it was on the Trac documentation (\"important note\" section), and I did not event take look, fool of me :P\nhttp://trac.edgewall.org/wiki/TracModWSGI\n", "Move your egg cache to separate dirs\ntrac1.wsgi:\nimport os\n\nos.environ['TRAC_ENV'] = '/home/myuser/trac/trac1' \nos.environ['PYTHON_EGG_CACHE'] = '/tmp/trac1'\n\nimport trac.web.main \napplication = trac.web.main.dispatch_request\n\ntrac2.wsgi:\nimport os\n\nos.environ['TRAC_ENV'] = '/home/myuser/trac/trac2'\nos.environ['PYTHON_EGG_CACHE'] = '/tmp/trac2'\n\nimport trac.web.main\napplication = trac.web.main.dispatch_request\n\n" ]
[ 2, 0 ]
[]
[]
[ "apache", "mod_wsgi", "python", "trac" ]
stackoverflow_0001539203_apache_mod_wsgi_python_trac.txt
Q: How to redirect and then display errors with Google App Engine I'm working on a Google App Engine project that collects stories submitted by users. This is how I handle submission errors in the post method of my Request Handler: # get the title and content using self.request.get() errors = [] if not title: errors.append("Please enter a title.") if not content: errors.append("Please enter a story.") if not errors: # create the story, save it to the database # redirect to the story's page else: # pass the title and/or content to a template # pass the error message(s) to a template # the same template that displays the submission form is used here The problem: since my form sends posts to example.com/createstory.do -- if there are errors I end up redisplaying the form page at that address. What I want to happen: redirect the user back to the page where they submitted the form: example.com/Share, while at the same time displaying the error messages and redisplaying the submitted form data. What's the easiest way to do this? I know I could just have /Share handle both get and post requests, but I'm looking for a solution that I can use even when doing that wouldn't be an option. A: You could redirect to /Share including the errors in a GET variable in the URL, if you're absolutely sure you need to use separate URLs. Of course, this makes your URL ugly since it now has all of the error information in it. Another option would be to redirect back to Share and have the errors stored in cookies or in a session variable. Combining either of these with client-side form validation in Javascript before submitting in the first place so the ugly solution isn't hit in most cases might be the best option. A: There's no 'clean' way of doing this, because you cannot redirect POST requests (and have the redirected request also make a POST). The standard - and cleanest - approach is to have the same URL display the form when fetched with GET, and accept the data when fetched with POST.
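As a rough sketch of the standard single-URL approach the second answer recommends (the handler class, template file name, and redirect target here are invented for illustration, not taken from the original project):
from google.appengine.ext import webapp
from google.appengine.ext.webapp import template

class ShareHandler(webapp.RequestHandler):
    def get(self):
        # First visit: show an empty form
        self.response.out.write(template.render('share.html', {}))

    def post(self):
        title = self.request.get('title')
        content = self.request.get('content')
        errors = []
        if not title:
            errors.append("Please enter a title.")
        if not content:
            errors.append("Please enter a story.")
        if errors:
            # Re-render the same form with the errors and submitted values
            ctx = {'errors': errors, 'title': title, 'content': content}
            self.response.out.write(template.render('share.html', ctx))
        else:
            # save the story, then send the user on
            self.redirect('/story')  # hypothetical story page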
How to redirect and then display errors with Google App Engine
I'm working on a Google App Engine project that collects stories submitted by users. This is how I handle submission errors in the post method of my Request Handler: # get the title and content using self.request.get() errors = [] if not title: errors.append("Please enter a title.") if not content: errors.append("Please enter a story.") if not errors: # create the story, save it to the database # redirect to the story's page else: # pass the title and/or content to a template # pass the error message(s) to a template # the same template that displays the submission form is used here The problem: since my form sends posts to example.com/createstory.do -- if there are errors I end up redisplaying the form page at that address. What I want to happen: redirect the user back to the page where they submitted the form: example.com/Share, while at the same time displaying the error messages and redisplaying the submitted form data. What's the easiest way to do this? I know I could just have /Share handle both get and post requests, but I'm looking for a solution that I can use even when doing that wouldn't be an option.
[ "You could redirect to /Share including the errors in a GET variable in the URL, if you're absolutely sure you need to use separate URLs. Of course, this makes your URL ugly since it now has all of the error information in it.\nAnother option would be to redirect back to Share and have the errors stored in cookies or in a session variable.\nCombining either of these with client-side form validation in Javascript before submitting in the first place so the ugly solution isn't hit in most cases might be the best option.\n", "There's no 'clean' way of doing this, because you cannot redirect POST requests (and have the redirected request also make a POST). The standard - and cleanest - approach is to have the same URL display the form when fetched with GET, and accept the data when fetched with POST.\n" ]
[ 1, 1 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0001538287_google_app_engine_python.txt
Q: Why isn't getopt working if sys.argv is passed fully? If I'm using this with getopt: import getopt import sys opts,args = getopt.getopt(sys.argv,"a:bc") print opts print args opts will be empty. No tuples will be created. If however, I'll use sys.argv[1:], everything works as expected. I don't understand why that is. Anyone care to explain? A: The first element of sys.argv (sys.argv[0]) is the name of the script currently being executed. Because this script name is (likely) not a valid argument (and probably doesn't begin with a - or -- anyway), getopt does not recognize it as an argument. Due to the nature of how getopt works, when it sees something that is not a command-line flag (something that does not begin with - or --), it stops processing command-line options (and puts the rest of the arguments into args), because it assumes the rest of the arguments are items that will be handled by the program (such as filenames or other "required" arguments). A: It's by design. Recall that sys.argv[0] is the running program name, and getopt doesn't want it. From the docs: Parses command line options and parameter list. args is the argument list to be parsed, without the leading reference to the running program. Typically, this means sys.argv[1:]. options is the string of option letters that the script wants to recognize, with options that require an argument followed by a colon (':'; i.e., the same format that Unix getopt() uses). http://docs.python.org/library/getopt.html
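In other words, the fix is simply to slice off the program name before handing the list to getopt:
import getopt
import sys

# sys.argv[0] is the script name; getopt should only see the real arguments
opts, args = getopt.getopt(sys.argv[1:], "a:bc")
print opts
print args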
Why isn't getopt working if sys.argv is passed fully?
If I'm using this with getopt: import getopt import sys opts,args = getopt.getopt(sys.argv,"a:bc") print opts print args opts will be empty. No tuples will be created. If however, I'll use sys.argv[1:], everything works as expected. I don't understand why that is. Anyone care to explain?
[ "The first element of sys.argv (sys.argv[0]) is the name of the script currently being executed. Because this script name is (likely) not a valid argument (and probably doesn't begin with a - or -- anyway), getopt does not recognize it as an argument. Due to the nature of how getopt works, when it sees something that is not a command-line flag (something that does not begin with - or --), it stops processing command-line options (and puts the rest of the arguments into args), because it assumes the rest of the arguments are items that will be handled by the program (such as filenames or other \"required\" arguments).\n", "It's by design. Recall that sys.argv[0] is the running program name, and getopt doesn't want it.\nFrom the docs:\n\nParses command line options and\n parameter list. args is the argument\n list to be parsed, without the leading\n reference to the running program.\n Typically, this means sys.argv[1:].\n options is the string of option\n letters that the script wants to\n recognize, with options that require\n an argument followed by a colon (':';\n i.e., the same format that Unix\n getopt() uses).\n\nhttp://docs.python.org/library/getopt.html\n" ]
[ 16, 7 ]
[]
[]
[ "getopt", "python" ]
stackoverflow_0001540365_getopt_python.txt
Q: Should I Start With Python 3.0? Recently I decided to expand my programming horizons and learn the python programming language. While I have used python a little bit for classes in college and for a project or two at work I am by no means an expert. My question is as follows: should I bother with the 2.x releases or should I jump straight to 3.0? I am leaning towards 3.0 since I will be programming applications more for personal/learning use, but I wanted to see if there were any good arguments against it before I began. A: Absolutely not 3.0 - 3.1 is out and is stabler, better, faster in every respect; it makes absolutely no sense to start with 3.0 at this time, if you want to take up the 3 series it should on all accounts be 3.1. As for 2.6 vs 3.1, 3.1 is a better language (especially because some cruft was removed that had accumulated over the years but has to stay in 2.* for backwards compatibility) but all the rest of the ecosystem (from extensions to tools, from books to collective knowledge) is still very much in favor of 2.6 -- if you don't care about being able to use (e.g.) certain GUIs or scientific extensions, deploy on App Engine, script Windows with COM, have a spiffy third party IDE, and so on, 3.1 is advisable, but if you care about such things, still 2.* for now. A: Use 3.1 Why? 1) Because as long as everyone is still using 2.6, the libraries will have fewer reasons to migrate to 3.1. As long as those libraries are not ported to 3.1, you are stuck with the choice of either not using the strengths of 3.1, or only doing the jobs half way by using the hackish solution of using a back-ported feature set. Be a forward thinker and help push Python forward. 2) If you learn and use 3.1 now, you won't have to relearn it later when the mass port is complete. I know some people say you won't have to learn much, but why learn the old crap at all? Python itself is moving towards 3.1, the libraries will move toward 3.1, and it sucks to have to play catch-up and relearn a language you are already using. 3) 3.1 is all around a better language, more stable and more consistent than 2.6... this is normal. The lessons learned from 2.6 were all poured into 3.1 to make it better. It is a process called PROGRESS. This is why nobody still uses Windows 3.1. It is the way things move FORWARD. Why else do you think they went to the trouble of back porting a feature set in the first place? 4) If you are learning Python, and learn 2.6, then by the time you are really comfortable with the language, the ports will be out, and you will have to learn the libraries, and the language all over again. If you start with 3.1, then by the time you are comfortable with the language, the ports will be out, and then you can learn the libraries that you are interested in. It is a smoother process. 5) To be a better developer. If you have to learn and relearn the same things, your understanding will not be very deep. By learning this language, and its libraries only once, you will have more time to work with them rather than relearning syntax. This allows you to understand them better. If you are really missing some pieces by forgoing the libraries? WRITE THEM. You will probably not need an entire library, and can usually write only those pieces that you need, and develop tools for yourself. This, again, helps you understand the language better, and more deeply. A: Short answer: Start with Python 2.6. Why: Programming is more fun and useful when you can leverage the work of others. 
This means using 3rd party libraries often. Many of the popular libraries for Python don't have 3.x support yet. PIL and NumPy/SciPy come to mind. My favorite interpreter, ipython, also doesn't work with 3.0 yet. Many unit testing frameworks and web frameworks are also not on 3.0 yet. So if you start out in 3.x many doors will be closed to you, at least until 3.x porting takes on steam. There are admittedly a lot of nice features in Python 3.x, but some of them have been backported to 2.6 and some more will make it into 2.7. So stick with 2.6 for now, and re-evaluate 3.x in a year's time or so. A: I think that you will be better served going straight into 3.0. Unless you have a legacy codebase to contend with, there are very few advantages to learning the 2.xx ways of doing things. In the Python world (as in most others, really), releases do tend to take a while to migrate down to all of the subprojects, but if you ever find the need to transition back to 2.xx, I don't think you'll find relearning things to be particularly painful. A: You should go with the latest release of any programming language you learn unless you have a specific reason not to. Since you don't have an existing project that won't work with Python 3.0, you should feel free to use the newest version. A: Use python 3.1, Luke. A: Python 3.1 should not be used until other libraries have caught up with support for it. You should use 2.6 now. It has several 3.x features back-ported to it, so that migrating to 3.x won't be difficult later on, and you won't learn obsolete practices. A: The good news is that it's not really that tough to learn both Python 2.x and 3.x. You can install the latest 2.x version as the version registered with the system to run Python scripts by default, but also install the latest 3.x version to explicitly kick off when you want to. That's what I have on my Windows Vista system. Then, the key document for learning the differences between the 2.x and 3.x versions is: http://docs.python.org/3.1/whatsnew/3.0.html If you read Python learning materials out there which are based on 2.x and also refer to that "What’s New In Python 3.0" link above, you'll get an understanding of how things changed. Also see the other whats new docs, like for the differences between 3.0 and 3.1, but the link above is the main one to understand the 2.x vs. 3.x changes.
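As a small taste of the differences these answers refer to, two of the most visible 2.x-to-3.x changes (both covered in the "What's New" document linked above) are print becoming a function and / becoming true division:
# Python 2.6
print "hello"      # print is a statement
print 7 / 2        # -> 3   (integer division)

# Python 3.1
print("hello")     # print is a function
print(7 / 2)       # -> 3.5 (true division; 7 // 2 gives the old behaviour)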
Should I Start With Python 3.0?
Recently I decided to expand my programming horizons and learn the python programming language. While I have used python a little bit for classes in college and for a project or two at work I am by no means an expert. My question is as follows: should I bother with the 2.x releases or should I jump straight to 3.0? I am leaning towards 3.0 since I will be programming applications more for personal/learning use, but I wanted to see if there were any good arguments against it before I began.
[ "Absolutely not 3.0 - 3.1 is out and is stabler, better, faster in every respect; it makes absolutely no sense to start with 3.0 at this time, if you want to take up the 3 series it should on all accounts be 3.1.\nAs for 2.6 vs 3.1, 3.1 is a better language (especially because some cruft was removed that had accumulated over the years but has to stay in 2.* for backwards compatibility) but all the rest of the ecosystem (from extensions to tools, from books to collective knowledge) is still very much in favor of 2.6 -- if you don't care about being able to use (e.g.) certain GUIs or scientific extensions, deploy on App Engine, script Windows with COM, have a spiffy third party IDE, and so on, 3.1 is advisable, but if you care about such things, still 2.* for now.\n", "Use 3.1\nWhy?\n1) Because as long as everyone is still using 2.6, the libraries will have less reasons to migrate to 3.1. As long as those libraries are not ported to 3.1, you are stuck with the choice of either not using the strengths of 3.1, or only doing the jobs half way by using the hackish solution of using a back-ported feature set. Be a forward thinker and help push Python forward.\n2) If you learn and use 3.1 now, you wont have to relearn it later when the mass port is complete. I know some people say you wont have to learn much, but why learn the old crap at all? Python itself is moving towards 3.1, the libraries will move toward 3.1, and it sucks to have to play catch-up and relearn a language you are already using.\n3) 3.1 is all around a better language, more stable and more consistent than 2.6... this is normal. The lessons learned from 2.6 were all poured into 3.1 to make it better. It is a process called PROGRESS. This is why nobody still uses Windows 3.1. It is the way things move FORWARD. Why else do you think they went to the trouble of back porting a feature set in the first place?\n4) If you are learning Python, and learn 2.6, then by the time you are really comfortable with the language, the ports will be out, and you will have to learn the libraries, and the language all over again. If you start with 3.1, then by the time you are comfortable with the language, the ports will be out, and then you can learn the libraries that you are interested in. It is a smoother process.\n5) To be a better developer. If you have to learn and relearn the same things, your understanding will not be very deep. By learning this language, and its libraries only once, you will have more time to work with them rather than relearning syntax. This allows you to understand them better. If you are really missing some pieces by forgoing on the libraries? WRITE THEM. You will probably not need an entire library, and can usually write only those pieces that you need, and develop tools for yourself. This, again, helps you understand the language better, and more deeply.\n", "Short answer: Start with Python 2.6.\nWhy: Programming is more fun and useful when you can leverage the work of others. This means using 3rd party libraries often. Many of the popular libraries for Python don't have 3.x support yet. PIL and NumPy/SciPy come to mind. My favorite interpreter, ipython, also doesn't work with 3.0 yet. Many unit testing frameworks and web frameworks are also not on 3.0 yet.\nSo if you start out in 3.x many doors will be closed to you, at least until 3.x porting takes on steam. There are admittedly a lot of nice features in Python 3.x, but some of them have been backported to 2.6 and some more will make it into 2.7. 
So stick with 2.6 for now, and re-evaluate 3.x in a year's time or so.\n", "I think that you will be better served going straight into 3.0. Unless you have a legacy codebase to contend with, there are very few advantages to learning the 2.xx ways of doing things.\nIn the Python world (as in most others, really), releases do tend to take a while to migrate down to all of the subprojects, but if you ever find the need to transition back to 2.xx, I don't think you'll find relearning things to be particularly painful.\n", "You should go with the latest release of any programming language you learn unless you have a specific reason not to. Since you don't have an existing project that won't work with Python 3.0, you should feel free to use the newest version.\n", "Use python 3.1, Luke.\n", "Python 3.1 should not be used until other libraries have caught up with support for it.\nYou should use 2.6 now. It has several 3.x features back-ported to it, so that migrating to 3.x won't be difficult later on, and you won't learn obsolete practices.\n", "The good news is that it's not really that tough to learn both Python 2.x and 3.x. You can install the latest 2.x version as the version registered with the system to run Python scripts by default, but also install the latest 3.x version to explicitly kick off when you want to. That's what I have on my Windows Vista system.\nThen, the key document for learning the differences between the 2.x and 3.x versions is:\nhttp://docs.python.org/3.1/whatsnew/3.0.html\nIf you read Python learning materials out there which are based on 2.x and also refer to that \"What’s New In Python 3.0\" link above, you'll get an understanding of how things changed. Also see the other whats new docs, like for the differences between 3.0 and 3.1, but the link above is the main one to understand the 2.x vs. 3.x changes.\n" ]
[ 20, 8, 7, 4, 3, 2, 2, 2 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0001222782_python_python_3.x.txt
Q: the fastest way to create checksum for large files in python I need to transfer large files across the network and need to create checksums for them on an hourly basis, so the speed of generating the checksum is critical for me. Somehow I can't make zlib.crc32 and zlib.adler32 work with files larger than 4GB on a Windows XP Pro 64-bit machine; I suspect I've hit the 32-bit limitation here? Using hashlib.md5 I could get a result, but the problem is the speed: it takes roughly 5 minutes to generate an md5 for a 4.8GB file, and Task Manager shows that the process is using one core only. My questions are: is there a way to make CRC work on a large file? I prefer to use CRC over md5. If not, is there a way to speed up md5.hexdigest()/md5.digest, or in this case any hashlib hexdigest/digest? Maybe splitting it into a multi-threaded process? How do I do that? PS: I'm working on something similar to an "Asset Management" system, kind of like svn, but the assets consist of large compressed image files. The files have tiny incremental changes. The hashing/checksum is needed for detecting changes and error detection. A: It's an algorithm selection problem, rather than a library/language selection problem! There appear to be two points to consider primarily: how much would the disk I/O affect the overall performance? what is the expected reliability of the error detection feature? Apparently, the answer to the second question is something like 'some false negative allowed' since the reliability of any 32 bits hash, relative to a 4Gb message, even in a moderately noisy channel, is not going to be virtually absolute. Assuming that I/O can be improved through multithreading, we may choose a hash that doesn't require a sequential scan of the complete message. Instead we can maybe work the file in parallel, hashing individual sections and either combining the hash values or appending them, to form a longer, more reliable error detection device. The next step could be to formalize this handling of files as ordered sections, and to transmit them as such (to be re-glued together at the recipient's end). This approach, along with additional information about the way the files are produced (for ex. they may be exclusively modified by append, like log files), may even allow to limit the amount of hash calculation required. The added complexity of this approach needs to be weighed against the desire to have zippy fast CRC calculation. Side note: Adler32 is not limited to message sizes below a particular threshold. It may just be a limit of the zlib API. (BTW, the reference I found about zlib.adler32 used a buffer, and well... this approach is to be avoided in the context of our huge messages, in favor of streamed processes: read a little from file, calculate, repeat..) A: First, there is nothing inherent in any of the CRC algorithms that would prevent them working on an arbitrary length of data (however, a particular implementation might well impose a limit). However, in a file syncing application, that probably doesn't matter, as you may not want to hash the entire file when it gets large, just chunks anyway. If you hash the entire file, and the hashes at each end differ, you have to copy the entire file. If you hash fixed sized chunks, then you only have to copy the chunks whose hash has changed. If most of the changes to the files are localized (e.g. database) then this will likely require much less copying (and it's easier to spread per chunk calculations across multiple cores).
As for the hash algorithm itself, the basic tradeoff is speed vs. lack of collisions (two different data chunks yielding the same hash). CRC-32 is fast, but with only 2^32 unique values, collisions may be seen. MD5 is much slower, but has 2^128 unique values, so collisions will almost never be seen (but are still theoretically possible). The larger hashes (SHA1, SHA256, ...) have even more unique values, but are slower still: I doubt you need them: you're worried about accidental collisions, unlike digital signature applications, where you're worried about deliberately (maliciously) engineered collisions. It sounds like you're trying to do something very similar to what the rsync utility does. Can you just use rsync? A: You might be hitting a size limit for files in XP. The 64-bit gives you more addressing space (removing the 2GB (or so) addressing space per application), but probably does nothing for the file size problem. A: You cannot possibly use more than one core to calculate MD5 hash of a large file because of the very nature of MD5: it expects a message to be broken up in chunks and fed into hashing function in strict sequence. However, you can use one thread to read a file into an internal queue, and then calculate hash in a separate thread so that reading and hashing can overlap. I do not think though that this will give you any significant performance boost. The fact that it takes so long to process a big file might be due to "unbuffered" reads. Try reading, say, 16 Kb at a time and then feed the content in chunks to hashing function. A: md5 itself can't be run in parallel. However you can md5 the file in sections (in parallel) and then take an md5 of the list of hashes. However that assumes that the hashing is not IO-limited, which I would suspect it is. As Anton Gogolev suggests - make sure that you're reading the file efficiently (in large power-of-2 chunks). Once you've done that, make sure the file isn't fragmented. Also a hash such as sha256 should be selected rather than md5 for new projects. Are the zlib checksums much faster than md5 for 4Gb files? A: Did you try the crc-generator module?
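To make the "read in chunks" advice above concrete, here is a minimal sketch of feeding a large file to hashlib one block at a time (the 1 MB block size is an arbitrary choice):
import hashlib

def md5_for_file(path, block_size=2 ** 20):
    md5 = hashlib.md5()
    f = open(path, 'rb')
    try:
        while True:
            data = f.read(block_size)  # never holds more than one block in memory
            if not data:
                break
            md5.update(data)
    finally:
        f.close()
    return md5.hexdigest()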
the fastest way to create checksum for large files in python
I need to transfer large files across the network and need to create checksums for them on an hourly basis, so the speed of generating the checksum is critical for me. Somehow I can't make zlib.crc32 and zlib.adler32 work with files larger than 4GB on a Windows XP Pro 64-bit machine; I suspect I've hit the 32-bit limitation here? Using hashlib.md5 I could get a result, but the problem is the speed: it takes roughly 5 minutes to generate an md5 for a 4.8GB file, and Task Manager shows that the process is using one core only. My questions are: is there a way to make CRC work on a large file? I prefer to use CRC over md5. If not, is there a way to speed up md5.hexdigest()/md5.digest, or in this case any hashlib hexdigest/digest? Maybe splitting it into a multi-threaded process? How do I do that? PS: I'm working on something similar to an "Asset Management" system, kind of like svn, but the assets consist of large compressed image files. The files have tiny incremental changes. The hashing/checksum is needed for detecting changes and error detection.
[ "It's an algorithm selection problem, rather than a library/language selection problem!\nThere appears to be two points to consider primarily:\n\nhow much would the disk I/O affect the overall performance?\nwhat is the expected reliability of the error detection feature?\n\nApparently, the answer to the second question is something like 'some false negative allowed' since the reliability of any 32 bits hash, relative to a 4Gb message, even in a moderately noisy channel, is not going to be virtually absolute.\nAssuming that I/O can be improved through multithreading, we may choose a hash that doesn't require a sequential scan of the complete message. Instead we can maybe work the file in parallel, hashing individual sections and either combining the hash values or appending them, to form a longer, more reliable error detection device.\nThe next step could be to formalize this handling of files as ordered sections, and to transmit them as such (to be re-glued together at the recipient's end). This approach, along additional information about the way the files are produced (for ex. they may be exclusively modified by append, like log files), may even allow to limit the amount of hash calculation required. The added complexity of this approach needs to weighted against the desire to have zippy fast CRC calculation.\nSide note: Alder32 is not limited to message sizes below a particular threshold. It may just be a limit of the zlib API. (BTW, the reference I found about zlib.adler32 used a buffer, and well... this approach is to be avoided in the context of our huge messages, in favor of streamed processes: read a little from file, calculate, repeat..)\n", "First, there is nothing inherent in any of the CRC algorithms that would prevent them working on an arbitrary length of data (however, a particular implementation might well impose a limit).\nHowever, in a file syncing application, that probably doesn't matter, as you may not want to hash the entire file when it gets large, just chunks anyway. If you hash the entire file, and the hashes at each end differ, you have to copy the entire file. If you hash fixed sized chunks, then you only have to copy the chunks whose hash has changed. If most of the changes to the files are localized (e.g. database) then this will likely require much less copying (and it' easier to spread per chunk calculations across multiple cores).\nAs for the hash algorithm itself, the basic tradeoff is speed vs. lack of collisions (two different data chunks yielding the same hash). CRC-32 is fast, but with only 2^32 unique values, collisions may be seen. MD5 is much slower, but has 2^128 unique values, so collisions will almost never be seen (but are still theoretically possible). The larger hashes (SHA1, SHA256, ...) have even more unique values, but are slower still: I doubt you need them: you're worried about accidental collisions, unlike digital signature applications, where you're worried about deliberately (malicously) engineered collisions.\nIt sounds like you're trying to do something very similar to what the rsync utility does. Can you just use rsync?\n", "You might be hitting a size limit for files in XP. 
The 64-bit gives you more addressing space (removing the 2GB (or so) addressing space per application), but probably does nothing for the file size problem.\n", "You cannot possibly use more than one core to calculate MD5 hash of a large file because of the very nature of MD5: it expects a message to be broken up in chunks and fed into hashing function in strict sequence. However, you can use one thread to read a file into an internal queue, and then calculate hash in a separate thread so that reading and hashing can overlap. I do not think though that this will give you any significant performance boost.\nThe fact that it takes so long to process a big file might be due to \"unbuffered\" reads. Try reading, say, 16 Kb at a time and then feed the content in chunks to hashing function.\n", "md5 itself can't be run in parallel. However you can md5 the file in sections (in parallel) and then take an md5 of the list of hashes.\nHowever that assumes that the hashing is not IO-limited, which I would suspect it is. As Anton Gogolev suggests - make sure that you're reading the file efficiently (in large power-of-2 chunks). Once you've done that, make sure the file isn't fragmented.\nAlso a hash such as sha256 should be selected rather than md5 for new projects.\nAre the zlib checksums much faster than md5 for 4Gb files?\n", "Did you try the crc-generator module?\n" ]
[ 5, 3, 2, 1, 1, 0 ]
[]
[]
[ "crc32", "hashlib", "md5", "multithreading", "python" ]
stackoverflow_0001532720_crc32_hashlib_md5_multithreading_python.txt
Q: Python 'object' type and inheritance In Python I can define a class 'foo' in the following ways: class foo: pass or class foo(object): pass What is the difference? I have tried to use the function issubclass(foo, object) to see if it returns True for both class definitions. It does not. IDLE 2.6.3 >>> class foo: pass >>> issubclass(foo, object) False >>> class foo(object): pass >>> issubclass(foo, object) True Thanks. A: Inheriting from object makes a class a "new-style class". There is a discussion of old-style vs. new-style here: What is the difference between old style and new style classes in Python? As @CrazyJugglerDrummer commented below, in Python 3 all classes are "new-style" classes. In Python 3, the following two declarations are exactly equivalent: class A(object): pass class A: pass A: The first creates an "old-style" class, which is deprecated and has been removed in Python 3. You should not use it in Python 2.x. See the documentation for the Python data model. A: Old style and new style objects... they have slightly different behaviours, for example in the constructors, or in the method resolution order in multiple inheritance.
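The difference is easy to see in a Python 2.x interpreter session:
>>> class Old: pass

>>> class New(object): pass

>>> type(Old())   # old-style: instances all share the type 'instance'
<type 'instance'>
>>> type(New())   # new-style: the instance's type is its class
<class '__main__.New'>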
Python 'object' type and inheritance
In Python I can define a class 'foo' in the following ways: class foo: pass or class foo(object): pass What is the difference? I have tried to use the function issubclass(foo, object) to see if it returns True for both class definitions. It does not. IDLE 2.6.3 >>> class foo: pass >>> issubclass(foo, object) False >>> class foo(object): pass >>> issubclass(foo, object) True Thanks.
[ "Inheriting from object makes a class a \"new-style class\". There is a discussion of old-style vs. new-style here: What is the difference between old style and new style classes in Python?\nAs @CrazyJugglerDrummer commented below, in Python 3 all classes are \"new-style\" classes. In Python 3, the following two declarations are exactly equivalent:\nclass A(object):\n pass\n\nclass A:\n pass\n\n", "The first creates an \"old-style\" class, which are deprecated and have been removed in Python 3. You should not use it in Python 2.x. See the documentation for the Python data model.\n", "Old style and new style objects... they have sightly different behaviours, for example in the constructors, or in the method resolution order in multiple inheritance.\n" ]
[ 9, 2, 2 ]
[]
[]
[ "inheritance", "object", "python" ]
stackoverflow_0001540975_inheritance_object_python.txt
Q: python regular expression to parse div tags A question about Python regular expressions. I would like to match a div block like <div class="leftTail"><ul class="hotnews">any news stuff</ul></div> I was thinking of a pattern like p = re.compile(r'<div\s+class=\"leftTail\">[^(div)]+</div>') but it doesn't seem to work properly. With another pattern p = re.compile(r'<div\s+class=\"leftTail\">[\W|\w]+</div>') I got much more than I expected; it grabs everything up to the last tag in the file. Thanks for any help A: You might want to consider graduating to an actual HTML parser. I suggest you give Beautiful Soup a try. There are many crazy ways for HTML to be formatted, and the regular expressions may not work correctly all the time, even if you write them correctly. A: Don't use regular expressions to parse XML or HTML. You'll never be able to get it to work correctly for nested divs. A: try this: p = re.compile(r'<div\s+class=\"leftTail\">.*?</div>')
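To make the Beautiful Soup suggestion concrete, a short sketch using the BeautifulSoup 3 API that was current at the time:
from BeautifulSoup import BeautifulSoup

html = '<div class="leftTail"><ul class="hotnews">any news stuff</ul></div>'
soup = BeautifulSoup(html)
div = soup.find('div', {'class': 'leftTail'})
print div                   # the whole block, nested tags included
print div.renderContents()  # just the contents of the div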
python regular expression to parse div tags
A question about Python regular expressions. I would like to match a div block like <div class="leftTail"><ul class="hotnews">any news stuff</ul></div> I was thinking of a pattern like p = re.compile(r'<div\s+class=\"leftTail\">[^(div)]+</div>') but it doesn't seem to work properly. With another pattern p = re.compile(r'<div\s+class=\"leftTail\">[\W|\w]+</div>') I got much more than I expected; it grabs everything up to the last tag in the file. Thanks for any help
[ "You might want to consider graduating to an actual HTML parser. I suggest you give Beautiful Soup a try. There are many crazy ways for HTML to be formatted, and the regular expressions may not work correctly all the time, even if you write them correctly.\n", "Don't use regular expressions to parse XML or HTML. You'll never be able to get it to work correctly for nested divs.\n", "try this:\np = re.compile(r'<div\\s+class=\\\"leftTail\\\">.*?</div>')\n\n" ]
[ 12, 4, 4 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0001541281_python_regex.txt
Q: Difference between GET and FILTER in Django model layer What is the difference? Please explain it in layman's terms, with examples. Thanks! A: I don't know if you really need an example, it's quite easy: if you know it's one object that matches your query, use get. It will fail if it's more than one. otherwise use filter, which gives you a list of objects. To be more precise: MyTable.objects.get(id=x).whatever gives you the whatever property of your object. get() raises MultipleObjectsReturned if more than one object was found. The MultipleObjectsReturned exception is an attribute of the model class. get() raises a DoesNotExist exception if an object wasn’t found for the given parameters. This exception is also an attribute of the model class. MyTable.objects.filter(somecolumn=x) is not only usable as a list, but you can also query it again, something like MyTable.objects.filter(somecolumn=x).order_by('date'). The reason is that it's not actually a list, but a query object. You can iterate through it like through a list: for obj in MyTable.objects.filter(somecolumn=x)
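In code, with the answer's hypothetical MyTable model, the contrast looks like this:
# get(): exactly one object, or an exception
try:
    row = MyTable.objects.get(id=1)
except MyTable.DoesNotExist:
    row = None

# filter(): a queryset (possibly empty) that can be chained further
rows = MyTable.objects.filter(somecolumn='x').order_by('date')
for obj in rows:
    print obj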
Difference between GET and FILTER in Django model layer
What is the difference? Please explain it in layman's terms, with examples. Thanks!
[ "I don't know if you really need an example, it's quite easy:\n\nif you know it's one object that matches your query, use get. It will fail if it's more than one.\notherwise use filter, which gives you a list of objects.\n\nTo be more precise:\n\nMyTable.objects.get(id=x).whatever gives you the whatever property of your object.\n\nget() raises MultipleObjectsReturned if more than one object was found.\nThe MultipleObjectsReturned exception is an attribute of the model\nclass.\nget() raises a DoesNotExist exception if an object wasn’t found for the\ngiven parameters. This exception is also an attribute of the model class.\n\nMyTable.objects.filter(somecolumn=x) is not only usable as a list, but you can also query it again, something like MyTable.objects.filter(somecolumn=x).order_by('date'). \nThe reason is that it's not actually a list, but a query object. You can iterate through it like through a list: for obj in MyTable.objects.filter(somecolumn=x)\n\n" ]
[ 48 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001541249_django_python.txt
Q: How can I make images so that appengine doesn't make transparent into black on resize? I'm on the google appengine, and trying to resize images. I do : from google.appengine.api import images image = images.resize(contents, w, h) And for some images I get a nice transparent resize, and others I get a black background. How can I keep the transparency for all images? Original : http://www.stdicon.com/g-flat/application/pgp-encrypted Black : http://www.stdicon.com/g-flat/application/pgp-encrypted?size=64 Original : http://www.stdicon.com/gartoon/application/rtf Black : http://www.stdicon.com/gartoon/application/rtf?size=64 Original : http://www.stdicon.com/nuvola/application/x-debian-package Transparent : http://www.stdicon.com/nuvola/application/x-debian-package?size=64 A: Article on this problem: http://doesnotvalidate.com/2009/resizing-transparent-images-with-django-pil/ Google-code patch: http://code.google.com/p/sorl-thumbnail/issues/detail?id=56 A: Is this on the dev appserver, or in production? There's a known bug on the dev appserver that turns transparent to black when compositing, but it should run fine in production. A: With PIL you have to convert your image to RGBA like this : im = im.convert("RGBA") If you want a better implementation, you can read the sorl-thumbnail code. It makes good use of PIL.
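If you end up doing the resize with PIL instead (the route the linked sorl-thumbnail patch takes), the relevant steps look roughly like this; the file names are made up:
import Image  # classic PIL import

im = Image.open('icon.png')
if im.mode != 'RGBA':
    im = im.convert('RGBA')       # keep the alpha channel through the resize
im.thumbnail((64, 64), Image.ANTIALIAS)
im.save('icon-64.png', 'PNG')     # PNG preserves transparency; JPEG would not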
How can I make images so that appengine doesn't make transparent into black on resize?
I'm on the google appengine, and trying to resize images. I do : from google.appengine.api import images image = images.resize(contents, w, h) And for some images I get a nice transparent resize, and others I get a black background. How can I keep the transparency for all images? Original : http://www.stdicon.com/g-flat/application/pgp-encrypted Black : http://www.stdicon.com/g-flat/application/pgp-encrypted?size=64 Original : http://www.stdicon.com/gartoon/application/rtf Black : http://www.stdicon.com/gartoon/application/rtf?size=64 Original : http://www.stdicon.com/nuvola/application/x-debian-package Transparent : http://www.stdicon.com/nuvola/application/x-debian-package?size=64
[ "Article on this problem: http://doesnotvalidate.com/2009/resizing-transparent-images-with-django-pil/\nGoogle-code patch: http://code.google.com/p/sorl-thumbnail/issues/detail?id=56\n", "Is this on the dev appserver, or in production? There's a known bug on the dev appserver that turns transparent to black when compositing, but it should run fine in production.\n", "With PIL you have to convert your image in RGBA like this :\nim = im.convert(\"RGBA\")\n\nIf you want a better implementation, you can read the sorl-thumbnail code. It makes a good usage of PIL.\n" ]
[ 0, 0, 0 ]
[]
[]
[ "google_app_engine", "python", "python_imaging_library", "resize" ]
stackoverflow_0001476514_google_app_engine_python_python_imaging_library_resize.txt
Q: Debugging swig extensions for Python Is there any other way to debug swig extensions except for doing gdb python stuff.py ? I have wrapped the legacy library libkdtree++ and followed all the swig related memory management points (borrowed ref vs. own ref, etc.). But still, I am not sure whether my binding is eating up memory. It would be helpful to be able to just debug step by step each exposed function: starting from Python, then going via the C glue binding into C space, and returning back. Is there already such a possibility? A: gdb 7.0 supports python scripting. It might help you in this particular case. A: Well, for debugging, you use a debugger ;-). When debugging, it may be a good idea to configure Python with '--with-pydebug' and recompile. It does additional checks then. If you are looking for memory leaks, there is a simple way: Run your code over and over in a loop, and look for Python's memory consumption.
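A crude sketch of that loop-and-watch approach on Linux (call_into_wrapper is a placeholder for whatever binding function you suspect, and reading /proc is Linux-specific):
def rss_kb():
    # resident set size of this process, in kB, read from /proc
    for line in open('/proc/self/status'):
        if line.startswith('VmRSS:'):
            return int(line.split()[1])

for i in xrange(100000):
    call_into_wrapper()          # hypothetical call into the SWIG binding
    if i % 10000 == 0:
        print i, rss_kb(), 'kB'  # steadily growing numbers suggest a leak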
Debugging swig extensions for Python
Is there any other way to debug swig extensions except for doing gdb python stuff.py ? I have wrapped the legacy library libkdtree++ and followed all the swig related memory management points (borrowed ref vs. own ref, etc.). But still, I am not sure whether my binding is eating up memory. It would be helpful to be able to just debug step by step each exposed function: starting from Python, then going via the C glue binding into C space, and returning back. Is there already such a possibility?
[ "gdb 7.0 supports python scripting. It might help you in this particular case.\n", "Well, for debugging, you use a debugger ;-).\nWhen debugging, it may be a good idea to configure Python with '--with-pydebug' and recompile. It does additional checks then.\nIf you are looking for memory leaks, there is a simple way:\nRun your code over and over in a loop, and look for Python's memory consumption.\n" ]
[ 3, 1 ]
[]
[]
[ "debugging", "python", "swig" ]
stackoverflow_0000828843_debugging_python_swig.txt
Q: Django SELECT statement, Order by Suppose I have 2 models. The 2nd model has a one-to-one relationship with the first model. I'd like to select information from the first model, but ORDER BY the 2nd model. How can I do that? class Content(models.Model): link = models.TextField(blank=True) title = models.TextField(blank=True) is_channel = models.BooleanField(default=0, db_index=True) class Score(models.Model): content = models.OneToOneField(Content, primary_key=True) counter = models.IntegerField(default=0) A: I think you can do: Content.objects.filter(...).order_by('score__counter') More generally, when models have a relationship, you can select, order, and filter by fields on the "other" model using the relationshipName__fieldName pseudo attribute of the model which you are selecting on.
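With the models above, that looks like, for example:
# contents that are not channels, highest counter first;
# 'score' is the reverse name of the OneToOneField on Score
Content.objects.filter(is_channel=False).order_by('-score__counter')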
Django SELECT statement, Order by
Suppose I have 2 models. The 2nd model has a one-to-one relationship with the first model. I'd like to select information from the first model, but ORDER BY the 2nd model. How can I do that? class Content(models.Model): link = models.TextField(blank=True) title = models.TextField(blank=True) is_channel = models.BooleanField(default=0, db_index=True) class Score(models.Model): content = models.OneToOneField(Content, primary_key=True) counter = models.IntegerField(default=0)
[ "I think you can do:\nContent.objects.filter(...).order_by('score__counter')\n\nMore generally, when models have a relationship, you can select, order, and filter by fields on the \"other\" model using the relationshipName__fieldName pseudo attribute of the model which you are selecting on.\n" ]
[ 7 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001541376_django_python.txt
Q: Building universal binaries on Mac - Forcing single compiler child process Cheers, at company, we're creating a port of our games, and we need to compile PythonOgre, a wrapper of Ogre3d for Python. This is an enormous chunk of code, particularly the generated wrapper code. We have a Mac Mini with 1GB RAM. We've built i386 version. Since we have only 1GB of RAM, we've forced the build system to use only one core; second process running at the same time took a nice trip into the virtual memory. Now, we need to produce universal binaries, because publisher insists on supporting PPC. To facilitate building of universal binaries, we've modified the CFLAGS and CXXFLAGS to include -arch i386 -arch ppc when compiling wrapper code (Ogre3d itself already seems to be an universal binary). However, Apple has decided to use both cores when creating universal binary, and this has caused the system to roll over and die. (Well, crawl at 0.9-1.4% CPU usage, anyways.) While ordinarily we would appreciate this, on a 1GB Mac Mini, this completely blocks our development. Aside from getting a new build machine, giving up on PPC support and producing a PPC-only build, the only recourse we have is to block GCC's spawning of second simultaneous process. How would we go about that? A: I've checked the source of Apple's GCC driver (the one that supports those -arch options and runs the children processes), and there's no option or environment variable that you can choose. The only options I see left to you are: download the Apple driver (e.g. from there; end of the page) and modify the file driverdriver.c so that processes are launched sequentially (rather than in parallel) do separate i386 and powerpc builds, and join the final build objects (executables, shared libraries, etc.) using lipo I hope this helps! A: Will just turning off one core do? Apparently, installing the Apple Developer Tools (XCode et al.) gives you a Processor system preference pane, allowing you to turn off one core. I was trying to test this, but the XCode install failed for some reason... EDIT: I know this was a while ago, but I just thought of something. Why not build with -march=i386 and -march=ppc separately and then combine the binaries using lipo?
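For the record, the "build each architecture separately, then join" route from the answers boils down to shell steps like these (the paths, module name, and reliance on distutils honouring CFLAGS are all assumptions here, not from the original thread):
# one single-architecture build per pass, into separate build dirs
env CFLAGS="-arch i386" CXXFLAGS="-arch i386" python setup.py build --build-base=build-i386
env CFLAGS="-arch ppc" CXXFLAGS="-arch ppc" python setup.py build --build-base=build-ppc

# glue the two thin binaries into one universal binary
lipo -create build-i386/_ogre.so build-ppc/_ogre.so -output _ogre.so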
Building universal binaries on Mac - Forcing single compiler child process
Cheers, at company, we're creating a port of our games, and we need to compile PythonOgre, a wrapper of Ogre3d for Python. This is an enormous chunk of code, particularly the generated wrapper code. We have a Mac Mini with 1GB RAM. We've built i386 version. Since we have only 1GB of RAM, we've forced the build system to use only one core; second process running at the same time took a nice trip into the virtual memory. Now, we need to produce universal binaries, because publisher insists on supporting PPC. To facilitate building of universal binaries, we've modified the CFLAGS and CXXFLAGS to include -arch i386 -arch ppc when compiling wrapper code (Ogre3d itself already seems to be an universal binary). However, Apple has decided to use both cores when creating universal binary, and this has caused the system to roll over and die. (Well, crawl at 0.9-1.4% CPU usage, anyways.) While ordinarily we would appreciate this, on a 1GB Mac Mini, this completely blocks our development. Aside from getting a new build machine, giving up on PPC support and producing a PPC-only build, the only recourse we have is to block GCC's spawning of second simultaneous process. How would we go about that?
[ "I've checked the source of Apple's GCC driver (the one that supports those -arch options and runs the children processes), and there's no option or environment variable that you can choose.\nThe only options I see left to you are:\n\ndownload the Apple driver (e.g. from there; end of the page) and modify the file driverdriver.c so that processes are launched sequentially (rather than in parallel)\ndo separate i386 and powerpc builds, and join the final build objects (executables, shared libraries, etc.) using lipo\n\nI hope this helps!\n", "Will just turning off one core do? Apparently, installing the Apple Developer Tools (XCode et al.) gives you a Processor system preference pane, allowing you to turn off one core. I was trying to test this, but the XCode install failed for some reason...\nEDIT: I know this was a while ago, but I just thought of something. Why not build with -march=i386 and -march=ppc separately and then combine the binaries using lipo?\n" ]
[ 2, 1 ]
[]
[]
[ "gcc", "macos", "multicore", "python" ]
stackoverflow_0001536897_gcc_macos_multicore_python.txt
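As a concrete illustration of the separate-builds-then-lipo route from the first answer, here is a minimal Python sketch; the make targets, output paths, and module names are hypothetical placeholders, not taken from the PythonOgre build system:

import subprocess

def build_one_arch(arch, output):
    # Build sequentially for a single architecture (hypothetical make targets).
    flags = '-arch %s' % arch
    subprocess.check_call(['make', 'clean'])
    subprocess.check_call(['make', 'CFLAGS=%s' % flags, 'CXXFLAGS=%s' % flags])
    subprocess.check_call(['cp', 'build/_ogre.so', output])

build_one_arch('i386', '_ogre-i386.so')
build_one_arch('ppc', '_ogre-ppc.so')
# lipo joins the thin per-arch binaries into one universal binary
subprocess.check_call(['lipo', '-create', '_ogre-i386.so', '_ogre-ppc.so', '-output', '_ogre.so'])

Because each make runs to completion before the next starts, only one compiler child is resident at a time, which keeps the 1GB machine out of swap.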
Q: Why Does List Argument in Python Behave Like ByRef? This may be for most languages in general, but I'm not sure. I'm a beginner at Python and have always worked on copies of lists in C# and VB. But in Python whenever I pass a list as an argument and enumerate through using a "for i in range," and then change the values of the list argument, the changes actually affect the original list. I thought Python was supposed to pass arguments by value by default so that once the function is finished I still have the original values from before I called the function. What am I missing? Thanks! A: Python does pass arguments by value but the value you are receiving is a copy of the reference (incidentally this is the exact same way that C#, VB.NET, and Java behave as well). This is the important thing to remember: Objects are not passed by reference - object references are passed by value. Since you have a copy of the reference, any operation on what that reference points to will be just as if you were holding the original reference itself. A: Python -- just like Java does for anything but primitive scalars, and like C# and VB.NET do for the default kind of parameters as opposed to boxed types and out / ref parms -- passes "by object reference" (search for that phrase here -- it's the phrase Guido, Python's architect and creator, uses to explain this argument-passing concept). Every name is a reference to some object; passing a name (or any other expression) as an argument is just creating yet another reference to the same object (which the function body can access through the parameter's name). ((There's no such thing as "a reference to a name": there are names, which are one kind of reference to objects, and objects -- period)). When you're passing a mutable object, i.e. one which has mutating methods (like for example a list), the called function can mutate the object by calling, directly or indirectly, its mutating methods. ((By "indirectly", I mean "through operators" -- for example: somelist[len(somelist):] = [whatever] is exactly identical to somelist.append(whatever).)) When you want to pass a list into a function, but do not want the function to be able to mutate that list in any way, you must pass a copy of the list instead of the original -- just like in Java, C#, VB.NET. Be very clear about the distinction between rebinding a name and mutating an object. Rebinding the name ("barename", that is -- qualified-names are different;-) only affects that name -- NOT any object whatsoever. For example: def f1(alist): alist = [23] def f2(alist): alist[:] = [23] Can you spot the difference between these two functions? One is rebinding the barename alist -- without any effect whatsoever on anything. The other is mutating (altering, changing, ...) the list object it received as its argument -- by setting its content to be a one-item list with an int as its sole item. Completely, utterly different things!!! A: To add to Andrew's answer, you need to explicitly make a copy of a list if you want to retain the original. You can do this using the copy module, or just do something like a = [1,2] b = list(a) Since copying objects usually implies a performance hit, I find it helpful to explicitly use the copy module in my larger projects. That way, I can easily find all the places where I'm going to use a bunch more memory.
Why Does List Argument in Python Behave Like ByRef?
This may be for most languages in general, but I'm not sure. I'm a beginner at Python and have always worked on copies of lists in C# and VB. But in Python whenever I pass a list as an argument and enumerate through using a "for i in range," and then change the values of the list argument, the changes actually affect the original list. I thought Python was supposed to pass arguments by value by default so that once the function is finished I still have the original values from before I called the function. What am I missing? Thanks!
[ "Python does pass arguments by value but the value you are receiving is a copy of the reference (incidentally this is the exact same way that C#, VB.NET, and Java behave as well).\nThis is the important thing to remember:\n\nObjects are not passed by reference - object references are passed by value.\n\nSince you have a copy of the reference, any operation on what that reference points to will be just as if you were holding the original reference itself.\n", "Python -- just like Java does for anything but primitive scalars, and like C# and VB.NET do for the default kind parameters as opposed to boxed types and out / ref parms -- passes \"by object reference\" (search for that phrase here -- it's how Guido, Python's architect and creator, uses to explain this argument-passing concept).\nEvery name is a reference to some object; passing a name (or any other expression) as an argument is just creating yet another reference to the same object (which the function body can access through the parameter's name). ((There's no such thing as \"a reference to a name\": there are names, which are one kind of reference to objects, and object -- period)).\nWhen you're passing a mutable object, i.e. one which has mutating methods (like for example a list), the called function can mutate the object by calling, directly or indirectly, its mutating methods. ((By \"indirectly\", I mean \"through operators\" -- for example:\nsomelist[len(somelist):] = [whatever]\n\nis exactly identical to somelist.append(whatever).))\nWhen you want to pass a list into a function, but do not want the function to be able to mutate that list in any way, you must pass a copy of the list instead of the original -- just like in Java, C#, VB.NET.\nBe very clear about the distinction between rebinding a name and mutating an object. Rebinding the name (\"barename\", that is -- qualified-names are different;-) only affects that name -- NOT any object whatsoever. For example:\ndef f1(alist):\n alist = [23]\n\ndef f2(alist):\n alist[:] = [23]\n\nCan you spot the difference between these two functions? One is rebinding the barename alist -- without any effect whatsoever on anything. The other is mutating (altering, changing, ...) the list object it received as its argument -- by setting its content to be a one-item list with an int as its sole item. Completely, utterly different things!!!\n", "To add to Andrew's answer, you need to explicitly make a copy of a list if you want to retain the original. You can do this using the copy module, or just do something like \na = [1,2]\nb = list(a)\n\nSince copying objects usually implies a performance hit, I find it helpful to explicitly use the copy module in my larger projects. That way, I can easily find all the places where I'm going to use a bunch more memory.\n" ]
[ 8, 4, 0 ]
[]
[]
[ "argument_passing", "object_reference", "python", "reference" ]
stackoverflow_0001541620_argument_passing_object_reference_python_reference.txt
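The rebinding-versus-mutation distinction in the answers can be checked directly; a short sketch (plain Python 2, assuming nothing beyond the answers' own f1/f2 idea):

def rebind(alist):
    alist = [23]        # rebinds the local name only; the caller's list is untouched

def mutate(alist):
    alist[:] = [23]     # mutates the shared list object in place

data = [1, 2, 3]
rebind(data)
print data              # [1, 2, 3] -- unchanged
mutate(data)
print data              # [23] -- the caller sees the mutation
safe_copy = list(data)  # pass a copy like this when the original must survive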
Q: Why is subprocess.Popen not waiting until the child process terminates? I'm having a problem with Python's subprocess.Popen method. Here's a test script which demonstrates the problem. It's being run on a Linux box. #!/usr/bin/env python import subprocess import time def run(cmd): p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE) return p ### START MAIN # copy some rows from a source table to a destination table # note that the destination table is empty when this script is run cmd = 'mysql -u ve --skip-column-names --batch --execute="insert into destination (select * from source limit 100000)" test' run(cmd) # check to see how many rows exist in the destination table cmd = 'mysql -u ve --skip-column-names --batch --execute="select count(*) from destination" test' process = run(cmd) count = (int(process.communicate()[0][:-1])) # if subprocess.Popen() waited for the child to terminate then count should be # greater than 0 if count > 0: print "success: " + str(count) else: print "failure: " + str(count) time.sleep(5) # find out how many rows exist in the destination table after sleeping process = run(cmd) count = (int(process.communicate()[0][:-1])) print "after sleeping the count is " + str(count) Usually the output from this script is: success: 100000 but sometimes it's failure: 0 after sleeping the count is 100000 Note that in the failure case, the select immediately after the insert shows 0 rows but after sleeping for 5 seconds a second select correctly shows a row count of 100000. My conclusion is that one of the following is true: subprocess.Popen is not waiting for the child process to terminate - This seems to contradict the documentation the mysql insert is not atomic - my understanding of mysql seems to indicate insert is atomic the select is not seeing the correct row count right away - according to a friend who knows mysql better than I do this should not happen either What am I missing? FYI, I'm aware that this is a hacky way of interacting with mysql from Python and MySQLdb would likely not have this problem but I'm curious as to why this method does not work. A: subprocess.Popen, when instantiated, runs the program. It does not, however, wait for it -- it fires it off in the background as if you'd typed cmd & in a shell. So, in the code above, you've essentially defined a race condition -- if the inserts can finish in time, it will appear normal, but if not you get the unexpected output. You are not waiting for your first run()'d PID to finish, you are simply returning its Popen instance and continuing. I'm not sure how this behavior contradicts the documentation, because there are some very clear methods on Popen that seem to indicate it is not waited for, like: Popen.wait() Wait for child process to terminate. Set and return returncode attribute. I do agree, however, that the documentation for this module could be improved. To wait for the program to finish, I'd recommend using subprocess's convenience method, subprocess.call, or using communicate on a Popen object (for the case when you need stdout). You are already doing this for your second call. 
### START MAIN # copy some rows from a source table to a destination table # note that the destination table is empty when this script is run cmd = 'mysql -u ve --skip-column-names --batch --execute="insert into destination (select * from source limit 100000)" test' subprocess.call(cmd, shell=True) # check to see how many rows exist in the destination table cmd = 'mysql -u ve --skip-column-names --batch --execute="select count(*) from destination" test' process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE) try: count = (int(process.communicate()[0][:-1])) except: count = 0 Additionally, in most cases, you do not need to run the command in a shell. This is one of those cases, but you'll have to rewrite your command as a sequence. Doing it that way also allows you to avoid traditional shell injection and worry less about quoting, like so: prog = ["mysql", "-u", "ve", "--execute", 'insert into foo values ("snargle", 2)'] subprocess.call(prog) This will even work, and will not inject as you'd expect: prog = ["printf", "%s", "<", "/etc/passwd"] subprocess.call(prog) Try it interactively. You avoid the possibilities of shell injection, particularly if you're accepting user input. I suspect you're using the less-awesome string method of communicating with subprocess because you ran into trouble getting the sequences to work :^) A: If you don't absolutely need to use subprocess and popen, it's usually simpler to use os.system. For example, for quick scripts I often do something like this: import os run = os.system #convenience alias result = run('mysql -u ve --execute="select * from wherever" test') Unlike popen, os.system DOES wait for your process to return before moving on to the next stage of your script. More info on it in the docs: http://docs.python.org/library/os.html#os.system A: Dude, why did you think subprocess.Popen returned an object with a wait method, unless it was because the waiting was NOT implicit, intrinsic, immediate, and inevitable, as you appear to surmise...?! The most common reason to spawn a subprocess is NOT to immediately wait for it to finish, but rather to let it proceed (e.g. on another core, or at worst by time-slicing -- that's the operating system's -- and hardware's -- lookout) at the same time as the parent process continues; when the parent process needs to wait for the subprocess to be finished, it will obviously call wait on the object returned by the original subprocess.Popen call.
Why is subprocess.Popen not waiting until the child process terminates?
I'm having a problem with Python's subprocess.Popen method. Here's a test script which demonstrates the problem. It's being run on a Linux box. #!/usr/bin/env python import subprocess import time def run(cmd): p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE) return p ### START MAIN # copy some rows from a source table to a destination table # note that the destination table is empty when this script is run cmd = 'mysql -u ve --skip-column-names --batch --execute="insert into destination (select * from source limit 100000)" test' run(cmd) # check to see how many rows exist in the destination table cmd = 'mysql -u ve --skip-column-names --batch --execute="select count(*) from destination" test' process = run(cmd) count = (int(process.communicate()[0][:-1])) # if subprocess.Popen() waited for the child to terminate then count should be # greater than 0 if count > 0: print "success: " + str(count) else: print "failure: " + str(count) time.sleep(5) # find out how many rows exist in the destination table after sleeping process = run(cmd) count = (int(process.communicate()[0][:-1])) print "after sleeping the count is " + str(count) Usually the output from this script is: success: 100000 but sometimes it's failure: 0 after sleeping the count is 100000 Note that in the failure case, the select immediately after the insert shows 0 rows but after sleeping for 5 seconds a second select correctly shows a row count of 100000. My conclusion is that one of the following is true: subprocess.Popen is not waiting for the child process to terminate - This seems to contradict the documentation the mysql insert is not atomic - my understanding of mysql seems to indicate insert is atomic the select is not seeing the correct row count right away - according to a friend who knows mysql better than I do this should not happen either What am I missing? FYI, I'm aware that this is a hacky way of interacting with mysql from Python and MySQLdb would likely not have this problem but I'm curious as to why this method does not work.
[ "subprocess.Popen, when instantiated, runs the program. It does not, however, wait for it -- it fires it off in the background as if you'd typed cmd & in a shell. So, in the code above, you've essentially defined a race condition -- if the inserts can finish in time, it will appear normal, but if not you get the unexpected output. You are not waiting for your first run()'d PID to finish, you are simply returning its Popen instance and continuing.\nI'm not sure how this behavior contradicts the documentation, because there's some very clear methods on Popen that seem to indicate it is not waited for, like:\nPopen.wait()\n Wait for child process to terminate. Set and return returncode attribute.\n\nI do agree, however, that the documentation for this module could be improved.\nTo wait for the program to finish, I'd recommend using subprocess's convenience method, subprocess.call, or using communicate on a Popen object (for the case when you need stdout). You are already doing this for your second call.\n### START MAIN\n# copy some rows from a source table to a destination table\n# note that the destination table is empty when this script is run\ncmd = 'mysql -u ve --skip-column-names --batch --execute=\"insert into destination (select * from source limit 100000)\" test'\nsubprocess.call(cmd)\n\n# check to see how many rows exist in the destination table\ncmd = 'mysql -u ve --skip-column-names --batch --execute=\"select count(*) from destination\" test'\nprocess = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)\ntry: count = (int(process.communicate()[0][:-1]))\nexcept: count = 0\n\nAdditionally, in most cases, you do not need to run the command in a shell. This is one of those cases, but you'll have to rewrite your command like a sequence. Doing it that way also allows you to avoid traditional shell injection and worry less about quoting, like so:\nprog = [\"mysql\", \"-u\", \"ve\", \"--execute\", 'insert into foo values (\"snargle\", 2)']\nsubprocess.call(prog)\n\nThis will even work, and will not inject as you'd expect:\nprog = [\"printf\", \"%s\", \"<\", \"/etc/passwd\"]\nsubprocess.call(prog)\n\nTry it interactively. You avoid the possibilities of shell injection, particularly if you're accepting user input. I suspect you're using the less-awesome string method of communicating with subprocess because you ran into trouble getting the sequences to work :^)\n", "If you don't absolutely need to use subprocess and popen, it's usually simpler to use os.system. For example, for quick scripts I often do something like this:\nimport os\nrun = os.system #convenience alias\nresult = run('mysql -u ve --execute=\"select * from wherever\" test')\n\nUnlike popen, os.system DOES wait for your process to return before moving on to the next stage of your script.\nMore info on it in the docs: http://docs.python.org/library/os.html#os.system\n", "Dude, why did you think subprocess.Popen returned an object with a wait method, unless it was because the waiting was NOT implicit, intrinsic, immediate, and inevitable, as you appear to surmise...?! The most common reason to spawn a subprocess is NOT to immediately wait for it to finish, but rather to let it proceed (e.g. on another core, or at worst by time-slicing -- that's the operating system's -- and hardware's -- lookout) at the same time as the parent process continues; when the parent process needs to wait for the subprocess to be finished, it will obviously call wait on the object returned by the original subprocess.Process call.\n" ]
[ 21, 7, 3 ]
[]
[]
[ "mysql", "python" ]
stackoverflow_0001541273_mysql_python.txt
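A sketch of the fix the accepted answer describes: block on the insert before counting. The mysql command strings are the ones from the question:

import subprocess

insert_cmd = 'mysql -u ve --skip-column-names --batch --execute="insert into destination (select * from source limit 100000)" test'
count_cmd = 'mysql -u ve --skip-column-names --batch --execute="select count(*) from destination" test'

# call() blocks until the child exits, so the insert is finished before we count
subprocess.call(insert_cmd, shell=True)

p = subprocess.Popen(count_cmd, shell=True, stdout=subprocess.PIPE)
out, _ = p.communicate()  # communicate() also waits for the child to terminate
print "count:", int(out.strip())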
Q: Definition of mathematical operations (sin…) on NumPy arrays containing objects I would like to provide "all" mathematical functions for the number-like objects created by a module (the uncertainties.py module, which performs calculations with error propagation)—these objects are numbers with uncertainties. What is the best way to do this? Currently, I redefine most of the functions from math in the module uncertainties.py, so that they work on numbers with uncertainties. One drawback is that users who want to do from math import * must do so after doing import uncertainties. The interaction with NumPy is, however, restricted to basic operations (an array of numbers with uncertainties can be added, etc.); it does not (yet) include more complex functions (e.g. sin()) that would work on NumPy arrays that contain numbers with uncertainties. The approach I have taken so far consists in suggesting that the user define sin = numpy.vectorize(math.sin), so that the new math.sin function (which works on numbers with uncertainties) is broadcast to the elements of any Numpy array. One drawback is that this has to be done for each function of interest by the user, which is cumbersome. So, what is the best way to extend mathematical functions such as sin() so that they work conveniently with simple numbers and NumPy arrays? The approach chosen by NumPy is to define its own numpy.sin, rather than modifying math.sin so that it works with Numpy arrays. Should I do the same for my uncertainties.py module, and stop redefining math.sin? Furthermore, what would be the most efficient and correct way of defining sin so that it works both for simple numbers, numbers with uncertainties, and Numpy arrays? My redefined math.sin already handles simple numbers and numbers with uncertainties. However, vectorizing it with numpy.vectorize is likely to be much slower on "regular" NumPy arrays than numpy.sin. A: It looks like following what NumPy itself does keeps things clean: "extended" mathematical operations (sin…) that work on new objects can be put in a separate name space. Thus, NumPy has numpy.sin, etc. These operations are mostly compatible with those from math, but also work on NumPy arrays. Therefore, it seems to me that mathematical functions that should work on usual numbers and NumPy arrays and their counterparts with uncertainties are best defined in a separate name space. For instance, the user could do: from uncertainties import sin or from uncertainties import * # sin, cos, etc. For optimization purposes, an alternative might be to provide two distinct sets of mathematical functions: those that generalize functions to simple numbers with uncertainties, and those that generalize them to arrays with uncertainties: from uncertainties.math_ops import * # Work on scalars and scalars with uncertainty or from uncertainties.numpy_ops import * # Work on everything (scalars, arrays, numbers with uncertainties, arrays with uncertainties)
Definition of mathematical operations (sin…) on NumPy arrays containing objects
I would like to provide "all" mathematical functions for the number-like objects created by a module (the uncertainties.py module, which performs calculations with error propagation)—these objects are numbers with uncertainties. What is the best way to do this? Currently, I redefine most of the functions from math in the module uncertainties.py, so that they work on numbers with uncertainties. One drawback is that users who want to do from math import * must do so after doing import uncertainties. The interaction with NumPy is, however, restricted to basic operations (an array of numbers with uncertainties can be added, etc.); it does not (yet) include more complex functions (e.g. sin()) that would work on NumPy arrays that contain numbers with uncertainties. The approach I have taken so far consists in suggesting that the user define sin = numpy.vectorize(math.sin), so that the new math.sin function (which works on numbers with uncertainties) is broadcast to the elements of any Numpy array. One drawback is that this has to be done for each function of interest by the user, which is cumbersome. So, what is the best way to extend mathematical functions such as sin() so that they work conveniently with simple numbers and NumPy arrays? The approach chosen by NumPy is to define its own numpy.sin, rather than modifying math.sin so that it works with Numpy arrays. Should I do the same for my uncertainties.py module, and stop redefining math.sin? Furthermore, what would be the most efficient and correct way of defining sin so that it works both for simple numbers, numbers with uncertainties, and Numpy arrays? My redefined math.sin already handles simple numbers and numbers with uncertainties. However, vectorizing it with numpy.vectorize is likely to be much slower on "regular" NumPy arrays than numpy.sin.
[ "It looks like following what NumPy itself does keeps things clean: \"extended\" mathematical operations (sin…) that work on new objects can be put in a separate name space. Thus, NumPy has numpy.sin, etc. These operations are mostly compatible with those from math, but also work on NumPy arrays.\nTherefore, it seems to me that mathematical functions that should work on usual numbers and NumPy arrays and their counterparts with uncertainties are best defined in a separate name space. For instance, the user could do:\nfrom uncertainties import sin\n\nor\nfrom uncertainties import * # sin, cos, etc.\n\nFor optimization purposes, an alternative might be to provide two distinct sets of mathematical functions: those that generalize functions to simple numbers with uncertainties, and those that generalize them to arrays with uncertainties:\nfrom uncertainties.math_ops import * # Work on scalars and scalars with uncertainty\n\nor\nfrom uncertainties.numpy_ops import * # Work on everything (scalars, arrays, numbers with uncertainties, arrays with uncertainties)\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "numpy", "operations", "python" ]
stackoverflow_0001530598_arrays_numpy_operations_python.txt
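One way the suggested separate namespace could be laid out - a sketch only; the module path uncertainties/numpy_ops.py and the fallback error types are assumptions, not the actual uncertainties API:

# Hypothetical uncertainties/numpy_ops.py: prefer numpy's fast ufunc for
# plain float arrays, fall back to an element-wise version for object
# arrays holding numbers with uncertainties.
import math
import numpy

def _wrap(numpy_func, scalar_func):
    vectorized = numpy.vectorize(scalar_func)
    def wrapped(x):
        try:
            return numpy_func(x)          # fast path: floats, float arrays
        except (TypeError, AttributeError):
            return vectorized(x)          # object arrays, custom numbers
    return wrapped

# math.sin stands in here for the uncertainty-aware scalar versions
sin = _wrap(numpy.sin, math.sin)
cos = _wrap(numpy.cos, math.cos)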
Q: Python Class Members Initialization I have just recently battled a bug in Python. It was one of those silly newbie bugs, but it got me thinking about the mechanisms of Python (I'm a long time C++ programmer, new to Python). I will lay out the buggy code and explain what I did to fix it, and then I have a couple of questions... The scenario: I have a class called A, that has a dictionary data member, following is its code (this is a simplification, of course): class A: dict1={} def add_stuff_to_1(self, k, v): self.dict1[k]=v def print_stuff(self): print(self.dict1) The class using this code is class B: class B: def do_something_with_a1(self): a_instance = A() a_instance.print_stuff() a_instance.add_stuff_to_1('a', 1) a_instance.add_stuff_to_1('b', 2) a_instance.print_stuff() def do_something_with_a2(self): a_instance = A() a_instance.print_stuff() a_instance.add_stuff_to_1('c', 1) a_instance.add_stuff_to_1('d', 2) a_instance.print_stuff() def do_something_with_a3(self): a_instance = A() a_instance.print_stuff() a_instance.add_stuff_to_1('e', 1) a_instance.add_stuff_to_1('f', 2) a_instance.print_stuff() def __init__(self): self.do_something_with_a1() print("---") self.do_something_with_a2() print("---") self.do_something_with_a3() Notice that every call to do_something_with_aX() initializes a new "clean" instance of class A, and prints the dictionary before and after the addition. The bug (in case you haven't figured it out yet): >>> b_instance = B() {} {'a': 1, 'b': 2} --- {'a': 1, 'b': 2} {'a': 1, 'c': 1, 'b': 2, 'd': 2} --- {'a': 1, 'c': 1, 'b': 2, 'd': 2} {'a': 1, 'c': 1, 'b': 2, 'e': 1, 'd': 2, 'f': 2} In the second initialization of class A, the dictionaries are not empty, but start with the contents of the last initialization, and so forth. I expected them to start "fresh". What solves this "bug" is obviously adding: self.dict1 = {} In the __init__ constructor of class A. However, that made me wonder: What is the meaning of the "dict1 = {}" initialization at the point of dict1's declaration (first line in class A)? Is it meaningless? What's the mechanism of instantiation that causes copying the reference from the last initialization? If I add "self.dict1 = {}" in the constructor (or any other data member), how does it not affect the dictionary member of previously initialized instances? EDIT: Following the answers I now understand that by declaring a data member and not referring to it in the __init__ or somewhere else as self.dict1, I'm practically defining what's called in C++/Java a static data member. By calling it self.dict1 I'm making it "instance-bound". A: What you keep referring to as a bug is the documented, standard behavior of Python classes. Declaring a dict outside of __init__ as you initially did is declaring a class-level variable. It is only created once; whenever you create new objects they will reuse this same dict. To create instance variables, you declare them with self in __init__; it's as simple as that. A: When you access an attribute of an instance, say, self.foo, python will first find 'foo' in self.__dict__. If not found, python will find 'foo' in TheClass.__dict__ In your case, dict1 is of class A, not instance. A: @Matthew : Please review the difference between a class member and an object member in Object Oriented Programming. This problem happens because the declaration of the original dict makes it a class member, and not an object member (as was the original poster's intent.)
Consequently, it exists once for (is shared across) all instances of the class (ie once for the class itself, as a member of the class object itself) so the behaviour is perfectly correct. A: Python's class declarations are executed as a code block and any local variable definitions (of which function definitions are a special kind) are stored in the constructed class instance. Due to the way attribute lookup works in Python, if an attribute is not found on the instance the value on the class is used. There is an interesting article about the class syntax on the history of Python blog. A: If this is your code: class ClassA: dict1 = {} a = ClassA() Then you probably expected this to happen inside Python: class ClassA: __defaults__['dict1'] = {} a = instance(ClassA) # a bit of pseudo-code here: for name, value in ClassA.__defaults__: a.<name> = value As far as I can tell, that is what happens, except that a dict has its pointer copied, instead of the value, which is the default behaviour everywhere in Python. Look at this code: a = {} b = a a['foo'] = 'bar' print b
Python Class Members Initialization
I have just recently battled a bug in Python. It was one of those silly newbie bugs, but it got me thinking about the mechanisms of Python (I'm a long time C++ programmer, new to Python). I will lay out the buggy code and explain what I did to fix it, and then I have a couple of questions... The scenario: I have a class called A, that has a dictionary data member, following is its code (this is a simplification, of course): class A: dict1={} def add_stuff_to_1(self, k, v): self.dict1[k]=v def print_stuff(self): print(self.dict1) The class using this code is class B: class B: def do_something_with_a1(self): a_instance = A() a_instance.print_stuff() a_instance.add_stuff_to_1('a', 1) a_instance.add_stuff_to_1('b', 2) a_instance.print_stuff() def do_something_with_a2(self): a_instance = A() a_instance.print_stuff() a_instance.add_stuff_to_1('c', 1) a_instance.add_stuff_to_1('d', 2) a_instance.print_stuff() def do_something_with_a3(self): a_instance = A() a_instance.print_stuff() a_instance.add_stuff_to_1('e', 1) a_instance.add_stuff_to_1('f', 2) a_instance.print_stuff() def __init__(self): self.do_something_with_a1() print("---") self.do_something_with_a2() print("---") self.do_something_with_a3() Notice that every call to do_something_with_aX() initializes a new "clean" instance of class A, and prints the dictionary before and after the addition. The bug (in case you haven't figured it out yet): >>> b_instance = B() {} {'a': 1, 'b': 2} --- {'a': 1, 'b': 2} {'a': 1, 'c': 1, 'b': 2, 'd': 2} --- {'a': 1, 'c': 1, 'b': 2, 'd': 2} {'a': 1, 'c': 1, 'b': 2, 'e': 1, 'd': 2, 'f': 2} In the second initialization of class A, the dictionaries are not empty, but start with the contents of the last initialization, and so forth. I expected them to start "fresh". What solves this "bug" is obviously adding: self.dict1 = {} In the __init__ constructor of class A. However, that made me wonder: What is the meaning of the "dict1 = {}" initialization at the point of dict1's declaration (first line in class A)? Is it meaningless? What's the mechanism of instantiation that causes copying the reference from the last initialization? If I add "self.dict1 = {}" in the constructor (or any other data member), how does it not affect the dictionary member of previously initialized instances? EDIT: Following the answers I now understand that by declaring a data member and not referring to it in the __init__ or somewhere else as self.dict1, I'm practically defining what's called in C++/Java a static data member. By calling it self.dict1 I'm making it "instance-bound".
[ "What you keep referring to as a bug is the documented, standard behavior of Python classes.\nDeclaring a dict outside of __init__ as you initially did is declaring a class-level variable. It is only created once at first, whenever you create new objects it will reuse this same dict. To create instance variables, you declare them with self in __init__; its as simple as that.\n", "When you access attribute of instance, say, self.foo, python will first find 'foo' in self.__dict__. If not found, python will find 'foo' in TheClass.__dict__\nIn your case, dict1 is of class A, not instance. \n", "@Matthew : Please review the difference between a class member and an object member in Object Oriented Programming. This problem happens because of the declaration of the original dict makes it a class member, and not an object member (as was the original poster's intent.) Consequently, it exists once for (is shared accross) all instances of the class (ie once for the class itself, as a member of the class object itself) so the behaviour is perfectly correct. \n", "Pythons class declarations are executed as a code block and any local variable definitions (of which function definitions are a special kind of) are stored in the constructed class instance. Due to the way attribute look up works in Python, if an attribute is not found on the instance the value on the class is used.\nThe is an interesting article about the class syntax on the history of Python blog.\n", "If this is your code:\nclass ClassA:\n dict1 = {}\na = ClassA()\n\nThen you probably expected this to happen inside Python:\nclass ClassA:\n __defaults__['dict1'] = {}\n\na = instance(ClassA)\n# a bit of pseudo-code here:\nfor name, value in ClassA.__defaults__:\n a.<name> = value\n\nAs far as I can tell, that is what happens, except that a dict has its pointer copied, instead of the value, which is the default behaviour everywhere in Python. Look at this code:\na = {}\nb = a\na['foo'] = 'bar'\nprint b\n\n" ]
[ 62, 2, 2, 0, 0 ]
[]
[]
[ "class", "initialization", "python" ]
stackoverflow_0000867219_class_initialization_python.txt
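A compact sketch of the fix, contrasting a genuinely shared class attribute with the per-instance dict created in __init__:

class A:
    shared = {}              # class attribute: one dict shared by all instances

    def __init__(self):
        self.dict1 = {}      # instance attribute: a fresh dict per instance

    def add_stuff_to_1(self, k, v):
        self.dict1[k] = v

a1, a2 = A(), A()
a1.add_stuff_to_1('a', 1)
print a1.dict1   # {'a': 1}
print a2.dict1   # {} -- no longer polluted by a1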
Q: Sort string collection in Python using various locale settings I want to sort a list of strings with respect to the user's language preference. I have a multilanguage Python webapp; what is the correct way to sort strings in such a way? I know I can set up locale, like this: import locale locale.setlocale(locale.LC_ALL, '') But this should be done on application start (and doc says it is not thread-safe!), is it a good idea to set it up in every thread according to current user (request) setting? I would like something like the function locale.strcoll(...) with an additional parameter - the language that is used for sorting. A: I would recommend pyICU -- Python bindings for IBM's rich open-source ICU internationalization library. You make a Collator object e.g. with: collator = PyICU.Collator.createInstance(PyICU.Locale.getFrance()) and then you can sort e.g. a list of utf-8 encoded strings by the rules for French, e.g. by using thelist.sort(cmp=collator.compare). The only issue I had was that I found no good packaged, immediately usable version of PyICU plus ICU for MacOSX -- I ended up building and installing from sources: ICU's own sources, 3.6, from here -- there are binaries for Windows and several Unix versions there, but not for the Mac; PyICU 0.8.1 from here. Net of these build/installation issues, and somewhat-scant docs for the Python bindings, ICU's really a godsend if you do any substantial amount of i18n-related work, and PyICU a very serviceable set of bindings to it! A: You will want the latest possible ICU under your pyICU to get the best and most up to date data. A: Given the documentation warnings, it seems you are on your own if you try to set locale differently in different threads. If you can split your problem into one thread per locale, might you not as well split it into one subprocess per locale, using Python 2.6's multiprocessing? It seems everything solving this must be a hack, you could even consider using the command-line program sort (1) invoked with different LC_ALL for different languages. A: Another possible solution is to use a SQL server that has good locale support (unfortunately, sqlite is not an option). Then I can put all data into a temporary memory table and SELECT them with ORDER BY. IMO it should be a better solution than trying to distribute locale settings to multiple processes as kaizer.se's answer recommends.
Sort string collection in Python using various locale settings
I want to sort a list of strings with respect to the user's language preference. I have a multilanguage Python webapp; what is the correct way to sort strings in such a way? I know I can set up locale, like this: import locale locale.setlocale(locale.LC_ALL, '') But this should be done on application start (and doc says it is not thread-safe!), is it a good idea to set it up in every thread according to current user (request) setting? I would like something like the function locale.strcoll(...) with an additional parameter - the language that is used for sorting.
[ "I would recommend pyICU -- Python bindings for IBM's rich open-source ICU internationalization library. You make a Collator object e.g. with:\n collator = PyICU.Collator.createInstance(PyICU.Locale.getFrance())\n\nand then you can sort e.g. a list of utf-8 encoded strings by the rules for French, e.g. by using thelist.sort(cmp=collator.compare).\nThe only issue I had was that I found no good packaged, immediately usable version of PyICU plus ICU for MacOSX -- I ended up building and installing from sources: ICU's own sources, 3.6, from here -- there are binaries for Windows and several Unix versions there, but not for the Mac; PyICU 0.8.1 from here.\nNet of these build/installation issues, and somewhat-scant docs for the Python bindings, ICU's really a godsend if you do any substantial amount of i18n-related work, and PyICU a very serviceable set of bindings to it!\n", "You will want the latest possible ICU under your pyICU to get the best and most up to date data.\n", "Given the documentation warnings, it seems you are on your own if you try to set locale diffrently in different threads.\nIf you can split your problem into one thread per locale, might you not as well split it into one subprocess per locale, using Python 2.6's multiprocessing?\nIt seems everything solving this must be a hack, you could even consider using the command-line program sort (1) invoked with different LC_ALL for different languages.\n", "Another possible solution is to use SQL server that has good locale support (unfortunately, sqlite is not an option). Then I can put all data to temporary memory table and SELECT them with ORDER BY. IMO it should be better solution than trying to distribute locale settings to multiple processes as kaizer.se's answer recommends.\n" ]
[ 4, 1, 0, 0 ]
[]
[]
[ "collation", "google_app_engine", "python", "sorting", "web_applications" ]
stackoverflow_0001526109_collation_google_app_engine_python_sorting_web_applications.txt
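A per-request collation sketch built on the accepted answer's PyICU calls; the locale identifier and the one-collator-per-request layout are illustrative assumptions:

import PyICU

def sort_for_user(strings, locale_id):
    # One collator per request locale; no process-wide setlocale() involved,
    # so different threads can serve users with different languages.
    collator = PyICU.Collator.createInstance(PyICU.Locale(locale_id))
    return sorted(strings, cmp=collator.compare)

print sort_for_user([u'cote', u'c\xf4te', u'coter'], 'fr_FR')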
Q: How to write this "model" in Django? I am currently using Django's Users model. Very simple. However, I'd like to add one feature: Adding friends! I would like to create 2 columns in my table: UID (the ID of the User) friend_id (the ID of his friend! ...of course, this ID is also in Django's User model.) The UID-friend_id combination must be unique! For example, if my ID is 84, I cannot have two rows the same, because I can only subscribe to the same friend once. Can anyone tell me if this is the right way to do it? Should I do some KEY relationship for the "friend_id", or should I leave it like this, as "IntegerField"? class Friend(models.Model): uid = models.ForeignKey(User) friend_id = models.IntegerField(default=0) Thank you A: You should create a model that defines the relationship between two users, and then define two foreign-key fields, each one to a User. You can then add a unique constraint to make sure you don't have duplicates. There is an article here explaining exactly how to do this: http://www.packtpub.com/article/building-friend-networks-with-django-1.0 The example model from that page: class Friendship(models.Model): from_friend = models.ForeignKey( User, related_name='friend_set' ) to_friend = models.ForeignKey( User, related_name='to_friend_set' ) def __unicode__(self): return u'%s, %s' % ( self.from_friend.username, self.to_friend.username ) class Meta: unique_together = (('to_friend', 'from_friend'), )
How to write this "model" in Django?
I am currently using Django's Users model. Very simple. However, I'd like to add one feature: Adding friends! I would like to create 2 columns in my table: UID (the ID of the User) friend_id (the ID of his friend! ...of course, this ID is also in Django's User model.) The UID-friend_id combination must be unique! For example, if my ID is 84, I cannot have two rows the same, because I can only subscribe to the same friend once. Can anyone tell me if this is the right way to do it? Should I do some KEY relationship for the "friend_id", or should I leave it like this, as "IntegerField"? class Friend(models.Model): uid = models.ForeignKey(User) friend_id = models.IntegerField(default=0) Thank you
[ "You should create a model that defines the relationship between two users, and then define two foreign-key fields, each one to a User. You can then add a unique constraint to make sure you don't have duplicates.\nThere is a article here explaining exactly how to do this: http://www.packtpub.com/article/building-friend-networks-with-django-1.0\nThe example model from that page:\nclass Friendship(models.Model):\n from_friend = models.ForeignKey(\n User, related_name='friend_set'\n )\n to_friend = models.ForeignKey(\n User, related_name='to_friend_set'\n )\n def __unicode__(self):\n return u'%s, %s' % (\n self.from_friend.username,\n self.to_friend.username\n )\n class Meta:\n unique_together = (('to_friend', 'from_friend'), )\n\n" ]
[ 11 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001542659_django_python.txt
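A brief usage sketch for the Friendship model quoted in the answer (the user IDs, app path, and view-level code are hypothetical); the unique_together constraint is what rejects the duplicate row:

from django.contrib.auth.models import User
from myapp.models import Friendship  # hypothetical app path

me = User.objects.get(pk=84)
friend = User.objects.get(pk=12)

Friendship.objects.create(from_friend=me, to_friend=friend)
# Creating the identical row again violates unique_together and raises
# an IntegrityError at the database level:
# Friendship.objects.create(from_friend=me, to_friend=friend)

# related_name='friend_set' gives the friendships this user initiated:
my_friends = [f.to_friend for f in me.friend_set.all()]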
Q: Does python's print function handle unicode differently now than when Dive Into Python was written? I'm trying to work my way through some frustrating encoding issues by going back to basics. In Dive Into Python example 9.14 (here) we have this: >>> s = u'La Pe\xf1a' >>> print s Traceback (innermost last): File "<interactive input>", line 1, in ? UnicodeError: ASCII encoding error: ordinal not in range(128) >>> print s.encode('latin-1') La Peña But on my machine, this happens: >>> sys.getdefaultencoding() 'ascii' >>> s = u'La Pe\xf1a' >>> print s La Peña I don't understand why these are different. Anybody? A: The default encoding for print doesn't depend on sys.getdefaultencoding(), but on sys.stdout.encoding. If you launch python with e.g. LANG=C or redirect a python script to a file, the encoding for stdout will be ANSI_X3.4-1968. On the other hand, if sys.stdout is a terminal, it will use the terminal's encoding. To explain what sys.getdefaultencoding() does -- it's used when implicitly converting strings from/to unicode. In this example, str(u'La Pe\xf1a') with the default ASCII encoding would fail, but with modified default encoding it would encode the string to Latin-1. However setting the default encoding is a horrible idea, you should always use explicit encoding when you want to go from unicode to str.
Does python's print function handle unicode differently now than when Dive Into Python was written?
I'm trying to work my way through some frustrating encoding issues by going back to basics. In Dive Into Python example 9.14 (here) we have this: >>> s = u'La Pe\xf1a' >>> print s Traceback (innermost last): File "<interactive input>", line 1, in ? UnicodeError: ASCII encoding error: ordinal not in range(128) >>> print s.encode('latin-1') La Peña But on my machine, this happens: >>> sys.getdefaultencoding() 'ascii' >>> s = u'La Pe\xf1a' >>> print s La Peña I don't understand why these are different. Anybody?
[ "The default encoding for print doesn't depend on sys.getdefaultencoding(), but on sys.stdout.encoding. If you launch python with e.g. LANG=C or redirect a python script to a file, the encoding for stdout will be ANSI_X3.4-1968. On the other hand, if sys.stdout is a terminal, it will use the terminal's encoding.\nTo explain what sys.getdefaultencoding() does -- it's used when implicitly converting strings from/to unicode. In this example, str(u'La Pe\\xf1a') with the default ASCII encoding would fail, but with modified default encoding it would encode the string to Latin-1. However setting the default encoding is a horrible idea, you should always use explicit encoding when you want to go from unicode to str.\n" ]
[ 6 ]
[]
[]
[ "character_encoding", "python", "unicode" ]
stackoverflow_0001542785_character_encoding_python_unicode.txt
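The answer's point can be made observable with a short Python 2 sketch: inspect sys.stdout.encoding and encode explicitly when stdout is redirected (where the encoding attribute is typically None):

import sys

s = u'La Pe\xf1a'
print sys.stdout.encoding    # e.g. 'UTF-8' on a terminal; None when piped to a file

if sys.stdout.encoding:
    print s                  # terminal: Python encodes with the terminal's encoding
else:
    print s.encode('utf-8')  # redirected: encode explicitly, don't rely on ASCII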
Q: Help in the following code def startConnection(self): from ftplib import FTP self.ftp = FTP(self.loginServer) print 'Logging In' print self.ftp.login(self.username, self.password) data = [] self.ftp.dir(data.append) for line in data: try: self.date_str = ' '.join(line.split()[5:8]) newDate = time.strptime(self.date_str,'%b %d %H:%M') print newDate col_list = line.split() name = col_list[0] tempDir = {} if name.startswith('d'): tempDir['directory'] = newDate else: tempDir['file']=newDate self.dirInfo[col_list[8]] = tempDir except: print "**********Exception for line **********\n" + line + "\n**********Exception End**********" This function is working fine; the newDate value is Aug 20 11:12, but the year is missing, so by default the year is taken as 1900, which is not correct. To debug it, I logged in to the FTP server and did dir / ls; in both cases it shows a timestamp like 'Aug 20 11:12'. But if I do ls -lTr, it shows the year. What I want is to somehow pass the above command to FTP and get the result. Is there any python ftplib module function that can do this? A: Try self.ftp.dir('-lTr', data.append) A: If you are using a command that gives a short-form date, or want to cope with different arguments being handed to ls, you'll have to make multiple attempts to parse the date with different format strings, until you don't get a ValueError, post-processing the successful parse according to which format it has matched -- files more than a year old will often report a date format including a year but no time of day (here you could probably stick with a default 00:00:00 time); recent files reported in a format without a year can have the current year defaulted in; formats that contain a year would be left unchanged.
Help in the following code
def startConnection(self): from ftplib import FTP self.ftp = FTP(self.loginServer) print 'Logging In' print self.ftp.login(self.username, self.password) data = [] self.ftp.dir(data.append) for line in data: try: self.date_str = ' '.join(line.split()[5:8]) newDate = time.strptime(self.date_str,'%b %d %H:%M') print newDate col_list = line.split() name = col_list[0] tempDir = {} if name.startswith('d'): tempDir['directory'] = newDate else: tempDir['file']=newDate self.dirInfo[col_list[8]] = tempDir except: print "**********Exception for line **********\n" + line + "\n**********Exception End**********" This function is working fine; the newDate value is Aug 20 11:12, but the year is missing, so by default the year is taken as 1900, which is not correct. To debug it, I logged in to the FTP server and did dir / ls; in both cases it shows a timestamp like 'Aug 20 11:12'. But if I do ls -lTr, it shows the year. What I want is to somehow pass the above command to FTP and get the result. Is there any python ftplib module function that can do this?
[ "Try\nself.ftp.dir('-lTr', data.append)\n\n", "If you are using a command that gives a short-form date, or want to cope with different arguments being handed to ls you'll have to make multiple attempts to parse the date with different format strings, until you don't get a ValueError, post-processing the successful parse according to which format it has matched -- files more than a year old will often report a date format including a year but no time of day (here you could probably stick with a default 00:00:00 time); recent files reported in a format without a year can have the current year defaulted in; formats that contain a year would be left unchanged.\n" ]
[ 1, 0 ]
[]
[]
[ "ftplib", "python" ]
stackoverflow_0001542991_ftplib_python.txt
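A sketch of the multi-format fallback described in the second answer; the two format strings are illustrative, and the current year is defaulted in for the short form:

import time

def parse_listing_date(date_str):
    # Recent files: 'Aug 20 11:12' (no year); older files: 'Aug 20 2008'.
    for fmt in ('%b %d %H:%M', '%b %d %Y'):
        try:
            parsed = time.strptime(date_str, fmt)
        except ValueError:
            continue
        if '%Y' not in fmt:
            # No year in this form: re-parse with the current year defaulted in.
            this_year = time.localtime().tm_year
            parsed = time.strptime('%d %s' % (this_year, date_str), '%Y ' + fmt)
        return parsed
    raise ValueError('unrecognized date: %r' % date_str)

print parse_listing_date('Aug 20 11:12')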
Q: Determine which Button was pressed in Tkinter? I'm making a simple little utility while learning Python. It dynamically generates a list of buttons: for method in methods: button = Button(self.methodFrame, text=method, command=self.populateMethod) button.pack({'fill': 'x', 'expand': 1, 'padx': 5, 'pady': 3}) That part works fine. However, I need to know which of the buttons was pressed inside self.populateMethod. Any advice on how I might be able to tell? A: You can use lambda to pass arguments to a command: def populateMethod(self, method): print "method:", method for method in ["one","two","three"]: button = Button(self.methodFrame, text=method, command=lambda m=method: self.populateMethod(m)) button.pack({'fill': 'x', 'expand': 1, 'padx': 5, 'pady': 3}) A: It seems that the command method is not passed any event object. I can think of two workarounds: associate a unique callback to each button call button.bind('<Button-1>', self.populateMethod) instead of passing self.populateMethod as command. self.populateMethod must then accept a second argument which will be an event object. Assuming that this second argument is called event, event.widget is a reference to the button that was clicked.
Determine which Button was pressed in Tkinter?
I'm making a simple little utility while learning Python. It dynamically generates a list of buttons: for method in methods: button = Button(self.methodFrame, text=method, command=self.populateMethod) button.pack({'fill': 'x', 'expand': 1, 'padx': 5, 'pady': 3}) That part works fine. However, I need to know which of the buttons was pressed inside self.populateMethod. Any advice on how I might be able to tell?
[ "You can use lambda to pass arguments to a command:\ndef populateMethod(self, method):\n print \"method:\", method\n\nfor method in [\"one\",\"two\",\"three\"]:\n button = Button(self.methodFrame, text=method, \n command=lambda m=method: self.populateMethod(m))\n button.pack({'fill': 'x', 'expand': 1, 'padx': 5, 'pady': 3})\n\n", "It seems that the command method is not passed any event object.\nI can think of two workarounds:\n\nassociate a unique callback to each button\ncall button.bind('<Button-1>', self.populateMethod) instead of passing self.populateMethod as command. self.populateMethod must then accept a second argument which will be an event object.\nAssuming that this second argument is called event, event.widget is a reference to the button that was clicked.\n\n" ]
[ 24, 2 ]
[]
[]
[ "button", "python", "tkinter" ]
stackoverflow_0001539787_button_python_tkinter.txt
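A runnable sketch of the bind-based alternative from the second answer, where event.widget identifies which button was pressed:

from Tkinter import Button, Frame, Tk

root = Tk()
frame = Frame(root)
frame.pack()

def populate_method(event):
    # event.widget is the Button instance that was clicked
    print "clicked:", event.widget['text']

for method in ["one", "two", "three"]:
    button = Button(frame, text=method)
    button.bind('<Button-1>', populate_method)
    button.pack(fill='x', expand=1, padx=5, pady=3)

root.mainloop()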
Q: binding local variables in python I wonder if there is a good way to bind local variables in python. Most of my work involves cobbling together short data or text processing scripts with a series of expressions (when python permits), so defining object classes (to use as namespaces) and instantiating them seems a bit much. So what I had in mind was something like in (common) lisp, where you could do something like (setq data '(1 2 3)) (setq output (let ( (x (nth 2 data)) ) x + x)) In python, the best I could come up with is data = [1,2,3] output = ((lambda x: x + x) (data[2])) These are, of course, very simple examples but might there be something that is as scalable as let or let* in lisp? Is defining classes the best way to go to create a local namespace?...(but feels a little less interactive that way) Edit: So to further explain the intention (my apologies for vagueness), I want to reduce the use of global variables. So in the case above, I meant to use the extraction operator as a general case of any type of operation that might not want to be repeated. For instance, one might write either output = data[2] + data[2] or x = data[2] output = x + x del x to accomplish the same result. In essence, if the desired operation on 'data' is more complicated than getting the second item, I wouldn't want to type it out multiple times, or let the computer compute the value of the same expression more times than necessary. So in most cases one would assign the result of the operation, in this case, data[2], or operator.itemgetter(2)(data), to some variable in the global space, but I have an aversion to leaving variables around in the global space if they were only necessary to store intermediate values in a computation... hence the use of the 'del' command immediately afterwards. Defining a local environment or namespace and binding intermediate results to local variables would be an ideal alternative. A: I can only second Lennart and Daniel - Python is not Lisp, and trying to write language X into language Y is usually inefficient and frustrating at best. First point: your example code data = [1,2,3] output = ((lambda x: x + x) (data[2])) would be much more readable as: data = [1, 2, 3] output = (lambda x=data[2] : x +x)() but anyway, in this concrete case, using a lambda is total overkill, overcomplexificated, and mostly inefficient. A braindead output = data[2] + data[2] would JustWork(tm) !-) Now wrt/ to local bindings / namespaces, the usual solution is to use... functions - possibly nested. While 100% object (as in "everything is an object"), Python is not pure object, and plain functions are just fine. FWIW, even for "scripts", you should put your logic in a function then call it - function's local namespace access is faster than "global" (really: module level) namespace access.
The canonical pattern is import whatever def some_func(args): code_here def some_other_func(args): code_here def main(args): parse_args some_func(something) some_other_func(something_else) return some_exit_code if __name__ == '__main__': import sys sys.exit(main(sys.argv)) Note also that nested functions can also access the enclosing namespace, ie def main(): data = [1, 2, 3] def foo(): x = data[2] return x + x print foo() data = [4, 5, 6] print foo() # if you want the nested function to close over its arguments: def bar(data=data): x = data[2] return x + x print bar() data = [7, 8, 9] print bar() HTH A: It's a bit unclear what you are asking, but I'll try to answer anyway: You bind variables to names with = in Python. So your data = [1,2,3] binds the list [1,2,3] to the name data. You can create local namespaces with classes and functions/methods. The closest you get to something as powerful as let is probably def and lambda. Python is (despite where some people try to tell you) not Lisp, and not particularly functional, so you will have to adapt your mindset a bit. Update: Ah, I see what you mean now. All variables are pretty much local in Python. The nearest you get to global variables are variables defined in module space, because you can access them with from <module> import <variable>. You also can access them from wherever in the module, but not modify them (unless you say that you want to modify them with the global keyword). Anything you define in a function/method or class definition, will only be accessible from that namespace. So in short: you don't have to worry about the things you worry about now. Python takes care of it for you. :) A: You could combine a function decorator and default parameters to get something like let and block scoped variables: def let(func): return func() data = [1,2,3] @let def output(x=data[2]): return x + x print(output) # 6 # or if a single expression is enough: output = let(lambda x=data[2]: x+x) But this isn't a popular idiom in Python so I advise you avoid it to make your code easier to understand for others. Just use regular local variables: data = [1,2,3] x = data[2] output = x + x If this becomes a real problem it's a good sign you are trying to do too much in a single function. A: Not really knowing Lisp, I can't see what you're trying to do here. But I would say that in general you should not try to write Python as if it were Lisp, or indeed any language as if it were any other language. I've been programming in Python for five years and I've never seen a need to do what you're trying above. Can you give an example of a use case for the above - what are you actually trying to do, in terms of the end result? Maybe then we can advise you on the best way to do it in Python, rather than Lisp.
binding local variables in python
I wonder if there is a good way to bind local variables in python. Most of my work involves cobbling together short data or text processing scripts with a series of expressions (when python permits), so defining object classes (to use as namespaces) and instantiating them seems a bit much. So what I had in mind was something like in (common) lisp, where you could do something like (setq data '(1 2 3)) (setq output (let ( (x (nth 2 data)) ) x + x)) In python, the best I could come up with is data = [1,2,3] output = ((lambda x: x + x) (data[2])) These are, of course, very simple examples but might there be something that is as scalable as let or let* in lisp? Are defining classes the best way to go to create a local namespace?...(but feels a little less interactive that way) Edit: So to further explain the intention (my apologies for vagueness), I want to reduce the use of global variables. So in the case above, I meant to use the extraction operator as a general case of any type of operation that might not want to be repeated. For instance, one might write either output = data[2] + data[2] or x = data[2] output = x + x del x to accomplish the same result. In essence, if the desired operation on 'data' is more complicated then getting the second item, I wouldn't want to type it out multiple times, or let the computer compute the value of the same expression more times than necessary. So in most cases one would assign the result of the operation, in this case, data[2], or operator.itemgetter(2)(data), to some variable in the global space, but I have an aversion to leaving variables around in the global space if they were only necessary to store intermediate values in a computation... hence the use of the 'del' command immediately afterwards. Defining a local environment or namespace and binding intermediate results to local variables would be an ideal alternative.
[ "I can only second Lennart and Daniel - Python is not Lisp, and trying to write language X into language Y is usually inefficient and frustrating at best.\nFirst point: your example code\ndata = [1,2,3]\noutput = ((lambda x: x + x)\n (data[2]))\n\nwould be much more readable as:\ndata = [1, 2, 3]\noutput = (lambda x=data[2] : x +x)()\n\nbut anyway, in this concrete case, using a lambda is total overkill, overcomplexificated, and mostly inefficient. A braindead\noutput = data[2] + data[2]\n\nwould JustWork(tm) !-)\nNow wrt/ to local bindings / namespaces, the usual solution is to use... functions - eventually nested. While 100% object (as in \"everything is an object\"), Python is not pure object, and plain functions are just fine. FWIW, even for \"scripts\", you should put your logic in a function then call it - function's local namespace access is faster than \"global\" (really: module level) namespace access. The canonical pattern is\nimport whatever\n\ndef some_func(args):\n code_here\n\ndef some_other_func(args)\n code_here\n\ndef main(args):\n parse_args\n some_func(something)\n some_other_func(something_else)\n return some_exit_code\n\nif __name__ == '__main__'\n import sys\n sys.exit(main(sys.argv)) \n\nNote also that nested functions can also access the enclosing namespace, ie\ndef main():\n data = [1, 2, 3]\n def foo():\n x = data[2]\n return x + x\n print foo()\n data = [4, 5, 6]\n print foo()\n # if you want the nested function to close over its arguments:\n def bar(data=data):\n x = data[2]\n return x + x\n print bar()\n data = [7, 8, 9]\n print bar()\n\nHTH\n", "It's a bit unclear what you are asking, bit I'll try to answer anyway:\nYou bind variables to names with = in Python. So your data = [1,2,3] binds the list [1,2,3] to the name data.\nYou can create local namespaces with classes and functions/methods.\nThe closest you get so something as powerful as let is probably def and lambda. Python is (despite where some people try to tell you) not Lisp, and not particularly functional, so you will have to adapt your mindset a bit.\nUpdate: Ah, I see what you mean now.\nAll variables are pretty much local in Python. The nearest you get to global variables are variables defined in module space, because you can access them with from <module> import <variable>. You also can access them from wherever in the module, but not modify them (unless you say that you want to modify them with the global keyword. Anything you define in a function/method or class definition, will only be accessible from that namespace.\nSo in short: you don't have to worry about the things you worry about now. Python takes care of it for you. :)\n", "You could combine a function decorator and default parameters to get something like let and block scoped variables:\ndef let(func):\n return func()\n\n\ndata = [1,2,3]\n@let\ndef output(x=data[2]):\n return x + x\nprint(output) # 6\n\n# or if a single expression is enough:\noutput = let(lambda x=data[2]: x+x)\n\nBut this isn't a popular idiom in Python so I advise you avoid it to make your code easier to understand for others. Just use regular local variables:\ndata = [1,2,3]\nx = data[2]\noutput = x + x\n\nIf this becomes a real problem it's a good sign you are trying to do too much in a single function.\n", "Not really knowing Lisp, I can't see what you're trying to do here. But I would say that in general you should not try to write Python as if it were Lisp, or indeed any language as if it were any other language. 
I've been programming in Python for five years and I've never seen a need to do what you're trying above.\nCan you give an example of a use case for the above - what are you actually trying to do, in terms of the end result? Maybe then we can advise you on the best way to do it in Python, rather than Lisp.\n" ]
[ 3, 2, 2, 1 ]
[]
[]
[ "lisp", "python" ]
stackoverflow_0001542551_lisp_python.txt
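To tie the thread's suggestions together, here is a minimal sketch of a let-style helper built from nothing but a plain function call; the name let_bind is invented for illustration and is not a standard idiom:

def let_bind(func, **bindings):
    # Evaluate func with the given keyword bindings; the names exist only
    # in func's local namespace, so nothing leaks into the caller's scope.
    return func(**bindings)

data = [1, 2, 3]
output = let_bind(lambda x: x + x, x=data[2])
print output  # 6

This is equivalent to the default-argument lambda shown in the answers, but reads a little closer to (let ((x ...)) ...) at the call site.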
Q: Pure Python persistent key and value based container (a hash like interface) with large file system support? I am looking for a (possibly) pure Python library for persistent hash table (btree or b+tree which would provide following features Large file support (possibly in terabytes) Fast enough and low memory footprint (looking for a descent balance between speed and memory) Low cost of management Reliability i.e. doesn't corrupt file once the content is written through the file system Lastly a pure Python implementation. I am OK if it has C library but I am looking for a cross platform solution I have looked into solutions like redis, shelve, tokyo cabinet. Tokyo cabinet is impressive and has a Python binding in the making at http://code.google.com/p/python-tokyocabinet/, but its Windows port is a work in progress. Thanks for some good suggestions. I am currently exploring SQLite3 with Python. I got suggestions to use database engine but am more inclined towards a lean and mean persistent b+tree implementations A: ZODB http://pypi.python.org/pypi/ZODB3 Like Lennart says, use the latest version of course A: Use a relational database. Really fast when retrieving data based on a key, if you put an index in the key. Good scaling Don't get easily corrupted Tools already available for: Backups Replication Clustering Cross-platform Works over the network Allow really fast JOINs, grouping, agreggation, and other complex queries, in case you need them You can easily create a class that works like a dict or hash table, but uses the database as storage. You can make it cache as much as you want on memory. A: ZODB is indeed a powerful tool, but maybe it's overkill. You can hack your own solution in few Python lines : simply code a dictionary like object as a data base adapter. Try using this snippets, replacing the SQLite call to MySql and you should be done.
Pure Python persistent key and value based container (a hash like interface) with large file system support?
I am looking for a (possibly) pure Python library for a persistent hash table (btree or b+tree) which would provide the following features Large file support (possibly in terabytes) Fast enough and low memory footprint (looking for a decent balance between speed and memory) Low cost of management Reliability i.e. doesn't corrupt the file once the content is written through the file system Lastly a pure Python implementation. I am OK if it has a C library but I am looking for a cross platform solution I have looked into solutions like redis, shelve, tokyo cabinet. Tokyo cabinet is impressive and has a Python binding in the making at http://code.google.com/p/python-tokyocabinet/, but its Windows port is a work in progress. Thanks for some good suggestions. I am currently exploring SQLite3 with Python. I got suggestions to use a database engine but am more inclined towards a lean and mean persistent b+tree implementation
[ "ZODB\nhttp://pypi.python.org/pypi/ZODB3\nLike Lennart says, use the latest version of course\n", "Use a relational database. \n\nReally fast when retrieving data based on a key, if you put an index in the key. \nGood scaling\nDon't get easily corrupted\nTools already available for:\n\n\nBackups\nReplication\nClustering\n\nCross-platform\nWorks over the network\nAllow really fast JOINs, grouping, agreggation, and other complex queries, in case you need them\n\nYou can easily create a class that works like a dict or hash table, but uses the database as storage. You can make it cache as much as you want on memory.\n", "ZODB is indeed a powerful tool, but maybe it's overkill.\nYou can hack your own solution in few Python lines : simply code a dictionary like object as a data base adapter. Try using this snippets, replacing the SQLite call to MySql and you should be done.\n" ]
[ 2, 2, 1 ]
[]
[]
[ "hash", "persistence", "python" ]
stackoverflow_0001543087_hash_persistence_python.txt
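Since the question ends by exploring SQLite3, here is a rough sketch of the lean, dict-like adapter the last answer gestures at, using only the standard library; the class name, file name, and schema are invented for illustration, and there is no caching or extra locking beyond SQLite's own:

import sqlite3

class SQLiteDict(object):
    # Persistent string-to-string mapping backed by a single SQLite table.
    # SQLite's file format scales to terabytes in principle and its journal
    # protects against corruption on crashes.
    def __init__(self, path):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            'CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)')

    def __setitem__(self, key, value):
        self.conn.execute('REPLACE INTO kv (k, v) VALUES (?, ?)',
                          (key, value))
        self.conn.commit()

    def __getitem__(self, key):
        row = self.conn.execute(
            'SELECT v FROM kv WHERE k = ?', (key,)).fetchone()
        if row is None:
            raise KeyError(key)
        return row[0]

d = SQLiteDict('store.db')
d['answer'] = '42'
print d['answer']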
Q: Attribute as dict of lists using middleman table with SQLAlchemy My question is about SQLAlchemy but I'm having troubles explaining it in words so I figured I explain it with a simple example of what I'm trying to achieve: parent = Table('parent', metadata, Column('parent_id', Integer, primary_key=True), Column('name', Unicode), ) parent_child = Table('parent_child', metadata, Column('parent_id', Integer, primary_key=True), Column('child_id', Integer, primary_key=True), Column('number', Integer), ForeignKeyConstraint(['parent_id'], ['parent.parent_id']), ForeignKeyConstraint(['child_id'], ['child.child_id']), ) child = Table('child', metadata, Column('child_id', Integer, primary_key=True), Column('name', Unicode), ) class Parent(object): pass class ParentChild(object): pass class Child(object): pass >>> p = Parent(name=u'A') >>> print p.children {} >>> p.children[0] = Child(name=u'Child A') >>> p.children[10] = Child(name=u'Child B') >>> p.children[10] = Child(name=u'Child C') This code would create 3 rows in parent_child table, column number would be 0 for the first row, and 10 for the second and third row. >>> print p.children {0: [<Child A>], 10: [<Child B>, <Child C>]} >>> print p.children[10][0] <Child B> (I left out all SQLAlchemy session/engine code in the example to make it as clean as possible) I did a try using´ collection_class=attribute_mapped_collection('number') on a relation between Parent and ParentChild, but it only gave me one child for each number. Not a dict with lists in it. Any help appreciated! UPDATED! Thanks to Denis Otkidach I now has this code, but it still doesn't work. def _create_children(number, child): return ParentChild(parent=None, child=child, number=number) class Parent(object): children = association_proxy('_children', 'child', creator=_create_children) class MyMappedCollection(MappedCollection): def __init__(self): keyfunc = lambda attr_name: operator.attrgetter('number') MappedCollection.__init__(self, keyfunc=keyfunc) @collection.appender @collection.internally_instrumented def set(self, value, _sa_initiator=None): key = self.keyfunc(value) try: self.__getitem__(key).append(value) except KeyError: self.__setitem__(key, [value]) mapper(Parent, parent, properties={ '_children': relation(ParentChild, collection_class=MyMappedCollection), }) To insert an Child seems to work p.children[100] = Child(...) But when I try to print children like this: print p.children I get this error: sqlalchemy.orm.exc.UnmappedInstanceError: Class '__builtin__.list' is not mapped A: You have to define your own collection class. There are only 3 methods to implement: appender, remover, and converter. See sqlalchemy.orm.collections.MappedCollection as an example. 
Update: Here is quick-n-dirty implementation according to your requirements: from sqlalchemy import * from sqlalchemy.orm import mapper, relation, sessionmaker from sqlalchemy.orm.collections import collection metadata = MetaData() parent = Table('parent', metadata, Column('parent_id', Integer, primary_key=True), Column('name', Unicode), ) child = Table('child', metadata, Column('child_id', Integer, primary_key=True), Column('name', Unicode), ) parent_child = Table('parent_child', metadata, Column('parent_id', Integer, ForeignKey(parent.c.parent_id)), Column('child_id', Integer, ForeignKey(child.c.child_id)), Column('number', Integer), PrimaryKeyConstraint('parent_id', 'child_id'), ) class ParentChild(object): def __init__(self, child, number): self.child = child self.number = number class Parent(object): pass class Child(object): pass class MyMappedCollection(object): def __init__(self, data=None): self._data = data or {} @collection.appender def _append(self, parent_child): l = self._data.setdefault(parent_child.number, []) l.append(parent_child) def __setitem__(self, number, child): self._append(ParentChild(number=number, child=child)) def __getitem__(self, number): return tuple(pc.child for pc in self._data[number]) @collection.remover def _remove(self, parent_child): self._data[parent_child.number].remove(parent_child) @collection.iterator def _iterator(self): for pcs in self._data.itervalues(): for pc in pcs: yield pc def __repr__(self): return '%s(%r)' % (type(self).__name__, self._data) mapper(Parent, parent, properties={ 'children': relation(ParentChild, collection_class=MyMappedCollection), }) mapper(Child, child) mapper(ParentChild, parent_child, properties={ 'parent': relation(Parent), 'child': relation(Child), }) engine = create_engine('sqlite://') db = sessionmaker(bind=engine)() metadata.create_all(bind=engine) p = Parent() c1 = Child() c2 = Child() c3 = Child() p.children[1] = c1 p.children[1] = c2 p.children[2] = c3 db.add(p) db.commit() p_id = p.parent_id db.expunge_all() p = db.query(Parent).get(p_id) print p.children[1] print p.children[2]
Attribute as dict of lists using middleman table with SQLAlchemy
My question is about SQLAlchemy but I'm having troubles explaining it in words so I figured I explain it with a simple example of what I'm trying to achieve: parent = Table('parent', metadata, Column('parent_id', Integer, primary_key=True), Column('name', Unicode), ) parent_child = Table('parent_child', metadata, Column('parent_id', Integer, primary_key=True), Column('child_id', Integer, primary_key=True), Column('number', Integer), ForeignKeyConstraint(['parent_id'], ['parent.parent_id']), ForeignKeyConstraint(['child_id'], ['child.child_id']), ) child = Table('child', metadata, Column('child_id', Integer, primary_key=True), Column('name', Unicode), ) class Parent(object): pass class ParentChild(object): pass class Child(object): pass >>> p = Parent(name=u'A') >>> print p.children {} >>> p.children[0] = Child(name=u'Child A') >>> p.children[10] = Child(name=u'Child B') >>> p.children[10] = Child(name=u'Child C') This code would create 3 rows in parent_child table, column number would be 0 for the first row, and 10 for the second and third row. >>> print p.children {0: [<Child A>], 10: [<Child B>, <Child C>]} >>> print p.children[10][0] <Child B> (I left out all SQLAlchemy session/engine code in the example to make it as clean as possible) I did a try using´ collection_class=attribute_mapped_collection('number') on a relation between Parent and ParentChild, but it only gave me one child for each number. Not a dict with lists in it. Any help appreciated! UPDATED! Thanks to Denis Otkidach I now has this code, but it still doesn't work. def _create_children(number, child): return ParentChild(parent=None, child=child, number=number) class Parent(object): children = association_proxy('_children', 'child', creator=_create_children) class MyMappedCollection(MappedCollection): def __init__(self): keyfunc = lambda attr_name: operator.attrgetter('number') MappedCollection.__init__(self, keyfunc=keyfunc) @collection.appender @collection.internally_instrumented def set(self, value, _sa_initiator=None): key = self.keyfunc(value) try: self.__getitem__(key).append(value) except KeyError: self.__setitem__(key, [value]) mapper(Parent, parent, properties={ '_children': relation(ParentChild, collection_class=MyMappedCollection), }) To insert an Child seems to work p.children[100] = Child(...) But when I try to print children like this: print p.children I get this error: sqlalchemy.orm.exc.UnmappedInstanceError: Class '__builtin__.list' is not mapped
[ "You have to define your own collection class. There are only 3 methods to implement: appender, remover, and converter. See sqlalchemy.orm.collections.MappedCollection as an example.\nUpdate: Here is quick-n-dirty implementation according to your requirements:\nfrom sqlalchemy import *\nfrom sqlalchemy.orm import mapper, relation, sessionmaker\nfrom sqlalchemy.orm.collections import collection\n\nmetadata = MetaData()\n\nparent = Table('parent', metadata,\n Column('parent_id', Integer, primary_key=True),\n Column('name', Unicode),\n)\n\nchild = Table('child', metadata,\n Column('child_id', Integer, primary_key=True),\n Column('name', Unicode),\n)\n\nparent_child = Table('parent_child', metadata,\n Column('parent_id', Integer, ForeignKey(parent.c.parent_id)),\n Column('child_id', Integer, ForeignKey(child.c.child_id)),\n Column('number', Integer),\n PrimaryKeyConstraint('parent_id', 'child_id'),\n)\n\nclass ParentChild(object):\n def __init__(self, child, number):\n self.child = child\n self.number = number\n\nclass Parent(object): pass\n\nclass Child(object): pass\n\n\nclass MyMappedCollection(object):\n\n def __init__(self, data=None):\n self._data = data or {}\n\n @collection.appender\n def _append(self, parent_child):\n l = self._data.setdefault(parent_child.number, [])\n l.append(parent_child)\n\n def __setitem__(self, number, child):\n self._append(ParentChild(number=number, child=child))\n\n def __getitem__(self, number):\n return tuple(pc.child for pc in self._data[number])\n\n @collection.remover\n def _remove(self, parent_child):\n self._data[parent_child.number].remove(parent_child)\n\n @collection.iterator\n def _iterator(self):\n for pcs in self._data.itervalues():\n for pc in pcs:\n yield pc\n\n def __repr__(self):\n return '%s(%r)' % (type(self).__name__, self._data)\n\n\nmapper(Parent, parent, properties={\n 'children': relation(ParentChild, collection_class=MyMappedCollection),\n})\nmapper(Child, child)\nmapper(ParentChild, parent_child, properties={\n 'parent': relation(Parent),\n 'child': relation(Child),\n})\n\nengine = create_engine('sqlite://')\ndb = sessionmaker(bind=engine)()\nmetadata.create_all(bind=engine)\n\np = Parent()\nc1 = Child()\nc2 = Child()\nc3 = Child()\np.children[1] = c1\np.children[1] = c2\np.children[2] = c3\n\ndb.add(p)\ndb.commit()\np_id = p.parent_id\ndb.expunge_all()\n\np = db.query(Parent).get(p_id)\nprint p.children[1]\nprint p.children[2]\n\n" ]
[ 2 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0001534673_python_sqlalchemy.txt
Q: Can't get django-registration to work (on Windows) Trying to add django-registration to my app. I have installed setuptools to use easy_install. I think that works. I run easy_install django-registration and a cmd prompt window flashes up, does something and closes. I don't think it's an error. But when I look in my app folder, there's nothing relating to django-registration. Anyone know what is wrong? And where should the django-registration files appear? (Also tried it with django-profiles and it was exactly the same) A: Django-registration will be installed on your Python path, not in the project itself. You can see if it installed correctly by entering your Python prompt and running: >>>import registration If you don't get an error it is working and installed. Just add 'registration' to your INSTALLED_APPS.
Can't get django-registration to work (on Windows)
Trying to add django-registration to my app. I have installed setuptools to use easy_install. I think that works. I run easy_install django-registration and a cmd prompt window flashes up, does something and closes. I don't think it's an error. But when I look in my app folder, there's nothing relating to django-registration. Anyone know what is wrong? And where should the django-registration files appear? (Also tried it with django-profiles and it was exactly the same)
[ "Django-registration will be installed on your python path, not in the project itself. You can see if it installed correctly by entering your python prompt and running:\n>>>import registration\n\nIf you don't get an error it is working and installed. Just add 'registration' to your INSTALLED_APPS. \n" ]
[ 4 ]
[]
[]
[ "django", "easy_install", "python", "registration", "setuptools" ]
stackoverflow_0001543601_django_easy_install_python_registration_setuptools.txt
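A quick way to confirm where easy_install actually put the package, rather than looking in the project folder, is to ask the module itself; this assumes the install succeeded:

>>> import registration
>>> print registration.__file__
# prints a path inside site-packages, e.g. ...\site-packages\registration\__init__.pyc

If the import fails, the package never made it onto the Python path; rerunning easy_install from an already-open cmd.exe console (rather than double-clicking) keeps the window open so any error output stays visible.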
Q: Zero results in Query/GqlQuery How do I know if the results of my query either using the Query interface or the GqlQuery interface returned zero results? Would using .get() on zero results produce an error? If yes, what's the best way to handle it? A: when doing a get() if there are no results you will have an object containing None I normally do result = query.get() if result is None: #do the following or if you want to check that its not none then if result is not None: #do the following A: if a query returns no results, fetch() returns an empty list [] and get() returns None in either case you can use the following: if result: #handle the result else: #no results were returned
Zero results in Query/GqlQuery
How do I know if the results of my query either using the Query interface or the GqlQuery interface returned zero results? Would using .get() on zero results produce an error? If yes, what's the best way to handle it?
[ "when doing a get() if there are no results you will have an object containing None\nI normally do \nresult = query.get()\nif result is None:\n #do the following\n\nor if you want to check that its not none then\nif result is not None:\n #do the following\n\n", "if a query returns no results, fetch() returns an empty list [] and get() returns None\nin either case you can use the following:\nif result:\n #handle the result\nelse:\n #no results were returned\n\n" ]
[ 5, 2 ]
[]
[]
[ "google_app_engine", "gql", "gqlquery", "python", "queryinterface" ]
stackoverflow_0001509407_google_app_engine_gql_gqlquery_python_queryinterface.txt
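Putting the two answers together, the idiomatic checks for each query method look like this; handle_empty() is a placeholder for whatever empty-result handling the app needs:

results = query.fetch(10)
if not results:          # fetch() returned an empty list []
    handle_empty()

result = query.get()
if result is None:       # get() found nothing
    handle_empty()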
Q: What are some useful non-built-in Django tags? I'm relatively new to Django and I'm trying to build up my toolbox for future projects. In my last project, when a built-in template tag didn't do quite what I needed, I would make a mangled mess of the template to shoe-horn in the feature. I later would find a template tag that would have saved me time and ugly code. So what are some useful template tags that doesn't come built into Django? A: I'll start. http://www.djangosnippets.org/snippets/1350/ Smart {% if %} template tag If you've ever found yourself needing more than a test for True, this tag is for you. It supports equality, greater than, and less than operators. Simple Example {% block list-products %} {% if products|length > 12 %} <!-- Code for pagination --> {% endif %} <!-- Code for displaying 12 products on the page --> {% endblock %} A: smart-if. Allows normal if x > y constructs in templates, among other things. A better if tag is now part of Django 1.2 (see the release notes), which is scheduled for release on March 9th 2010. A: James Bennet's over-the-top-dynamic get_latest tag edit as response to jpartogi's comment class GetItemsNode(Node): def __init__(self, model, num, by, varname): self.num, self.varname = num, varname self.model = get_model(*model.split('.')) self.by = by def render(self, context): if hasattr(self.model, 'publicmgr') and not context['user'].is_authenticated(): context[self.varname] = self.model.publicmgr.all().order_by(self.by)[:self.num] else: context[self.varname] = self.model._default_manager.all().order_by(self.by)[:self.num] return '' <div id="news_portlet" class="portlet"> {% get_sorted_items cms.news 5 by -created_on as items %} {% include 'snippets/dl.html' %} </div> <div id="event_portlet" class="portlet"> {% get_sorted_items cms.event 5 by date as items %} {% include 'snippets/dl.html' %} </div> I call it get_sorted_items, but it is based on James' blog-post A: In come case {% autopaginate queryset %} (http://code.google.com/p/django-pagination/) is useful. For example: #views.py obj_list = News.objects.filter(status=News.PUBLISHED) # do not use len(obj_list) - it's evaluate QuerySet obj_count = obj_list.count() #news_index.html {% load pagination_tags %} ... # do not use {% if obj_list %} {% if obj_count %} <div class="news"> <ul> {% autopaginate obj_list 10 %} {% for item in obj_list %} <li><a href="...">{{ item.title }}</a></li> {% endfor %} </ul> </div> {% paginate %} {% else %} Empty list {% endif %} Note, that obj_list must be lazy - read http://docs.djangoproject.com/en/dev/ref/models/querysets/#id1
What are some useful non-built-in Django tags?
I'm relatively new to Django and I'm trying to build up my toolbox for future projects. In my last project, when a built-in template tag didn't do quite what I needed, I would make a mangled mess of the template to shoe-horn in the feature. I later would find a template tag that would have saved me time and ugly code. So what are some useful template tags that doesn't come built into Django?
[ "I'll start.\nhttp://www.djangosnippets.org/snippets/1350/\nSmart {% if %} template tag\nIf you've ever found yourself needing more than a test for True, this tag is for you. It supports equality, greater than, and less than operators.\nSimple Example\n{% block list-products %}\n {% if products|length > 12 %}\n <!-- Code for pagination -->\n {% endif %}\n\n <!-- Code for displaying 12 products on the page -->\n\n{% endblock %}\n\n", "smart-if. Allows normal if x > y constructs in templates, among other things.\nA better if tag is now part of Django 1.2 (see the release notes), which is scheduled for release on March 9th 2010.\n", "James Bennet's over-the-top-dynamic get_latest tag\nedit as response to jpartogi's comment\nclass GetItemsNode(Node):\n def __init__(self, model, num, by, varname):\n self.num, self.varname = num, varname\n self.model = get_model(*model.split('.'))\n self.by = by\n\n def render(self, context):\n if hasattr(self.model, 'publicmgr') and not context['user'].is_authenticated():\n context[self.varname] = self.model.publicmgr.all().order_by(self.by)[:self.num]\n else:\n context[self.varname] = self.model._default_manager.all().order_by(self.by)[:self.num]\n return ''\n\n<div id=\"news_portlet\" class=\"portlet\">\n{% get_sorted_items cms.news 5 by -created_on as items %}\n{% include 'snippets/dl.html' %}\n</div>\n<div id=\"event_portlet\" class=\"portlet\">\n{% get_sorted_items cms.event 5 by date as items %}\n{% include 'snippets/dl.html' %}\n</div>\n\nI call it get_sorted_items, but it is based on James' blog-post\n", "In come case {% autopaginate queryset %} (http://code.google.com/p/django-pagination/) is useful. For example:\n#views.py\n obj_list = News.objects.filter(status=News.PUBLISHED)\n # do not use len(obj_list) - it's evaluate QuerySet\n obj_count = obj_list.count()\n\n\n#news_index.html\n {% load pagination_tags %}\n ...\n # do not use {% if obj_list %}\n {% if obj_count %}\n <div class=\"news\">\n <ul>\n {% autopaginate obj_list 10 %}\n {% for item in obj_list %}\n <li><a href=\"...\">{{ item.title }}</a></li>\n {% endfor %}\n </ul>\n </div>\n {% paginate %}\n {% else %}\n Empty list\n {% endif %}\n\nNote, that obj_list must be lazy - read http://docs.djangoproject.com/en/dev/ref/models/querysets/#id1\n" ]
[ 4, 3, 1, 1 ]
[]
[]
[ "django", "django_tagging", "favorites", "python" ]
stackoverflow_0001532021_django_django_tagging_favorites_python.txt
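For tags like these that ship as snippets rather than packages, the wiring is always the same: drop the module into a templatetags package inside an installed app, then load it by module name in the template. A sketch, with myapp as a stand-in for your own app:

myapp/
    __init__.py
    templatetags/
        __init__.py
        smart_if.py      # the snippet's code pasted in verbatim

Then in any template:

{% load smart_if %}
{% if products|length > 12 %}
    ...
{% endif %}

The app containing the templatetags package must appear in INSTALLED_APPS for {% load %} to resolve; the smart-if snippet, for instance, registers itself under the name if, replacing the built-in tag wherever it is loaded.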
Q: Comprehensions in Python and Javascript are only very basic? Looking at comprehensions in Python and Javascript, so far I can't see some of the main features that I consider most powerful in comprehensions in languages like Haskell. Do they allow things like multiple generators? Or are they just a basic map-filter form? If they don't allow multiple generators, I find them quite disappointing - why have such things been left out? A: Python allows multiple generators: >>> [(x,y,x*y) for x in range(1,5) for y in range(1,5)] [(1, 1, 1), (1, 2, 2), (1, 3, 3), (1, 4, 4), (2, 1, 2), (2, 2, 4), (2, 3, 6), (2, 4, 8), (3, 1, 3), (3, 2, 6), (3, 3, 9), (3, 4, 12), (4, 1, 4), (4, 2, 8), (4, 3, 12), (4, 4, 16)] And also restrictions: >>> [(x,y,x*y) for x in range(1,5) for y in range(1,5) if x*y > 8] [(3, 3, 9), (3, 4, 12), (4, 3, 12), (4, 4, 16)] Update: Javascript's syntax is similar (results from using the javascript shell on firefox): var nums = [1, 2, 3, 21, 22, 30]; var s = eval('[[i,j] for each (i in nums) for each (j in [3,4]) if (i%2 == 0)]'); s.toSource(); [[2, 3], [2, 4], [22, 3], [22, 4], [30, 3], [30, 4]] (For some reason, something about the context stuff is evaluated in in the javascript shell requires the eval indirection to have list comprehensions work. Javascript inside a <script> tag doesn't require that, of course) A: Yes, you can have multiple iterables in a Python list comprehension: >>> [(x,y) for x in range(2) for y in range(3)] [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)] A: Add an if statement as well... >>> [(x,y) for x in range(5) for y in range(6) if x % 3 == 0 and y % 2 == 0] [(0, 0), (0, 2), (0, 4), (3, 0), (3, 2), (3, 4)] A: Comprehensions is very powerful in Haskell to a large extent because Haskell is functional, so it makes extremely much sense for them to be. Python is not functional so it makes less sense. You can make a lot of complex things with comprehensions in Python but it quickly becomes hard to read, thereby defeating the whole purpose (meaning you should do it some other way). However, as pointed out here, python does allow multiple generators in comprehensions.
Comprehensions in Python and Javascript are only very basic?
Looking at comprehensions in Python and Javascript, so far I can't see some of the main features that I consider most powerful in comprehensions in languages like Haskell. Do they allow things like multiple generators? Or are they just a basic map-filter form? If they don't allow multiple generators, I find them quite disappointing - why have such things been left out?
[ "Python allows multiple generators:\n>>> [(x,y,x*y) for x in range(1,5) for y in range(1,5)]\n[(1, 1, 1), (1, 2, 2), (1, 3, 3), (1, 4, 4), \n (2, 1, 2), (2, 2, 4), (2, 3, 6), (2, 4, 8), \n (3, 1, 3), (3, 2, 6), (3, 3, 9), (3, 4, 12),\n (4, 1, 4), (4, 2, 8), (4, 3, 12), (4, 4, 16)]\n\nAnd also restrictions:\n>>> [(x,y,x*y) for x in range(1,5) for y in range(1,5) if x*y > 8]\n[(3, 3, 9), (3, 4, 12), (4, 3, 12), (4, 4, 16)]\n\nUpdate: Javascript's syntax is similar (results from using the javascript shell on firefox):\nvar nums = [1, 2, 3, 21, 22, 30];\nvar s = eval('[[i,j] for each (i in nums) for each (j in [3,4]) if (i%2 == 0)]');\ns.toSource();\n[[2, 3], [2, 4], [22, 3], [22, 4], [30, 3], [30, 4]]\n\n(For some reason, something about the context stuff is evaluated in in the javascript shell requires the eval indirection to have list comprehensions work. Javascript inside a <script> tag doesn't require that, of course)\n", "Yes, you can have multiple iterables in a Python list comprehension:\n>>> [(x,y) for x in range(2) for y in range(3)]\n[(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)]\n\n", "Add an if statement as well...\n>>> [(x,y) for x in range(5) for y in range(6) if x % 3 == 0 and y % 2 == 0]\n[(0, 0), (0, 2), (0, 4), (3, 0), (3, 2), (3, 4)]\n\n", "Comprehensions is very powerful in Haskell to a large extent because Haskell is functional, so it makes extremely much sense for them to be. Python is not functional so it makes less sense.\nYou can make a lot of complex things with comprehensions in Python but it quickly becomes hard to read, thereby defeating the whole purpose (meaning you should do it some other way).\nHowever, as pointed out here, python does allow multiple generators in comprehensions.\n" ]
[ 12, 3, 1, 1 ]
[]
[]
[ "haskell", "javascript", "list_comprehension", "python" ]
stackoverflow_0001543820_haskell_javascript_list_comprehension_python.txt
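One detail worth adding to the examples above: in Python, each later for clause can refer to variables bound by the earlier ones, which is what lets comprehensions express the dependent generators Haskell programmers expect. A small illustration:

>>> [(x, y) for x in range(1, 5) for y in range(1, x)]
[(2, 1), (3, 1), (3, 2), (4, 1), (4, 2), (4, 3)]

Here each inner range depends on the current value of x, exactly as nested generators would in a Haskell comprehension.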
Q: simple dropping elements from the list in python I'd like to achieve following effect a=[11, -1, -1, -1] msg=['one','two','tree','four'] msg[where a<0] ['two','tree','four'] In similar simple fashion (without nasty loops). PS. For curious people this if statement is working natively in one of functional languages. //EDIT I know that below text is different that the requirements above, but I've found what I wonted to acheave. I don't want to spam another answer in my own thread, so I've also find some nice solution, and I want to present it to you. filter(lambda x: not x.endswith('one'),msg) A: You can use list comprehensions for this. You need to match the items from the two lists, for which the zip function is used. This will generate a list of tuples, where each tuple contains one item from each of the original lists (i.e., [(11, 'one'), ...]). Once you have this, you can iterate over the result, check if the first element is below 0 and return the second element. See the linked Python docs for more details about the syntax. [y for (x, y) in zip(a, msg) if x < 0] The actual problem seems to be about finding items in the msg list that don't contain the string "one". This can be done directly: [m for m in msg if "one" not in m] A: [m for m, i in zip(msg, a) if i < 0] A: The answers already posted are good, but if you want an alternative you can look at numpy and at its arrays. >>> import numpy as np >>> a = np.array([11, -1, -1, -1]) >>> msg = np.array(['one','two','tree','four']) >>> a < 0 array([False, True, True, True], dtype=bool) >>> msg[a < 0] array(['two', 'tree', 'four'], dtype='|S4') I don't know how array indexing is implemented in numpy, but it is usually fast and problably rewritten in C. Compared to the other solutions, this should be more readable, but it requires numpy. A: I think [msg[i] for i in range(len(a)) if a[i]<0] will work for you.
simple dropping elements from the list in python
I'd like to achieve the following effect a=[11, -1, -1, -1] msg=['one','two','tree','four'] msg[where a<0] ['two','tree','four'] In a similarly simple fashion (without nasty loops). PS. For curious people, this if statement works natively in one of the functional languages. //EDIT I know that the text below differs from the requirements above, but I've found what I wanted to achieve. Rather than spam another answer in my own thread, I'll present the nice solution I found here: filter(lambda x: not x.endswith('one'),msg)
[ "You can use list comprehensions for this. You need to match the items from the two lists, for which the zip function is used. This will generate a list of tuples, where each tuple contains one item from each of the original lists (i.e., [(11, 'one'), ...]). Once you have this, you can iterate over the result, check if the first element is below 0 and return the second element. See the linked Python docs for more details about the syntax.\n[y for (x, y) in zip(a, msg) if x < 0]\n\nThe actual problem seems to be about finding items in the msg list that don't contain the string \"one\". This can be done directly:\n[m for m in msg if \"one\" not in m]\n\n", "[m for m, i in zip(msg, a) if i < 0]\n\n", "The answers already posted are good, but if you want an alternative you can look at numpy and at its arrays.\n>>> import numpy as np\n>>> a = np.array([11, -1, -1, -1])\n>>> msg = np.array(['one','two','tree','four'])\n>>> a < 0\narray([False, True, True, True], dtype=bool)\n\n>>> msg[a < 0]\narray(['two', 'tree', 'four'], dtype='|S4')\n\nI don't know how array indexing is implemented in numpy, but it is usually fast and problably rewritten in C. Compared to the other solutions, this should be more readable, but it requires numpy.\n", "I think [msg[i] for i in range(len(a)) if a[i]<0] will work for you.\n" ]
[ 10, 2, 1, 0 ]
[]
[]
[ "filter", "python" ]
stackoverflow_0001543456_filter_python.txt
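For completeness: from Python 2.7/3.1 onward the standard library also offers a direct tool for this select-by-mask pattern, itertools.compress, which keeps the items of one sequence wherever a parallel sequence of selectors is true:

from itertools import compress

a = [11, -1, -1, -1]
msg = ['one', 'two', 'tree', 'four']
print list(compress(msg, (x < 0 for x in a)))  # ['two', 'tree', 'four']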
Q: Non-intrusively unlock file on Windows Is there a way to unlock a file on Windows with a Python script? The file is exclusively locked by another process. I need a solution without killing or interrupting the locking process. I already had a look at portalocker, a portable locking implementation. But this needs a file handle to unlock, which I cannot get, as the file is already locked by the locking process. If there is no way, could someone lead me to the Windows API doc which describes the problem further? A: Anything you do will affect the other process. If that process thinks it has a lock on the file, then breaking the lock means that the program has unexpected behaviour and could break or corrupt things. Thus only do this if you know exactly what will happen. The API used by the other program is probably the Win32 LockFile function (see MSDN). A: If you only need to infrequently read the locked file, you might try to use the Volume Shadow Copy Service
Non-intrusively unlock file on Windows
Is there a way to unlock a file on Windows with a Python script? The file is exclusively locked by another process. I need a solution without killing or interrupting the locking process. I already had a look at portalocker, a portable locking implementation. But this needs a file handle to unlock, which I cannot get, as the file is already locked by the locking process. If there is no way, could someone lead me to the Windows API doc which describes the problem further?
[ "Anything you do will affect the other process if that process thinks it has a lock on the file then breaking the lock means that the program has unexpected brhaviour and could brek or corrupt things.\nThus only do this if you know exactly what will happen.\nThe api used by the other program probably uses msdn LockFile\n", "If you only need to infrequently read the locked file, you might try to use the Volume Shadow Copy Service\n" ]
[ 1, 1 ]
[]
[]
[ "file", "locking", "python", "winapi", "windows" ]
stackoverflow_0001544275_file_locking_python_winapi_windows.txt
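To make the constraint concrete: Windows byte-range locks belong to the handle (and process) that took them, which is why they cannot be released non-intrusively from outside. A sketch of the locking side using the standard library's msvcrt module (Windows only; data.bin is a placeholder file opened for update):

import msvcrt

f = open('data.bin', 'r+b')
msvcrt.locking(f.fileno(), msvcrt.LK_NBLCK, 1)   # lock the first byte, non-blocking
# ... exclusive work on the file ...
f.seek(0)
msvcrt.locking(f.fileno(), msvcrt.LK_UNLCK, 1)   # only this handle can release it
f.close()

A second process running the same LK_NBLCK call on that byte gets an IOError until the owner unlocks or exits, which is why the answers point at the Volume Shadow Copy Service for read-only access.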
Q: Django Reusable Apps I came across many resources about the difference between Django projects and reusable apps, most prominently the DjangoCon talk, and the Pinax Project. However, being a newbie, writing my own projects and reusable software seems to be a bit challenging. I don't quite understand where models go (and how apps can be flexible and permissive), where the templates go, and how the different apps mesh together. Are there any tutorials on creating a project with reusable apps? A good practices page? Most preferably, a sample project with its own apps (rather than depending on external apps)? I am aiming to understand the architecture of a project and the interaction between apps rather than just building reusable apps. Most tutorials I came across online are about building a reusable app, or building a simple monolithic blog app that only depends on builtin or django.contrib modules. A: James Bennett's Practical Django Projects does a pretty good job of covering those topics in general and even includes a chapter specifically on "Writing Reusable Django Applications" that goes through an example of splitting one of the example projects in the book out into its own app. A: You can watch the video (DjangoCon 2008: Reusable Apps) - http://www.youtube.com/watch?v=A-S0tqpPga4 and get the idea of how to use it. There are a lot of reusable apps at Google, djangosnippets, git, etc. Most popular: django-contact-form - feedback form; django-debug-toolbar - watch SQL queries etc.; django-registration + django-profiles - skip registration routines; django-mptt - use tree structure; django-pagination - useful per-page viewer; django-stdimage or sorl-thumbnail - image routines; south - schema migrations; Read the sample docs and save your dev time. Good luck! A: If you want to see "sample projects with reusable apps interacting with each other," there's no better place to go than downloading Pinax, cloning one of their sample projects (just follow the docs) and reading through the code carefully.
Django Reusable Apps
I came across many resources about the difference between Django projects and reusable apps, most prominently the DjangoCon talk, and the Pinax Project. However, being a newbie, writing my own projects and reusable software seems to be a bit challenging. I don't quite understand where models go (and how apps can be flexible and permissive), where the templates go, and how the different apps mesh together. Are there any tutorials on creating a project with reusable apps? A good practices page? Most preferably, a sample project with its own apps (rather than depending on external apps)? I am aiming to understand the architecture of a project and the interaction between apps rather than just building reusable apps. Most tutorials I came across online are about building a reusable app, or building a simple monolithic blog app that only depends on builtin or django.contrib modules.
[ "James Bennett's Practical Django Projects does a pretty good job of covering those topics in general and even includes a chapter specifically on \"Writing Reusable Django Applications\" that goes through an example of splitting one of the example projects in the book out into its own app.\n", "You can watch video (DjangoCon 2008: Reusable Apps) - http://www.youtube.com/watch?v=A-S0tqpPga4 and get the idea, how to use it.\nThere are a lot of reusapbe apps at Google, djangosnippets, git, etc. Most popular:\n\ndjango-contact-form - feedback form;\ndjango-debug-toolbar - watch sql queries and etc;\ndjango-registration + django-profiles - skip regs routines;\ndjango-mptt - use tree structure;\ndjango-pagination - usefull per-page viewer;\ndjango-stdimage or sorl-thumbnail - image routines;\nsouth - schema migrations;\n\nRead samples docs and save your dev-time. Good luck!\n", "If you want to see \"sample projects with reusable apps interacting with each other,\" there's no better place to go than downloading Pinax, cloning one of their sample projects (just follow the docs) and reading through the code carefully.\n" ]
[ 4, 3, 3 ]
[]
[]
[ "django", "django_apps", "python" ]
stackoverflow_0001539485_django_django_apps_python.txt
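A minimal sketch of how the pieces usually mesh on disk; every name here is invented for illustration. The reusable app lives on the Python path as its own package, and the project refers to it only from settings and urls:

myproject/            # the project: settings, root urlconf, deployment glue
    __init__.py
    settings.py
    urls.py
reusable_app/         # the app: importable from anywhere on the Python path
    __init__.py
    models.py
    views.py
    urls.py
    templates/reusable_app/item_list.html

# myproject/settings.py
INSTALLED_APPS = (
    'django.contrib.contenttypes',
    'django.contrib.auth',
    'reusable_app',
)

# myproject/urls.py (Django 1.x style)
from django.conf.urls.defaults import patterns, include
urlpatterns = patterns('',
    (r'^items/', include('reusable_app.urls')),  # the app stays URL-agnostic
)

Models and templates stay inside the app (templates namespaced under a directory named after the app); the project decides which apps to install and where their URLs mount, and that is the whole interaction surface between the two.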
Q: Condense this Python statement without destroying readability I'm pretty new to Python still, so I'm trying to figure out how to do this and need some help. I use return codes to verify that my internal functions return successfully. For example (from internal library functions): result = some_function(arg1,arg2) if result != OK: return result or (from main script level): result = some_function(arg1,arg2) if result != OK: abort_on_error("Could not complete 'some_function': %s" % messages(result)) Can I get this down to one line without making it unreadable? EDIT: Some people are recognizing that exceptions might be a better choice. I wanted to save exceptions only for very 'exceptional' scenario capturing. The return codes may be expected to fail at times, and I thought it was generally bad practice to use exceptions for this scenario. A: Could you use exceptions to indicate failure, rather than return codes? Then most of your if result != OK: statements would simply go away. A: pythonic: An idea or piece of code which closely follows the most common idioms of the Python language, rather than implementing code using concepts common to other languages.[...] Does not imply writing on-liners! A: It sounds as if you want something like.. if result = some_function(arg1, arg2): return result This is very deliberately not possible in Python. It's too common a typo to write if a = b instead of if a == b, and allowing this mixes assignment with flow-control. If this is necessary, split it into two lines: x = some_function() if x: print "Function returned True" A more practical example of this is.. result = re.match("a", "b") if result: print result.groups() (more correctly, you should do if result is not None: in this case, although the above works) In your specific case ("to verify that my internal functions return successfully"), it sounds like you should use exceptions. If everything is fine, just return whatever you wish. If something goes badly, raise an exception. Exceptions in Python aren't like many other languages - for example, they are used internally flow control (such as the StopIteration exception) I'd consider the following far more Pythonic than using return codes: #!/usr/bin/env python2.6 def some_function(arg1, arg2): if arg1 + arg2 > 5: return "some data for you" else: raise ValueError("Could not complete, arg1+arg2 was too small") Then, you can call the function in a single line: return some_function(3, 2) This either returns the value, or raises an exception, which you could handle the exception somewhere sensible: def main(): try: result = some_function(3, 5) except ValueError, errormsg: print errormsg sys.exit(1) else: print "Everything is perfect, the result was {0}".format(result) Or if this case is actually an error, simply let halt the application with a nice stack trace. Yes, it's much longer than one line, but the idea behind Python is brevity, but explicitness and readability. Basically, if the function can no longer continue, raise an exception. Handle this exception either where you can recover from the problem, or present the user with an error message.. unless you are writing a library, in which case leave the exception to run up the stack to the calling code Or, in poem form: $ python -m this The Zen of Python, by Tim Peters Beautiful is better than ugly. Explicit is better than implicit. Simple is better than complex. Complex is better than complicated. Flat is better than nested. Sparse is better than dense. Readability counts. 
Special cases aren't special enough to break the rules. Although practicality beats purity. Errors should never pass silently. Unless explicitly silenced. In the face of ambiguity, refuse the temptation to guess. There should be one-- and preferably only one --obvious way to do it. Although that way may not be obvious at first unless you're Dutch. Now is better than never. Although never is often better than *right* now. If the implementation is hard to explain, it's a bad idea. If the implementation is easy to explain, it may be a good idea. Namespaces are one honking great idea -- let's do more of those! Finally, it might be worth reading over "PEP 8", the style guide for Python. It may answer some of your questions, such as "Are one-liner if statements 'pythonic'?" Compound statements (multiple statements on the same line) are generally discouraged. Yes: if foo == 'blah': do_blah_thing() do_one() do_two() do_three() Rather not: if foo == 'blah': do_blah_thing() do_one(); do_two(); do_three() A: If you insist on not using exceptions, then I would write two lines (one line will be too long here): res = some_function(arg1, arg2) return res if res != OK else ... By the way, I recommend you to come up with some static type of values your function returns (despite of dynamic typing in Python). For example, you may return "either int or None". You can put such a description into a docstring. In case you have int result values and int error codes you may distinguish them by introducing an error class: class ErrorCode(int): pass and then checking if the result isinstance of ErrorCode.
Condense this Python statement without destroying readability
I'm pretty new to Python still, so I'm trying to figure out how to do this and need some help. I use return codes to verify that my internal functions return successfully. For example (from internal library functions): result = some_function(arg1,arg2) if result != OK: return result or (from main script level): result = some_function(arg1,arg2) if result != OK: abort_on_error("Could not complete 'some_function': %s" % messages(result)) Can I get this down to one line without making it unreadable? EDIT: Some people are recognizing that exceptions might be a better choice. I wanted to save exceptions only for very 'exceptional' scenario capturing. The return codes may be expected to fail at times, and I thought it was generally bad practice to use exceptions for this scenario.
[ "Could you use exceptions to indicate failure, rather than return codes? Then most of your if result != OK: statements would simply go away.\n", "pythonic:\n\nAn idea or piece of code which closely follows the most common idioms of the Python language, rather than implementing code using concepts common to other languages.[...]\n\nDoes not imply writing on-liners!\n", "It sounds as if you want something like..\nif result = some_function(arg1, arg2):\n return result\n\nThis is very deliberately not possible in Python. It's too common a typo to write if a = b instead of if a == b, and allowing this mixes assignment with flow-control. If this is necessary, split it into two lines:\nx = some_function()\nif x:\n print \"Function returned True\"\n\nA more practical example of this is..\nresult = re.match(\"a\", \"b\")\nif result:\n print result.groups()\n\n(more correctly, you should do if result is not None: in this case, although the above works)\nIn your specific case (\"to verify that my internal functions return successfully\"), it sounds like you should use exceptions. If everything is fine, just return whatever you wish. If something goes badly, raise an exception.\nExceptions in Python aren't like many other languages - for example, they are used internally flow control (such as the StopIteration exception)\nI'd consider the following far more Pythonic than using return codes:\n#!/usr/bin/env python2.6\ndef some_function(arg1, arg2):\n if arg1 + arg2 > 5:\n return \"some data for you\"\n else:\n raise ValueError(\"Could not complete, arg1+arg2 was too small\")\n\nThen, you can call the function in a single line:\nreturn some_function(3, 2)\n\nThis either returns the value, or raises an exception, which you could handle the exception somewhere sensible:\ndef main():\n try:\n result = some_function(3, 5)\n except ValueError, errormsg:\n print errormsg\n sys.exit(1)\n else:\n print \"Everything is perfect, the result was {0}\".format(result)\n\nOr if this case is actually an error, simply let halt the application with a nice stack trace.\nYes, it's much longer than one line, but the idea behind Python is brevity, but explicitness and readability.\nBasically, if the function can no longer continue, raise an exception. Handle this exception either where you can recover from the problem, or present the user with an error message.. unless you are writing a library, in which case leave the exception to run up the stack to the calling code\nOr, in poem form:\n$ python -m this\nThe Zen of Python, by Tim Peters\n\nBeautiful is better than ugly.\nExplicit is better than implicit.\nSimple is better than complex.\nComplex is better than complicated.\nFlat is better than nested.\nSparse is better than dense.\nReadability counts.\nSpecial cases aren't special enough to break the rules.\nAlthough practicality beats purity.\nErrors should never pass silently.\nUnless explicitly silenced.\nIn the face of ambiguity, refuse the temptation to guess.\nThere should be one-- and preferably only one --obvious way to do it.\nAlthough that way may not be obvious at first unless you're Dutch.\nNow is better than never.\nAlthough never is often better than *right* now.\nIf the implementation is hard to explain, it's a bad idea.\nIf the implementation is easy to explain, it may be a good idea.\nNamespaces are one honking great idea -- let's do more of those!\nFinally, it might be worth reading over \"PEP 8\", the style guide for Python. 
It may answer some of your questions, such as \"Are one-liner if statements 'pythonic'?\"\n\nCompound statements (multiple statements on the same line) are generally discouraged.\nYes:\nif foo == 'blah':\n do_blah_thing()\ndo_one()\ndo_two()\ndo_three()\n\nRather not:\nif foo == 'blah': do_blah_thing()\ndo_one(); do_two(); do_three()\n\n\n", "If you insist on not using exceptions, then I would write two lines (one line will be too long here):\nres = some_function(arg1, arg2)\nreturn res if res != OK else ...\n\nBy the way, I recommend you to come up with some static type of values your function returns (despite of dynamic typing in Python). For example, you may return \"either int or None\". You can put such a description into a docstring.\nIn case you have int result values and int error codes you may distinguish them by introducing an error class:\nclass ErrorCode(int): pass\n\nand then checking if the result isinstance of ErrorCode.\n" ]
[ 10, 3, 3, 1 ]
[ "In addition to exceptions, using a decorator is a good solution to this problem:\n# Create a function that creates a decorator given a value to fail on...\ndef fail_unless(ok_val):\n def _fail_unless(f):\n def g(*args, **kwargs):\n val = f(*args, **kwargs)\n if val != ok_val:\n print 'CALLING abort_on_error...'\n else:\n return val\n return g\n return _fail_unless\n\n\n# Now you can use the decorator on any function you'd like to fail \n@fail_unless('OK')\ndef no_negatives(n):\n if n < 0:\n return 'UH OH!'\n else:\n return 'OK'\n\nIn practice:\n>>> no_negatives(9)\n'OK'\n>>> no_negatives(0)\n'OK'\n>>> no_negatives(-1)\n'CALLING abort_on_error...'\n\nI know the syntax defining fail_unless is a little tricky if you're not used to decorators and function closures but the application of fail_unless() is quite nice no?\n" ]
[ -1 ]
[ "python" ]
stackoverflow_0001544350_python.txt
Q: Storing parameters in a class, and how to access them I'm writing a program that randomly assembles mathematical expressions using the values stored in this class. The operators are stored in a dictionary along with the number of arguements they need. The arguements are stored in a list. (the four x's ensure that the x variable gets chosen often) depth, ratio, method and riddle are other values needed. I put these in a class so they'd be in one place, where I can go to change them. Is this the best pythonic way to do this? It seems that I can't refer to them by Params.depth. This produces the error 'Params has no attribute 'depth'. I have to create an instance of Params() (p = Params()) and refer to them by p.depth. I'm faily new to Python. Thanks class Params(object): def __init__(self): object.__init__(self) self.atoms =['1.0','2.0','3.0','4.0','5.0','6.0','7.0','8.0','9.0','x','x','x','x'] self.operators = {'+': 2, '-': 2, '*': 2, '/': 2,'+': 2, '-': 2, '*': 2, '/': 2, '**': 2, '%': 2} self.depth = 1 self.ratio = .4 self.method = '' self.riddle = '1 + np.sin(x)' A: What you have there are object properties. You mean to use class variables: class Params(object): atoms =['1.0','2.0','3.0','4.0','5.0','6.0','7.0','8.0','9.0','x','x','x','x'] operators = {'+': 2, '-': 2, '*': 2, '/': 2,'+': 2, '-': 2, '*': 2, '/': 2, '**': 2, '%': 2} depth = 1 ratio = .4 method = '' riddle = '1 + np.sin(x)' # This works fine: Params.riddle It's fairly common in Python to do this, since pretty much everyone agrees that Params.riddle is a lot nicer to type than Params['riddle']. If you find yourself doing this a lot you may want to use this recipe which makes things a bit easier and much clearer semantically. Warning: if that Params class gets too big, an older, grumpier Pythonista may appear and tell you to just move all that crap into its own module. A: You can do this in a class, but I'd prefer to just put it in a dictionary. Dictionaries are the all-purpose container type in Python, and they're great for this sort of thing. params = { atoms: ['1.0','2.0','3.0','4.0','5.0','6.0','7.0','8.0','9.0','x','x','x','x'], operators: {'+': 2, '-': 2, '*': 2, '/': 2,'+': 2, '-': 2, '*': 2, '/': 2, '**': 2, '%': 2}, depth: 1, ratio: .4, method: '', riddle = '1 + np.sin(x)' } Now you can do params['depth'] or params['atoms'][2]. Admittedly, this is slightly more verbose than the object form, but personally I think it's worth it to avoid polluting the namespace with unnecessary class definitions.
Storing parameters in a class, and how to access them
I'm writing a program that randomly assembles mathematical expressions using the values stored in this class. The operators are stored in a dictionary along with the number of arguments they need. The arguments are stored in a list. (the four x's ensure that the x variable gets chosen often) depth, ratio, method and riddle are other values needed. I put these in a class so they'd be in one place, where I can go to change them. Is this the best pythonic way to do this? It seems that I can't refer to them by Params.depth. This produces the error 'Params has no attribute 'depth'. I have to create an instance of Params() (p = Params()) and refer to them by p.depth. I'm fairly new to Python. Thanks class Params(object): def __init__(self): object.__init__(self) self.atoms =['1.0','2.0','3.0','4.0','5.0','6.0','7.0','8.0','9.0','x','x','x','x'] self.operators = {'+': 2, '-': 2, '*': 2, '/': 2,'+': 2, '-': 2, '*': 2, '/': 2, '**': 2, '%': 2} self.depth = 1 self.ratio = .4 self.method = '' self.riddle = '1 + np.sin(x)'
[ "What you have there are object properties. You mean to use class variables:\nclass Params(object):\n atoms =['1.0','2.0','3.0','4.0','5.0','6.0','7.0','8.0','9.0','x','x','x','x']\n operators = {'+': 2, '-': 2, '*': 2, '/': 2,'+': 2, '-': 2, '*': 2, '/': 2, '**': 2, '%': 2}\n depth = 1\n ratio = .4\n method = ''\n riddle = '1 + np.sin(x)'\n\n# This works fine:\nParams.riddle\n\nIt's fairly common in Python to do this, since pretty much everyone agrees that Params.riddle is a lot nicer to type than Params['riddle']. If you find yourself doing this a lot you may want to use this recipe which makes things a bit easier and much clearer semantically.\nWarning: if that Params class gets too big, an older, grumpier Pythonista may appear and tell you to just move all that crap into its own module.\n", "You can do this in a class, but I'd prefer to just put it in a dictionary. Dictionaries are the all-purpose container type in Python, and they're great for this sort of thing.\nparams = {\n atoms: ['1.0','2.0','3.0','4.0','5.0','6.0','7.0','8.0','9.0','x','x','x','x'],\n operators: {'+': 2, '-': 2, '*': 2, '/': 2,'+': 2, '-': 2, '*': 2, '/': 2, '**': 2, '%': 2},\n depth: 1,\n ratio: .4,\n method: '',\n riddle = '1 + np.sin(x)'\n}\n\nNow you can do params['depth'] or params['atoms'][2]. Admittedly, this is slightly more verbose than the object form, but personally I think it's worth it to avoid polluting the namespace with unnecessary class definitions.\n" ]
[ 7, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001544672_python.txt
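A side-by-side sketch of the two approaches from the answers above (class attributes vs. a plain dict), trimmed to two fields for brevity; Python 3:

class Params:
    depth = 1     # class attribute: readable as Params.depth, no instance needed
    ratio = 0.4

params = {'depth': 1, 'ratio': 0.4}   # dict alternative: keys must be quoted strings

print(Params.depth)      # 1
print(params['depth'])   # 1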
Q: (i'm close - i think) Python loop through list of subdomains with selenium starting with a base URL, I'm trying to have selenium loop through a short list of subdomains in csv format (ie: one column of 20 subdomains) and printing the html for each. I'm having trouble figuring it out. Thanks! from selenium import selenium import unittest, time, re, csv, logging subds = csv.reader(open('listofsubdomains.txt', 'rb')) for subd in subds: try: class Untitled(unittest.TestCase): def setUp(self): self.verificationErrors = [] self.selenium = selenium("localhost", 4444, "*firefox", "http://www.sourcedomain.com") self.selenium.start() def test_untitled(self): sel = self.selenium sel.open(subd[0]) html = sel.get_html_source() print html def tearDown(self): self.selenium.stop() self.assertEqual([], self.verificationErrors) if __name__ == "__main__": unittest.main() except Exception, e: print>>sys.stderr, "Url % not processed: error (%s) % (url, e)" A: You're defining the same function again and again in the body of the class. The class is completely created before unittest.main() starts, so only one test method will remain in the class.
(i'm close - i think) Python loop through list of subdomains with selenium
starting with a base URL, I'm trying to have selenium loop through a short list of subdomains in csv format (ie: one column of 20 subdomains) and printing the html for each. I'm having trouble figuring it out. Thanks! from selenium import selenium import unittest, time, re, csv, logging subds = csv.reader(open('listofsubdomains.txt', 'rb')) for subd in subds: try: class Untitled(unittest.TestCase): def setUp(self): self.verificationErrors = [] self.selenium = selenium("localhost", 4444, "*firefox", "http://www.sourcedomain.com") self.selenium.start() def test_untitled(self): sel = self.selenium sel.open(subd[0]) html = sel.get_html_source() print html def tearDown(self): self.selenium.stop() self.assertEqual([], self.verificationErrors) if __name__ == "__main__": unittest.main() except Exception, e: print>>sys.stderr, "Url % not processed: error (%s) % (url, e)"
[ "You're defining the same function again and again in the body of the class. The class is completely created before unittest.main() starts, so only one test method will remain in the class.\n" ]
[ 1 ]
[]
[]
[ "for_loop", "python", "selenium" ]
stackoverflow_0001544701_for_loop_python_selenium.txt
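One common fix for the problem the answer describes is to attach one test method per CSV row to the class before unittest.main() runs; a sketch (Python 3; the file name is assumed from the question, and the selenium call is replaced by a placeholder assertion):

import csv
import unittest

class SubdomainTests(unittest.TestCase):
    pass

def make_test(subdomain):
    def test(self):
        # A real test would open the subdomain with selenium here.
        self.assertTrue(subdomain)
    return test

with open('listofsubdomains.txt') as f:
    for i, row in enumerate(csv.reader(f)):
        setattr(SubdomainTests, 'test_subdomain_%d' % i, make_test(row[0]))

if __name__ == '__main__':
    unittest.main()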
Q: Python one-line "for" expression I'm not sure if I need a lambda, or something else. But still, I need the following: I have an array = [1,2,3,4,5]. I need to put this array, for instance, into another array. But write it all in one line. for item in array: array2.append(item) I know that this is completely possible to iterate through the items and make it one-line. But googling and reading manuals didn't help me that much... if you can just give me a hint or name this thing so that I could find what that is, I would really appreciate it. Update: let's say this: array2 = SOME FANCY EXPRESSION THAT IS GOING TO GET ALL THE DATA FROM THE FIRST ONE (the example is NOT real. I'm just trying to iterate through different chunks of data, but that's the best I could come up with) A: The keyword you're looking for is list comprehensions: >>> x = [1, 2, 3, 4, 5] >>> y = [2*a for a in x if a % 2 == 1] >>> print(y) [2, 6, 10] A: for item in array: array2.append (item) Or, in this case: array2 += array A: If you really only need to add the items in one array to another, the '+' operator is already overloaded to do that, incidentally: a1 = [1,2,3,4,5] a2 = [6,7,8,9] a1 + a2 --> [1, 2, 3, 4, 5, 6, 7, 8, 9] A: Even array2.extend(array1) will work. A: If you're trying to copy the array: array2 = array[:]
Python one-line "for" expression
I'm not sure if I need a lambda, or something else. But still, I need the following: I have an array = [1,2,3,4,5]. I need to put this array, for instance, into another array. But write it all in one line. for item in array: array2.append(item) I know that this is completely possible to iterate through the items and make it one-line. But googling and reading manuals didn't help me that much... if you can just give me a hint or name this thing so that I could find what that is, I would really appreciate it. Update: let's say this: array2 = SOME FANCY EXPRESSION THAT IS GOING TO GET ALL THE DATA FROM THE FIRST ONE (the example is NOT real. I'm just trying to iterate through different chunks of data, but that's the best I could come up with)
[ "The keyword you're looking for is list comprehensions:\n>>> x = [1, 2, 3, 4, 5]\n>>> y = [2*a for a in x if a % 2 == 1]\n>>> print(y)\n[2, 6, 10]\n\n", "for item in array: array2.append (item)\n\nOr, in this case:\narray2 += array\n\n", "If you really only need to add the items in one array to another, the '+' operator is already overloaded to do that, incidentally:\na1 = [1,2,3,4,5]\na2 = [6,7,8,9]\na1 + a2\n--> [1, 2, 3, 4, 5, 6, 7, 8, 9]\n\n", "Even array2.extend(array1) will work.\n", "If you're trying to copy the array:\narray2 = array[:]\n\n" ]
[ 121, 29, 3, 3, 2 ]
[]
[]
[ "lambda", "python" ]
stackoverflow_0001545050_lambda_python.txt
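The one-line options from the answers, side by side; all four produce an equal copy (Python 3):

array = [1, 2, 3, 4, 5]

array2 = [item for item in array]   # list comprehension
array3 = array + []                 # concatenation builds a new list
array4 = array[:]                   # slice copy
array5 = []
array5.extend(array)                # extend in place

print(array2 == array3 == array4 == array5)   # True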
Q: python numpy savetxt Can someone indicate what I am doing wrong here? import numpy as np a = np.array([1,2,3,4,5],dtype=int) b = np.array(['a','b','c','d','e'],dtype='|S1') np.savetxt('test.txt',zip(a,b),fmt="%i %s") The output is: Traceback (most recent call last): File "loadtxt.py", line 6, in <module> np.savetxt('test.txt',zip(a,b),fmt="%i %s") File "/Users/tom/Library/Python/2.6/site-packages/numpy/lib/io.py", line 785, in savetxt fh.write(format % tuple(row) + '\n') TypeError: %d format: a number is required, not numpy.string_ A: You need to construct your array differently: z = np.array(zip([1,2,3,4,5], ['a','b','c','d','e']), dtype=[('int', int), ('str', '|S1')]) np.savetxt('test.txt', z, fmt='%i %s') when you're passing a sequence, savetxt performs an asarray(sequence) call and the resulting array is of type |S4, that is, all elements are strings! That's why you see this error. A: If you want to save a CSV file you can also use the function rec2csv (included in matplotlib.mlab) >>> from matplotlib.mlab import rec2csv >>> rec = array([(1.0, 2), (3.0, 4)], dtype=[('x', float), ('y', int)]) >>> rec = array(zip([1,2,3,4,5], ['a','b','c','d','e']), dtype=[('x', int), ('y', str)]) >>> rec2csv(rec, 'recordfile.txt', delimiter=' ') hopefully, one day pylab's developers will implement decent support for writing csv files. A: I think the problem you are having is that you are passing tuples through the formatting string and it can't interpret the tuple with %i. Try using fmt="%s", assuming this is what you are looking for as the output: 1 a 2 b 3 c 4 d 5 e
python numpy savetxt
Can someone indicate what I am doing wrong here? import numpy as np a = np.array([1,2,3,4,5],dtype=int) b = np.array(['a','b','c','d','e'],dtype='|S1') np.savetxt('test.txt',zip(a,b),fmt="%i %s") The output is: Traceback (most recent call last): File "loadtxt.py", line 6, in <module> np.savetxt('test.txt',zip(a,b),fmt="%i %s") File "/Users/tom/Library/Python/2.6/site-packages/numpy/lib/io.py", line 785, in savetxt fh.write(format % tuple(row) + '\n') TypeError: %d format: a number is required, not numpy.string_
[ "You need to construct you array differently:\nz = np.array(zip([1,2,3,4,5], ['a','b','c','d','e']), dtype=[('int', int), ('str', '|S1')])\nnp.savetxt('test.txt', z, fmt='%i %s')\n\nwhen you're passing a sequence, savetext performs asarray(sequence) call and resulting array is of type |S4, that is all elements are strings! that's why you see this error.\n", "If you want to save a CSV file you can also use the function rec2csv (included in matplotlib.mlab)\n>>> from matplotlib.mlab import rec2csv\n>>> rec = array([(1.0, 2), (3.0, 4)], dtype=[('x', float), ('y', int)])\n>>> rec = array(zip([1,2,3,4,5], ['a','b','c','d','e']), dtype=[('x', int), ('y', str)])\n>>> rec2csv(rec, 'recordfile.txt', delimiter=' ')\n\nhopefully, one day pylab's developers will implement a decent support to writing csv files.\n", "I think the problem you are having is that you are passing tuples through the formating string and it can't interpret the tuple with %i. Try using fmt=\"%s\", assuming this is what you are looking for as the output:\n1 a\n2 b\n3 c\n4 d\n5 e\n\n" ]
[ 12, 4, 1 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0001544948_numpy_python.txt
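The structured-array fix from the first answer, adjusted so it also runs on Python 3 and current NumPy (zip must be materialized with list(), and a unicode dtype replaces '|S1'); a sketch, with the field names chosen here for illustration:

import numpy as np

a = [1, 2, 3, 4, 5]
b = ['a', 'b', 'c', 'd', 'e']

# One record per (number, letter) pair; fmt applies to each record.
z = np.array(list(zip(a, b)), dtype=[('num', int), ('ch', 'U1')])
np.savetxt('test.txt', z, fmt='%i %s')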
Q: Is there a free python library for phone calling? I'm writing a small Python script to notify me when a certain condition is met. I used smtplib which does the emailing for me, but I also want the script to call my cell phone as well. I can't find a free library for phone calls. Does anyone know any? A: Make the calls using Skype, and use the Skype4Py API. If you want other suggestions, please specify how you want to make the call (modem? Some software bridge? What?). Also, might I suggest that you send an SMS instead of placing a call? You can do that via Skype too, btw. A: Twilio can make calls through their API. Pay as you go. Worked well for wakeup calls for me. A: I've used Skype4Py very successfully. Keep in mind though it does require Skype to be installed and costs the standard rate for SkypeOut.
Is there a free python library for phone calling?
I'm writing a small Python script to notify me when a certain condition is met. I used smtplib which does the emailing for me, but I also want the script to call my cell phone as well. I can't find a free library for phone calls. Does anyone know any?
[ "Make the calls using Skype, and use the Skype4Py API.\nIf you want other suggestions, please specify how you want to make the call (modem? Some software bridge? What?).\nAlso, might I suggest that you send an SMS instead of placing a call? You can do that via Skype too, btw.\n", "Twilio can make calls through their API. Pay as you go. Worked well for wakeup calls for me.\n", "I've used Skype4Py very successfully. Keep in mind though it does require Skype to be installed and costs the standard rate for SkypeOut.\n" ]
[ 8, 4, 1 ]
[]
[]
[ "phone_call", "python" ]
stackoverflow_0001544550_phone_call_python.txt
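For the Twilio suggestion, the modern Python helper library places a call roughly like this (a sketch only; the credentials, numbers, and TwiML URL are placeholders, and the service is pay-per-use rather than free):

from twilio.rest import Client

client = Client('ACCOUNT_SID', 'AUTH_TOKEN')   # placeholder credentials
call = client.calls.create(
    to='+15551234567',                         # placeholder numbers
    from_='+15557654321',
    url='http://demo.twilio.com/docs/voice.xml',  # TwiML telling Twilio what to say
)
print(call.sid)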
Q: How to apply Loop to working Python Selenium Script? I'm trying to figure out how to apply a for-loop to this script and I'm having a lot of trouble. I want to iterate through a list of subdomains which are stored in csv format (ie: one column with 20 subdomains) and print the html for each. They all have the same SourceDomain. Thanks! #Python 2.6 from selenium import selenium import unittest, time, re, csv, logging class Untitled(unittest.TestCase): def setUp(self): self.verificationErrors = [] self.selenium = selenium("localhost", 4444, "*firefox", "http://www.SourceDomain.com") self.selenium.start() def test_untitled(self): sel = self.selenium sel.open("/dns/www.subdomains.com.html") sel.wait_for_page_to_load("30000") html = sel.get_html_source() print html def tearDown(self): self.selenium.stop() self.assertEqual([], self.verificationErrors) if __name__ == "__main__": unittest.main() A: #Python 2.6 from selenium import selenium import unittest, time, re, csv, logging class Untitled(unittest.TestCase): def setUp(self): self.verificationErrors = [] self.selenium = selenium("localhost", 4444, "*firefox", "http://www.SourceDomain.com") self.selenium.start() def test_untitled(self): sel = self.selenium spamReader = csv.reader(open('your_file.csv')) for row in spamReader: sel.open(row[0]) sel.wait_for_page_to_load("30000") print sel.get_html_source() def tearDown(self): self.selenium.stop() self.assertEqual([], self.verificationErrors) if __name__ == "__main__": unittest.main() BTW, notice there's no need to wrap this script in a unittest test case. Even better, you don't need selenium for such a simple task (at least at first sight). Try this: import urllib2, csv def fetchsource(url): page = urllib2.urlopen(url) source = page.read() return source fooReader = csv.reader(open('your_file.csv')) for row in fooReader: # csv.reader yields rows (lists), so take the first column print fetchsource(row[0])
How to apply Loop to working Python Selenium Script?
I'm trying to figure out how to apply a for-loop to this script and I'm having a lot of trouble. I want to iterate through a list of subdomains which are stored in csv format (ie: one column with 20 subdomains) and print the html for each. They all have the same SourceDomain. Thanks! #Python 2.6 from selenium import selenium import unittest, time, re, csv, logging class Untitled(unittest.TestCase): def setUp(self): self.verificationErrors = [] self.selenium = selenium("localhost", 4444, "*firefox", "http://www.SourceDomain.com") self.selenium.start() def test_untitled(self): sel = self.selenium sel.open("/dns/www.subdomains.com.html") sel.wait_for_page_to_load("30000") html = sel.get_html_source() print html def tearDown(self): self.selenium.stop() self.assertEqual([], self.verificationErrors) if __name__ == "__main__": unittest.main()
[ "#Python 2.6\nfrom selenium import selenium\nimport unittest, time, re, csv, logging\n\nclass Untitled(unittest.TestCase):\n def setUp(self):\n self.verificationErrors = []\n self.selenium = selenium(\"localhost\", 4444, \"*firefox\", \"http://www.SourceDomain.com\")\n self.selenium.start()\n\n def test_untitled(self):\n sel = self.selenium\n spamReader = csv.reader(open('your_file.csv'))\n for row in spamReader:\n sel.open(row[0])\n sel.wait_for_page_to_load(\"30000\")\n print sel.get_html_source()\n\n def tearDown(self):\n self.selenium.stop()\n self.assertEqual([], self.verificationErrors)\n\nif __name__ == \"__main__\":\n unittest.main()\n\nBTW, notice there's no need to place this script wrapped inside a unittest testcase. Even better, you don't need selenium for such a simple task (at least at first sight).\nTry this:\nimport urllib2, csv\n\ndef fetchsource(url):\n page = urllib2.urlopen(url)\n source = page.read()\n return source\n\nfooReader = csv.reader(open('your_file.csv'))\nfor url in fooReader:\n print fetchsource(url)\n\n" ]
[ 3 ]
[]
[]
[ "csv", "list", "loops", "python", "selenium" ]
stackoverflow_0001545602_csv_list_loops_python_selenium.txt
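The same fetch loop with the requests library, if it happens to be available (Python 3; the CSV layout of one URL per row is assumed, as in the answer):

import csv
import requests

with open('your_file.csv') as f:
    for row in csv.reader(f):
        response = requests.get(row[0])   # row[0] is the URL column
        print(response.text)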
Q: Python mechanize doesn't click a button check the following script: from mechanize import Browser br = Browser() page = br.open('http://scottishladiespool.com/register.php') br.select_form(nr = 5) r = br.click(type = "submit", nr = 0) print r.data #prints username=&password1=&password2=&email=&user_hide_email=1&captcha_code=&user_msn=&user_yahoo=&user_web=&user_location=&user_month=&user_day=&user_year=&user_sig= that is, it doesn't add the name=value pair of the submit button (register=Register). Why is this happening? ClientForm is working properly on other pages, but on this one it is not. I've tried setting the disabled and readonly attributes of submit control to True, but it didn't solve the problem. A: There is a disabled=disabled attribute on the register button. This prevents the user from clicking and presumably mechanize respects the disabled attribute as well. You'll need to change the source code of that button. Enabling the control means completely removing the disabled=disabled text.
Python mechanize doesn't click a button
check the following script: from mechanize import Browser br = Browser() page = br.open('http://scottishladiespool.com/register.php') br.select_form(nr = 5) r = br.click(type = "submit", nr = 0) print r.data #prints username=&password1=&password2=&email=&user_hide_email=1&captcha_code=&user_msn=&user_yahoo=&user_web=&user_location=&user_month=&user_day=&user_year=&user_sig= that is, it doesn't add the name=value pair of the submit button (register=Register). Why is this happening? ClientForm is working properly on other pages, but on this one it is not. I've tried setting the disabled and readonly attributes of submit control to True, but it didn't solve the problem.
[ "There is a disabled=disabled attribute on the register button. This prevents the user from clicking and presumably mechanize respects the disabled attribute as well.\nYou'll need to change the source code of that button. Enabling the control means completely removing the disabled=disabled text. \n" ]
[ 2 ]
[]
[]
[ "clientform", "mechanize", "python" ]
stackoverflow_0001545698_clientform_mechanize_python.txt
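Rather than editing the page source, ClientForm controls can usually be re-enabled in code; a sketch against the old mechanize/ClientForm API (not verified on every version of the library):

from mechanize import Browser

br = Browser()
br.open('http://scottishladiespool.com/register.php')
br.select_form(nr=5)

control = br.form.find_control(type='submit', nr=0)
control.disabled = False   # clear the flag so register=Register is submitted

request = br.click(type='submit', nr=0)
print(request.data)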
Q: Is there a version of os.getcwd() that doesn't dereference symlinks? Possible Duplicate: How to get/set logical directory path in python I have a Python script that I run from a symlinked directory, and I call os.getcwd() in it, expecting to get the symlinked path I ran it from. Instead it gives me the "real" path, and in this case that's not helpful. I need it to actually give me the symlinked version. Does Python have a command for that? A: Workaround: os.getenv('PWD') A: In general this is not possible. os.getcwd() calls getcwd(3), and according to POSIX.1-2008 (IEEE Std 1003.1-2008): The pathname shall contain no components that are dot or dot-dot, or are symbolic links. os.getenv('PWD') is shell-dependent and will not work, for example, with sh from FreeBSD.
Is there a version of os.getcwd() that doesn't dereference symlinks?
Possible Duplicate: How to get/set logical directory path in python I have a Python script that I run from a symlinked directory, and I call os.getcwd() in it, expecting to get the symlinked path I ran it from. Instead it gives me the "real" path, and in this case that's not helpful. I need it to actually give me the symlinked version. Does Python have a command for that?
[ "Workaround: os.getenv('PWD')\n", "In general this is not possible. os.getcwd() calls getcwd(3), and according to POSIX.1-2008 (IEEE Std 1003.1-2008):\n\nThe pathname shall contain no components that are dot or dot-dot, or are symbolic links.\n\nos.getenv['PWD'] is shell-dependent and will not work for example with sh from FreeBSD.\n" ]
[ 17, 13 ]
[]
[]
[ "path", "python", "symlink" ]
stackoverflow_0001542803_path_python_symlink.txt
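A sketch combining the two answers: trust $PWD only when it still names the current directory, and fall back to os.getcwd() otherwise (the environment variable is shell-dependent, as noted above; the function name is hypothetical):

import os

def logical_cwd():
    pwd = os.environ.get('PWD')
    if pwd and os.path.exists(pwd) and os.path.samefile(pwd, os.curdir):
        return pwd           # the symlinked path, as the shell recorded it
    return os.getcwd()       # dereferenced fallback

print(logical_cwd())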
Q: How do I replace all punctuation in my string with "" in Python? If my string was: Business -- way's I'd like to turn this into: Business ways i.e. replace anything that is not a letter, digit, or space with "" A: Simple regular expression: import re >>> s = "Business -- way's" >>> s = re.sub(r'[^\w\s]', '', s) >>> s "Business  ways" A: Or, if you don't want to use a regular expression for some reason: ''.join([x for x in foo if x.isalpha() or x.isspace()])
How do I replace all punctuation in my string with "" in Python?
If my string was: Business -- way's I'd like to turn this into: Business ways i.e. replace anything that is not a letter, digit, or space with ""
[ "Simple regular expression:\nimport re\n\n>>> s = \"Business -- way's\"\n>>> s = re.sub(r'[^\\w\\s]', '', s)\n>>> s\n\"Business ways\"\n\n", "Or, if you don't want to use a regular expression for some reason:\n''.join([x for x in foo if x.isalpha() or x.isspace()])\n\n" ]
[ 16, 6 ]
[ "(regular expression) replace\n[[:punct:]]\n\nwith '' (if Python supports that).\n[] is a character class, [::] is posix class syntax. [:punct:] is punctuation, so the character class for all punctuation marks would be [[:punct:]]\nAn alternate way of the same thing is \\p and friends:\n\\p{IsPunct}\nSee just below \"Character Classes and other Special Escapes\" in http://perldoc.perl.org/perlre.html (yes, I know it's a Perl document, but this is more about regular expressions than Perl).\nThat being said, the first answer with [^\\w\\s] answers what you explained a little more explicitly. This was more just an explanation of how to do what your question asked.\n" ]
[ -3 ]
[ "python", "regex" ]
stackoverflow_0001545655_python_regex.txt
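A regex-free variant of the accepted answer, using the punctuation set from the string module (Python 3 translate API; note the doubled space left behind where -- was removed):

import string

s = "Business -- way's"
print(s.translate(str.maketrans('', '', string.punctuation)))
# Business  ways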
Q: How to replace the quote " and hyphen character in a string with nothing in Python? I'd like to replace " and - with "" (nothing), i.e. make them disappear. s = re.sub(r'[^\w\s]', '', s) this makes all punctuation disappear, but I just want those 2 characters. Thanks. A: I'm curious as to why you are using a regular expression for this simple string replacement. The only advantage that I can see is that you can do it in one line of code instead of two, but I personally think that a replacement method is clearer than a regex for something like this. The string object has a replace method - str.replace(old, new[, count]), so use replace("-", "") and replace("\"", ""). Note that my syntax might be a little off - I'm still a python beginner. A: re.sub('["-]+', '', s) A: In Python 2.6/2.7, you can use the helpful translate() method on strings. When using None as the first argument, this method has the special behavior of deleting all occurrences of any character in the second argument. >>> s = 'No- dashes or "quotes"' >>> s.translate(None, '"-') 'No dashes or quotes' Per SilentGhost's comment, this gets to be cumbersome pretty quickly in both <2.6 and >=3.0, because you have to explicitly create a translation table. That effort would only be worth it if you are performing this sort of operation a great deal. A: re.sub('[-"]', '', s) A: In Python 2.6: print 'Hey -- How are "you"?'.translate(None, '-"') Returns: Hey How are you?
How to replace the quote " and hyphen character in a string with nothing in Python?
I'd like to replace " and - with "" (nothing), i.e. make them disappear. s = re.sub(r'[^\w\s]', '', s) this makes all punctuation disappear, but I just want those 2 characters. Thanks.
[ "I'm curious as to why you are using a regular expression for this simple string replacement. The only advantage that I can see is that you can do it in one line of code instead of two, but I personally think that a replacement method is clearer than a regex for something like this.\nThe string object has a replace method - str.replace(old, new[, count]), so use replace(\"-\", \"\") and replace(\"\\\"\", \"\").\nNote that my syntax might be a little off - I'm still a python beginner.\n", "re.sub('[\"-]+', '', s)\n\n", "In Python 2.6/2.7, you can use the helpful translate() method on strings. When using None as the first argument, this method has the special behavior of deleting all occurences of any character in the second argument. \n>>> s = 'No- dashes or \"quotes\"'\n>>> s.translate(None, '\"-')\n'No dashes or quotes'\n\nPer SilentGhost's comment, this gets to be cumbersome pretty quickly in both <2.6 and >=3.0, because you have to explicitly create a translation table. That effort would only be worth it if you are performing this sort of operation a great deal.\n", "re.sub('[-\"]', '', s)\n", "In Python 2.6:\nprint 'Hey -- How are \"you\"?'.translate(None, '-\"')\n\nReturns: \nHey How are you?\n\n" ]
[ 6, 2, 2, 1, 0 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0001545878_python_regex.txt
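The Python 2.6 translate trick from the answers, in its Python 3 spelling (maketrans with two empty strings plus a deletion set):

s = 'Hey -- How are "you"?'
print(s.translate(str.maketrans('', '', '-"')))
# Hey  How are you?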
Q: web scraping a problem site I'm trying to scrape some information from a web site, but am having trouble reading the relevant pages. The pages seem to first send a basic setup, then more detailed info. My download attempts only seem to capture the basic setup. I've tried urllib and mechanize so far. Firefox and Chrome have no trouble displaying the pages, although I can't see the parts I want when I view page source. A sample url is https://personal.vanguard.com/us/funds/snapshot?FundId=0542&FundIntExt=INT I'd like, for example, average maturity and average duration from the lower right of the page. The problem isn't extracting that info from the page, it's downloading the page so that I can extract the info. A: The page uses JavaScript to load the data. Firefox and Chrome are only working because you have JavaScript enabled - try disabling it and you'll get a mostly empty page. Python isn't going to be able to do this by itself - your best compromise would be to control a real browser (Internet Explorer is easiest, if you're on Windows) from Python using something like Pamie. A: The website loads the data via ajax. Firebug shows the ajax calls. For the given page, the data is loaded from https://personal.vanguard.com/us/JSP/Funds/VGITab/VGIFundOverviewTabContent.jsf?FundIntExt=INT&FundId=0542 See the corresponding javascript code on the original page: <script>populator = new Populator({parentId: "profileForm:vanguardFundTabBox:tab0",execOnLoad:true, populatorUrl:"/us/JSP/Funds/VGITab/VGIFundOverviewTabContent.jsf?FundIntExt=INT&FundId=0542", inline:false,type:"once"}); </script> A: The reason is that it's performing AJAX calls after it loads. You will need to account for searching out those URLs to scrape its content as well. A: As RichieHindle mentioned, your best bet on Windows is to use the WebBrowser class to create an instance of an IE rendering engine and then use that to browse the site. The class gives you full access to the DOM tree, so you can do whatever you want with it. http://msdn.microsoft.com/en-us/library/system.windows.forms.webbrowser(loband).aspx
web scraping a problem site
I'm trying to scrape some information from a web site, but am having trouble reading the relevant pages. The pages seem to first send a basic setup, then more detailed info. My download attempts only seem to capture the basic setup. I've tried urllib and mechanize so far. Firefox and Chrome have no trouble displaying the pages, although I can't see the parts I want when I view page source. A sample url is https://personal.vanguard.com/us/funds/snapshot?FundId=0542&FundIntExt=INT I'd like, for example, average maturity and average duration from the lower right of the page. The problem isn't extracting that info from the page, it's downloading the page so that I can extract the info.
[ "The page uses JavaScript to load the data. Firefox and Chrome are only working because you have JavaScript enabled - try disabling it and you'll get a mostly empty page.\nPython isn't going to be able to do this by itself - your best compromise would be to control a real browser (Internet Explorer is easiest, if you're on Windows) from Python using something like Pamie.\n", "The website loads the data via ajax. Firebug shows the ajax calls. For the given page, the data is loaded from https://personal.vanguard.com/us/JSP/Funds/VGITab/VGIFundOverviewTabContent.jsf?FundIntExt=INT&FundId=0542\nSee the corresponding javascript code on the original page:\n<script>populator = new Populator({parentId:\n\"profileForm:vanguardFundTabBox:tab0\",execOnLoad:true,\n populatorUrl:\"/us/JSP/Funds/VGITab/VGIFundOverviewTabContent.jsf?FundIntExt=INT&FundId=0542\",\ninline:fals e,type:\"once\"});\n</script>\n\n", "The reason why is because it's performing AJAX calls after it loads. You will need to account for searching out those URLs to scrape it's content as well.\n", "As RichieHindle mentioned, your best bet on Windows is to use the WebBrowser class to create an instance of an IE rendering engine and then use that to browse the site.\nThe class gives you full access to the DOM tree, so you can do whatever you want with it.\nhttp://msdn.microsoft.com/en-us/library/system.windows.forms.webbrowser(loband).aspx\n" ]
[ 2, 1, 0, 0 ]
[]
[]
[ "python", "screen_scraping" ]
stackoverflow_0001546089_python_screen_scraping.txt
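Following the second answer, once the AJAX endpoint is known it can be fetched directly; a Python 3 sketch with urllib (the Vanguard URL is the one quoted above and may no longer resolve):

import urllib.request

url = ('https://personal.vanguard.com/us/JSP/Funds/VGITab/'
       'VGIFundOverviewTabContent.jsf?FundIntExt=INT&FundId=0542')
with urllib.request.urlopen(url) as response:
    html = response.read().decode('utf-8', errors='replace')
print(html[:500])   # inspect the first part of the returned fragment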