Columns: content (string, 85 to 101k chars); title (string, 0 to 150 chars); question (string, 15 to 48k chars); answers (list); answers_scores (list); non_answers (list); non_answers_scores (list); tags (list); name (string, 35 to 137 chars)
Q: Perl equivalent of (Python-) list comprehension I'm looking for ways to express this Python snippet in Perl: data = {"A": None, "B": "yes", "C": None} key_list = [k for k in data if data[k]] # in this case the same as filter(lambda k: data[k], data) but let's ignore that So looking at it one way, I just want the keys where the values are not None or undef. Looking at it another way, what I want is the concise Perl equivalent of a list comprehension with a conditional. A: I think you want grep: #!/usr/bin/env perl use strict; use warnings; my %data = ( A => undef, B => 'yes', C => undef ); my @keys = grep { defined $data{$_} } keys %data; print "Key: $_\n" for @keys; I also think that I type too slowly, and that I should reload the page before posting answers. By the way, either a value of 0 or undef can be a good way to handle null values, but make sure you remember which you're using. A false value and an undefined value aren't the same thing in Perl. To clarify: undef returns false in a boolean test, but so does 0. If 0 is a valid value, then you want to explicitly test for definedness, not simply truth. (I mention it because James went for 0 and I went the other way, and you may or may not know if it matters.) A: Use grep: #!/usr/bin/perl use strict; use warnings; my %data = ("A" => 0, "B" => "yes", "C" => 0 ); my @keys = grep { $data{$_} } keys %data; Grep returns the values from the list on the right-hand side for which the expression in braces evaluates to a true value. As telemachus points out, you want to make sure you understand true/false values in Perl. This question has a good overview of truth in Perl. You'll likely want a look at map, which applies an expression in braces to each element of a list and returns the result. An example would be: my %data = ("A" => 0, "B" => 1, "C" => 0 ); my @modified_data = map { $_ + 1 } values %data; print join ' ', values %data, "\n"; print join ' ', @modified_data, "\n"; A: For a variation on the theme have a look at autobox (see its implementations autobox::Core and Moose::Autobox ) use autobox::Core; my %data = ( A => undef, B => 'yes', C => undef ); my $key_list = %data->keys->grep( sub { defined $data{$_} } ); say "Key: $_" for @$key_list; # => Key: B Moose::Autobox comes with key/value 'kv' which makes the code DRYer: my $key_list = %data->kv->grep( sub{ defined $_->[1] } )->map( sub{ $_->[0] } ); Here is a more explicit and even longer version of the above: my $key_list = %data->kv ->grep( sub { my ($k, $v) = @$_; defined $v } ) ->map( sub { my ($k, $v) = @$_; $k } );
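For readers comparing the two languages side by side, here is a minimal, runnable Python sketch of the equivalences discussed in this thread; the data is the question's own example.

# Three Python spellings of "keys whose values pass a test",
# mirroring what Perl's grep does over keys %data.
data = {"A": None, "B": "yes", "C": None}

keys_comprehension = [k for k in data if data[k]]        # list comprehension
keys_filter = list(filter(lambda k: data[k], data))      # filter() equivalent
keys_defined = [k for k in data if data[k] is not None]  # explicit "definedness",
                                                         # like Perl's defined()
print(keys_comprehension)  # ['B']
print(keys_filter)         # ['B']
print(keys_defined)        # ['B']

Note the same truth-versus-definedness distinction the first answer raises: the first two forms drop falsy values such as 0 or "", while the last keeps everything that is simply not None.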
Perl equivalent of (Python-) list comprehension
I'm looking for ways to express this Python snippet in Perl: data = {"A": None, "B": "yes", "C": None} key_list = [k for k in data if data[k]] # in this case the same as filter(lambda k: data[k], data) but let's ignore that So looking at it one way, I just want the keys where the values are None or undef. Looking at it another way, what I want is the concise perl equivalent of a list comprehension with conditional.
[ "I think you want grep:\n#!/usr/bin/env perl\nuse strict;\nuse warnings;\n\nmy %data = ( A => undef, B => 'yes', C => undef );\n\nmy @keys = grep { defined $data{$_} } keys %data;\n\nprint \"Key: $_\\n\" for @keys;\n\nI also think that I type too slowly, and that I should reload the page before posting answers. By the way, either a value of 0 or undef can be a good way to handle null values, but make sure you remember which you're using. A false value and and undefined value aren't the same thing in Perl. To clarify: undef returns false in a boolean test, but so does 0. If 0 is a valid value, then you want to explicitly test for definedness, not simply truth. (I mention it because James went for 0 and I went the other way, and you may or may not know if it matters.)\n", "Use grep:\n#!/usr/bin/perl\n\nuse strict;\nuse warnings;\n\nmy %data = (\"A\" => 0, \"B\" => \"yes\", \"C\" => 0 );\nmy @keys = grep { $data{$_} } keys %data;\n\nGrep returns the values from the list on the right-hand side for which the expression in braces evaluates to a true value. As telemachus points out, you want to make sure you understand true/false values in Perl. This question has a good overview of truth in Perl. \nYou'll likely want a look at map, which applies an expression in braces to each element of a list and returns the result. An example would be:\nmy @data = (\"A\" => 0, \"B\" => 1, \"C\" => 0 );\nmy @modified_data = map { $data{$_} + 1 } @data;\nprint join ' ', @data, \"\\n\";\nprint join ' ', @modified_data, \"\\n\";\n\n", "For variation on the theme have a look at autobox (see its implementations autobox::Core and Moose::Autobox )\nuse autobox::Core;\n\nmy %data = ( A => undef, B => 'yes', C => undef );\nmy $key_list = %data->keys->grep( sub { defined $data{$_} } );\n\nsay \"Key: $_\" for @$key_list;\n\n# => Key: B\n\n\nMoose::Autobox comes with key/value 'kv' which makes the code DRYer:\nmy $key_list = %data->kv->grep( sub{ defined $_->[1] } )->map( sub{ $_->[0] } );\n\nHere is a more explicit and even longer version of above: \nmy $key_list = %data->kv\n ->grep( sub { my ($k, $v) = @$_; defined $v } )\n ->map( sub { my ($k, $v) = @$_; $k } );\n\n" ]
[ 20, 13, 6 ]
[]
[]
[ "list_comprehension", "perl", "python" ]
stackoverflow_0001112444_list_comprehension_perl_python.txt
Q: Safety of Python 'eval' For List Deserialization Are there any security exploits that could occur in this scenario: eval(repr(unsanitized_user_input), {"__builtins__": None}, {"True":True, "False":False}) where unsanitized_user_input is a str object. The string is user-generated and could be nasty. Assuming our web framework hasn't failed us, it's a real honest-to-god str instance from the Python builtins. If this is dangerous, can we do anything to the input to make it safe? We definitely don't want to execute anything contained in the string. See also: Funny blog post about eval safety Previous Question Blog: Fast deserialization in Python The larger context which is (I believe) not essential to the question is that we have thousands of these: repr([unsanitized_user_input_1, unsanitized_user_input_2, unsanitized_user_input_3, unsanitized_user_input_4, ...]) in some cases nested: repr([[unsanitized_user_input_1, unsanitized_user_input_2], [unsanitized_user_input_3, unsanitized_user_input_4], ...]) which are themselves converted to strings with repr(), put in persistent storage, and eventually read back into memory with eval. Eval deserializes the strings from persistent storage much faster than pickle and simplejson. The interpreter is Python 2.5 so json and ast aren't available. No C modules are allowed and cPickle is not allowed. A: It is indeed dangerous and the safest alternative is ast.literal_eval (see the ast module in the standard library). You can of course build and alter an ast to provide e.g. evaluation of variables and the like before you eval the resulting AST (when it's down to literals). The possible exploit of eval starts with any object it can get its hands on (say True here), goes via .__class__ to its type object, etc. up to object, then gets its subclasses... basically it can get to ANY object type and wreak havoc. I can be more specific but I'd rather not do it in a public forum (the exploit is well known, but considering how many people still ignore it, revealing it to wannabe script kiddies could make things worse... just avoid eval on unsanitized user input and live happily ever after!-). A: If you can prove beyond doubt that unsanitized_user_input is a str instance from the Python built-ins with nothing tampered, then this is always safe. In fact, it'll be safe even without all those extra arguments since eval(repr(astr)) == astr for all such string objects. You put in a string, you get back out a string. All you did was escape and unescape it. This all leads me to think that eval(repr(x)) isn't what you want--no code will ever be executed unless someone gives you an unsanitized_user_input object that looks like a string but isn't, but that's a different question--unless you're trying to copy a string instance in the slowest way possible :D. A: With everything as you describe, it is technically safe to eval repr'd strings; however, I'd avoid doing it anyway as it's asking for trouble: There could be some weird corner case that breaks your assumption that only repr'd strings are stored (e.g. a bug or a different pathway into the storage that doesn't repr), which instantly becomes a code injection exploit where it might otherwise be unexploitable. Even if everything is OK now, assumptions might change at some point, and unsanitised data may get stored in that field by someone unaware of the eval code. Your code may get reused (or worse, copy+pasted) into a situation you didn't consider. 
As Alex Martelli pointed out, in Python 2.6 and higher, there is ast.literal_eval which will safely handle both strings and other simple datatypes like tuples. This is probably the safest and most complete solution. Another possibility however is to use the string-escape codec. This is much faster than eval (about 10 times according to timeit), available in earlier versions than literal_eval, and should do what you want: >>> s = 'he\nllo\' wo"rld\0\x03\r\n\tabc' >>> repr(s)[1:-1].decode('string-escape') == s True (The [1:-1] is to strip the outer quotes repr adds.) A: Generally, you should never allow anyone to post code. So-called "paid professional programmers" have a hard-enough time writing code that actually works. Accepting code from the anonymous public -- without benefit of formal QA -- is the worst of all possible scenarios. Professional programmers -- without good, solid formal QA -- will make a hash of almost any web site. Indeed, I'm reverse engineering some unbelievably bad code from paid professionals. The idea of allowing a non-professional -- unencumbered by QA -- to post code is truly terrifying. A: repr([unsanitized_user_input_1, unsanitized_user_input_2, ... ... unsanitized_user_input is a str object You shouldn't have to serialise strings to store them in a database. If these are all strings, as you mentioned - why can't you just store the strings in a db.StringListProperty? The nested entries might be a bit more complicated, but why is this the case? When you have to resort to eval to get data from the database, you're probably doing something wrong. Couldn't you store each unsanitized_user_input_x as its own db.StringProperty row, and group them by a reference field? Either of those may not be applicable, since I've no idea what you're trying to achieve, but my point is - can you not structure the data in a way where you don't have to rely on eval (and also rely on it not being a security issue)?
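To make the safe alternative concrete, here is a minimal sketch of the literal_eval round trip the answers recommend. Note the caveat from the question itself: ast.literal_eval requires Python 2.6+, so it would not have been available on the asker's 2.5 interpreter, and the string-escape codec shown above is Python-2-only.

import ast

# Round-trip a hostile-looking payload through repr() and back.
serialized = repr(["user 'input'", 'with "quotes"', "and\nnewlines"])

# literal_eval accepts only literals (strings, numbers, tuples, lists,
# dicts, booleans, None), so code embedded in the data can never run.
restored = ast.literal_eval(serialized)
assert restored == ["user 'input'", 'with "quotes"', "and\nnewlines"]

# Plain eval on the same storage would execute whatever it finds; a stored
# "__import__('os').system(...)" payload would actually run. Never do that.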
Safety of Python 'eval' For List Deserialization
Are there any security exploits that could occur in this scenario: eval(repr(unsanitized_user_input), {"__builtins__": None}, {"True":True, "False":False}) where unsanitized_user_input is a str object. The string is user-generated and could be nasty. Assuming our web framework hasn't failed us, it's a real honest-to-god str instance from the Python builtins. If this is dangerous, can we do anything to the input to make it safe? We definitely don't want to execute anything contained in the string. See also: Funny blog post about eval safety Previous Question Blog: Fast deserialization in Python The larger context which is (I believe) not essential to the question is that we have thousands of these: repr([unsanitized_user_input_1, unsanitized_user_input_2, unsanitized_user_input_3, unsanitized_user_input_4, ...]) in some cases nested: repr([[unsanitized_user_input_1, unsanitized_user_input_2], [unsanitized_user_input_3, unsanitized_user_input_4], ...]) which are themselves converted to strings with repr(), put in persistent storage, and eventually read back into memory with eval. Eval deserialized the strings from persistent storage much faster than pickle and simplejson. The interpreter is Python 2.5 so json and ast aren't available. No C modules are allowed and cPickle is not allowed.
[ "It is indeed dangerous and the safest alternative is ast.literal_eval (see the ast module in the standard library). You can of course build and alter an ast to provide e.g. evaluation of variables and the like before you eval the resulting AST (when it's down to literals).\nThe possible exploit of eval starts with any object it can get its hands on (say True here) and going via .__class_ to its type object, etc. up to object, then gets its subclasses... basically it can get to ANY object type and wreck havoc. I can be more specific but I'd rather not do it in a public forum (the exploit is well known, but considering how many people still ignore it, revealing it to wannabe script kiddies could make things worse... just avoid eval on unsanitized user input and live happily ever after!-).\n", "If you can prove beyond doubt that unsanitized_user_input is a str instance from the Python built-ins with nothing tampered, then this is always safe. In fact, it'll be safe even without all those extra arguments since eval(repr(astr)) = astr for all such string objects. You put in a string, you get back out a string. All you did was escape and unescape it.\nThis all leads me to think that eval(repr(x)) isn't what you want--no code will ever be executed unless someone gives you an unsanitized_user_input object that looks like a string but isn't, but that's a different question--unless you're trying to copy a string instance in the slowest way possible :D.\n", "With everything as you describe, it is technically safe to eval repred strings, however, I'd avoid doing it anyway as it's asking for trouble:\n\nThere could be some weird corner-case where your assumption that only repred strings are stored (eg. a bug / different pathway into the storage that doesn't repr instantly becmes a code injection exploit where it might otherwise be unexploitable)\nEven if everything is OK now, assumptions might change at some point, and unsanitised data may get stored in that field by someone unaware of the eval code.\nYour code may get reused (or worse, copy+pasted) into a situation you didn't consider.\n\nAs Alex Martelli pointed out, in python2.6 and higher, there is ast.literal_eval which will safely handle both strings and other simple datatypes like tuples. This is probably the safest and most complete solution.\nAnother possibility however is to use the string-escape codec. This is much faster than eval (about 10 times according to timeit), available in earlier versions than literal_eval, and should do what you want:\n>>> s = 'he\\nllo\\' wo\"rld\\0\\x03\\r\\n\\tabc'\n>>> repr(s)[1:-1].decode('string-escape') == s\nTrue\n\n(The [1:-1] is to strip the outer quotes repr adds.)\n", "Generally, you should never allow anyone to post code. \nSo called \"paid professional programmers\" have a hard-enough time writing code that actually works.\nAccepting code from the anonymous public -- without benefit of formal QA -- is the worst of all possible scenarios.\nProfessional programmers -- without good, solid formal QA -- will make a hash of almost any web site. Indeed, I'm reverse engineering some unbelievably bad code from paid professionals. \nThe idea of allowing a non-professional -- unencumbered by QA -- to post code is truly terrifying.\n", "\nrepr([unsanitized_user_input_1,\n unsanitized_user_input_2,\n ...\n\n... 
unsanitized_user_input is a str object\n\nYou shouldn't have to serialise strings to store them in a database..\nIf these are all strings, as you mentioned - why can't you just store the strings in a db.StringListProperty?\nThe nested entries might be a bit more complicated, but why is this the case? When you have to resort to eval to get data from the database, you're probably doing something wrong..\nCouldn't you store each unsanitized_user_input_x as it's own db.StringProperty row, and have group them by an reference field?\nEither of those may not be applicable, since I've no idea what you're trying to achieve, but my point is - can you not structure the data in a way you where don't have to rely on eval (and also rely on it not being a security issue)?\n" ]
[ 19, 8, 5, 3, 1 ]
[]
[]
[ "eval", "python" ]
stackoverflow_0001112665_eval_python.txt
Q: Is it possible to pass a variable out of a pdb session into the original interactive session? I am using pdb to examine a script having called run -d in an ipython session. It would be useful to be able to plot some of the variables but I need them in the main ipython environment in order to do that. So what I am looking for is some way to make a variable available back in the main interactive session after I quit pdb. If you set a variable in the topmost frame it does seem to be there in the ipython session, but this doesn't work for any frames further down. Something like export in the following: ipdb> myvar = [1,2,3] ipdb> p myvar [1, 2, 3] ipdb> export myvar ipdb> q In [66]: myvar Out[66]: [1, 2, 3] A: Per ipython's docs, and also a run? command from the ipython prompt, after execution, the IPython interactive namespace gets updated with all variables defined in the program (except for __name__ and sys.argv) By "defined in the program" (a slightly sloppy use of terms), it doesn't mean "anywhere within any nested functions found there" -- it means "in the globals() of the script/module you're running". If you're within any kind of nesting, globals()['myvar'] = [1,2,3] should still work fine, just like your hoped-for export would if it existed. Edit: If you're in a different module, you need to set the name in the globals of your original one -- after an import sys if needed, sys.modules["originalmodule"].myvar = [1, 2, 3] will do what you desire.
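A runnable sketch of the sys.modules technique from the answer; the function here stands in for a frame several levels down in the debugger, and '__main__' is the module a plain interpreter or script session uses:

import sys

def deep_function():
    # Imagine being stopped here in pdb/ipdb, frames below the top.
    # Writing into the globals of the session's main module makes the
    # value visible at the prompt after quitting the debugger:
    sys.modules['__main__'].myvar = [1, 2, 3]

deep_function()
print(myvar)  # [1, 2, 3] -- now a top-level name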
Is it possible to pass a variable out of a pdb session into the original interactive session?
I am using pdb to examine a script having called run -d in an ipython session. It would be useful to be able to plot some of the variables but I need them in the main ipython environment in order to do that. So what I am looking for is some way to make a variable available back in the main interactive session after I quit pdb. If you set a variable in the topmost frame it does seem to be there in the ipython session, but this doesn't work for any frames further down. Something like export in the following: ipdb> myvar = [1,2,3] ipdb> p myvar [1, 2, 3] ipdb> export myvar ipdb> q In [66]: myvar Out[66]: [1, 2, 3]
[ "Per ipython's docs, and also a run? command from the ipython prompt,\n\nafter execution, the IPython\n interactive namespace gets\n updated with all variables defined in the program (except for __name__\n and sys.argv)\n\nBy \"defined in the program\" (a slightly sloppy use of terms), it doesn't mean \"anywhere within any nested functions found there\" -- it means \"in the globals() of the script/module you're running. If you're within any kind of nesting, globals()['myvar'] = [1,2,3] should still work fine, just like your hoped-for export would if it existed.\nEdit: If you're in a different module, you need to set the name in the globals of your original one -- after an import sys if needed, sys.modules[\"originalmodule\"].myvar = [1, 2, 3] will do what you desire.\n" ]
[ 3 ]
[]
[]
[ "debugging", "ipython", "python" ]
stackoverflow_0001114080_debugging_ipython_python.txt
Q: Does main.py or app.yaml determine the URL used by the App Engine cron task in this example? In this sample code the URL of the app seems to be determined by this line within the app: application = webapp.WSGIApplication([('/mailjob', MailJob)], debug=True) but also by this line within the app handler of app.yaml: - url: /.* script: main.py However, the URL of the cron task is set by this line: url: /tasks/summary So it seems the cron utility will call "/tasks/summary" and, because of the app handler, this will cause main.py to be invoked. Does this mean that, as far as the cron is concerned, the line in the app that sets the URL is extraneous: application = webapp.WSGIApplication([('/mailjob', MailJob)], debug=True) . . . since the only URL needed by the cron task is the one defined in app.yaml? app.yaml application: yourappname version: 1 runtime: python api_version: 1 handlers: - url: /.* script: main.py cron.yaml cron: - description: daily mailing job url: /tasks/summary schedule: every 24 hours main.py #!/usr/bin/env python import cgi from google.appengine.ext import webapp from google.appengine.api import mail from google.appengine.api import urlfetch class MailJob(webapp.RequestHandler): def get(self): # Call your website using URL Fetch service ... url = "http://www.yoursite.com/page_or_service" result = urlfetch.fetch(url) if result.status_code == 200: doSomethingWithResult(result.content) # Send emails using Mail service ... mail.send_mail(sender="[email protected]", to="[email protected]", subject="Your account on YourSite.com has expired", body="Bla bla bla ...") return application = webapp.WSGIApplication([ ('/mailjob', MailJob)], debug=True) def main(): wsgiref.handlers.CGIHandler().run(application) if __name__ == '__main__': main() A: You could do it like this: app.yaml application: yourappname version: 1 runtime: python api_version: 1 handlers: - url: /tasks/.* script: main.py cron.yaml cron: - description: daily mailing job url: /tasks/summary schedule: every 24 hours main.py #!/usr/bin/env python import cgi import wsgiref.handlers from google.appengine.ext import webapp from google.appengine.api import mail from google.appengine.api import urlfetch class MailJob(webapp.RequestHandler): def get(self): # Call your website using URL Fetch service ... url = "http://www.yoursite.com/page_or_service" result = urlfetch.fetch(url) if result.status_code == 200: doSomethingWithResult(result.content) # Send emails using Mail service ... mail.send_mail(sender="[email protected]", to="[email protected]", subject="Your account on YourSite.com has expired", body="Bla bla bla ...") return application = webapp.WSGIApplication([ ('/tasks/summary', MailJob)], debug=True) def main(): wsgiref.handlers.CGIHandler().run(application) if __name__ == '__main__': main() A: Looks like you're reading this page (even though you don't give us the URL). The configuration and code as presented won't run successfully: the cron task will try to visit URL path /tasks/summary, app.yaml will make that execute main.py, but the latter only sets up a handler for /mailjob, so the cron task's attempt will fail with a 404 status code. (Note: the answer's listing adds the import wsgiref.handlers that the original snippet was missing.)
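The mismatch the second answer points out is just two routing layers disagreeing. A framework-free sketch of the idea, with plain dicts standing in for app.yaml and the WSGI route table (no App Engine SDK required):

# Layer 1: app.yaml maps URL patterns to scripts (here: everything to main.py).
yaml_routes = {'/': 'main.py'}                 # url: /.*
# Layer 2: the WSGI application inside main.py maps exact paths to handlers.
wsgi_routes = {'/mailjob': 'MailJob'}          # webapp.WSGIApplication([...])

def dispatch(path):
    if not any(path.startswith(prefix) for prefix in yaml_routes):
        return '404: no script matched in app.yaml'
    handler = wsgi_routes.get(path)
    return handler or '404: script ran, but it has no handler for this path'

print(dispatch('/tasks/summary'))  # what cron requests -> the second 404
print(dispatch('/mailjob'))        # 'MailJob' -- but cron never requests this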
Does main.py or app.yaml determine the URL used by the App Engine cron task in this example?
In this sample code the URL of the app seems to be determined by this line within the app: application = webapp.WSGIApplication([('/mailjob', MailJob)], debug=True) but also by this line within the app handler of app.yaml: - url: /.* script: main.py However, the URL of the cron task is set by this line: url: /tasks/summary So it seems the cron utility will call "/tasks/summary" and because of the app handler, this will cause main.py to be invoked. Does this mean that, as far as the cron is concerned, the line in the app that sets the URL is extraneous: application = webapp.WSGIApplication([('/mailjob', MailJob)], debug=True) . . . since the only URL needed by the cron task is the one defined in app.yaml. app.yaml application: yourappname version: 1 runtime: python api_version: 1 handlers: - url: /.* script: main.py cron.yaml cron: - description: daily mailing job url: /tasks/summary schedule: every 24 hours main.py #!/usr/bin/env python import cgi from google.appengine.ext import webapp from google.appengine.api import mail from google.appengine.api import urlfetch class MailJob(webapp.RequestHandler): def get(self): # Call your website using URL Fetch service ... url = "http://www.yoursite.com/page_or_service" result = urlfetch.fetch(url) if result.status_code == 200: doSomethingWithResult(result.content) # Send emails using Mail service ... mail.send_mail(sender="[email protected]", to="[email protected]", subject="Your account on YourSite.com has expired", body="Bla bla bla ...") return application = webapp.WSGIApplication([ ('/mailjob', MailJob)], debug=True) def main(): wsgiref.handlers.CGIHandler().run(application) if __name__ == '__main__': main()
[ "You could do it like this:\napp.yaml\napplication: yourappname\nversion: 1\nruntime: python\napi_version: 1\n\nhandlers:\n\n- url: /tasks/.*\n script: main.py\n\ncron.yaml\ncron:\n - description: daily mailing job\n url: /tasks/summary\n schedule: every 24 hours\n\nmain.py\n#!/usr/bin/env python \n\nimport cgi\nfrom google.appengine.ext import webapp\nfrom google.appengine.api import mail\nfrom google.appengine.api import urlfetch \n\nclass MailJob(webapp.RequestHandler):\n def get(self):\n\n # Call your website using URL Fetch service ...\n url = \"http://www.yoursite.com/page_or_service\"\n result = urlfetch.fetch(url)\n\n if result.status_code == 200:\n doSomethingWithResult(result.content)\n\n # Send emails using Mail service ...\n mail.send_mail(sender=\"[email protected]\",\n to=\"[email protected]\",\n subject=\"Your account on YourSite.com has expired\",\n body=\"Bla bla bla ...\")\n return\n\napplication = webapp.WSGIApplication([\n ('/tasks/summary', MailJob)], debug=True)\n\ndef main():\n wsgiref.handlers.CGIHandler().run(application)\n\nif __name__ == '__main__':\n main()\n\n", "Looks like you're reading this page (even though you don't give us the URL). The configuration and code as presented won't run successfully: the cron task will try to visit URL path /tasks/summary, app.yaml will make that execute main.py, but the latter only sets up a handler for /mailjob, so the cron task's attempt will fail with a 404 status code.\n" ]
[ 3, 1 ]
[]
[]
[ "cron", "google_app_engine", "python", "url_routing" ]
stackoverflow_0001114601_cron_google_app_engine_python_url_routing.txt
Q: Python converts string into tuple Example: regular_string = "%s %s" % ("foo", "bar") result = {} result["somekey"] = regular_string, print result["somekey"] # ('foo bar',) Why is result["somekey"] now a tuple and not a string? A: Because of the comma at the end of the line. A: When you write result["somekey"] = regular_string, Python reads result["somekey"] = (regular_string,) (x,) is the syntax for a tuple with a single element. Parentheses are assumed. And you really end up putting a tuple there, instead of a string.
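A minimal demonstration of the trailing-comma rule both answers describe:

regular_string = "%s %s" % ("foo", "bar")

with_comma = regular_string,       # the trailing comma builds a 1-tuple
without_comma = regular_string     # no comma: still a plain string

print(type(with_comma))            # <type 'tuple'> (or <class 'tuple'> on 3.x)
print(type(without_comma))         # <type 'str'>   (or <class 'str'>)

# It is the comma, not the parentheses, that makes the tuple:
assert (regular_string) == regular_string   # parentheses alone do nothing
assert (regular_string,) == with_comma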
Python converts string into tuple
Example: regular_string = "%s %s" % ("foo", "bar") result = {} result["somekey"] = regular_string, print result["somekey"] # ('foo bar',) Why result["somekey"] tuple now not string?
[ "Because of comma at the end of the line.\n", "When you write\nresult[\"somekey\"] = regular_string,\n\nPython reads\nresult[\"somekey\"] = (regular_string,)\n\n(x,) is the syntax for a tuple with a single element. Parentheses are assumed. And you really end up putting a tuple, instead of a string there.\n" ]
[ 16, 9 ]
[]
[]
[ "python" ]
stackoverflow_0001114813_python.txt
Q: How to set default button in PyGTK? I have a very simple window with 2 buttons - one for cancel, one for apply. How do I set the apply button as the default one? (So that when I press Enter, the "apply" button is pressed.) However, I want to set focus to the first input widget (I can't use grab_focus() on the button). Any suggestions? Edit: After wuub's answer it looks right visually. However, when I press Enter in a different widget, it doesn't run the callback of the default button. Example code: import os, sys, pygtk, gtk def run(button, window): dialog = gtk.MessageDialog(window, gtk.DIALOG_MODAL, gtk.MESSAGE_INFO, gtk.BUTTONS_OK, "OK") dialog.run() dialog.destroy() window = gtk.Window() window.connect("destroy", gtk.main_quit) vbox = gtk.VBox(spacing = 10) entry = gtk.Entry() vbox.pack_start(entry) button = gtk.Button(stock = gtk.STOCK_SAVE) button.connect("clicked", run, window) button.set_flags(gtk.CAN_DEFAULT) window.set_default(button) vbox.pack_start(button) window.add(vbox) window.show_all() gtk.main() EDIT2: Every input widget that should activate the default widget must have widget.set_activates_default(True) called on it. A: http://www.pygtk.org/docs/pygtk/class-gtkdialog.html#method-gtkdialog--set-default-response http://www.pygtk.org/docs/pygtk/class-gtkwindow.html#method-gtkwindow--set-default
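Combining the question's own example with its EDIT2 fix gives a complete working sketch (PyGTK 2.x API, drawn from the thread; untested here):

import gtk

window = gtk.Window()
window.connect("destroy", gtk.main_quit)

vbox = gtk.VBox(spacing=10)

entry = gtk.Entry()
entry.set_activates_default(True)   # Enter in the entry fires the default button
vbox.pack_start(entry)

button = gtk.Button(stock=gtk.STOCK_SAVE)
button.connect("clicked", lambda *args: gtk.main_quit())
button.set_flags(gtk.CAN_DEFAULT)   # the button is allowed to be the default
vbox.pack_start(button)

window.add(vbox)
window.set_default(button)          # make Save the default button...
entry.grab_focus()                  # ...while keyboard focus starts in the entry
window.show_all()
gtk.main()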
How to set default button in PyGTK?
I have very simple window where I have 2 buttons - one for cancel, one for apply. How to set the button for apply as default one? (When I press enter, "apply" button is pressed) However, I want to set focus to the first input widget (I can't use grab_focus() on the button) Any suggestions? Edit: After wuub's answer it works visually good. However, when I press the button in different widget, it doesn't run callback of the default button. Example code: import os, sys, pygtk, gtk def run(button, window): dialog = gtk.MessageDialog(window, gtk.DIALOG_MODAL, gtk.MESSAGE_INFO, gtk.BUTTONS_OK, "OK") dialog.run() dialog.destroy() window = gtk.Window() window.connect("destroy", gtk.main_quit) vbox = gtk.VBox(spacing = 10) entry = gtk.Entry() vbox.pack_start(entry) button = gtk.Button(stock = gtk.STOCK_SAVE) button.connect("clicked", run, window) button.set_flags(gtk.CAN_DEFAULT) window.set_default(button) vbox.pack_start(button) window.add(vbox) window.show_all() gtk.main() EDIT2: Every input which can activate default widget must be ran widget.set_activates_default(True)
[ "http://www.pygtk.org/docs/pygtk/class-gtkdialog.html#method-gtkdialog--set-default-response\nhttp://www.pygtk.org/docs/pygtk/class-gtkwindow.html#method-gtkwindow--set-default\n" ]
[ 4 ]
[]
[]
[ "pygtk", "python", "user_interface" ]
stackoverflow_0001114568_pygtk_python_user_interface.txt
Q: Sorting disk I/O errors in Python How do I sort out (distinguish) an error derived from a "disk full condition" from "trying to write to a read-only file system"? I don't want to fill my HD to find out :) What I want is to know how to catch each exception, so my code can say something to the user when he is trying to write to a ReadOnly FS and another message if the user is trying to write a file in a disk that is full. A: Once you catch IOError, e.g. with an except IOError, e: clause in Python 2.*, you can examine e.errno to find out exactly what kind of I/O error it was (unfortunately in a way that's not necessarily fully portable among different operating systems). See the errno module in Python standard library; opening a file for writing on a R/O filesystem (on a sensible OS) should produce errno.EPERM, errno.EACCES or better yet errno.EROFS ("read-only filesystem"); if the filesystem is R/W but there's no space left you should get errno.ENOSPC ("no space left on device"). But you will need to experiment on the OSes you care about (with a small USB key filling it up should be easy;-). There's no way to use different except clauses depending on errno -- such clauses must be distinguished by the class of exceptions they catch, not by attributes of the exception instance -- so you'll need an if/else or other kind of dispatching within a single except IOError, e: clause. A: On a read-only filesystem, the files themselves will be marked as read-only. Any attempt to open a read-only file for writing (O_WRONLY or O_RDWR) will fail. On UNIX-like systems, the errno EACCES will be set. >>> file('/etc/resolv.conf', 'a') Traceback (most recent call last): File "<stdin>", line 1, in <module> IOError: [Errno 13] Permission denied: '/etc/resolv.conf' In contrast, attempts to write to a full file may result in ENOSPC. May is critical; the error may be delayed until fsync or close. >>> file('/dev/full', 'a').write('\n') close failed in file object destructor: IOError: [Errno 28] No space left on device
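A runnable sketch of the errno dispatch the first answer describes (Python 2.6+ syntax; the path is hypothetical, exact errno values vary by OS, and, as the second answer notes, ENOSPC may only surface at flush/close time):

import errno

def explain_write_failure(path, data):
    try:
        with open(path, 'w') as f:   # the close inside the with is covered too
            f.write(data)
    except IOError as e:             # on Python 3, catch OSError instead
        if e.errno == errno.EROFS:
            return "read-only filesystem"
        if e.errno in (errno.EACCES, errno.EPERM):
            return "permission denied"
        if e.errno == errno.ENOSPC:
            return "no space left on device"
        raise                        # an I/O error we did not anticipate
    return "ok"

print(explain_write_failure('/tmp/probe.txt', 'hello'))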
Sorting disk I/O errors in Python
How do I sort out (distinguish) an error derived from a "disk full condition" from "trying to write to a read-only file system"? I don't want to fill my HD to find out :) What I want is to know who to catch each exception, so my code can say something to the user when he is trying to write to a ReadOnly FS and another message if the user is trying to write a file in a disk that is full.
[ "Once you catch IOError, e.g. with an except IOError, e: clause in Python 2.*, you can examine e.errno to find out exactly what kind of I/O error it was (unfortunately in a way that's not necessarily fully portable among different operating systems).\nSee the errno module in Python standard library; opening a file for writing on a R/O filesystem (on a sensible OS) should produce errno.EPERM, errno.EACCES or better yet errno.EROFS (\"read-only filesystem\"); if the filesystem is R/W but there's no space left you should get errno.ENOSPC (\"no space left on device\"). But you will need to experiment on the OSes you care about (with a small USB key filling it up should be easy;-).\nThere's no way to use different except clauses depending on errno -- such clauses must be distinguished by the class of exceptions they catch, not by attributes of the exception instance -- so you'll need an if/else or other kind of dispatching within a single except IOError, e: clause.\n", "On a read-only filesystem, the files themselves will be marked as read-only. Any attempt to open a read-only file for writing (O_WRONLY or O_RDWR) will fail. On UNIX-like systems, the errno EACCES will be set.\n\n>>> file('/etc/resolv.conf', 'a')\nTraceback (most recent call last):\n File \"\", line 1, in \nIOError: [Errno 13] Permission denied: '/etc/resolv.conf'\n\nIn contrast, attempts to write to a full file may result in ENOSPC. May is critical; the error may be delayed until fsync or close.\n\n>>> file(/dev/full, 'a').write('\\n')\nclose failed in file object destructor:\nIOError: [Errno 28] No space left on device\n\n" ]
[ 10, 2 ]
[]
[]
[ "exception", "python" ]
stackoverflow_0001115203_exception_python.txt
Q: Making a Makefile How can I make a Makefile, since it's the best way when you distribute a program by source code? Remember that this is for a C++ program and I'm starting in the C development world. But is it possible to make a Makefile for my Python programs? A: From your question it sounds like a tutorial or an overview of what Makefiles actually do might benefit you. A good place to start is the GNU Make documentation. It includes the following overview "The make utility automatically determines which pieces of a large program need to be recompiled, and issues commands to recompile them." And its first three chapters cover: Overview of make An Introduction to Makefiles Writing Makefiles A: I use Makefiles for some Python projects, but this is highly dubious... I do things like: SITE_ROOT=/var/www/apache/... site_dist: cp -a assets/css build/$(SITE_ROOT)/css cp -a src/public/*.py build/$(SITE_ROOT) and so on. Makefiles are nothing but batch execution systems (and fairly complex ones at that). You can use your normal Python tools (to generate .pyc and others) the same way you would use GCC. PY_COMPILE_TOOL=pycompiler all: myfile.pyc cp myfile.pyc /usr/share/python/...wherever myfile.pyc: <deps> $(PY_COMPILE_TOOL) myfile.py Then $ make all And so on. Just treat your operations like any other. Your pycompiler might be something simple like: #!/usr/bin/python import py_compile py_compile.compile(file_var) or some variation on $ python -mcompileall . It is all the same. Makefiles are nothing special, just automated executions and the ability to check if files need updating. A: A simple Makefile usually consists of a set of targets, their dependencies, and the actions performed by each target: all: output.out output.out: dependency.o dependency2.o ld -o output.out dependency.o dependency2.o dependency.o: dependency.c gcc -c -o dependency.o dependency.c dependency2.o: dependency2.c gcc -c -o dependency2.o dependency2.c The target all (which is the first in the example) will be run when no target argument is specified in the make command, and it tries to build its dependencies in case they don't exist or are not up to date. A: How can I make a Makefile, since it's the best way when you distribute a program by source code? It's not. For example, KDE uses CMake, and Wesnoth uses SCons. I would suggest one of these systems instead; they are easier and more powerful than make. CMake can generate makefiles. :-) A: For Python programs, they're usually distributed with a setup.py script which uses distutils in order to build the software. distutils has extensive documentation which should be a good starting point. A: If you are asking about a portable form of creating Makefiles you can try to look at http://www.cmake.org/cmake/project/about.html
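For the Python half of the question, the distutils route mentioned in the answers usually amounts to one short file. A minimal, hypothetical setup.py (the names here are illustrative, not from the question):

# setup.py -- minimal distutils configuration
from distutils.core import setup

setup(
    name='myprogram',
    version='0.1',
    description='A program distributed by source code',
    py_modules=['myprogram'],   # ships myprogram.py
)

Users would then run python setup.py build and python setup.py install, which covers the role that a Makefile's all and install targets play for a C or C++ project.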
Making a Makefile
How I can make a Makefile, because it's the best way when you distribute a program by source code. Remember that this is for a C++ program and I'm starting in the C development world. But is it possible to make a Makefile for my Python programs?
[ "From your question it sounds like a tutorial or an overview of what Makefiles actually do might benefit you.\nA good places to start is the GNU Make documentation.\nIt includes the following overview \"The make utility automatically determines which pieces of a large program need to be recompiled, and issues commands to recompile them.\"\nAnd its first three chapters covers:\n\nOverview of make\nAn Introduction to Makefiles\nWriting Makefiles\n\n", "I use Makefiles for some Python projects, but this is highly dubious... I do things like:\nSITE_ROOT=/var/www/apache/...\n\nsite_dist:\n cp -a assets/css build/$(SITE_ROOT)/css\n cp -a src/public/*.py build/$(SITE_ROOT)\n\nand so on. Makefile are nothing but batch execution systems (and fairly complex ones at that). You can use your normal Python tools (to generate .pyc and others) the same way you would use GCC.\nPY_COMPILE_TOOL=pycompiler\n\nall: myfile.pyc\n cp myfile.pyc /usr/share/python/...wherever\nmyfile.pyc: <deps>\n $(PY_COMPILE_TOOL) myfile.py\n\nThen\n$ make all\n\nAnd so on. Just treat your operations like any other. Your pycompiler might be something simple like:\n#!/usr/bin/python\nimport py_compile\npy_compile.compile(file_var)\n\nor some variation on \n$ python -mcompileall .\n\nIt is all the same. Makefiles are nothing special, just automated executions and the ability to check if files need updating.\n", "A simple Makefile usually consists of a set of targets, its dependencies, and the actions performed by each target:\nall: output.out\n\noutput.out: dependency.o dependency2.o\n ld -o output.out dependency.o dependency2.o\n\ndependency.o: dependency.c\n gcc -o dependency.o dependency.c\n\ndependency2.o: dependency2.c\n gcc -o dependency2.o dependency2.c\n\nThe target all (which is the first in the example) and tries to build its dependencies in case they don't exist or are not up to date. will be run when no target argument is specified in the make command. \n", "\nHow i can make a MakeFile, because it's the best way when you distribuite a program by source code\n\nIt's not. For example, KDE uses CMake, and Wesnoth uses SCons. I would suggest one of these systems instead, they are easier and more powerful than make. CMake can generate makefiles. :-)\n", "For Python programs, they're usually distributed with a setup.py script which uses distutils in order to build the software. distutils has extensive documentation which should be a good starting point.\n", "If you are asking about a portable form of creating Makefiles you can try to look at http://www.cmake.org/cmake/project/about.html\n" ]
[ 11, 6, 4, 4, 3, 1 ]
[]
[]
[ "c", "c++", "makefile", "python" ]
stackoverflow_0001114667_c_c++_makefile_python.txt
Q: PyQt Automatic Repeating Forms I'm currently attempting to migrate a legacy VBA/Microsoft Access application to Python and PyQt. I've had no problems migrating any of the logic, and most of the forms have been a snap, as well. However, I've hit a problem on the most important part of the application--the main data-entry form. The form is basically a row of text boxes corresponding to fields in the database. The user simply enters data into a field, tabs to the next and repeats. When he comes to the end of the record/row, he tabs again, and the form automatically creates a new blank row for him to start entering data in again. (In actuality, it displays a "blank" row below the current new record, which the user can actually click into to start a new record as well.) It also allows the user to scroll up and down to see all the current subset of records he's working on. Is there a way to replicate this functionality in PyQt? I haven't managed to find a way to get Qt to do this easily. Access takes care of it automatically; no code outside the form is required. Is it that easy in PyQt (or even close), or is this something that's going to need to be programmed from scratch? A: You should look into the QSqlTableModel and QTableView objects. QSqlTableModel offers an abstraction of a relational table that can be used inside one of the Qt view classes, a QTableView for example. The functionality you describe can be implemented with moderate effort just by using these two classes. The QSqlTableModel also supports editing of database fields. My guess is that the only functionality you will have to manually implement is the "TAB" at the end of the table to create a new row, if you want to keep that. I don't know much about Access, but using the ODBC-SQL driver you should be able to use the actual Access database for your development or testing; there is some older information here. You might want to consider moving to SQLite, MySQL or another actual SQL database.
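A minimal sketch of the QSqlTableModel/QTableView pairing the answer suggests (PyQt4-era API to match the question; the database file and table name are hypothetical, and the Access-style tab-to-append behavior would still need custom code, e.g. calling model.insertRow()):

import sys
from PyQt4.QtGui import QApplication, QTableView
from PyQt4.QtSql import QSqlDatabase, QSqlTableModel

app = QApplication(sys.argv)

db = QSqlDatabase.addDatabase('QSQLITE')
db.setDatabaseName('records.db')    # assumes this SQLite file exists
db.open()

model = QSqlTableModel()
model.setTable('entries')           # hypothetical table name
model.select()                      # fetch rows; user edits write back

view = QTableView()
view.setModel(model)                # an editable grid of database rows
view.show()

sys.exit(app.exec_())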
PyQt Automatic Repeating Forms
I'm currently attempting to migrate a legacy VBA/Microsoft Access application to Python and PyQt. I've had no problems migrating any of the logic, and most of the forms have been a snap, as well. However, I've hit a problem on the most important part of the application--the main data-entry form. The form is basically a row of text boxes corresponding to fields in the database. The user simply enters data in to a fields, tabs to the next and repeats. When he comes to the end of the record/row, he tabs again, and the form automatically creates a new blank row for him to start entering data in again. (In actuality, it displays a "blank" row below the current new record, which the user can actually click in to to start a new records as well.) It also allows the user to scroll up and down to see all the current subset of records he's working on. Is there a way to replicate this functionality in PyQt? I haven't managed to find a way to get Qt to do this easily. Access takes care of it automatically; no code outside the form is required. Is it that easy in PyQt (or even close), or is this something that's going to need to be programmed from scratch?
[ "You should look into QSqlTableModel, and the QTableView Objects. QSqlTableModel offers an abstraction of a relational table that can be used inside on of the Qt view classes. A QTableView for example. The functionality you describe can be implemented with moderate effort just by using these two classes. \nThe QSqlTableModel also supports editing on database fields.\nMy guess the only functionality that you will have to manually implement is the \"TAB\" at the end of the table to create a new row if you want to keep that.\nI don't know much about Access, but using the ODBC-SQL driver you should be able use the actual access database for your development or testing there is some older information here, you might want to consider moving to Sqlite, Mysql or another actual SQL database.\n" ]
[ 3 ]
[]
[]
[ "pyqt", "pyqt4", "python", "qt", "qt4" ]
stackoverflow_0001114678_pyqt_pyqt4_python_qt_qt4.txt
Q: Which version of python is currently best for os x? After going through hell trying to install the latest version of postgresql and psycopg2 today I'm going for a complete reinstall of Leopard. I've been sticking with macpython 2.5 for the past year but now I'm considering macports, and even 2.6. For me it's most important for Twisted, PIL and psycopg2 to be working without a problem. Can anyone give some guidelines for what version I should choose, based on experience? Edit: Ok I've decided to go without reinstalling the os. Hacked around to clean up the bad PostgresPlus installation and installed another one. The official python 2.6.1 package works great, no problem installing it alongside 2.5.2. Psycopg2 works. But as expected PIL won't compile. I guess I'll be switching between the 2.5 from macports and the official 2.6 for different tasks, since I know the macports python has its issues with some packages. Another Edit: I've now compiled PIL. Had to hide the whole macports directory and half the Xcode libraries, so it would find the right ones. It wouldn't accept the paths I was feeding it. PIL is notorious for this on Leopard. A: You can install them side-by-side. If you encounter problems just set python 2.5 as the standard python and use e.g. python26 for a newer version. A: Read this http://farmdev.com/thoughts/66/python-3-0-on-mac-os-x-alongside-2-6-2-5-etc-/ A: I still use macports python25, because so many other packages depend on it, and have not updated to use python26. $ port dependents python25 gnome-doc-utils depends on python25 mod_python25 depends on python25 postgresql83 depends on python25 gtk-doc depends on python25 at-spi depends on python25 gnome-desktop depends on python25 mercurial depends on python25 And that's excluding the py25-* packages I have installed. A: I wrote something today on this very subject; my recommendation? Run multiple versions, and slap virtualenv down to compartmentalize things. http://jessenoller.com/2009/03/16/so-you-want-to-use-python-on-the-mac/ I also wouldn't bother with macports. I don't see a need for it. A: I've updated my macbook running Leopard to python 2.6 and haven't had any problems with psycopg2. For that matter, I haven't had any compatibility issues anywhere with 2.6, but obviously switching to python3k isn't exactly recommended if you're concerned about backwards compatibility. A: I would stick with the MacPython version 2.5.x (I believe 2.5.4 currently). Here's my rationale: Snow Leopard may still be on the 2.5 series, so you might as well be consistent with the future OS (i.e. no point in going too far ahead). For most production apps, nobody is going to want to use 2.6 for another year. No frameworks/programs are going to leave 2.5 behind for at least 2 years. In other words, my approach is that the only reason to do 2.6 is for fun. If you're looking to have fun, just go for 3.0. A: I use both Twisted and Psycopg2 extensively on OSX, and both work fine with Python 2.6. Neither has been ported to Python 3.0, as far as I know. Several of Python 3.0's features have been back-ported to 2.6, so you gain quite a bit by moving from 2.5 to 2.6. But I wouldn't switch to 3.0 until all of your third-party libraries support it; and this may not happen for some time. A: I had some trouble installing PIL. I compiled it and it worked with the modification explained on this post http://passingcuriosity.com/2009/installing-pil-on-mac-os-x-leopard/ After that it worked fine. A: I am using Python 2.5.1. It's working great for me for general scripting and some CherryPy web projects. A: If you're using MacPorts, I recommend downloading the python_select package, which facilitates easy switching between different versions, including the built-in Apple versions. Makes life a lot easier.
Which version of python is currently best for os x?
After going through hell trying to install the latest version of postgresql and psycopg2 today I'm going for a complete reinstall of Leopard. I've been sticking with macpython 2.5 for the past year but now I'm considering macports even 2.6 For me it's most important for Twisted, PIL and psycopg2 to be working without a problem. Can anyone give some guidelines for what version I should choose, based on experience? Edit: Ok I've decided to go without reinstalling the os. Hacked around to clean up the bad PostgresPlus installation and installed another one. The official python 2.6.1 package works great, no problem installing it alongside 2.5.2. Psycopg2 works. But as expected PIL wont compile. I guess I'll be switching between the 2.5 from macports and the official 2.6 for different tasks, since I know the macports python has it's issues with some packages. Another Edit: I've now compiled PIL. Had to hide the whole macports directory and half the xcode libraries, so it would find the right ones. It wouldn't accept the paths I was feeding it. PIL is notorious for this on leopard.
[ "You can install them side-by-side. If you've encounter problems just set python 2.5 as the standard python and use e.g. python26 for a newer version.\n", "Read this\nhttp://farmdev.com/thoughts/66/python-3-0-on-mac-os-x-alongside-2-6-2-5-etc-/\n", "I still use macports python25, because so many other packages depend on it, and have not updated to use python26.\n$ port dependents python25\ngnome-doc-utils depends on python25\nmod_python25 depends on python25\npostgresql83 depends on python25\ngtk-doc depends on python25\nat-spi depends on python25\ngnome-desktop depends on python25\nmercurial depends on python25\n\nAnd that's excluding the py25-* packages I have installed.\n", "I wrote something today on this very subject, my recommendation? Run multiple version, and slap virtualenv down to compartmentalize things.\nhttp://jessenoller.com/2009/03/16/so-you-want-to-use-python-on-the-mac/\nI also wouldn't both with macports. I don't see a need for it.\n", "I've updated my macbook running leopard to python 2.6 and haven't had any problems with psycopg2. For that matter, I haven't had any compatibility issues anywhere with 2.6, but obviously switching to python3k isn't exactly recommended if you're concerned about backwards compatibility.\n", "I would stick with the MacPython version 2.5.x (I believe 2.5.4 currently). Here's my rationale:\n\nSnow Leopard may still be on the 2.5 series, so you might as well be consistent with the future OS (i.e. no point in going too far ahead).\nFor most production apps, nobody is going to want to use 2.6 for another year.\nNo frameworks/programs are going to leave 2.5 behind for at least 2 years.\n\nIn other words, my approach is that the only reason to do 2.6 is for fun. If you're looking to have fun, just go for 3.0.\n", "I use both Twisted and Psycopg2 extensively on OSX, and both work fine with Python 2.6. Neither has been ported to Python 3.0, as far as I know.\nSeveral of Python 3.0's features have been back-ported to 2.6, so you gain quite a bit by moving from 2.5 to 2.6. But I wouldn't switch to 3.0 until all of your thirdparty libraries support it; and this may not happen for some time.\n", "I had some trouble installing PIL. I compiled it and it worked with the modification explained on this post http://passingcuriosity.com/2009/installing-pil-on-mac-os-x-leopard/\nAfter that it worked fine.\n", "I am using Python 2.5.1. It's working great for me for general scripting and some CherryPy web projects. \n", "If your using Macports, I recommend downloading the python_select package, which facilitates easy switching between different versions including the built in apple versions. Makes life a lot easier.\n" ]
[ 4, 3, 3, 2, 1, 1, 1, 1, 0, 0 ]
[]
[]
[ "macos", "python" ]
stackoverflow_0000651717_macos_python.txt
Q: Does "from-import" exec the whole module? OK, so I know that from-import is "exactly" the same as import, except that it's obviously not because namespaces are populated differently. My question is primarily motivated because I have a utils module which has one or two functions that are used by every other module in my app, and I'm working on incorporating the standard library logging module, which as far as I can tell I need to do sorta like this: import logging logging.basicConfig(filename="/var/log") # I want file logging baselogger = logging.getLogger("mine") #do some customizations to baselogger and then to use it in a different module I would import logging again: import logging logger = logging.getLogger("mine") # log stuff But what I want to know is if I do a from utils import awesome_func will my logger definitely be set up, and will the logging module be set up the way I want? This would apply to other generic set-ups as well. A: The answer to your question is yes. For a good explanation of the import process, please see Frederik Lundh's "Importing Python Modules". In particular, I'll quote the sections that answer your query. What Does Python Do to Import a Module? [...] Create a new, empty module object (this is essentially a dictionary) Insert that module object in the sys.modules dictionary Load the module code object (if necessary, compile the module first) Execute the module code object in the new module's namespace. All variables assigned by the code will be available via the module object. and on the use of from-import: There are Many Ways to Import a Module [...] from X import a, b, c imports the module X, and creates references in the current namespace to the given objects. Or in other words, you can now use a and b and c in your program. Note I've elided some matter. It's worth reading the entire document, it's actually quite short. A: Looks like the answer is yes: $ echo 'print "test" def f1(): print "f1" def f2(): print "f2" ' > util.py $ echo 'from util import f1 f1() from util import f2 f2() ' > test.py $ python test.py test f1 f2 $ A: Yes, from MODULE import OBJECT executes everything in the module and then effectively does OBJECT = MODULE.OBJECT. You can tell that the module has already been loaded, in a sense, because now it resides in the sys.modules dictionary. A: As mentioned above, yes. And you can write a simple test to be sure: # file m.py import sys # define function def f(): pass #execute this when module is loaded (i.e. imported or run as script) print 'imported', __name__ # print all "exposed" variables to make sure that the f is visible print dir(sys.modules[__name__]) # file main.py from m import f print 'done' I recommend writing such tests every time you're in doubt how some importing or subclassing or something else works.
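The caching behavior the answers rely on can be checked directly; this sketch uses the standard logging module, as in the question:

import sys

# A from-import executes the whole module body, then binds one name:
from logging import getLogger       # runs all of logging's top-level code (once)
assert 'logging' in sys.modules     # the module object is now cached

# A later plain import re-uses the cached module -- nothing re-executes,
# so module-level setup (basicConfig, handlers, ...) is not repeated:
import logging
assert logging.getLogger is getLogger   # the very same function object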
Does "from-import" exec the whole module?
OK, so I know that from-import is "exactly" the same as import, except that it's obviously not because namespaces are populated differently. My question is primarily motivated because I have a utils module which has one or two functions that are used by every other module in my app, and I'm working on incorporating the standard library logging module, which as far as I can tell I need to do sorta like this: import logging logging.basicConfig(filename="/var/log") # I want file logging baselogger = logging.getLogger("mine") #do some customizations to baselogger and then to use it in a different module I would import logging again: import logging logger = logging.getlogger("mine") # log stuff But what I want to know is if I do a from utils import awesome_func will my logger definitely be set up, and will the logging module be set up the way I want? This would apply to other generic set-ups as well.
[ "The answer to your question is yes. \nFor a good explanation of the import process, please see Frederik Lundh's \"Importing Python Modules\". \nIn particular, I'll quote the sections that answer your query.\n\nWhat Does Python Do to Import a Module?\n[...]\n\nCreate a new, empty module object (this is essentially a dictionary)\nInsert that module object in the sys.modules dictionary\nLoad the module code object (if necessary, compile the module first)\nExecute the module code object in the new module’s namespace. All variables assigned by the code will be available via the module object.\n\n\nand on the use of from-import:\n\nThere are Many Ways to Import a Module\n[...]\nfrom X import a, b, c imports the module X, and creates references in the current namespace to the given objects. Or in other words, you can now use a and b and c in your program.\n\nNote I've elided some matter. It's worth reading the entire document, it's actually quite short.\n", "Looks like like the answer is yes:\n$ echo 'print \"test\"\ndef f1():\n print \"f1\"\n\ndef f2():\n print \"f2\"\n\n' > util.py\n\n$ echo 'from util import f1\nf1()\nfrom util import f2\nf2()\n' > test.py\n\n$ python test.py \ntest\nf1\nf2\n\n$ \n\n", "Yes, from MODULE import OBJECT executes everything in the module and then effectively does OBJECT = MODULE.OBJECT. You can tell that the module has already been loaded, in a sense, because now it resides in the sys.modules dictionary.\n", "As mentioned above, yes.\nAnd you can write simple test to be sure:\n# file m.py\nimport sys\n\n# define function\ndef f():\n pass\n\n#execute this when module is loaded (i.e. imported or run as script)\nprint 'imported', __name__\n# print all \"exposed\" variables to make sure that the f is visible\nprint dir(sys.modules[__name__])\n\n# file main.py\nfrom m import f\nprint 'done'\n\nI recommend writing such tests every time you're in doubt how some importing or subclassing or thomething else works.\n" ]
[ 7, 6, 3, 0 ]
[]
[]
[ "import", "logging", "python" ]
stackoverflow_0001114787_import_logging_python.txt
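A minimal sketch of the behavior established in the record above: the module body runs exactly once, on first import, regardless of the import form (the file layout below is illustrative, not from the original thread):

    # utils.py -- this whole body executes on first import
    print("utils module body executing")   # side effect proves execution
    LOGGER_CONFIGURED = True

    def awesome_func():
        return "hello"

    # main.py
    from utils import awesome_func   # runs all of utils.py, then binds one name
    print(awesome_func())            # output: utils module body executing / hello

A second `from utils import ...` elsewhere is cheap: the module is found in sys.modules and its body is not re-executed.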
Q: Static methods and thread safety In python with all this idea of "Everything is an object" where is thread-safety? I am developing a django website with wsgi. Also it would work in linux, and as I know they use effective process management, so we should not have to think about thread-safety a lot. I am in doubt about how a module loads, and whether its functions are static or not. Any information would be helpful. A: Functions in a module are equivalent to static methods in a class. The issue of thread safety arises when multiple threads may be modifying shared data, or even one thread may be modifying such data while others are reading it; it's best avoided by making data be owned by ONE module (accessed via Queue.Queue from others), but when that's not feasible you have to resort to locking and other, more complex, synchronization primitives. This applies whether the access to shared data happens in module functions, static methods, or instance methods -- and the shared data is such whether it's instance variables, class ones, or global ones (scoping and thread safety are essentially disjoint, except that function-local data is, to a point, intrinsically thread-safe -- no other thread will ever see the data inside a function instance, until and unless the function deliberately "shares" it through shared containers). If you use the multiprocessing module in Python's standard library, instead of the threading module, you may in fact not have to care about "thread safety" -- essentially because NO data is shared among processes... well, unless you go out of your way to change that, e.g. via mmapped files;-). A: See the python documentation to better understand the general thread safety implications of Python. Django itself seems to be thread safe as of 1.0.3, but your code may not and you will have to verify that... My advice would be to simply not care about that and serve your application with multiple processes instead of multiple threads (for example by using apache 'prefork' instead of 'worker' MPM).
Static methods and thread safety
In python with all this idea of "Everything is an object" where is thread-safety? I am developing a django website with wsgi. Also it would work in linux, and as I know they use effective process management, so we should not have to think about thread-safety a lot. I am in doubt about how a module loads, and whether its functions are static or not. Any information would be helpful.
[ "Functions in a module are equivalent to static methods in a class. The issue of thread safety arises when multiple threads may be modifying shared data, or even one thread may be modifying such data while others are reading it; it's best avoided by making data be owned by ONE module (accessed via Queue.Queue from others), but when that's not feasible you have to resort to locking and other, more complex, synchronization primitives.\nThis applies whether the access to shared data happens in module functions, static methods, or instance methods -- and the shared data is such whether it's instance variables, class ones, or global ones (scoping and thread safety are essentially disjoint, except that function-local data is, to a pont, intrinsically thread-safe -- no other thread will ever see the data inside a function instance, until and unless the function deliberately \"shares\" it through shared containers).\nIf you use the multiprocessing module in Python's standard library, instead of the threading module, you may in fact not have to care about \"thread safety\" -- essentially because NO data is shared among processes... well, unless you go out of your way to change that, e.g. via mmapped files;-).\n", "See the python documentation to better understand the general thread safety implications of Python.\nDjango itself seems to be thread safe as of 1.0.3, but your code may not and you will have to verify that...\nMy advice would be to simply don't care about that and serve your application with multiple processes instead of multiple threads (for example by using apache 'prefork' instead of 'worker' MPM).\n" ]
[ 8, 0 ]
[]
[]
[ "django", "python", "thread_safety" ]
stackoverflow_0001115420_django_python_thread_safety.txt
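A small sketch of the locking the first answer describes; CPython's GIL does not make a read-modify-write like `counter += 1` atomic, so shared mutable data still needs a lock (names are illustrative):

    import threading

    counter = 0
    lock = threading.Lock()

    def work(n):
        global counter
        for _ in range(n):
            with lock:        # serializes the read-modify-write on shared data
                counter += 1

    threads = [threading.Thread(target=work, args=(100000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)   # reliably 400000 with the lock; without it, often less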
Q: Determine proxy type I have the following code to download a URL through a proxy: proxy_handler = urllib2.ProxyHandler({'http': p}) opener = urllib2.build_opener(proxy_handler) urllib2.install_opener(opener) req = urllib2.Request(url) sock = urllib2.urlopen(req) How can I use Python to determine the type of proxy it is (transparent, anonymous, etc)? One solution would be to use an external server, but I want to avoid that kind of dependency if possible. A: One solution would be to use an external server You must have a server of some sort. The best you can hope to do is host your own web server and print the request headers to see if the proxy is leaking any identifying variables.
Determine proxy type
I have the following code to download a URL through a proxy: proxy_handler = urllib2.ProxyHandler({'http': p}) opener = urllib2.build_opener(proxy_handler) urllib2.install_opener(opener) req = urllib2.Request(url) sock = urllib2.urlopen(req) How can I use Python to determine the type of proxy it is (transparent, anonymous, etc)? One solution would be to use an external server, but I want to avoid that kind of dependency if possible.
[ "\nOne solution would be to use an external server\n\nYou must have a server of some sort.\nThe best option you can hope of doing is to host your own web server and print the headers to see if it is leaking any variables.\n" ]
[ 1 ]
[ "Do you mean retrieving the current proxy configuration?\nYou can with urllib.getproxies:\nimport urllib\nurllib.getproxies()\n{'http': 'http://your_proxy_servername:8080'}\n\nNote: I was not able to find any documentation about urllib.getproxies. I am using Python 2.5, and it just works.\n" ]
[ -1 ]
[ "anonymous", "proxy", "python" ]
stackoverflow_0001115039_anonymous_proxy_python.txt
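As the answer notes, some server you control is unavoidable, but a throwaway header-echo server is enough. A sketch (Python 3 standard library; the port choice is arbitrary) — request it through the proxy and inspect what arrives:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class EchoHeaders(BaseHTTPRequestHandler):
        def do_GET(self):
            # Echo back every header the proxy forwarded to us.
            body = str(self.headers).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("", 8080), EchoHeaders).serve_forever()

Roughly: a transparent proxy tends to reveal your IP (e.g. in X-Forwarded-For), an anonymous one reveals that it is a proxy (e.g. a Via header) but not your IP, and an elite proxy adds neither.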
Q: Google App Engine: how to unescape POST body? Newbie question... I am using silverlight to POST data to my GAE application class XmlCrud(webapp.RequestHandler): def post(self): body = self.request.body The data comes in fine but it is escaped like this: %3C%3Fxml+version=%221.0%22+encoding%3D%22utf-16%22%3F%3E%0D%0A%3CBosses+xmlns%3Axsi%3D%22http%3A%2F%2Fwww.w3.org%2F2001%2FXMLSchema-instance%22+xmlns%3Axsd how do I unescape it? A: I agree with Hank. The answer to your actual question, though, is that your example is URL encoded. To decode, replace each %XX with the character having hex value 0xXX, and + with space. urllib.unquote_plus does this, and according to the docs it's in App Engine urllib docs: https://docs.python.org/library/urllib.html Statement that urllib is supported (there may be others): http://code.google.com/appengine/docs/python/urlfetch/overview.html A: I'd recommend not encoding it in the first place if the body of the post is just an XML document.
Google App Engine: how to unescape POST body?
Newbie question... I am using silverlight to POST data to my GAE application class XmlCrud(webapp.RequestHandler): def post(self): body = self.request.body The data comes in fine but it is escaped like this: %3C%3Fxml+version=%221.0%22+encoding%3D%22utf-16%22%3F%3E%0D%0A%3CBosses+xmlns%3Axsi%3D%22http%3A%2F%2Fwww.w3.org%2F2001%2FXMLSchema-instance%22+xmlns%3Axsd how do I unescape it?
[ "I agree with Hank.\nThe answer to your actual question, though, is that your example is URL encoded. To decode, replace each %XX with the character having hex value 0xXX, and + with space.\nurllib.unquote_plus does this, and according to the docs it's in App Engine\nurllib docs: https://docs.python.org/library/urllib.html\nStatement that urllib is supported (there may be others): http://code.google.com/appengine/docs/python/urlfetch/overview.html\n", "I'd recommend not encoding it in the first place if the body of the post is just an XML document.\n" ]
[ 3, 0 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0001116066_google_app_engine_python.txt
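A sketch of the decoding step the accepted answer points to, in the Python 2 spelling the webapp code above uses (the sample string is illustrative; in Python 3 the function lives at urllib.parse.unquote_plus):

    import urllib

    encoded = '%3C%3Fxml+version%3D%221.0%22+encoding%3D%22utf-16%22%3F%3E'
    print urllib.unquote_plus(encoded)
    # <?xml version="1.0" encoding="utf-16"?>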
Q: "Python.exe" crashes when PyQt's setPixmap() is called with a Pixmap I have a program that sends and receives images to each other using sockets. The server sends the image data using 'image.tostring()' and the client side receives it and turns it back into an image using 'Image.fromstring', then into a QImage using 'ImageQt.ImageQt(image)', turns it into a QPixmap using 'QPixmap.fromimage(qimage)'then updates my QWidget's QLable's image using 'lable.setPixmap(qpixmap)' Everything works fine with small images, but with images larger than 200x200, python.exe crashes and the console only shows "Process terminated with an exit code of -1073741819" and doesn't tell me what the problem is. I've isolated the problem down to 'setPixmap()' (everything else works as long as I comment out that), but I can't see what the problem is. This only happens on the client side. The server side uses the same steps going from Image to QImage to QPixmap then setPixmap, but that doesn't have any problems. Also tried making it a QBitmap and using setPixmap on the bitmap, which worked (but it's black and white so can't use it). Weird! Any help would be appreciated! A: It may be worth dumping the image data to a file and checking that you have all the data by loading it into an image viewer. If you get incomplete data, you may still be able to obtain a QImage and create a QPixmap, but it may be invalid.
"Python.exe" crashes when PyQt's setPixmap() is called with a Pixmap
I have a program that sends and receives images to each other using sockets. The server sends the image data using 'image.tostring()' and the client side receives it and turns it back into an image using 'Image.fromstring', then into a QImage using 'ImageQt.ImageQt(image)', turns it into a QPixmap using 'QPixmap.fromImage(qimage)', then updates my QWidget's QLabel's image using 'lable.setPixmap(qpixmap)'. Everything works fine with small images, but with images larger than 200x200, python.exe crashes and the console only shows "Process terminated with an exit code of -1073741819" and doesn't tell me what the problem is. I've isolated the problem down to 'setPixmap()' (everything else works as long as I comment out that), but I can't see what the problem is. This only happens on the client side. The server side uses the same steps going from Image to QImage to QPixmap then setPixmap, but that doesn't have any problems. Also tried making it a QBitmap and using setPixmap on the bitmap, which worked (but it's black and white so can't use it). Weird! Any help would be appreciated!
[ "It may be worth dumping the image data to a file and checking that you have all the data by loading it into an image viewer. If you get incomplete data, you may still be able to obtain a QImage and create a QPixmap, but it may be invalid.\n" ]
[ 0 ]
[]
[]
[ "pyqt", "python", "qpixmap" ]
stackoverflow_0001029033_pyqt_python_qpixmap.txt
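One plausible cause consistent with the symptoms (small images fine, large ones crash) is that a single socket recv() returned only part of the image string. A hedged sketch of a length-prefixed receive loop, assuming the sender writes a 4-byte size header first — that framing is an assumption, not something shown in the question:

    import struct

    def recv_exact(sock, n):
        buf = b''
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise RuntimeError('socket closed mid-transfer')
            buf += chunk
        return buf

    def recv_image(sock):
        (length,) = struct.unpack('!I', recv_exact(sock, 4))  # read size header
        return recv_exact(sock, length)                       # then exactly that many bytes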
Q: PyQT: QTableWidget.setItemPrototype not working? In a QTableWidget i want to display all values only with two decimals places. For that I subclassed QTableWidgetItem. class MyCell(QTableWidgetItem): def __init__(self, *args): QTableWidgetItem.__init__(self, *args) def clone(self): return MyCell() def data(self, role): t = QTableWidgetItem(self).data(role) if role == 0: if t.type() != 0: try: a, b = str(t.toString()).split('.') return QVariant( ".".join([a,b[:2]])) except: return t return t I read the documentation and was thinking that I can use something like: class MyDialog(QDialog): def __init__(self, parent=None): super(MyDialog, self).__init__(parent) self.table = QTableWidget() acell = MyCell() self.table.setItemPrototype(acell) self.table.setRowCount(5) self.table.setColumnCount(5) .... But this crashes more or less randomly. When I use the method self.table.setItem it works without problem. Any hints are appreciated. A: There are two issues here. One may be a problem with your code, the other may be a bug in PyQt. In your data() method implementation, you probably meant to write this: def data(self, role): t = QTableWidgetItem.data(self, role) ... This calls the superclass's data() method rather than creating a new item and calling its data method. When you set up your dialog, you may need to keep a reference to your item prototype: def __init__(self, parent=None): super(MyDialog, self).__init__(parent) self.table = QTableWidget() self.acell = MyCell() self.table.setItemPrototype(self.acell) Although the Qt documentation says that ownership of the prototype is passed to the table widget, the PyQt bindings don't appear to do this, so you will need to prevent the prototype from being garbage collected.
PyQT: QTableWidget.setItemPrototype not working?
In a QTableWidget I want to display all values with only two decimal places. For that I subclassed QTableWidgetItem. class MyCell(QTableWidgetItem): def __init__(self, *args): QTableWidgetItem.__init__(self, *args) def clone(self): return MyCell() def data(self, role): t = QTableWidgetItem(self).data(role) if role == 0: if t.type() != 0: try: a, b = str(t.toString()).split('.') return QVariant( ".".join([a,b[:2]])) except: return t return t I read the documentation and was thinking that I can use something like: class MyDialog(QDialog): def __init__(self, parent=None): super(MyDialog, self).__init__(parent) self.table = QTableWidget() acell = MyCell() self.table.setItemPrototype(acell) self.table.setRowCount(5) self.table.setColumnCount(5) .... But this crashes more or less randomly. When I use the method self.table.setItem it works without problem. Any hints are appreciated.
[ "There are two issues here. One may be a problem with your code, the other may be a bug in PyQt.\nIn your data() method implementation, you probably meant to write this:\ndef data(self, role):\n t = QTableWidgetItem.data(self, role)\n ...\n\nThis calls the superclass's data() method rather than creating a new item and calling its data method.\nWhen you set up your dialog, you may need to keep a reference to your item prototype:\ndef __init__(self, parent=None):\n super(MyDialog, self).__init__(parent)\n\n self.table = QTableWidget()\n self.acell = MyCell()\n self.table.setItemPrototype(self.acell)\n\nAlthough the Qt documentation says that ownership of the prototype is passed to the table widget, the PyQt bindings don't appear to do this, so you will need to prevent the prototype from being garbage collected.\n" ]
[ 1 ]
[]
[]
[ "pyqt", "pyqt4", "python" ]
stackoverflow_0001078947_pyqt_pyqt4_python.txt
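For reference, the two fixes the answer describes might be combined like this — a sketch only; the QVariant handling assumes PyQt4-era APIs as used in the question, and the Double check assumes numeric cell data:

    from PyQt4.QtCore import Qt, QVariant
    from PyQt4.QtGui import QDialog, QTableWidget, QTableWidgetItem

    class MyCell(QTableWidgetItem):
        def clone(self):
            return MyCell()

        def data(self, role):
            t = QTableWidgetItem.data(self, role)   # superclass call, no new item
            if role == Qt.DisplayRole and t.type() == QVariant.Double:
                return QVariant('%.2f' % t.toDouble()[0])
            return t

    class MyDialog(QDialog):
        def __init__(self, parent=None):
            super(MyDialog, self).__init__(parent)
            self.table = QTableWidget()
            self.acell = MyCell()                   # instance attribute keeps a reference
            self.table.setItemPrototype(self.acell)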
Q: Finding content between two words withou RegEx, BeautifulSoup, lXml ... etc How to find out the content between two words or two sets of random characters? The scraped page is not guaranteed to be Html only and the important data can be inside a javascript block. So, I can't remove the JavaScript. consider this: <html> <body> <div>StartYYYY "Extract HTML", ENDYYYY </body> Some Java Scripts code STARTXXXX "Extract JS Code" ENDXXXX. </html> So as you see the html markup may not be complete. I can fetch the page, and then without worrying about anything, I want to find the content called "Extract the name" and "Extract the data here in a JavaScript". What I am looking for is in python: Like this: data = FindBetweenText(UniqueTextBeforeContent, UniqueTextAfterContent, page) Where page is downloaded and data would have the text I am looking for. I rather stay away from regEx as some of the cases can be too complex for RegEx. A: if you are sure your markers are unique, do something like this s=""" <html> <body> <div>StartYYYY "Extract HTML", ENDYYYY </body> Some Java Scripts code STARTXXXX "Extract JS Code" ENDXXXX. </html> """ def FindBetweenText(startMarker, endMarker, text): startPos = text.find(startMarker) if startPos < 0: return endPos = text.find(endMarker) if endPos < 0: return return text[startPos+len(startMarker):endPos] print FindBetweenText('STARTXXXX', 'ENDXXXX', s) A: Well, this is what it would be in PHP. No doubt there's a much sexier Pythonic way. function FindBetweenText($before, $after, $text) { $before_pos = strpos($text, $before); if($before_pos === false) return null; $after_pos = strpos($text, $after); if($after_pos === false || $after_pos <= $before_pos) return null; return substr($text, $before_pos, $after_pos - $before_pos); } A: [Slightly tested] def bracketed_find_first(prefix, suffix, page, start=0): prefixpos = page.find(prefix, start) if prefixpos == -1: return None # NOT "" startpos = prefixpos + len(prefix) endpos = page.find(suffix, startpos) # DRY if endpos == -1: return None # NOT "" return page[startpos:endpos] Note: the above returns only the first occurrence. Here is a generator which yields each occurrence. def bracketed_finditer(prefix, suffix, page, start_at=0): while True: prefixpos = page.find(prefix, start_at) if prefixpos == -1: return # StopIteration startpos = prefixpos + len(prefix) endpos = page.find(suffix, startpos) if endpos == -1: return yield page[startpos:endpos] start_at = endpos + len(suffix) A: Here's my attempt, this is tested. While recursive, there should be no unnecessary string duplication, although a generator might be more optimal def bracketed_find(s, start, end, startat=0): startloc=s.find(start, startat) if startloc==-1: return [] endloc=s.find(end, startloc+len(start)) if endloc == -1: return [s[startloc+len(start):]] return [s[startloc+len(start):endloc]] + bracketed_find(s, start, end, endloc+len(end)) and here is a generator version def bracketed_find(s, start, end, startat=0): startloc=s.find(start, startat) if startloc==-1: return endloc=s.find(end, startloc+len(start)) if endloc == -1: yield s[startloc+len(start):] return else: yield s[startloc+len(start):endloc] for found in bracketed_find(s, start, end, endloc+len(end)): yield found
Finding content between two words without RegEx, BeautifulSoup, lxml ... etc
How do I find the content between two words or two sets of random characters? The scraped page is not guaranteed to be HTML only and the important data can be inside a JavaScript block. So, I can't remove the JavaScript. Consider this: <html> <body> <div>StartYYYY "Extract HTML", ENDYYYY </body> Some Java Scripts code STARTXXXX "Extract JS Code" ENDXXXX. </html> So as you see, the HTML markup may not be complete. I can fetch the page, and then without worrying about anything, I want to find the content called "Extract the name" and "Extract the data here in a JavaScript". What I am looking for in Python is something like this: data = FindBetweenText(UniqueTextBeforeContent, UniqueTextAfterContent, page) Where page is the downloaded page and data would have the text I am looking for. I'd rather stay away from RegEx as some of the cases can be too complex for it.
[ "if you are sure your markers are unique, do something like this\ns=\"\"\"\n<html>\n<body>\n<div>StartYYYY \"Extract HTML\", ENDYYYY\n\n</body>\n\nSome Java Scripts code STARTXXXX \"Extract JS Code\" ENDXXXX.\n\n</html>\n\"\"\"\n\ndef FindBetweenText(startMarker, endMarker, text):\n startPos = text.find(startMarker)\n if startPos < 0: return\n endPos = text.find(endMarker)\n if endPos < 0: return\n\n return text[startPos+len(startMarker):endPos]\n\nprint FindBetweenText('STARTXXXX', 'ENDXXXX', s)\n\n", "Well, this is what it would be in PHP. No doubt there's a much sexier Pythonic way.\nfunction FindBetweenText($before, $after, $text) {\n $before_pos = strpos($text, $before);\n if($before_pos === false)\n return null;\n $after_pos = strpos($text, $after);\n if($after_pos === false || $after_pos <= $before_pos)\n return null;\n return substr($text, $before_pos, $after_pos - $before_pos);\n}\n\n", "[Slightly tested]\ndef bracketed_find_first(prefix, suffix, page, start=0):\n prefixpos = page.find(prefix, start)\n if prefixpos == -1: return None # NOT \"\"\n startpos = prefixpos + len(prefix)\n endpos = page.find(suffix, startpos) # DRY\n if endpos == -1: return None # NOT \"\"\n return page[startpos:endpos]\n\nNote: the above returns only the first occurrence. Here is a generator which yields each occurrence.\ndef bracketed_finditer(prefix, suffix, page, start_at=0):\n while True:\n prefixpos = page.find(prefix, start_at)\n if prefixpos == -1: return # StopIteration\n startpos = prefixpos + len(prefix)\n endpos = page.find(suffix, startpos)\n if endpos == -1: return\n yield page[startpos:endpos]\n start_at = endpos + len(suffix)\n\n", "Here's my attempt, this is tested. While recursive, there should be no unnecessary string duplication, although a generator might be more optimal\ndef bracketed_find(s, start, end, startat=0):\n startloc=s.find(start, startat)\n if startloc==-1:\n return []\n endloc=s.find(end, startloc+len(start))\n if endloc == -1:\n return [s[startloc+len(start):]]\n return [s[startloc+len(start):endloc]] + bracketed_find(s, start, end, endloc+len(end))\n\nand here is a generator version\ndef bracketed_find(s, start, end, startat=0):\n startloc=s.find(start, startat)\n if startloc==-1:\n return\n endloc=s.find(end, startloc+len(start))\n if endloc == -1:\n yield s[startloc+len(start):]\n return\n else:\n yield s[startloc+len(start):endloc]\n\n for found in bracketed_find(s, start, end, endloc+len(end)):\n yield found\n\n" ]
[ 2, 0, 0, 0 ]
[]
[]
[ "fetch", "python", "screen_scraping" ]
stackoverflow_0001116172_fetch_python_screen_scraping.txt
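A quick usage sketch for the bracketed_finditer generator defined in the answers above, showing how it collects every bracketed span (the sample string is illustrative):

    page = 'aaSTART one END bb START two ENDcc'
    print(list(bracketed_finditer('START', 'END', page)))
    # [' one ', ' two ']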
Q: Advanced SAX Parser in C# See Below is the XML Arch. I want to display it in row / column wize. What I need is I need to convert this xml file to Hashtable like, {"form" : {"attrs" : { "string" : " Partners" } {"child1": { "group" : { "attrs" : { "col" : "6", "colspan":"1" } } { "child1": { "field" : { "attrs" : { "name":"name"} { "child2": { "field" : { "attrs" : { "name":"ref"} } {"child2": { "notebook" : "attrs" : {"colspan": 4} } } } <?xml version="1.0" encoding="utf-8"?> <form string="Partners"> <group col="6" colspan="4"> <field name="name" select="1"/> <field name="ref" select="1"/> <field name="customer" select="1"/> <field domain="[('domain', '=', 'partner')]" name="title"/> <field name="lang" select="2"/> <field name="supplier" select="2"/> </group> <notebook colspan="4"> <page string="General"> <field colspan="4" mode="form,tree" name="address" nolabel="1" select="1"> </field> <separator colspan="4" string="Categories"/> <field colspan="4" name="category_id" nolabel="1" select="2"/> </page> <page string="Sales &amp; Purchases"> <separator colspan="4" string="General Information"/> <field name="user_id" select="2"/> <field name="active" select="2"/> <field name="website" widget="url"/> <field name="date" select="2"/> <field name="parent_id"/> <newline/> <newline/><group col="2" colspan="2" name="sale_list"> <separator colspan="2" string="Sales Properties"/> <field name="property_product_pricelist"/> </group><group col="2" colspan="2"> <separator colspan="2" string="Purchases Properties"/> <field name="property_product_pricelist_purchase"/> </group><group col="2" colspan="2"> <separator colspan="2" string="Stock Properties"/> <field name="property_stock_customer"/> <field name="property_stock_supplier"/> </group></page> <page string="History"> <field colspan="4" name="events" nolabel="1" widget="one2many_list"/> </page> <page string="Notes"> <field colspan="4" name="comment" nolabel="1"/> </page> <page position="inside" string="Accounting"> <group col="2" colspan="2"> <separator colspan="2" string="Customer Accounting Properties"/> <field name="property_account_receivable"/> <field name="property_account_position"/><field name="vat" on_change="vat_change(vat)" select="2"/><field name="vat_subjected"/> <field name="property_payment_term"/> </group> <group col="2" colspan="2"> <separator colspan="2" string="Supplier Accounting Properties"/> <field name="property_account_payable"/> </group> <group col="2" colspan="2"> <separator colspan="2" string="Customer Credit"/> <field name="credit" select="2"/> <field name="credit_limit" select="2"/> </group> <group col="2" colspan="2"> <separator colspan="2" string="Supplier Debit"/> <field name="debit" select="2"/> </group> <field colspan="4" context="address=address" name="bank_ids" nolabel="1" select="2"> </field> </page> </notebook> </form> A: from xml.sax.handler import ContentHandler import xml class my_handler(ContentHandler): def get_attr_dict(self, attrs): ret_dict = {} for name in attrs.getNames(): ret_dict[name] = attrs.getValue(name) #end for name in attrs.getNames(): return ret_dict def setDocumentLocator(self, locator): print "DOCUMEN T LOOCATOR" pass def startDocument(self): self.my_data = {} self.my_stack = [] print "SATART DUASDFASD:" def startElement(self, name, attrs): attr_dict = self.get_attr_dict(attrs) myname = name!='field' and name or attr_dict['name'] append_dict = { 'attrs' : attr_dict, 'childs' : [] } if not self.my_data: self.my_data[name] = append_dict else: last_dict = {} for x in self.my_stack: if last_dict: last_dict = isinstance(last_dict, list) and last_dict[-1][x] or last_dict[x] else: last_dict = self.my_data[x] last_dict.append({myname : append_dict}) self.my_stack.extend([myname, 'childs']) def endElement(self, name): self.my_stack = self.my_stack[:-2] print "ENDS ELERMERE :",name def endDocument(self): print "Sfled :",self.my_data print "ENDA DAFASDFASD" if name == 'main': fp = open('Form.xml', 'r') xml.sax.parse(fp, my_handler()) fp.close() Finally instead of C#, it has been solved using Pyton Script, here I'm sharing the script. Thanks.
Advanced SAX Parser in C#
See Below is the XML Arch. I want to display it in row / column wize. What I need is I need to convert this xml file to Hashtable like, {"form" : {"attrs" : { "string" : " Partners" } {"child1": { "group" : { "attrs" : { "col" : "6", "colspan":"1" } } { "child1": { "field" : { "attrs" : { "name":"name"} { "child2": { "field" : { "attrs" : { "name":"ref"} } {"child2": { "notebook" : "attrs" : {"colspan": 4} } } } <?xml version="1.0" encoding="utf-8"?> <form string="Partners"> <group col="6" colspan="4"> <field name="name" select="1"/> <field name="ref" select="1"/> <field name="customer" select="1"/> <field domain="[('domain', '=', 'partner')]" name="title"/> <field name="lang" select="2"/> <field name="supplier" select="2"/> </group> <notebook colspan="4"> <page string="General"> <field colspan="4" mode="form,tree" name="address" nolabel="1" select="1"> </field> <separator colspan="4" string="Categories"/> <field colspan="4" name="category_id" nolabel="1" select="2"/> </page> <page string="Sales &amp; Purchases"> <separator colspan="4" string="General Information"/> <field name="user_id" select="2"/> <field name="active" select="2"/> <field name="website" widget="url"/> <field name="date" select="2"/> <field name="parent_id"/> <newline/> <newline/><group col="2" colspan="2" name="sale_list"> <separator colspan="2" string="Sales Properties"/> <field name="property_product_pricelist"/> </group><group col="2" colspan="2"> <separator colspan="2" string="Purchases Properties"/> <field name="property_product_pricelist_purchase"/> </group><group col="2" colspan="2"> <separator colspan="2" string="Stock Properties"/> <field name="property_stock_customer"/> <field name="property_stock_supplier"/> </group></page> <page string="History"> <field colspan="4" name="events" nolabel="1" widget="one2many_list"/> </page> <page string="Notes"> <field colspan="4" name="comment" nolabel="1"/> </page> <page position="inside" string="Accounting"> <group col="2" colspan="2"> <separator colspan="2" string="Customer Accounting Properties"/> <field name="property_account_receivable"/> <field name="property_account_position"/><field name="vat" on_change="vat_change(vat)" select="2"/><field name="vat_subjected"/> <field name="property_payment_term"/> </group> <group col="2" colspan="2"> <separator colspan="2" string="Supplier Accounting Properties"/> <field name="property_account_payable"/> </group> <group col="2" colspan="2"> <separator colspan="2" string="Customer Credit"/> <field name="credit" select="2"/> <field name="credit_limit" select="2"/> </group> <group col="2" colspan="2"> <separator colspan="2" string="Supplier Debit"/> <field name="debit" select="2"/> </group> <field colspan="4" context="address=address" name="bank_ids" nolabel="1" select="2"> </field> </page> </notebook> </form>
[ "\nfrom xml.sax.handler import\n ContentHandler import xml class\n my_handler(ContentHandler):\ndef get_attr_dict(self, attrs):\n ret_dict = {}\n for name in attrs.getNames():\n ret_dict[name] = attrs.getValue(name)\n #end for name in attrs.getNames():\n return ret_dict\n\ndef setDocumentLocator(self, locator):\n print \"DOCUMEN T LOOCATOR\"\n pass\n\ndef startDocument(self):\n self.my_data = {}\n self.my_stack = []\n print \"SATART DUASDFASD:\"\n\ndef startElement(self, name, attrs):\n attr_dict = self.get_attr_dict(attrs)\n myname = name!='field' and name or attr_dict['name']\n append_dict = {\n 'attrs' : attr_dict,\n 'childs' : []\n }\n\n if not self.my_data:\n self.my_data[name] = append_dict\n else:\n last_dict = {}\n for x in self.my_stack:\n if last_dict:\n last_dict = isinstance(last_dict, list) and\n\nlast_dict[-1][x] or last_dict[x]\n else:\n last_dict = self.my_data[x]\n last_dict.append({myname : append_dict})\n self.my_stack.extend([myname, 'childs'])\n\ndef endElement(self, name):\n self.my_stack = self.my_stack[:-2]\n print \"ENDS ELERMERE :\",name\n\ndef endDocument(self):\n print \"Sfled :\",self.my_data\n print \"ENDA DAFASDFASD\"\n\nif name == 'main':\n fp = open('Form.xml', 'r')\n xml.sax.parse(fp, my_handler())\n fp.close()\nFinally instead of C#, it has been\n solved using Pyton Script, here I'm\n sharing the script. Thanks.\n\n" ]
[ 0 ]
[]
[]
[ ".net", "parsing", "python", "sax", "xml" ]
stackoverflow_0001078902_.net_parsing_python_sax_xml.txt
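The same idea in a more compact form, for comparison — a sketch using xml.sax that builds the nested {'attrs': ..., 'childs': ...} mapping with an explicit node stack (Python 3 spelling; the file name follows the answer above):

    import xml.sax

    class TreeBuilder(xml.sax.ContentHandler):
        def startDocument(self):
            self.root, self.stack = None, []

        def startElement(self, name, attrs):
            node = {'attrs': dict(attrs), 'childs': []}
            if self.stack:
                self.stack[-1]['childs'].append({name: node})  # attach to parent
            else:
                self.root = {name: node}
            self.stack.append(node)

        def endElement(self, name):
            self.stack.pop()

    handler = TreeBuilder()
    xml.sax.parse('Form.xml', handler)
    print(handler.root['form']['attrs']['string'])   # Partners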
Q: python program choice My program is ICAPServer (similar to an httpserver), and its main job is to receive data from clients and save the data to DB. There are two main steps and two threads: ICAPServer receives data from clients, puts the data in a queue (50kb <1ms); another thread pops data from the queue, and writes them to DB SO, if 2nd step is too slow, the queue will fill up memory with those data. Wondering if anyone has any suggestions... A: It is hard to say for sure, but perhaps using two processes instead of threads will help in this situation. Since Python has the Global Interpreter Lock (GIL), it has the effect of only allowing any one thread to execute Python instructions at any time. Having a system designed around processes might have the following advantages: Higher concurrency, especially on multiprocessor machines Greater throughput, since you can probably spawn multiple queue consumers / DB writer processes to spread out the work. Although, the impact of this might be minimal if it is really the DB that is the bottleneck and not the process writing to the DB. A: Put an upper limit on the amount of data in the queue? A: One note: before going for optimizations, it is very important to get some good measurement, and profiling. That said, I would bet the slow part in the second step is database communication; you could try to analyze the SQL statement and its execution plan, and then optimize it (it is one of the features of SQLAlchemy); if it is still too slow, check about database optimizations. Of course, it is possible the bottleneck would be in a completely different place; in this case, you still have chances to optimize using C code, dedicated network, or more threads - just to give three possible examples of completely different kinds of optimizations. Another point: as I/O operations usually release the GIL, you could also try to improve performance just by adding another reader thread - and I think this could be a much cheaper solution.
python program choice
My program is ICAPServer (similar to an httpserver), and its main job is to receive data from clients and save the data to DB. There are two main steps and two threads: ICAPServer receives data from clients, puts the data in a queue (50kb <1ms); another thread pops data from the queue, and writes them to DB SO, if 2nd step is too slow, the queue will fill up memory with those data. Wondering if anyone has any suggestions...
[ "It is hard to say for sure, but perhaps using two processes instead of threads will help in this situation. Since Python has the Global Interpreter Lock (GIL), it has the effect of only allowing any one thread to execute Python instructions at any time. \nHaving a system designed around processes might have the following advantages:\n\nHigher concurrency, especially on multiprocessor machines\nGreater throughput, since you can probably spawn multiple queue consumers / DB writer processes to spread out the work. Although, the impact of this might be minimal if it is really the DB that is the bottleneck and not the process writing to the DB.\n\n", "Put an upper limit on the amount of data in the queue?\n", "One note: before going for optimizations, it is very important to get some good measurement, and profiling.\nThat said, I would bet the slow part in the second step is database communication; you could try to analyze the SQL statement and its execution plan. and then optimize it (it is one of the features of SQLAlchemy); if still it would be too slow, check about database optimizations.\nOf course, it is possible the bottleneck would be in a completely different place; in this case, you still have chances to optimize using C code, dedicated network, or more threads - just to give three possible example of completely different kind of optimizations.\nAnother point: as I/O operations usually release the GIL, you could also try to improve performance just by adding another reader thread - and I think this could be a much cheaper solution.\n" ]
[ 2, 0, 0 ]
[]
[]
[ "python", "sqlalchemy", "twisted" ]
stackoverflow_0001116163_python_sqlalchemy_twisted.txt
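A sketch of the bounded-queue suggestion from the second answer — queue.Queue(maxsize=...) gives back-pressure for free (the module is named Queue on Python 2; helper names are illustrative):

    from queue import Queue   # 'from Queue import Queue' on Python 2

    q = Queue(maxsize=1000)    # put() blocks once 1000 items are pending

    # receiving thread:
    #     q.put(data)              # blocks the reader instead of exhausting RAM
    #     q.put(data, timeout=5)   # or: raises queue.Full, so you can drop/log
    # DB writer thread:
    #     data = q.get()
    #     write_to_db(data)        # illustrative name
    #     q.task_done()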
Q: Should I use Unicode string by default? Is it considered as a good practice to pick Unicode string over regular string when coding in Python? I mainly work on the Windows platform, where most of the string types are Unicode these days (i.e. .NET String, '_UNICODE' turned on by default on a new c++ project, etc ). Therefore, I tend to think that the case where non-Unicode string objects are used is a sort of rare case. Anyway, I'm curious about what Python practitioners do in real-world projects. A: From my practice -- use unicode. At beginning of one project we used usuall strings, however our project was growing, we were implementing new features and using new third-party libraries. In that mess with non-unicode/unicode string some functions started failing. We started spending time localizing this problems and fixing them. However, some third-party modules doesn't supported unicode and started failing after we switched to it (but this is rather exclusion than a rule). Also I have some experience when we needed to rewrite some third party modules(e.g. SendKeys) cause they were not supporting unicode. If it was done in unicode from beginning it will be better :) So I think today we should use unicode. P.S. All that mess upwards is only my hamble opinion :) A: As you ask this question, I suppose you are using Python 2.x. Python 3.0 changed quite a lot in string representation, and all text now is unicode. I would go for unicode in any new project - in a way compatible with the switch to Python 3.0 (see details). A: Yes, use unicode. Some hints: When doing input output in any sort of binary format, decode directly after reading and encode directly before writing, so that you never need to mix strings and unicode. Because mixing that tends to lead to UnicodeEncodeDecodeErrors sooner or later. [Forget about this one, my explanations just made it even more confusing. It's only an issue when porting to Python 3, you can care about it then.] Common Python newbie errors with Unicode (not saying you are a newbie, but this may be read by newbies): Don't confuse encode and decode. Remember, UTF-8 is an ENcoding, so you ENcode Unicode to UTF-8 and DEcode from it. Do not fall into the temptation of setting the default encoding in Python (by setdefaultencoding in sitecustomize.py or similar) to whatever you use most. That is just going to give you problems if you reinstall or move to another computer or suddenly need to use another encoding. Be explicit. Remember, not all of Python 2s standard library accepts unicode. If you feed a method unicode and it doesn't work, but it should, try feeding it ascii and see. Examples: urllib.urlopen(), which fails with unhelpful errors if you give it a unicode object instead of a string. Hm. That's all I can think of now! A: It can be tricky to consistently use unicode strings in Python 2.x - be it because somebody inadvertently uses the more natural str(blah) where they meant unicode(blah), forgetting the u prefix on string literals, third-party module incompatibilities - whatever. So in Python 2.x, use unicode only if you have to, and are prepared to provide good unit test coverage. If you have the option of using Python 3.x however, you don't need to care - strings will be unicode with no extra effort. A: Additional to Mihails comment I would say: Use Unicode, since it is the future. In Python 3.0, Non-Unicode will be gone, and as much I know, all the "U"-Prefixes will make trouble, since they are also gone. A: If you are dealing with severely constrained memory or disk space, use ASCII strings. In this case, you should additionally write your software in C or something even more compact :)
Should I use Unicode string by default?
Is it considered as a good practice to pick Unicode string over regular string when coding in Python? I mainly work on the Windows platform, where most of the string types are Unicode these days (i.e. .NET String, '_UNICODE' turned on by default on a new c++ project, etc ). Therefore, I tend to think that the case where non-Unicode string objects are used is a sort of rare case. Anyway, I'm curious about what Python practitioners do in real-world projects.
[ "From my practice -- use unicode. \nAt beginning of one project we used usuall strings, however our project was growing, we were implementing new features and using new third-party libraries. In that mess with non-unicode/unicode string some functions started failing. We started spending time localizing this problems and fixing them. However, some third-party modules doesn't supported unicode and started failing after we switched to it (but this is rather exclusion than a rule). \nAlso I have some experience when we needed to rewrite some third party modules(e.g. SendKeys) cause they were not supporting unicode. If it was done in unicode from beginning it will be better :) \nSo I think today we should use unicode.\nP.S. All that mess upwards is only my hamble opinion :)\n", "As you ask this question, I suppose you are using Python 2.x. \nPython 3.0 changed quite a lot in string representation, and all text now is unicode.\nI would go for unicode in any new project - in a way compatible with the switch to Python 3.0 (see details).\n", "Yes, use unicode. \nSome hints:\n\nWhen doing input output in any sort of binary format, decode directly after reading and encode directly before writing, so that you never need to mix strings and unicode. Because mixing that tends to lead to UnicodeEncodeDecodeErrors sooner or later.\n[Forget about this one, my explanations just made it even more confusing. It's only an issue when porting to Python 3, you can care about it then.]\nCommon Python newbie errors with Unicode (not saying you are a newbie, but this may be read by newbies): Don't confuse encode and decode. Remember, UTF-8 is an ENcoding, so you ENcode Unicode to UTF-8 and DEcode from it.\nDo not fall into the temptation of setting the default encoding in Python (by setdefaultencoding in sitecustomize.py or similar) to whatever you use most. That is just going to give you problems if you reinstall or move to another computer or suddenly need to use another encoding. Be explicit.\nRemember, not all of Python 2s standard library accepts unicode. If you feed a method unicode and it doesn't work, but it should, try feeding it ascii and see. Examples: urllib.urlopen(), which fails with unhelpful errors if you give it a unicode object instead of a string.\n\nHm. That's all I can think of now!\n", "It can be tricky to consistently use unicode strings in Python 2.x - be it because somebody inadvertently uses the more natural str(blah) where they meant unicode(blah), forgetting the u prefix on string literals, third-party module incompatibilities - whatever. So in Python 2.x, use unicode only if you have to, and are prepared to provide good unit test coverage.\nIf you have the option of using Python 3.x however, you don't need to care - strings will be unicode with no extra effort.\n", "Additional to Mihails comment I would say: Use Unicode, since it is the future. In Python 3.0, Non-Unicode will be gone, and as much I know, all the \"U\"-Prefixes will make trouble, since they are also gone.\n", "If you are dealing with severely constrained memory or disk space, use ASCII strings. In this case, you should additionally write your software in C or something even more compact :)\n" ]
[ 19, 13, 13, 6, 4, 2 ]
[]
[]
[ "python", "unicode" ]
stackoverflow_0001116449_python_unicode.txt
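A minimal Python 2 sketch of the "decode after reading, encode before writing" rule from the hints above (file names and the utf-8 choice are illustrative):

    # bytes -> unicode immediately after reading
    f = open('data.txt', 'rb')
    text = f.read().decode('utf-8')
    f.close()

    # ... work with `text` as unicode throughout the program ...

    # unicode -> bytes right before writing
    out = open('out.txt', 'wb')
    out.write(text.encode('utf-8'))
    out.close()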
Q: What's the point of this code pattern? I was trying to create a python wrapper for an tk extension, so I looked at Tkinter.py to learn how to do it. While looking at that file, I found the following pattern appears a lot of times: an internal method (hinted by the leading "_" in the method name) is defined, then a public method is defined just to be the internal method. I want to know what's the benefit of doing this. For example, in the code for class Misc: def _register(self, func, subst=None, needcleanup=1): # doc string and implementations is removed since it's not relevant register = _register Thank you. A: Sometimes, you may want to change a method's behavior. For example, I could do this (hypothetically within the Misc class): def _another_register(self, func, subst=None, needcleanup=1): ... def change_register(self): self.register = self._another_register def restore_register(self): self.register = self._register This can be a pretty handy way to alter the behavior of certain pieces of code without subclassing (but it's generally not advisable to do this kind of thing except within the class itself). A: From PEP8 In addition, the following special forms using leading or trailing underscores are recognized (these can generally be combined with any case convention): _single_leading_underscore: weak "internal use" indicator. E.g. "from M import *" does not import objects whose name starts with an underscore. A: Well, I'm supposing, there could be another internal callable, that could've been used, it just didn't make it to the version you have. Generally, I think it is a good idea - you expose one symbol publically and internally it can be anything, a real method, a stubbed out method, a debug version of the method, anything.
What's the point of this code pattern?
I was trying to create a python wrapper for a tk extension, so I looked at Tkinter.py to learn how to do it. While looking at that file, I found the following pattern appears a lot of times: an internal method (hinted by the leading "_" in the method name) is defined, then a public method is defined just to be the internal method. I want to know what's the benefit of doing this. For example, in the code for class Misc: def _register(self, func, subst=None, needcleanup=1): # doc string and implementations is removed since it's not relevant register = _register Thank you.
[ "Sometimes, you may want to change a method's behavior. For example, I could do this (hypothetically within the Misc class):\ndef _another_register(self, func, subst=None, needcleanup=1):\n ...\n\ndef change_register(self):\n self.register = self._another_register\n\ndef restore_register(self):\n self.register = self._register\n\nThis can be a pretty handy way to alter the behavior of certain pieces of code without subclassing (but it's generally not advisable to do this kind of thing except within the class itself).\n", "From PEP8\nIn addition, the following special forms using leading or trailing\n underscores are recognized (these can generally be combined with any case\n convention):\n\n_single_leading_underscore: weak\n \"internal use\" indicator. E.g. \"from\n M import *\" does not import objects\n whose name starts with an underscore.\n\n", "Well, I'm supposing, there could be another internal callable, that could've been used, it just didn't make it to the version you have. Generally, I think it is a good idea - you expose one symbol publically and internally it can be anything, a real method, a stubbed out method, a debug version of the method, anything.\n" ]
[ 8, 2, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001116693_python.txt
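A small sketch of why the public-name-bound-to-internal-method pattern is convenient — the public name can be rebound (here at class level; names are illustrative) without losing the original implementation:

    class Misc(object):
        def _register(self, func, subst=None, needcleanup=1):
            return 'registered %s' % func.__name__
        register = _register          # public alias to the internal implementation

    def _traced_register(self, func, subst=None, needcleanup=1):
        print('about to register %s' % func.__name__)
        return self._register(func, subst, needcleanup)

    Misc.register = _traced_register  # swap behavior; Misc._register stays intact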
Q: lucene / python Can I use lucene directly from python, preferably without using a binary module? I am interested mainly in read access -- being able to perform queries from python over existing lucene indexes. A: You can't use Lucene itself from CPython without using a binary module, no. You could use it directly from Jython, or you could use a Python port of Lucene, eg. Lupy (though Lupy is no longer under development). If you're prepared to relax your non-binary requirement, PyLucene is a wrapper that embeds Java Lucene into Python. This similar question offers some options: Is there a pure Python Lucene? A: PyLucene is a Python wrapper around Lucene. Therefore, you have to install Lucene as well, and its installation may be a bit complex (especially on Windows!)
lucene / python
Can I use lucene directly from python, preferably without using a binary module? I am interested mainly in read access -- being able to perform queries from python over existing lucene indexes.
[ "You can't use Lucene itself from CPython without using a binary module, no.\nYou could use it directly from Jython, or you could use a Python port of Lucene, eg. Lupy (though Lupy is no longer under development).\nIf you're prepared to relax your non-binary requirement, PyLucene is a wrapper that embeds Java Lucene into Python.\nThis similar question offers some options: Is there a pure Python Lucene?\n", "PyLucene is a Python wrapper around Lucene. Therefore, you have to install Lucene as well, and its installation may be a bit complex (especially on Windows!)\n" ]
[ 8, 8 ]
[]
[]
[ "lucene", "python" ]
stackoverflow_0001116967_lucene_python.txt
Q: Error running twisted application I am trying to run a simple twisted application echo bot that metajack blogged about, everything looks like it is going to load fine, but at the very end I get an error: 2009/07/12 15:46 -0600 [-] ImportError: cannot import name toResponse 2009/07/12 15:46 -0600 [-] Failed to load application: cannot import name toResponse Any ideas on what might be causing this? I've not played with wokkel/twisted/python at all and don't know where to start to look. It is worth noting that I've tried another wokkel/twisted app and got this very same error. A: This error is caused because I have an outdated version of Twisted. Off to find a way to update twisted itself as the installer doesn't seem to be doing the trick. A: There's not really enough information to go on, but if I had to guess, I'd say that you've given your program the same name as one of the modules that it relies on. Try renaming it to anthonys_echo_bot.py and re-running it. Do this: rm *.pyc in the directory in which you're running it first. If that doesn't help, you'll need to track down the piece of code that's trying to import toResponse - is that all the error you get? No traceback, pointing to lines of code?
Error running twisted application
I am trying to run a simple twisted application echo bot that metajack blogged about, everything looks like it is going to load fine, but at the very end I get an error: 2009/07/12 15:46 -0600 [-] ImportError: cannot import name toResponse 2009/07/12 15:46 -0600 [-] Failed to load application: cannot import name toResponse Any ideas on what might be causing this? I've not played with wokkel/twisted/python at all and don't know where to start to look. It is worth noting that I've tried another wokkel/twisted app and got this very same error.
[ "This error is caused because I have an outdated version of Twisted. Off to find a way to update twisted itself as the installer doesnt seem to be doing the trick.\n", "There's not really enough information to go on, but if I had to guess, I'd say that you've given your program the same name as one of the modules that it relies on. Try renaming it to anthonys_echo_bot.py and re-running it. Do this:\nrm *.pyc\n\nin the directory in which you're running it first.\nIf that doesn't help, you'll need to track down the piece of code that's trying import toResponse - is that all the error you get? No traceback, pointing to lines of code?\n" ]
[ 2, 1 ]
[]
[]
[ "python", "twisted" ]
stackoverflow_0001117072_python_twisted.txt
Q: How do I tell a Python script (cygwin) to work in current (or relative) directories? I have lots of directories with text files written using (g)vim, and I have written a handful of utilities that I find useful in Python. I start off the utilities with a pound-bang-/usr/bin/env python line in order to use the Python that is installed under cygwin. I would like to type commands like this: %cd ~/SomeBook %which pythonUtil /usr/local/bin/pythonUtil %pythonUtil ./infile.txt ./outfile.txt (or % pythonUtil someRelPath/infile.txt somePossiblyDifferentRelPath/outfile.txt) pythonUtil: Found infile.txt; Writing outfile.txt; Done (or some such, if anything) However, my pythonUtil programs keep telling me that they can't find infile.txt. If I copy the utility into the current working directory, all is well, but then I have copies of my utilities littering the landscape. What should I be doing? Yet Another Edit: To summarize --- what I wanted was os.path.abspath('filename'). That returns the absolute pathname as a string, and then all ambiguity has been removed. BUT: IF the Python being used is the one installed under cygwin, THEN the absolute pathname will be a CYGWIN-relative pathname, like /home/someUser/someDir/someFile.txt. HOWEVER, IF the Python has been installed under Windows (and is here being called from a cygwin terminal commandline), THEN the absolute pathname will be the complete Windows path, from 'drive' on down, like D:\cygwin\home\someUser\someDir\someFile.txt. Moral: Don't expect the cygwin Python to generate a Windows-complete absolute pathname for a file not rooted at /; it's beyond its event horizon. However, you can reach out to any file on a WinXP system with the cygwin-python if you specify the file's path using the "/cygdrive/driveLetter" leadin convention. Remark: Don't use '\'s for separators in the WinXP path on the cygwin commandline; use '/'s and trust the snake. No idea why, but some separators may be dropped and the path may be modified to include extra levels, such as "Documents and Settings\someUser" and other Windows nonsense. Thanks to the responders for shoving me in the right direction. A: Look at os.getcwd: http://docs.python.org/library/os.html#os-file-dir Edit: For relative paths, please take a look at the os.path module: http://docs.python.org/library/os.path.html in particular, os.path.join and os.path.normpath. For instance: import os print os.path.normpath(os.path.join(os.getcwd(), '../AnotherBook/Chap2.txt')) A: What happens when you type "ls"? Do you see "infile.txt" listed there? A: os.chdir(my_dir) or os.chdir(os.getcwd())
How do I tell a Python script (cygwin) to work in current (or relative) directories?
I have lots of directories with text files written using (g)vim, and I have written a handful of utilities that I find useful in Python. I start off the utilities with a pound-bang-/usr/bin/env python line in order to use the Python that is installed under cygwin. I would like to type commands like this: %cd ~/SomeBook %which pythonUtil /usr/local/bin/pythonUtil %pythonUtil ./infile.txt ./outfile.txt (or % pythonUtil someRelPath/infile.txt somePossiblyDifferentRelPath/outfile.txt) pythonUtil: Found infile.txt; Writing outfile.txt; Done (or some such, if anything) However, my pythonUtil programs keep telling me that they can't find infile.txt. If I copy the utility into the current working directory, all is well, but then I have copies of my utilities littering the landscape. What should I be doing? Yet Another Edit: To summarize --- what I wanted was os.path.abspath('filename'). That returns the absolute pathname as a string, and then all ambiguity has been removed. BUT: IF the Python being used is the one installed under cygwin, THEN the absolute pathname will be a CYGWIN-relative pathname, like /home/someUser/someDir/someFile.txt. HOWEVER, IF the Python has been installed under Windows (and is here being called from a cygwin terminal commandline), THEN the absolute pathname will be the complete Windows path, from 'drive' on down, like D:\cygwin\home\someUser\someDir\someFile.txt. Moral: Don't expect the cygwin Python to generate a Windows-complete absolute pathname for a file not rooted at /; it's beyond its event horizon. However, you can reach out to any file on a WinXP system with the cygwin-python if you specify the file's path using the "/cygdrive/driveLetter" leadin convention. Remark: Don't use '\'s for separators in the WinXP path on the cygwin commandline; use '/'s and trust the snake. No idea why, but some separators may be dropped and the path may be modified to include extra levels, such as "Documents and Settings\someUser" and other Windows nonsense. Thanks to the responders for shoving me in the right direction.
[ "Look at os.getcwd:\n\nhttp://docs.python.org/library/os.html#os-file-dir\n\nEdit: For relative paths, please take a look at the os.path module:\n\nhttp://docs.python.org/library/os.path.html\n\nin particular, os.path.join and os.path.normpath. For instance:\nimport os\nprint os.path.normpath(os.path.join(os.getcwd(), '../AnotherBook/Chap2.txt'))\n\n", "What happens when you type \"ls\"? Do you see \"infile.txt\" listed there?\n", "os.chdir(my_dir)\n\nor\nos.chdir(os.getcwd())\n\n" ]
[ 4, 0, 0 ]
[]
[]
[ "cygwin", "filesystems", "path", "python", "utilities" ]
stackoverflow_0001117414_cygwin_filesystems_path_python_utilities.txt
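A sketch of the os.path.abspath resolution described in the summary above (outputs are illustrative and depend on which Python — cygwin's or Windows' — is running):

    import os

    print(os.path.abspath('infile.txt'))
    # cygwin Python:   /home/someUser/SomeBook/infile.txt
    # Windows Python:  D:\cygwin\home\someUser\SomeBook\infile.txt

    print(os.path.normpath(os.path.join(os.getcwd(), '../AnotherBook/infile.txt')))
    # resolves a relative path against the current working directory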
Q: Cleaning up nested Try/Excepts I've just written a chunk of code that strikes me as being far more nested than is optimal. I'd like advice on how to improve the style of this, particularly so that it conforms more with "Flat is better than nested." for app in apps: if app.split('.', 1)[0] == 'zc': #only look for cron in zc apps try: a = app + '.cron' __import__(a) m = sys.modules[a] try: min = m.cron_minute() for job in min: k.add_interval_task(job[0], 'minute task', r(M_LB, M_UB), 60*job[1], kronos.method.threaded, (), ()) except AttributeError: #no minute tasks pass try: hour = m.cron_hour() for job in hour: k.add_daytime_task(job[0], 'day task', range(1, 8), None, (job[1], r(H_LB, H_UB)), kronos.method.threaded, (), ()) except AttributeError: #no hour tasks pass except ImportError: #no cron jobs for this module pass Edit: Combining the suggestions from below, here's my rewritten form. for app in apps: if app.split('.', 1)[0] != 'zc': #only look for cron in zc apps continue try: a = app + '.cron' __import__(a) except ImportError: #no cron jobs for this module, continue to next one continue m = sys.modules[a] if hasattr(m, 'cron_minute'): min = m.cron_minute() for job in min: k.add_interval_task(job[0], 'minute task', r(M_LB, M_UB), 60*job[1], kronos.method.threaded, (), ()) if hasattr(m, 'cron_hour'): hour = m.cron_hour() for job in hour: k.add_daytime_task(job[0], 'day task', range(1, 8), None, (job[1], r(H_LB, H_UB)), kronos.method.threaded, (), ()) A: The main problem is that your try clauses are too broad, particularly the outermost one: with that kind of habit, you WILL sooner or later run into a mysterious bug because one of your try/except has accidentally hidden an unexpected exception "bubbling up" from some other function you're calling. So I'd suggest, instead: for app in apps: if app.split('.', 1)[0] != 'zc': #only look for cron in zc apps continue try: a = app + '.cron' __import__(a) except ImportError: #no cron jobs for this module continue # etc etc As an aside, I'm also applying "flat is better than nested" in another way (not dependent on any try/except) which is "if I have nothing more to do on this leg of the loop, continue [i.e. move on to the next leg of the loop] instead of "if I have something to do:" followed by a substantial amount of nested code. I've always preferred this style (of if/continue or if/return) to nested if's in languages that supply functionality such as continue (essentially all modern ones, since C has it;-). But this is a simple "flat vs nested" style preference, and the meat of the issue is: keep your try clauses small! Worst case, when you just can't simply continue or return in the except clause, you can use try/except/else: put in the try clause only what absolutely MUST be there -- the tiny piece of code that's likely and expected to raise -- and put the rest of the following code (the part that's NOT supposed nor expected to raise) in the else clause. This doesn't change the nesting, but DOES make a huge difference in lowering the risk of accidentally hiding exceptions that are NOT expected! A: I wonder, if the jobs for each time unit is actually broken, would they raise an AttibuteError, or some other exception? In particular, if there's something about a job that is really busted, you probably aught not to catch them. Another option that can help is to wrap only the offending code with a try-catch, putting the exception handler as close to the exception as possible. Here's a stab: for app in apps: if app.split('.', 1)[0] == 'zc': #only look for cron in zc apps try: a = app + '.cron' __import__(a) m = sys.modules[a] except ImportError: #no cron jobs for this module #exception is silently ignored #since no jobs is not an error continue if hasattr(m, "cron_minute"): min = m.cron_minute() for job in min: k.add_interval_task(job[0], 'minute task', r(M_LB, M_UB), 60*job[1], kronos.method.threaded, (), ()) if hasattr(m, "cron_hour"): hour = m.cron_hour() for job in hour: k.add_daytime_task(job[0], 'day task', range(1, 8), None, (job[1], r(H_LB, H_UB)), kronos.method.threaded, (), ()) notice there is only one exception handler here, which we handle by correctly ignoring. since we can predict the possibility of there not being one attribute or another, we check for it explicitly, which helps to make the code a bit clearer. Otherwise, it's not really obvious why you are catching the AttributeError, or what is even raising it. A: You can create a function that performs the main logic, and another function which invokes that function, wrapping it with the try...except statements. Then, in the main application, you can just invoke those function which already handle the exceptions. (Based on the recommendations of the "Clean Code" book). A: Well, the trick here is to figure out if they are broken. That is the handling part of exception handling. I mean, at least print a warning that states the comment's assumption. Worrying about excessive nesting before the actual handling seems like you are getting ahead of yourself. Worry about being right before being stylish.
Cleaning up nested Try/Excepts
I've just written a chunk of code that strikes me as being far more nested than is optimal. I'd like advice on how to improve the style of this, particularly so that it conforms more with "Flat is better than nested." for app in apps: if app.split('.', 1)[0] == 'zc': #only look for cron in zc apps try: a = app + '.cron' __import__(a) m = sys.modules[a] try: min = m.cron_minute() for job in min: k.add_interval_task(job[0], 'minute task', r(M_LB, M_UB), 60*job[1], kronos.method.threaded, (), ()) except AttributeError: #no minute tasks pass try: hour = m.cron_hour() for job in hour: k.add_daytime_task(job[0], 'day task', range(1, 8), None, (job[1], r(H_LB, H_UB)), kronos.method.threaded, (), ()) except AttributeError: #no hour tasks pass except ImportError: #no cron jobs for this module pass Edit: Combining the suggestions from below, here's my rewritten form. for app in apps: if app.split('.', 1)[0] != 'zc': #only look for cron in zc apps continue try: a = app + '.cron' __import__(a) except ImportError: #no cron jobs for this module, continue to next one continue m = sys.modules[a] if hasattr(m, 'cron_minute'): min = m.cron_minute() for job in min: k.add_interval_task(job[0], 'minute task', r(M_LB, M_UB), 60*job[1], kronos.method.threaded, (), ()) if hasattr(m, 'cron_hour'): hour = m.cron_hour() for job in hour: k.add_daytime_task(job[0], 'day task', range(1, 8), None, (job[1], r(H_LB, H_UB)), kronos.method.threaded, (), ())
[ "The main problem is that your try clauses are too broad, particularly the outermost one: with that kind of habit, you WILL sooner or later run into a mysterious bug because one of your try/except has accidentally hidden an unexpected exception \"bubbling up\" from some other function you're calling.\nSo I'd suggest, instead:\nfor app in apps:\n if app.split('.', 1)[0] != 'zc': #only look for cron in zc apps\n continue\n\n try:\n a = app + '.cron'\n __import__(a)\n except ImportError: #no cron jobs for this module\n continue\n\n # etc etc\n\nAs an aside, I'm also applying \"flat is better than nested\" in another way (not dependent on any try/except) which is \"if I have nothing more to do on this leg of the loop, continue [i.e. move on to the next leg of the loop] instead of \"if I have something to do:\" followed by a substantial amount of nested code. I've always preferred this style (of if/continue or if/return) to nested if's in languages that supply functionality such as continue (essentially all modern ones, since C has it;-).\nBut this is a simple \"flat vs nested\" style preference, and the meat of the issue is: keep your try clauses small! Worst case, when you just can't simply continue or return in the except clause, you can use try/except/else: put in the try clause only what absolutely MUST be there -- the tiny piece of code that's likely and expected to raise -- and put the rest of the following code (the part that's NOT supposed nor expected to raise) in the else clause. This doesn't change the nesting, but DOES make a huge difference in lowering the risk of accidentally hiding exceptions that are NOT expected!\n", "I wonder, if the jobs for each time unit is actually broken, would they raise an AttibuteError, or some other exception? In particular, if there's something about a job that is really busted, you probably aught not to catch them. \nAnother option that can help is to wrap only the offending code with a try-catch, putting the exception handler as close to the exception as possible. Here's a stab: \nfor app in apps:\n if app.split('.', 1)[0] == 'zc': #only look for cron in zc apps\n try:\n a = app + '.cron'\n __import__(a)\n m = sys.modules[a]\n except ImportError: #no cron jobs for this module\n #exception is silently ignored\n #since no jobs is not an error\n continue\n if hasattr(m, \"cron_minute\"):\n min = m.cron_minute()\n for job in min:\n k.add_interval_task(job[0], 'minute task', r(M_LB, M_UB),\n 60*job[1], \n kronos.method.threaded, (), ())\n\n if hasattr(m, \"cron_hour\"):\n hour = m.cron_hour()\n for job in hour:\n k.add_daytime_task(job[0], 'day task', range(1, 8), None,\n (job[1], r(H_LB, H_UB)), \n kronos.method.threaded, (), ())\n\nnotice there is only one exception handler here, which we handle by correctly ignoring. \nsince we can predict the possibility of there not being one attribute or another, we\ncheck for it explicitly, which helps to make the code a bit clearer. Otherwise, it's not\nreally obvious why you are catching the AttributeError, or what is even raising it. \n", "You can create a function that performs the main logic, and another function which invokes that function, wrapping it with the try...except statements. Then, in the main application, you can just invoke those function which already handle the exceptions. (Based on the recommendations of the \"Clean Code\" book).\n", "Well, the trick here is to figure out if they are broken. That is the handling part of exception handling. 
I mean, at least print a warning that states the comment's assumption. Worrying about excessive nesting before the actual handling seems like you are getting ahead of yourself. Worry about being right before being stylish.\n" ]
[ 8, 1, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001117460_python.txt
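A runnable sketch of the try/except/else pattern the first answer above describes but does not show: only the line expected to raise goes inside try, and the follow-up work goes in else. The helper name is illustrative, not from the original post.

import sys

def load_cron_module(app):
    name = app + '.cron'
    try:
        __import__(name)          # the only call expected to raise ImportError
    except ImportError:
        return None               # no cron jobs for this app
    else:
        return sys.modules[name]  # runs only if the import succeeded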
Q: Efficiently importing modules in Django views I was wondering - how do people handle importing large numbers of commonly used modules within django views? And whats the best method to do this efficiently? For instance, I've got some views like, admin_views.py search_views.py . . and from what I've seen, every one of them needs to use HttpResponse or other such commonly used modules. Moreover, some of them need things like BeautifulSoup, and others need other things (md5, auth, et al). What I did when starting the project was to make an include_all.py which contained most of my common imports, and then added these specific things in the view itself. So, I had something like, admin_views.py from include_all import * ... [list of specific module imports for admin] ... search_views.py from include_all import * ... [list of specific module imports for search] ... As time progressed, the include_all became a misc file with anything being needed put into it - as a result, a number of views end up importing modules they don't need. Is this going to affect efficiency? That is, does python (django?) import all the modules once and store/cache them such that any other view needing them doesn't have to import it again? Or is my method of calling this long file a very inefficient one - and I would be better of sticking to individually importing these modules in each view? Are there any best practices for this sort of thing too? Thanks! A: Python itself guarantees that a module is loaded just once (unless reload is explicitly called, which is not the case here): after the first time, import of that module just binds its name directly from sys.modules[themodulename], an extremely fast operation. So Django does not have to do any further optimization, and neither do you. Best practice is avoiding from ... import * in production code (making it clearer and more maintainable where each name is coming from, facilitating testing, etc, etc) and importing modules, "individually" as you put it, exactly where they're needed (by possibly binding fewer names that may save a few microseconds and definitely won't waste any, but "explicit is better than implicit" -- clarity, readability, maintainability -- is the main consideration anyway). A: I guess you could slap your frequently used imports into your __init__.py file. A: Django isn't CGI (or PHP). Your app is a single (or a few) long-running Python process. It doesn't matter how long it takes to start, each HTTP request will simply call your (already loaded) view functions.
Efficiently importing modules in Django views
I was wondering - how do people handle importing large numbers of commonly used modules within django views? And what's the best method to do this efficiently? For instance, I've got some views like, admin_views.py search_views.py . . and from what I've seen, every one of them needs to use HttpResponse or other such commonly used modules. Moreover, some of them need things like BeautifulSoup, and others need other things (md5, auth, et al). What I did when starting the project was to make an include_all.py which contained most of my common imports, and then added these specific things in the view itself. So, I had something like, admin_views.py from include_all import * ... [list of specific module imports for admin] ... search_views.py from include_all import * ... [list of specific module imports for search] ... As time progressed, the include_all became a misc file with anything that was needed put into it - as a result, a number of views end up importing modules they don't need. Is this going to affect efficiency? That is, does python (django?) import all the modules once and store/cache them such that any other view needing them doesn't have to import it again? Or is my method of calling this long file a very inefficient one - and would I be better off sticking to individually importing these modules in each view? Are there any best practices for this sort of thing too? Thanks!
[ "Python itself guarantees that a module is loaded just once (unless reload is explicitly called, which is not the case here): after the first time, import of that module just binds its name directly from sys.modules[themodulename], an extremely fast operation. So Django does not have to do any further optimization, and neither do you.\nBest practice is avoiding from ... import * in production code (making it clearer and more maintainable where each name is coming from, facilitating testing, etc, etc) and importing modules, \"individually\" as you put it, exactly where they're needed (by possibly binding fewer names that may save a few microseconds and definitely won't waste any, but \"explicit is better than implicit\" -- clarity, readability, maintainability -- is the main consideration anyway).\n", "I guess you could slap your frequently used imports into your __init__.py file.\n", "Django isn't CGI (or PHP). Your app is a single (or a few) long-running Python process. It doesn't matter how long it takes to start, each HTTP request will simply call your (already loaded) view functions.\n" ]
[ 6, 0, 0 ]
[]
[]
[ "django", "import", "performance", "python", "python_module" ]
stackoverflow_0001117451_django_import_performance_python_python_module.txt
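A quick demonstration of the caching behaviour the accepted answer describes: after the first import, a module lives in sys.modules and any later import is a fast name binding, not a reload.

import sys
import json

first = json
import json  # second import: a dictionary lookup; the module code does not re-run

assert sys.modules['json'] is json
assert first is json
print 'json was loaded exactly once'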
Q: Django Initialization I have a big array, that I would like to load into memory only once when django starts up and then treat it as a read only global variable. What is the best place to put the code for the initialization of that array? If I put it in settings.py it will be reinitialized every time the settings module is imported, correct? A: settings.py is for Django settings; it's fine to put your own settings in there, but using it for arbitrary non-configuration data structures isn't good practice. Just put it in the module it logically belongs to, and it'll be run just once per instance. If you want to guarantee that the module is loaded on startup and not on first use later on, import that module from your top-level __init__.py to force it to be loaded immediately. A: settings.py is the right place for that. Settings.py is, like any other module, loaded once. There is still the problem of the fact that a module must be imported once for each process, so a respawning style of web server (like apache) will reload it once for each instance in question. For mod_python this will be once per process. for mod_wsgi, this is likely to be just one time, unless you have to restart. tl;dr modules are imported once, even if multiple import statements are used. put it in settings.py
Django Initialization
I have a big array, that I would like to load into memory only once when django starts up and then treat it as a read only global variable. What is the best place to put the code for the initialization of that array? If I put it in settings.py it will be reinitialized every time the settings module is imported, correct?
[ "settings.py is for Django settings; it's fine to put your own settings in there, but using it for arbitrary non-configuration data structures isn't good practice.\nJust put it in the module it logically belongs to, and it'll be run just once per instance. If you want to guarantee that the module is loaded on startup and not on first use later on, import that module from your top-level __init__.py to force it to be loaded immediately.\n", "settings.py is the right place for that. Settings.py is, like any other module, loaded once. There is still the problem of the fact that a module must be imported once for each process, so a respawning style of web server (like apache) will reload it once for each instance in question. For mod_python this will be once per process. for mod_wsgi, this is likely to be just one time, unless you have to restart.\ntl;dr modules are imported once, even if multiple import statements are used. put it in settings.py\n" ]
[ 19, 9 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001116948_django_python.txt
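A minimal sketch of the module-level pattern recommended above: build the large read-only structure once, in the module it logically belongs to, and import it from your views. The module name and loader are hypothetical.

# bigdata.py
def _load():
    # stand-in for reading a large file or database table once per process
    return tuple(range(1000000))

BIG_ARRAY = _load()  # executed once, at first import

Any view can then do from myapp.bigdata import BIG_ARRAY; later imports just rebind the already-loaded object.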
Q: How to find the number of parameters to a Python function from C? I'm using the Python C API to call Python functions from my application. I'd like to present a list of functions that could be called and would like to be able to limit this list to just the ones with the expected number of parameters. I'm happy that I can walk the dictionary to extract a list of functions and use PyCallable_Check to find out if they're callable, but I'm not sure how I can find out how many parameters each function is expecting? I've found one technique involving Boost::Python, but would rather not add that for what (I hope!) will be a minor addition. Thanks :) A: Okay, so in the end I've discovered how to do it. User-defined Python functions have a member called func_code (in Python 3.0+ it's __code__), which itself has a member co_argcount, which is presumably what Boost::Python extracts in the example given by Christophe. The code I'm using looks like this (it's heavily based on a documentation example of how to walk a Python dictionary): PyObject *key, *value; int pos = 0; while(PyDict_Next(pyDictionary, &pos, &key, &value)) { if(PyCallable_Check(value)) { PyObject* fc = PyObject_GetAttrString(value, "func_code"); if(fc) { PyObject* ac = PyObject_GetAttrString(fc, "co_argcount"); if(ac) { const int count = PyInt_AsLong(ac); // we now have the argument count, do something with this function Py_DECREF(ac); } Py_DECREF(fc); } } } Thanks anyway - that thread did indeed lead me in the right direction :) A: Maybe this is helpful? (not tested, there could be relevant pieces of information along this thread)... A: Your C code can call inspect.getargspec just like any Python code would (e.g. via PyObject_CallMethod or other equivalent ways) and get all the scoop about the signature of each function or other callable that it may care about.
How to find the number of parameters to a Python function from C?
I'm using the Python C API to call Python functions from my application. I'd like to present a list of functions that could be called and would like to be able to limit this list to just the ones with the expected number of parameters. I'm happy that I can walk the dictionary to extract a list of functions and use PyCallable_Check to find out if they're callable, but I'm not sure how I can find out how many parameters each function is expecting? I've found one technique involving Boost::Python, but would rather not add that for what (I hope!) will be a minor addition. Thanks :)
[ "Okay, so in the end I've discovered how to do it. User-defined Python functions have a member called func_code (in Python 3.0+ it's __code__), which itself has a member co_argcount, which is presumably what Boost::Python extracts in the example given by Christophe.\nThe code I'm using looks like this (it's heavily based on a documentation example of how to walk a Python dictionary):\n PyObject *key, *value;\n int pos = 0;\n while(PyDict_Next(pyDictionary, &pos, &key, &value)) {\n if(PyCallable_Check(value)) {\n PyObject* fc = PyObject_GetAttrString(value, \"func_code\");\n if(fc) {\n PyObject* ac = PyObject_GetAttrString(fc, \"co_argcount\");\n if(ac) {\n const int count = PyInt_AsLong(ac);\n // we now have the argument count, do something with this function\n Py_DECREF(ac);\n }\n Py_DECREF(fc);\n }\n }\n }\n\nThanks anyway - that thread did indeed lead me in the right direction :)\n", "Maybe this is helpful? (not tested, there could be relevant pieces of information along this thread)...\n", "Your C code can call inspect.getargspec just like any Python code would (e.g. via PyObject_CallMethod or other equivalent ways) and get all the scoop about the signature of each function or other callable that it may care about.\n" ]
[ 5, 1, 0 ]
[]
[]
[ "c", "python" ]
stackoverflow_0001117164_c_python.txt
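The same attributes the C snippet above reads, inspected from the Python side - handy for checking what the C code will see (Python 2 attribute names; in Python 3 use __code__ instead of func_code):

import inspect

def f(a, b, c=3):
    pass

print f.func_code.co_argcount   # -> 3 (counts arguments with defaults too)

args, varargs, varkw, defaults = inspect.getargspec(f)
print args, defaults            # -> ['a', 'b', 'c'] (3,)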
Q: In Python, is there a way to detect the use of incorrect variable names; something like VB's "Option Explicit"? I do most of my development in Java and C++ but recently had to write various scripts and picked up Python. I run python from the command line on scripts; not in interactive mode. I'm wondering if I like a lot of things about the language, but one thing that keeps reducing my productivity is the fact that I get no advance warning if I am using a variable that is not yet defined. For example, somewhere in the code I forget to prefix a variable with its declaring module, or I make a little typo, and the first time I learn about it is when the program crashes. Is there a way to get the python interpreter to throw advance warnings if something might be funky when I access a variable that hasn't been accessed or set somewhere else in the program? I realize this is somewhat against the philosophy of the language, but I can't be the only one who makes these silly errors and has no way of catching them early. A: there are some tools like pylint or pyflakes which may catch some of those. pyflakes is quite fast, and usable on many projects for this reason As reported on pyflakes webpage, the two primary categories of defects reported by PyFlakes are: Names which are used but not defined or used before they are defined Names which are redefined without having been used A: Pydev is pretty well integrated with Pylint, see here -- and pylint is a much more powerful checker than pyflakes (beyond the minor issue of misspelled variables, it will catch style violations, etc, etc -- it's highly customizable for whatever your specific purposes are!).
In Python, is there a way to detect the use of incorrect variable names; something like VB's "Option Explicit"?
I do most of my development in Java and C++ but recently had to write various scripts and picked up Python. I run python from the command line on scripts; not in interactive mode. I'm wondering if I like a lot of things about the language, but one thing that keeps reducing my productivity is the fact that I get no advance warning if I am using a variable that is not yet defined. For example, somewhere in the code I forget to prefix a variable with its declaring module, or I make a little typo, and the first time I learn about it is when the program crashes. Is there a way to get the python interpreter to throw advance warnings if something might be funky when I access a variable that hasn't been accessed or set somewhere else in the program? I realize this is somewhat against the philosophy of the language, but I can't be the only one who makes these silly errors and has no way of catching them early.
[ "there are some tools like pylint or pyflakes which may catch some of those. pyflakes is quite fast, and usable on many projects for this reason\nAs reported on pyflakes webpage, the two primary categories of defects reported by PyFlakes are:\n\nNames which are used but not defined or used before they are defined\nNames which are redefined without having been used \n\n", "Pydev is pretty well integrated with Pylint, see here -- and pylint is a much more powerful checker than pyflakes (beyond the minor issue of misspelled variables, it will catch style violations, etc, etc -- it's highly customizable for whatever your specific purposes are!).\n" ]
[ 2, 2 ]
[]
[]
[ "python" ]
stackoverflow_0001117661_python.txt
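For a concrete picture of what the checkers above catch, here is the kind of slip the question describes; pyflakes flags the undefined name without running the program (output wording varies slightly by version, roughly "typo_demo.py:6: undefined name 'reslut'").

# typo_demo.py
def total(items):
    result = 0
    for item in items:
        result += item
    return reslut   # misspelled: reported as an undefined name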
Q: Updating part of a surface in python, or transparent surfaces I have an application written in python that's basically an etch-a-sketch, you move pixels around with WASD and arrow keys and it leaves a trail. However, I want to add a counter for the amount of pixels on the screen. How do I have the counter update without updating the entire surface and pwning the pixel drawings? Alternatively, can I make a surface that's completely transparent except for the text so you can see the drawing surface underneath? A: To solve this problem, you want to have a separate surface for your Etch-a-Sketch pixels, so that they do not get clobbered when you go to refresh the screen. Unfortunately, with Rigo's scheme, the font will continue to render on top of itself, which will get messy for more than two pixel count changes. So, here's some sample rendering code: # Fill background screen.fill((0xcc, 0xcc, 0xcc)) # Blit Etch-a-Sketch surface (with the drawing) # etch_surf should be the same size as the screen screen.blit(etch_surf, (0, 0)) # Render the pixel count arial = pygame.font.SysFont('Arial', 20) counter_surf = arial.render(str(pixel_count), True, (0, 0, 0)) screen.blit(counter_surf, (16, 16)) # Refresh entire screen pygame.display.update() Now, admittedly, updating the entire screen is rather inefficient. For this, you have two options: only refresh the screen when the drawing changes or track the location of drawing changes and refresh individual locations (see the update documentation). If you choose the second option, you will have to refresh the text and where it was previously; I would recommend having a Sprite manage this. A: What you need is pygame.font module #define a font surface spamSurface = pygame.font.SysFont('Arial', 20) #then, in your infinite cycle... eggsPixels = spamSurface.render(str(pixelsOnScreen), True, (255, 255, 255)) hamDisplay.blit(eggsPixels, (10, 10)) Where spamSurface is a new font surface, eggsPixels is the value that spamSurface will render (display/show) and hamDisplay is your main surface display.
Updating part of a surface in python, or transparent surfaces
I have an application written in python that's basically an etch-a-sketch, you move pixels around with WASD and arrow keys and it leaves a trail. However, I want to add a counter for the amount of pixels on the screen. How do I have the counter update without updating the entire surface and pwning the pixel drawings? Alternatively, can I make a surface that's completely transparent except for the text so you can see the drawing surface underneath?
[ "To solve this problem, you want to have a separate surface for your Etch-a-Sketch pixels, so that they do not get clobbered when you go to refresh the screen. Unfortunately, with Rigo's scheme, the font will continue to render on top of itself, which will get messy for more than two pixel count changes.\nSo, here's some sample rendering code:\n# Fill background\nscreen.fill((0xcc, 0xcc, 0xcc))\n# Blit Etch-a-Sketch surface (with the drawing)\n# etch_surf should be the same size as the screen\nscreen.blit(etch_surf, (0, 0))\n# Render the pixel count\narial = pygame.font.SysFont('Arial', 20)\ncounter_surf = arial.render(str(pixel_count), True, (0, 0, 0))\nscreen.blit(counter_surf, (16, 16))\n# Refresh entire screen\npygame.display.update()\n\nNow, admittedly, updating the entire screen is rather inefficient. For this, you have two options: only refresh the screen when the drawing changes or track the location of drawing changes and refresh individual locations (see the update documentation). If you choose the second option, you will have to refresh the text and where it was previously; I would recommend having a Sprite manage this.\n", "What you need is pygame.font module\n#define a font surface \nspamSurface = pygame.font.SysFont('Arial', 20)\n\n#then, in your infinite cycle... \neggsPixels = spamSurface.render(str(pixelsOnScreen), True, (255, 255, 255))\nhamDisplay.blit(eggsPixels, (10, 10))\n\nWhere spamSurface is a new font surface, eggsPixels is the value that spamSurface will render (display/show) and hamDisplay is your main surface display.\n" ]
[ 1, 0 ]
[]
[]
[ "pygame", "python" ]
stackoverflow_0001072734_pygame_python.txt
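A sketch of the asker's second alternative, which the answers don't show directly: a per-pixel-alpha overlay surface that is fully transparent except for the counter text. Sizes, font and colours here are made up.

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
font = pygame.font.SysFont('Arial', 20)

overlay = pygame.Surface(screen.get_size(), pygame.SRCALPHA)
overlay.fill((0, 0, 0, 0))                       # fully transparent
text = font.render(str(1234), True, (0, 0, 0))   # the pixel count goes here
overlay.blit(text, (16, 16))

# each frame: blit the drawing surface first, then the overlay on top
# screen.blit(etch_surf, (0, 0))
# screen.blit(overlay, (0, 0))
# pygame.display.update()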
Q: Most "pythonic" way of organising class attributes, constructor arguments and subclass constructor defaults? Being relatively new to Python 2, I'm uncertain how best to organise my class files in the most 'pythonic' way. I wouldn't be asking this but for the fact that Python seems to have quite a few ways of doing things that are very different to what I have come to expect from the languages I am used to. Initially, I was just treating classes how I'd usually treat them in C# or PHP, which of course made me trip up all over the place when I eventually discovered the mutable values gotcha: class Pants(object): pockets = 2 pocketcontents = [] class CargoPants(Pants): pockets = 200 p1 = Pants() p1.pocketcontents.append("Magical ten dollar bill") p2 = CargoPants() print p2.pocketcontents Yikes! Didn't expect that! I've spent a lot of time searching the web and through some source for other projects for hints on how best to arrange my classes, and one of the things I noticed was that people seem to declare a lot of their instance variables - mutable or otherwise - in the constructor, and also pile the default constructor arguments on quite thickly. After developing like this for a while, I'm still left scratching my head a bit about the unfamiliarity of it. Considering the lengths to which the python language goes to to make things seem more intuitive and obvious, it seems outright odd to me in the few cases where I've got quite a lot of attributes or a lot of default constructor arguments, especially when I'm subclassing: class ClassWithLotsOfAttributes(object): def __init__(self, jeebus, coolness='lots', python='isgoodfun', pythonic='nebulous', duck='goose', pants=None, magictenbucks=4, datawad=None, dataload=None, datacatastrophe=None): if pants is None: pants = [] if datawad is None: datawad = [] if dataload is None: dataload = [] if datacatastrophe is None: datacatastrophe = [] self.coolness = coolness self.python = python self.pythonic = pythonic self.duck = duck self.pants = pants self.magictenbucks = magictenbucks self.datawad = datawad self.dataload = dataload self.datacatastrophe = datacatastrophe self.bigness = None self.awesomeitude = None self.genius = None self.fatness = None self.topwise = None self.brillant = False self.strangenessfactor = 3 self.noisiness = 12 self.whatever = None self.yougettheidea = True class Dog(ClassWithLotsOfAttributes): def __init__(self, coolness='lots', python='isgoodfun', pythonic='nebulous', duck='goose', pants=None, magictenbucks=4, datawad=None, dataload=None, datacatastrophe=None): super(ClassWithLotsOfAttributes, self).__init__(coolness, python, pythonic, duck, pants, magictenbucks, datawad, dataload, datacatastrophe) self.noisiness = 1000000 def quack(self): print "woof" Mild silliness aside (I can't really help myself when cooking up these artificial example classes), assuming I have a real-world need for a set of classes with this many attributes, I suppose my questions are: What is the most, uhh, 'pythonic' way of declaring a class with that many attributes? Is it best to put them against the class if the default is immutable, ala Pants.pockets, or is it better to put them in the constructor, ala ClassWithLotsOfAttributes.noisiness? Is there a way to eliminate the need to redeclare the defaults for all of the subclass constructor arguments, as in Dog.__init__? Should I even be including this many arguments with defaults anyway? A: If attributes will vary from instance to instance make them instance attribute i.e. 
create them inside __init__ using self; else, if they need to be shared between class instances like a constant, put them at class level. If your class really needs to pass so many arguments in __init__, let the derived class use an argument list and keyword arguments, e.g. class Dog(ClassWithLotsOfAttributes): def __init__(self, *args, **kwargs): super(Dog, self).__init__(*args, **kwargs) self.coolness = "really cool!!!" No need to pass all variables except a few important ones in __init__; the class can assume some defaults and the user can change them later on if needed. Use 4 spaces instead of tabs. If you need to add an extra arg bite to Dog and a keyword arg old too: class CoolDog(ClassWithLotsOfAttributes): def __init__(self, bite, *args, **kwargs): self.old = kwargs.pop('old', False) # pop our keyword before passing the rest to the base class super(CoolDog, self).__init__(*args, **kwargs) self.bite = bite self.coolness = "really really cool!!!" Various ways you can use CoolDog: CoolDog(True) CoolDog(True, old=False) CoolDog(bite=True, old=True) CoolDog(old=True, bite=False)
Most "pythonic" way of organising class attributes, constructor arguments and subclass constructor defaults?
Being relatively new to Python 2, I'm uncertain how best to organise my class files in the most 'pythonic' way. I wouldn't be asking this but for the fact that Python seems to have quite a few ways of doing things that are very different to what I have come to expect from the languages I am used to. Initially, I was just treating classes how I'd usually treat them in C# or PHP, which of course made me trip up all over the place when I eventually discovered the mutable values gotcha: class Pants(object): pockets = 2 pocketcontents = [] class CargoPants(Pants): pockets = 200 p1 = Pants() p1.pocketcontents.append("Magical ten dollar bill") p2 = CargoPants() print p2.pocketcontents Yikes! Didn't expect that! I've spent a lot of time searching the web and through some source for other projects for hints on how best to arrange my classes, and one of the things I noticed was that people seem to declare a lot of their instance variables - mutable or otherwise - in the constructor, and also pile the default constructor arguments on quite thickly. After developing like this for a while, I'm still left scratching my head a bit about the unfamiliarity of it. Considering the lengths to which the python language goes to to make things seem more intuitive and obvious, it seems outright odd to me in the few cases where I've got quite a lot of attributes or a lot of default constructor arguments, especially when I'm subclassing: class ClassWithLotsOfAttributes(object): def __init__(self, jeebus, coolness='lots', python='isgoodfun', pythonic='nebulous', duck='goose', pants=None, magictenbucks=4, datawad=None, dataload=None, datacatastrophe=None): if pants is None: pants = [] if datawad is None: datawad = [] if dataload is None: dataload = [] if datacatastrophe is None: datacatastrophe = [] self.coolness = coolness self.python = python self.pythonic = pythonic self.duck = duck self.pants = pants self.magictenbucks = magictenbucks self.datawad = datawad self.dataload = dataload self.datacatastrophe = datacatastrophe self.bigness = None self.awesomeitude = None self.genius = None self.fatness = None self.topwise = None self.brillant = False self.strangenessfactor = 3 self.noisiness = 12 self.whatever = None self.yougettheidea = True class Dog(ClassWithLotsOfAttributes): def __init__(self, coolness='lots', python='isgoodfun', pythonic='nebulous', duck='goose', pants=None, magictenbucks=4, datawad=None, dataload=None, datacatastrophe=None): super(ClassWithLotsOfAttributes, self).__init__(coolness, python, pythonic, duck, pants, magictenbucks, datawad, dataload, datacatastrophe) self.noisiness = 1000000 def quack(self): print "woof" Mild silliness aside (I can't really help myself when cooking up these artificial example classes), assuming I have a real-world need for a set of classes with this many attributes, I suppose my questions are: What is the most, uhh, 'pythonic' way of declaring a class with that many attributes? Is it best to put them against the class if the default is immutable, ala Pants.pockets, or is it better to put them in the constructor, ala ClassWithLotsOfAttributes.noisiness? Is there a way to eliminate the need to redeclare the defaults for all of the subclass constructor arguments, as in Dog.__init__? Should I even be including this many arguments with defaults anyway?
[ "\nIf attributes will vary from instance\nto instance make them instance\nattribute i.e. create them\ninside__init__ using self else if they need to\nbe shared between class instances\nlike a constant, put them at class\nlevel.\nIf your class really need to pass, so\nmany arguments in __init__, let\nderive class use argument list and\nkeyword arguments e.g.\n\nclass Dog(ClassWithLotsOfAttributes):\n def __init__(self, *args , **kwargs):\n super(ClassWithLotsOfAttributes, self).__init__(*args , **kwargs)\n self.coolness = \"really cool!!!\n\nNo need of passing all variables except few important ones, in\n__init__, class can assume some\ndefaults and user can change them\nlater on if needed.\nUse 4 spaces instead of tab.\nif you need to add an extra arg bite, to Dog and keyword arg old too\n\nclass CoolDog(ClassWithLotsOfAttributes):\n def __init__(self, bite, *args , **kwargs):\n self.old = kwargs.pop('old', False) # this way we can access base class args too\n super(ClassWithLotsOfAttributes, self).__init__(*args , **kwargs)\n self.bite = bite\n self.coolness = \"really really cool!!!\nvarious ways you useCoolDog\nCoolDog(True)\nCoolDog(True, old=False)\nCoolDog(bite=True, old=True)\nCoolDog(old=True, bite=False)\n\n" ]
[ 7 ]
[]
[]
[ "python", "python_2.6" ]
stackoverflow_0001118006_python_python_2.6.txt
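For completeness, the fix for the mutable-class-attribute gotcha quoted at the top of the question: keep immutable shared values at class level, and create mutable state per instance in __init__.

class Pants(object):
    pockets = 2                    # immutable and shared: fine at class level

    def __init__(self):
        self.pocketcontents = []   # a fresh list for every instance

class CargoPants(Pants):
    pockets = 200

p1 = Pants()
p1.pocketcontents.append("Magical ten dollar bill")
p2 = CargoPants()
print p2.pocketcontents   # -> [] : no longer shared between instances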
Q: Fake a cookie to scrape a site in python The site that I'm trying to scrape uses js to create a cookie. What I was thinking was that I can create a cookie in python and then use that cookie to scrape the site. However, I don't know any way of doing that. Does anybody have any ideas? A: Please see Python httplib2 - Handling Cookies in HTTP Form Posts for an example of adding a cookie to a request. I often need to automate tasks in web based applications. I like to do this at the protocol level by simulating a real user's interactions via HTTP. Python comes with two built-in modules for this: urllib (higher level Web interface) and httplib (lower level HTTP interface). A: If you want to do more involved browser emulation (including setting cookies) take a look at mechanize. Its simulation capabilities are almost complete (no Javascript support unfortunately): I've used it to build several scrapers with much success.
Fake a cookie to scrape a site in python
The site that I'm trying to scrape uses js to create a cookie. What I was thinking was that I can create a cookie in python and then use that cookie to scrape the site. However, I don't know any way of doing that. Does anybody have any ideas?
[ "Please see Python httplib2 - Handling Cookies in HTTP Form Posts for an example of adding a cookie to a request.\n\nI often need to automate tasks in web\n based applications. I like to do this\n at the protocol level by simulating a\n real user's interactions via HTTP. \n Python comes with two built-in modules\n for this: urllib (higher level Web\n interface) and httplib (lower level\n HTTP interface).\n\n", "If you want to do more involved browser emulation (including setting cookies) take a look at mechanize. It's simulation capabilities are almost complete (no Javascript support unfortunately): I've used it to build several scrapers with much success.\n" ]
[ 2, 2 ]
[]
[]
[ "cookiejar", "cookies", "python" ]
stackoverflow_0001117491_cookiejar_cookies_python.txt
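A minimal sketch of the hand-built-cookie idea using urllib2 from the Python 2 standard library. The URL and the cookie name/value are placeholders - you would have to reproduce whatever value the site's JavaScript computes.

import urllib2

req = urllib2.Request('http://example.com/page')
req.add_header('Cookie', 'sessionid=abc123')   # hypothetical name=value pair
html = urllib2.urlopen(req).read()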
Q: Retrieve cookie created using javascript in python I've had a look at many tutorials regarding cookiejar, but my problem is that the webpage that i want to scape creates the cookie using javascript and I can't seem to retrieve the cookie. Does anybody have a solution to this problem? A: If all pages have the same JavaScript then maybe you could parse the HTML to find that piece of code, and from that get the value the cookie would be set to? That would make your scraping quite vulnerable to changes in the third party website, but that's most often the case while scraping. (Please bear in mind that the third-party website owner may not like that you're getting the content this way.) A: I responded to your other question as well: take a look at mechanize. It's probably the most fully featured scraping module I know: if the cookie is sent, then I'm sure you can get to it with this module. A: Maybe you can execute the JavaScript code in a JavaScript engine with Python bindings (like python-spidermonkey or pyv8) and then retrieve the cookie. Or, as the javascript code is executed client side anyway, you may be able to convert the cookie-generating code to Python. A: You could access the page using a real browser, via PAMIE, win32com or similar, then the JavaScript will be running in its native environment.
Retrieve cookie created using javascript in python
I've had a look at many tutorials regarding cookiejar, but my problem is that the webpage that I want to scrape creates the cookie using javascript and I can't seem to retrieve the cookie. Does anybody have a solution to this problem?
[ "If all pages have the same JavaScript then maybe you could parse the HTML to find that piece of code, and from that get the value the cookie would be set to? \nThat would make your scraping quite vulnerable to changes in the third party website, but that's most often the case while scraping. (Please bear in mind that the third-party website owner may not like that you're getting the content this way.)\n", "I responded to your other question as well: take a look at mechanize. It's probably the most fully featured scraping module I know: if the cookie is sent, then I'm sure you can get to it with this module.\n", "Maybe you can execute the JavaScript code in a JavaScript engine with Python bindings (like python-spidermonkey or pyv8) and then retrieve the cookie. Or, as the javascript code is executed client side anyway, you may be able to convert the cookie-generating code to Python.\n", "You could access the page using a real browser, via PAMIE, win32com or similar, then the JavaScript will be running in its native environment.\n" ]
[ 3, 1, 0, 0 ]
[]
[]
[ "cookiejar", "cookies", "python", "urllib2" ]
stackoverflow_0001116362_cookiejar_cookies_python_urllib2.txt
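A hedged sketch of the mechanize suggestion above: attach a cookie jar and list what comes back. Since mechanize runs no JavaScript, only cookies set via HTTP headers will appear here, which is exactly the limitation the answers point out. The URL is a placeholder.

import cookielib
import mechanize

cj = cookielib.LWPCookieJar()
br = mechanize.Browser()
br.set_cookiejar(cj)
br.open('http://example.com/')

for cookie in cj:
    print cookie.name, cookie.value   # header-set cookies only, not JS-set ones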
Q: Python MySQLdb exceptions Just starting to get to grips with python and MySQLdb and was wondering where the best place is to put a try/catch block for the connection to MySQL. At the MySQLdb.connect point? Also, should there be one whenever I query? What exceptions should I be catching on any of these blocks? Thanks for any help. Cheers Mark A: Catch the MySQLdb.Error, both while connecting and while executing a query A: I think that the connection and the query can raise errors, so you should have try/except for both of them.
Python MySQLdb exceptions
Just starting to get to grips with python and MySQLdb and was wondering where the best place is to put a try/catch block for the connection to MySQL. At the MySQLdb.connect point? Also, should there be one whenever I query? What exceptions should I be catching on any of these blocks? Thanks for any help. Cheers Mark
[ "Catch the MySQLdb.Error, while connecting and while executing query\n", "I think that the connections and the query can raised errors so you should have try/excepy for both of them. \n" ]
[ 16, 1 ]
[]
[]
[ "exception", "mysql", "python" ]
stackoverflow_0001117828_exception_mysql_python.txt
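Putting the first answer into code - a sketch that catches MySQLdb.Error at both points, with placeholder connection parameters (Python 2 except syntax):

import sys
import MySQLdb

try:
    conn = MySQLdb.connect(host='localhost', user='me',
                           passwd='secret', db='test')
except MySQLdb.Error, e:
    print >> sys.stderr, 'could not connect: %s' % e
    sys.exit(1)

try:
    cursor = conn.cursor()
    cursor.execute('SELECT VERSION()')
    print cursor.fetchone()
except MySQLdb.Error, e:
    print >> sys.stderr, 'query failed: %s' % e
finally:
    conn.close()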
Q: Is there a database implementation that has notifications and revisions? I am looking for a database library that can be used within an editor to replace a custom document format. In my case the document would contain a functional program. I want application data to be persistent even while editing, so that when the program crashes, no data is lost. I know that all databases offer that. On top of that, I want to access and edit the document from multiple threads, processes, possibly even multiple computers. Format: a simple key/value database would totally suffice. SQL usually needs to be wrapped, and if I can avoid pulling in a heavy ORM dependency, that would be splendid. Revisions: I want to be able to roll back changes up to the first change to the document that has ever been made, not only in one session, but also between sessions/program runs. I need notifications: each process must be able to be notified of changes to the document so it can update its view accordingly. I see these requirements as rather basic, a foundation to solve the usual tough problems of an editing application: undo/redo, multiple views on the same data. Thus, the database system should be lightweight and undemanding. Thank you for your insights in advance :) A: Berkeley DB is an undemanding, light-weight key-value database that supports locking and transactions. There are bindings for it in a lot of programming languages, including C++ and python. You'll have to implement revisions and notifications yourself, but that's actually not all that difficult. A: It might be a bit more power than what you ask for, but You should definitely look at CouchDB. It is a document database with "document" being defined as a JSON record. It stores all the changes to the documents as revisions, so you instantly get revisions. It has powerful javascript based view engine to aggregate all the data you need from the database. All the commits to the database are written to the end of the repository file and the writes are atomic, meaning that unsuccessful writes do not corrupt the database. Another nice bonus You'll get is easy and flexible replication and of your database. See the full feature list on their homepage On the minus side (depending on Your point of view) is the fact that it is written in Erlang and (as far as I know) runs as an external process... I don't know anything about notifications though - it seems that if you are working with replicated databases, the changes are instantly replicated/synchronized between databases. Other than that I suppose you should be able to roll your own notification schema... A: Check out ZODB. It doesn't have notifications built in, so you would need a messaging system there (since you may use separate computers). But it has transactions, you can roll back forever (unless you pack the database, which removes earlier revisions), you can access it directly as an integrated part of the application, or it can run as client/server (with multiple clients of course), you can have automatic persistency, there is no ORM, etc. It's pretty much Python-only though (it's based on Pickles). http://en.wikipedia.org/wiki/Zope_Object_Database http://pypi.python.org/pypi/ZODB3 http://wiki.zope.org/ZODB/guide/index.html http://wiki.zope.org/ZODB/Documentation
Is there a database implementation that has notifications and revisions?
I am looking for a database library that can be used within an editor to replace a custom document format. In my case the document would contain a functional program. I want application data to be persistent even while editing, so that when the program crashes, no data is lost. I know that all databases offer that. On top of that, I want to access and edit the document from multiple threads, processes, possibly even multiple computers. Format: a simple key/value database would totally suffice. SQL usually needs to be wrapped, and if I can avoid pulling in a heavy ORM dependency, that would be splendid. Revisions: I want to be able to roll back changes up to the first change to the document that has ever been made, not only in one session, but also between sessions/program runs. I need notifications: each process must be able to be notified of changes to the document so it can update its view accordingly. I see these requirements as rather basic, a foundation to solve the usual tough problems of an editing application: undo/redo, multiple views on the same data. Thus, the database system should be lightweight and undemanding. Thank you for your insights in advance :)
[ "Berkeley DB is an undemanding, light-weight key-value database that supports locking and transactions. There are bindings for it in a lot of programming languages, including C++ and python. You'll have to implement revisions and notifications yourself, but that's actually not all that difficult.\n", "It might be a bit more power than what you ask for, but You should definitely look at CouchDB.\nIt is a document database with \"document\" being defined as a JSON record.\nIt stores all the changes to the documents as revisions, so you instantly get revisions.\nIt has powerful javascript based view engine to aggregate all the data you need from the database.\nAll the commits to the database are written to the end of the repository file and the writes are atomic, meaning that unsuccessful writes do not corrupt the database.\nAnother nice bonus You'll get is easy and flexible replication and of your database.\nSee the full feature list on their homepage\nOn the minus side (depending on Your point of view) is the fact that it is written in Erlang and (as far as I know) runs as an external process...\nI don't know anything about notifications though - it seems that if you are working with replicated databases, the changes are instantly replicated/synchronized between databases. Other than that I suppose you should be able to roll your own notification schema...\n", "Check out ZODB. It doesn't have notifications built in, so you would need a messaging system there (since you may use separate computers). But it has transactions, you can roll back forever (unless you pack the database, which removes earlier revisions), you can access it directly as an integrated part of the application, or it can run as client/server (with multiple clients of course), you can have automatic persistency, there is no ORM, etc.\nIt's pretty much Python-only though (it's based on Pickles).\nhttp://en.wikipedia.org/wiki/Zope_Object_Database\nhttp://pypi.python.org/pypi/ZODB3\nhttp://wiki.zope.org/ZODB/guide/index.html\nhttp://wiki.zope.org/ZODB/Documentation\n" ]
[ 1, 1, 0 ]
[]
[]
[ "c++", "database_design", "editor", "python" ]
stackoverflow_0001118272_c++_database_design_editor_python.txt
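A hedged sketch of the ZODB suggestion from the last answer: a persistent dictionary with real transactions. The file name is arbitrary, and notifications would still have to be layered on top, as that answer notes.

from ZODB.FileStorage import FileStorage
from ZODB.DB import DB
import transaction

db = DB(FileStorage('document.fs'))
conn = db.open()
root = conn.root()                 # behaves like a persistent dict

root['title'] = 'My document'
transaction.commit()               # durable once this returns

root['title'] = 'accidental edit'
transaction.abort()                # discards the uncommitted change

db.close()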
Q: Is there a way to know which versions of python are supported by my code? You may know the Windows compliance tool that helps people to know if their code is supported by any version of the MS OS. I am looking something similar for Python. I am writing a lib with Python 2.6 and I realized that it was not compatible with Python 2.5 due to the use of the with keyword. I would like to know if there is a simple and automatic way to avoid this situation in the future. I am also interested in something similar to know which OS are supported. Thanks for your help A: In response to a previous question about this, I wrote pyqver. If you have any improvements, please feel free to fork and contribute! A: I recommend you rather use automated tests than a code analysis tool. Be aware that there are subtle behaviour changes in the Python standard library that your code may or may not depend upon. For example httplib: When uploading files, it is normal to give the data as a str. In Python 2.6 you can give stream objects instead (useful for >1GB files) if you nudge them correctly, but in Python 2.5 you will get an error. A comprehensive set of unit tests and integration tests will be much more reliable because they test that your program actually works on Python version X.Y. $ python2.6 tests/run_all.py ................................. 33 tests passed [OK] You're Python 2.6 compatible. $ python2.4 tests/run_all.py ...........EEE.........EEE....... 27 tests passed, 6 errors [FAIL] You're not Python 2.4 compatible. A: Python 2.5 can still be saved, since it can use the with keyword: from __future__ import with_statement
Is there a way to know which versions of python are supported by my code?
You may know the Windows compliance tool that helps people to know if their code is supported by any version of the MS OS. I am looking for something similar for Python. I am writing a lib with Python 2.6 and I realized that it was not compatible with Python 2.5 due to the use of the with keyword. I would like to know if there is a simple and automatic way to avoid this situation in the future. I am also interested in something similar to know which OSes are supported. Thanks for your help
[ "In response to a previous question about this, I wrote pyqver. If you have any improvements, please feel free to fork and contribute!\n", "I recommend you rather use automated tests than a code analysis tool.\nBe aware that there are subtle behaviour changes in the Python standard library that your code may or may not depend upon. For example httplib: When uploading files, it is normal to give the data as a str. In Python 2.6 you can give stream objects instead (useful for >1GB files) if you nudge them correctly, but in Python 2.5 you will get an error.\nA comprehensive set of unit tests and integration tests will be much more reliable because they test that your program actually works on Python version X.Y.\n$ python2.6 tests/run_all.py\n.................................\n33 tests passed\n[OK]\n\nYou're Python 2.6 compatible.\n$ python2.4 tests/run_all.py\n...........EEE.........EEE.......\n27 tests passed, 6 errors\n[FAIL]\n\nYou're not Python 2.4 compatible.\n", "Python 2.5 can still be saved, since it can use the with keyword:\nfrom __future__ import with_statement\n\n" ]
[ 7, 6, 0 ]
[]
[]
[ "dependencies", "python", "versioning" ]
stackoverflow_0001118208_dependencies_python_versioning.txt
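In practice the future-import from the last answer looks like this; the module then runs under Python 2.5 as well as 2.6 (the file name is a placeholder):

from __future__ import with_statement

with open('data.txt') as f:
    for line in f:
        print line.rstrip()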
Q: Wavelet plot with Python libraries I know that SciPy has some signal processing tools for wavelets in scipy.signal.wavelets and a chart can be drawn using Matplotlib, but it seems I can't get it right. I have tried plotting a Daubechies wavelet against a linear space, but it's not what I am looking for. I am highly unskilled about wavelets and math in general . :) A: With a recent trunk version of PyWavelets, getting approximations of scaling function and wavelet function on x-grid is pretty straightforward: [phi, psi, x] = pywt.Wavelet('db2').wavefun(level=4) Note that x-grid output is not available in v0.1.6, so if you need that you will have to use the trunk version. Having that data, you can plot it using your favourite plotting package, for example: import pylab pylab.plot(x, psi) pylab.show() A very similar method is used on wavelets.pybytes.com demo page, but there the charts are done with Google Charts for online presentation.
Wavelet plot with Python libraries
I know that SciPy has some signal processing tools for wavelets in scipy.signal.wavelets and a chart can be drawn using Matplotlib, but it seems I can't get it right. I have tried plotting a Daubechies wavelet against a linear space, but it's not what I am looking for. I am highly unskilled about wavelets and math in general . :)
[ "With a recent trunk version of PyWavelets, getting approximations of scaling function and wavelet function on x-grid is pretty straightforward:\n[phi, psi, x] = pywt.Wavelet('db2').wavefun(level=4)\n\nNote that x-grid output is not available in v0.1.6, so if you need that you will have to use the trunk version.\nHaving that data, you can plot it using your favourite plotting package, for example:\nimport pylab\npylab.plot(x, psi)\npylab.show()\n\nA very similar method is used on wavelets.pybytes.com demo page, but there the charts are done with Google Charts for online presentation.\n" ]
[ 14 ]
[]
[]
[ "matplotlib", "python", "pywt", "scipy", "wavelet" ]
stackoverflow_0001094655_matplotlib_python_pywt_scipy_wavelet.txt
Q: Interpret this particular REGEX I did a REGEX pattern some time ago and I don't remember its meaning. For me this is a write-only language :) Here is the REGEX: "(?!^[0-9]*$)(?!^[a-zA-Z]*$)^([a-zA-Z0-9]{8,10})$" I need to know, in plain English, what does it means. A: (?!^[0-9]*$) don't match only numbers, (?!^[a-zA-Z]*$) don't match only letters, ^([a-zA-Z0-9]{8,10})$ match letters and number 8 to 10 characters long. A: Perl (and Python accordingly) says to the (?!...) part: A zero-width negative lookahead assertion. For example /foo(?!bar)/ matches any occurrence of 'foo' that isn't followed by 'bar'. Note however that lookahead and lookbehind are NOT the same thing. You cannot use this for lookbehind. That means, (?!^[0-9]*$) means: don't match, if the string contains only numbers. (^: start of line/string, $: end of line/string) The other accordingly. Your regexp matches any string, that contains both numbers and letters, but not only one of them. Cheers, Update: For your future RegExp tailoring, take a look at the (?#...) pattern. It allows you to embed comments in your regexp. There is also a modifier, re.X, but I don't like this very much. It's your choice. A: RegexBuddy says the following (!?!): (?!^[0-9]*$)(?!^[a-zA-Z]*$)^([a-zA-Z0-9]{8,10})$ Options: ^ and $ match at line breaks Assert that it is impossible to match the regex below starting at this position (negative lookahead) «(?!^[0-9]*$)» Assert position at the beginning of a line (at beginning of the string or after a line break character) «^» Match a single character in the range between “0” and “9” «[0-9]*» Between zero and unlimited times, as many times as possible, giving back as needed (greedy) «*» Assert position at the end of a line (at the end of the string or before a line break character) «$» Assert that it is impossible to match the regex below starting at this position (negative lookahead) «(?!^[a-zA-Z]*$)» Assert position at the beginning of a line (at beginning of the string or after a line break character) «^» Match a single character present in the list below «[a-zA-Z]*» Between zero and unlimited times, as many times as possible, giving back as needed (greedy) «*» A character in the range between “a” and “z” «a-z» A character in the range between “A” and “Z” «A-Z» Assert position at the end of a line (at the end of the string or before a line break character) «$» Assert position at the beginning of a line (at beginning of the string or after a line break character) «^» Match the regular expression below and capture its match into backreference number 1 «([a-zA-Z0-9]{8,10})» Match a single character present in the list below «[a-zA-Z0-9]{8,10}» Between 8 and 10 times, as many times as possible, giving back as needed (greedy) «{8,10}» A character in the range between “a” and “z” «a-z» A character in the range between “A” and “Z” «A-Z» A character in the range between “0” and “9” «0-9» Assert position at the end of a line (at the end of the string or before a line break character) «$»
Interpret this particular REGEX
I did a REGEX pattern some time ago and I don't remember its meaning. For me this is a write-only language :)
Here is the REGEX:
"(?!^[0-9]*$)(?!^[a-zA-Z]*$)^([a-zA-Z0-9]{8,10})$"

I need to know, in plain English, what it means.
[ "(?!^[0-9]*$)\n\ndon't match only numbers,\n(?!^[a-zA-Z]*$)\n\ndon't match only letters,\n^([a-zA-Z0-9]{8,10})$\n\nmatch letters and number 8 to 10 characters long.\n", "Perl (and Python accordingly) says to the (?!...) part:\n\nA zero-width negative lookahead assertion. For example /foo(?!bar)/ matches any occurrence of 'foo' that isn't followed by 'bar'. Note however that lookahead and lookbehind are NOT the same thing. You cannot use this for lookbehind.\n\nThat means,\n(?!^[0-9]*$)\n\nmeans: don't match, if the string contains only numbers. (^: start of line/string, $: end of line/string) The other accordingly.\nYour regexp matches any string, that contains both numbers and letters, but not only one of them.\nCheers,\nUpdate: For your future RegExp tailoring, take a look at the (?#...) pattern. It allows you to embed comments in your regexp. There is also a modifier, re.X, but I don't like this very much. It's your choice.\n", "RegexBuddy says the following (!?!):\n(?!^[0-9]*$)(?!^[a-zA-Z]*$)^([a-zA-Z0-9]{8,10})$\n\nOptions: ^ and $ match at line breaks\n\nAssert that it is impossible to match the regex below starting at this position (negative lookahead) «(?!^[0-9]*$)»\n Assert position at the beginning of a line (at beginning of the string or after a line break character) «^»\n Match a single character in the range between “0” and “9” «[0-9]*»\n Between zero and unlimited times, as many times as possible, giving back as needed (greedy) «*»\n Assert position at the end of a line (at the end of the string or before a line break character) «$»\nAssert that it is impossible to match the regex below starting at this position (negative lookahead) «(?!^[a-zA-Z]*$)»\n Assert position at the beginning of a line (at beginning of the string or after a line break character) «^»\n Match a single character present in the list below «[a-zA-Z]*»\n Between zero and unlimited times, as many times as possible, giving back as needed (greedy) «*»\n A character in the range between “a” and “z” «a-z»\n A character in the range between “A” and “Z” «A-Z»\n Assert position at the end of a line (at the end of the string or before a line break character) «$»\nAssert position at the beginning of a line (at beginning of the string or after a line break character) «^»\nMatch the regular expression below and capture its match into backreference number 1 «([a-zA-Z0-9]{8,10})»\n Match a single character present in the list below «[a-zA-Z0-9]{8,10}»\n Between 8 and 10 times, as many times as possible, giving back as needed (greedy) «{8,10}»\n A character in the range between “a” and “z” «a-z»\n A character in the range between “A” and “Z” «A-Z»\n A character in the range between “0” and “9” «0-9»\nAssert position at the end of a line (at the end of the string or before a line break character) «$» \n\n" ]
[ 5, 4, 2 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0001118672_python_regex.txt
Q: Python script: import sys failing on WinXP setup I'm trying out a hello world python script on WinXP. When I execute: python test.py arg1.log I get an error for the first line of the script, which is 'import sys': File "test.py", line 1, in <module> i NameError: name 'i' is not defined Any suggestions? A: You've saved the file as Windows Unicode (aka UTF-16, aka UCS-2) rather than ASCII or UTF-8. If your editor has an Encoding option (or something under "Save As" for the encoding) change it to UTF-8. If your editor has no such option, you can load it into Notepad and save it as UTF-8.
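If you would rather repair the file from Python than from an editor, a small sketch like the following should do it, assuming the file really is UTF-16 with a BOM as described:
import codecs

# Read the script using the UTF-16 codec (the BOM tells it the byte order)...
f = codecs.open("test.py", "r", "utf-16")
text = f.read()
f.close()

# ...and write it back out as UTF-8, which the Python interpreter accepts.
f = codecs.open("test.py", "w", "utf-8")
f.write(text)
f.close()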
Python script: import sys failing on WinXP setup
I'm trying out a hello world python script on WinXP. When I execute: python test.py arg1.log I get an error for the first line of the script, which is 'import sys': File "test.py", line 1, in <module> i NameError: name 'i' is not defined Any suggestions?
[ "You've saved the file as Windows Unicode (aka UTF-16, aka UCS-2) rather than ASCII or UTF-8.\nIf your editor has an Encoding option (or something under \"Save As\" for the encoding) change it to UTF-8.\nIf your editor has no such option, you can load it into Notepad and save it as UTF-8.\n" ]
[ 10 ]
[]
[]
[ "python", "windows_xp" ]
stackoverflow_0001118979_python_windows_xp.txt
Q: swfupload failing in my django runserver I have copied and pasted the code from http://demo.swfupload.org/v220/simpledemo/ into a django template, but when I upload a photo, the demo says "Server (IO) Error" before it actually uploads the entire file. The runserver is getting the request and returning a 200. Is there something I am missing here? What steps should I take to debug? Thanks, Collin Anderson A: Found it. I was uploading to a dummy view, and for some reason if you don't actually access request.POST or request.FILES it gives that error message. A: Make sure you have write permission to your server. In the folder you installed that thing. Check the user that is running runserver. If in windows - check folder is not readonly.
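For anyone debugging the same symptom, here is a minimal sketch of an upload view that actually touches request.FILES so the multipart body gets consumed before the response goes out (the view name and URL wiring are placeholders):
from django.http import HttpResponse

def swf_upload(request):
    if request.method == "POST":
        # Accessing request.FILES forces Django to read the uploaded data.
        for field_name, uploaded in request.FILES.items():
            print "received %s (%d bytes)" % (uploaded.name, uploaded.size)
    return HttpResponse("ok")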
swfupload failing in my django runserver
I have copied and pasted the code from http://demo.swfupload.org/v220/simpledemo/ into a django template, but when I upload a photo, the demo says "Server (IO) Error" before it actually uploads the entire file. The runserver is getting the request and returning a 200. Is there something I am missing here? What steps should I take to debug? Thanks, Collin Anderson
[ "Found it. I was uploading to a dummy view, and for some reason if you don't actually access request.POST or request.FILES it gives that error message.\n", "Make sure you have write permission to your server. In the folder you installed that thing. Check the user that is running runserver. If in windows - check folder is not readonly.\n" ]
[ 1, 0 ]
[]
[]
[ "django", "python", "swfupload" ]
stackoverflow_0001110082_django_python_swfupload.txt
Q: PyQt: event is not triggered, what's wrong with my code? I'm a Python newbie and I'm trying to write a trivial app with an event handler that gets activated when an item in a custom QTreeWidget is clicked. For some reason it doesn't work. Since I'm only at the beginning of learning it, I can't figure out what I'm doing wrong. Here is the code: #!/usr/bin/env python import sys from PyQt4.QtCore import SIGNAL from PyQt4.QtGui import QApplication from PyQt4.QtGui import QMainWindow from PyQt4.QtGui import QTreeWidget from PyQt4.QtGui import QTreeWidgetItem class MyTreeItem(QTreeWidgetItem): def __init__(self, s, parent = None): super(MyTreeItem, self).__init__(parent, [s]) class MyTree(QTreeWidget): def __init__(self, parent = None): super(MyTree, self).__init__(parent) self.setMinimumWidth(200) self.setMinimumHeight(200) for s in ['foo', 'bar']: MyTreeItem(s, self) self.connect(self, SIGNAL('itemClicked(QTreeWidgetItem*, column)'), self.onClick) def onClick(self, item, column): print item class MainWindow(QMainWindow): def __init__(self, parent = None): super(MainWindow, self).__init__(parent) self.tree = MyTree(self) def main(): app = QApplication(sys.argv) win = MainWindow() win.show() app.exec_() if __name__ == '__main__': main() My initial goal is to make MyTree.onClick() print something when I click a tree item (and have access to the clicked item in this handler). A: You should have said self.connect(self, SIGNAL('itemClicked(QTreeWidgetItem*, int)'), self.onClick) Notice it says int rather than column in the first argument to SIGNAL. You also only need to do the connect call once for the tree widget, not once for each node in the tree.
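As a side note, if your PyQt4 is 4.5 or newer, the new-style signal syntax sidesteps this whole class of bug, since there is no signature string to mistype:
# Equivalent connection with no hand-written signature to get wrong:
self.itemClicked.connect(self.onClick)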
PyQt: event is not triggered, what's wrong with my code?
I'm a Python newbie and I'm trying to write a trivial app with an event handler that gets activated when an item in a custom QTreeWidget is clicked. For some reason it doesn't work. Since I'm only at the beginning of learning it, I can't figure out what I'm doing wrong. Here is the code: #!/usr/bin/env python import sys from PyQt4.QtCore import SIGNAL from PyQt4.QtGui import QApplication from PyQt4.QtGui import QMainWindow from PyQt4.QtGui import QTreeWidget from PyQt4.QtGui import QTreeWidgetItem class MyTreeItem(QTreeWidgetItem): def __init__(self, s, parent = None): super(MyTreeItem, self).__init__(parent, [s]) class MyTree(QTreeWidget): def __init__(self, parent = None): super(MyTree, self).__init__(parent) self.setMinimumWidth(200) self.setMinimumHeight(200) for s in ['foo', 'bar']: MyTreeItem(s, self) self.connect(self, SIGNAL('itemClicked(QTreeWidgetItem*, column)'), self.onClick) def onClick(self, item, column): print item class MainWindow(QMainWindow): def __init__(self, parent = None): super(MainWindow, self).__init__(parent) self.tree = MyTree(self) def main(): app = QApplication(sys.argv) win = MainWindow() win.show() app.exec_() if __name__ == '__main__': main() My initial goal is to make MyTree.onClick() print something when I click a tree item (and have access to the clicked item in this handler).
[ "You should have said\nself.connect(self, SIGNAL('itemClicked(QTreeWidgetItem*, int)'), self.onClick)\n\nNotice it says int rather than column in the first argument to SIGNAL. You also only need to do the connect call once for the tree widget, not once for each node in the tree.\n" ]
[ 10 ]
[]
[]
[ "pyqt", "python" ]
stackoverflow_0001119110_pyqt_python.txt
Q: How to put infinity and minus infinity in Django FloatField? I am trying to put infinity in a FloatField, but that doesn't seem to work. How do I solve this?
f = DjangoModel(float_value=float('inf')) #ok
f.save() #crashes

Results in:
Traceback (most recent call last):
...
ProgrammingError: column "inf" does not exist
LINE 1: ... "float_value") VALUES (inf)

I'm using Django 1.0.2 with PostgreSQL 8.3
A: It seems like Django's ORM doesn't have any special handling for this. Python's representations of the values are inf and -inf, while PostgreSQL wants 'Infinity' and '-Infinity'. Obviously Django's ORM doesn't handle that conversion.
So you need to fix the ORM, I guess. And then think about the fact that other SQL databases may very well want other formats, or won't handle infinities at all.
A: It should be 'infinity' shouldn't it? However, I don't know if that can be stored in a float field. float('infinity') returns inf, and that would be a char field.
Edit: Unfortunately, it is hard to say what C library Django uses without digging deep. I'd say figure out what the value stored in the float field actually is, or at least determine what is being returned. If it is indeed "inf", then that cannot be stored properly in a float field.
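Until the ORM converts the value itself, one possible workaround is to bypass it for this one column and hand PostgreSQL the literal it expects. A sketch (the table and column names are placeholders for whatever your model actually generates):
from django.db import connection

def set_infinite(pk):
    cursor = connection.cursor()
    # PostgreSQL accepts the quoted literal 'Infinity' for float columns.
    cursor.execute(
        "UPDATE myapp_djangomodel SET float_value = 'Infinity' WHERE id = %s",
        [pk])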
How to put infinity and minus infinity in Django FloatField?
I am trying to put infinity in a FloatField, but that doesn't seem to work. How do I solve this? f = DjangoModel(float_value=float('inf')) #ok f.save() #crashes Results in: Traceback (most recent call last): ... ProgrammingError: column "inf" does not exist LINE 1: ... "float_value") VALUES (inf) I'm using Django 1.0.2 with PostgreSQL 8.3
[ "It seems like Djangos ORM doesn't have any special handling for this. Pythons representaton of the values are inf and -inf, while PostgrSQL wants 'Infinity' and '-Infinity'. Obviously Djangos ORM doesn't handle that conversion.\nSo you need to fix the ORM, I guess. And then think about the fact that other SQL databases may very well want other formats, or won't handle infinities at all.\n", "It should be 'infinity' shouldn't it? However, I don't know if that can be stored in a float field. float('infinity') returns inf, and that would be a char field.\nEdit: Unfortunately, it is hard to say what C library Django uses without digging deep. I'd say figure out what the value stored in the float field actually is, or at least determine what is being returned. If it is indeed \"inf\", then that cannot be stored properly in a float field.\n" ]
[ 5, 0 ]
[]
[]
[ "django", "infinity", "python" ]
stackoverflow_0001119497_django_infinity_python.txt
Q: Emulator Framework Are there any good open source frameworks for developing computer system emulators? I am particularly interested in something written in Python or Java that can reduce the effort involved in developing emulators for 8-bit processors (e.g. 6502, 6510, etc.). A: Isn't the 6510 in the C64? You might be able to make use of the java libraries that emulate c64 code http://www.dreamfabric.com/c64/ http://www.jac64.com/jac64-java-based-c64-emulator.html If you aren't afraid of C++ try this general purpose one: http://cef.sourceforge.net/index.php A: You may want to check out VICE, which can emulates a variety of Commodore 8-bit computers: "the C64, the C64DTV, the C128, the VIC20, almost all PET models, the PLUS4 and the CBM-II (aka C610)". That includes 6502, 6510 and 8502 processors. VICE is released under GPL and is written in C. A: I've developed a complete emulator for the MIX machine (Knuth's imaginary computer from TAOCP) in Perl a few years ago. The source code is well documented and the simulator is runnable, so one can practice with examples. It wasn't too difficult and I don't recall needing any special framework. The machine's registers are just state variables in the simulator, and the rest is interpreting instructions and changing this internal state. Do you have more specific questions? Perhaps it will then be easier to point you in the right direction.
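To make the "registers are just state variables" point concrete, here is a toy fetch-decode-execute loop of the kind that sits at the heart of any 8-bit emulator; a real 6502 core has the same shape, just with the full instruction set and addressing modes (only two opcodes are handled here):
class Halt(Exception):
    pass

class CPU(object):
    def __init__(self, memory):
        self.memory = memory  # 64K of RAM as a plain list of ints
        self.pc = 0           # program counter
        self.a = 0            # accumulator

    def step(self):
        opcode = self.memory[self.pc]
        self.pc += 1
        if opcode == 0xA9:    # LDA immediate: load the next byte into A
            self.a = self.memory[self.pc]
            self.pc += 1
        elif opcode == 0x00:  # BRK: treated as 'stop' in this toy
            raise Halt
        else:
            raise ValueError("unimplemented opcode %02x" % opcode)

cpu = CPU([0xA9, 0x2A, 0x00] + [0] * 65533)
try:
    while True:
        cpu.step()
except Halt:
    print "A =", cpu.a  # prints: A = 42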
Emulator Framework
Are there any good open source frameworks for developing computer system emulators? I am particularly interested in something written in Python or Java that can reduce the effort involved in developing emulators for 8-bit processors (e.g. 6502, 6510, etc.).
[ "Isn't the 6510 in the C64? \nYou might be able to make use of the java libraries that emulate c64 code\nhttp://www.dreamfabric.com/c64/\nhttp://www.jac64.com/jac64-java-based-c64-emulator.html\nIf you aren't afraid of C++ try this general purpose one:\nhttp://cef.sourceforge.net/index.php\n", "You may want to check out VICE, which can emulates a variety of Commodore 8-bit computers: \"the C64, the C64DTV, the C128, the VIC20, almost all PET models, the PLUS4 and the CBM-II (aka C610)\". That includes 6502, 6510 and 8502 processors. VICE is released under GPL and is written in C.\n", "I've developed a complete emulator for the MIX machine (Knuth's imaginary computer from TAOCP) in Perl a few years ago. The source code is well documented and the simulator is runnable, so one can practice with examples. It wasn't too difficult and I don't recall needing any special framework. The machine's registers are just state variables in the simulator, and the rest is interpreting instructions and changing this internal state.\nDo you have more specific questions? Perhaps it will then be easier to point you in the right direction.\n" ]
[ 2, 2, 1 ]
[]
[]
[ "6502", "6510", "emulation", "java", "python" ]
stackoverflow_0001120709_6502_6510_emulation_java_python.txt
Q: Django templates: adding sections conditionally I just started using django for development. At the moment, I have the following issue: I have to write a page template able to represent different categories of data. For example, suppose I have a medical record of a patient. The represented information about this patient are, for example: name, surname and similar data data about current treatments ..and beyond: specific data about any other analysis (eg. TAC, NMR, heart, blood, whatever) Suppose that for each entry at point 3, I need to present a specific section. The template for this page would probably look like a long series of if statements, one for each data entry, which will be used only if that information is present. This would result in a very long template. One possible solution is to use the include directive in the template, and then fragment the main template so that instead of a list of if's i have a list of includes, one for each if. Just out of curiosity, I was wondering if someone know an alternative strategy for this kind of pattern, either at the template level or at the view level. A: See this example: http://www.djangosnippets.org/snippets/1057/ Essentially, you can loop through a model's fields in the template. I assume you just want to display the data present in all of these different fields correct? Looping through each field should provide you with the results you're looking for. Alternatively, you can set up what you want to display in the view by adding your conditionals there. It will make your view functions messier but will clean up the template. The view also makes it easier to test for the existence of certain sections. A: The answer to this depends a lot on how you've structured your data, which you don't say - are the extra bits of information in separate related tables, subclassed models, individual fields on the same model...? In general, this sounds like a job for a template tag. I would probably write a custom tag that took your parent object as a parameter, and inspected the data to determine what to output. Each choice could potentially be rendered by a different sub-template, called by the tag itself.
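To sketch the template-tag idea from the second answer (all names here are invented), an inclusion tag can hand each section off to a small sub-template, keeping the main template down to a loop instead of an if-chain:
from django import template

register = template.Library()

@register.inclusion_tag("records/section.html")
def render_section(record, section_name):
    # The sub-template decides what to show when the data is None/absent.
    return {"name": section_name,
            "data": getattr(record, section_name, None)}

After a {% load %} of the tag library, the main template reduces to calls like {% render_section record "tac" %}, one per section.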
Django templates: adding sections conditionally
I just started using django for development. At the moment, I have the following issue: I have to write a page template able to represent different categories of data.
For example, suppose I have a medical record of a patient. The information represented about this patient is, for example:

1. name, surname and similar data
2. data about current treatments
3. ...and beyond: specific data about any other analysis (e.g. TAC, NMR, heart, blood, whatever)

Suppose that for each entry at point 3, I need to present a specific section. The template for this page would probably look like a long series of if statements, one for each data entry, which will be used only if that information is present. This would result in a very long template.
One possible solution is to use the include directive in the template, and then fragment the main template so that instead of a list of if's I have a list of includes, one for each if.
Just out of curiosity, I was wondering if someone knows an alternative strategy for this kind of pattern, either at the template level or at the view level.
[ "See this example: http://www.djangosnippets.org/snippets/1057/\nEssentially, you can loop through a model's fields in the template.\nI assume you just want to display the data present in all of these different fields correct? Looping through each field should provide you with the results you're looking for.\nAlternatively, you can set up what you want to display in the view by adding your conditionals there. It will make your view functions messier but will clean up the template. The view also makes it easier to test for the existence of certain sections.\n", "The answer to this depends a lot on how you've structured your data, which you don't say - are the extra bits of information in separate related tables, subclassed models, individual fields on the same model...?\nIn general, this sounds like a job for a template tag. I would probably write a custom tag that took your parent object as a parameter, and inspected the data to determine what to output. Each choice could potentially be rendered by a different sub-template, called by the tag itself.\n" ]
[ 2, 1 ]
[]
[]
[ "django", "django_templates", "python" ]
stackoverflow_0001120914_django_django_templates_python.txt
Q: How to use QTP to test an application that runs in Citrix on a remote machine? When I tried recording with QTP, everything goes well until the application sign-in; i.e. it gets up to the user ID and password entry, but QTP fails to recognise anything after that.
Is there any way to handle this?
The application is invoked through Citrix, over VPN.
A: QTP performs GUI recognition and interaction through Windows Handle.
So it has to be running under Citrix (i.e. installed on the same virtual machine as your Application Under Test).
If you have the above, make sure Screen Resolution, Windows Theme, Font size, and other global GUI settings are the same.
How to use QTP to test an application that runs in Citrix on a remote machine?
When I tried recording with QTP, everything goes well until the application sign-in; i.e. it gets up to the user ID and password entry, but QTP fails to recognise anything after that.
Is there any way to handle this?
The application is invoked through Citrix, over VPN.
[ "QTP performs GUI recognition and interaction through Windows Handle. \nSo it has to be running under Citrix (i.e. installed on the same virtual machine as your Application Under Test). \nIf you have the above, make sure Screen Resolution, Windows Theme, Font size, and other global GUI settings are the same.\n" ]
[ 0 ]
[]
[]
[ "c#", "python", "ruby" ]
stackoverflow_0001086758_c#_python_ruby.txt
Q: How do you address data returned to a socket in python? Say you are telneting into IRC to figure out how it all works. As you issue commands the IRC server returns data telling you what it's doing. Once I have created a default script that basically is how a normal IRC connection between server and client occurs, if it ever deviates from that it won't tell me what is wrong. I need to be able to throw exceptions based on what the server returns to me. How do I do that in python? A: Here's a tutorial which pretty much walks you through an IRC client using sockets in Python: Python and IRC A: Twisted is an event-driven networking engine written in Python, and includes support for IRC protocols. To access IRC functionality, import it: from twisted.words.protocols import irc See an example here: ircLogBot.py - connects to an IRC server and logs all messages. The example __doc__: """An example IRC log bot - logs a channel's events to a file. If someone says the bot's name in the channel followed by a ':', e.g. <foo> logbot: hello! the bot will reply: <logbot> foo: I am a log bot Run this script with two arguments, the channel name the bot should connect to, and file to log to, e.g.: $ python ircLogBot.py test test.log will log channel #test to the file 'test.log'. """
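If you stay with raw sockets rather than Twisted, one common trick is to key off the three-digit numeric replies in the server's messages: codes in the 400-599 range are errors, so you can raise on them. A rough sketch, assuming lines are already split up as in a typical read loop:
class IRCError(Exception):
    pass

def check_line(line):
    # Error replies look like ':server 433 * nick :Nickname is already in use'
    parts = line.split()
    if len(parts) >= 2 and parts[1].isdigit():
        code = int(parts[1])
        if 400 <= code < 600:
            raise IRCError("server returned %d: %s" % (code, line))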
How do you address data returned to a socket in python?
Say you are telnetting into IRC to figure out how it all works. As you issue commands, the IRC server returns data telling you what it's doing. I have created a default script that captures how a normal IRC connection between server and client proceeds, but if the session ever deviates from that, the script can't tell me what is wrong. I need to be able to throw exceptions based on what the server returns to me. How do I do that in Python?
[ "Here's a tutorial which pretty much walks you through an IRC client using sockets in Python:\n\nPython and IRC\n\n", "Twisted is an event-driven networking engine written in Python, and includes support for IRC protocols. To access IRC functionality, import it:\nfrom twisted.words.protocols import irc\n\nSee an example here: ircLogBot.py - connects to an IRC server and logs all messages. The example __doc__:\n\"\"\"An example IRC log bot - logs a channel's events to a file.\n\nIf someone says the bot's name in the channel followed by a ':',\ne.g.\n\n <foo> logbot: hello!\n\n the bot will reply:\n\n <logbot> foo: I am a log bot\n\nRun this script with two arguments, the channel name the bot should\nconnect to, and file to log to, e.g.:\n\n $ python ircLogBot.py test test.log\n\nwill log channel #test to the file 'test.log'.\n\"\"\"\n\n" ]
[ 1, 0 ]
[]
[]
[ "irc", "python", "sockets" ]
stackoverflow_0001120976_irc_python_sockets.txt
Q: Improve a IRC Client in Python How i can make some improvement in my IRC client made in Python. The improvement is: How i can put something that the user can type the HOST, PORT, NICK, INDENT and REALNAME strings and the message? And here is the code of the program: simplebot.py import sys import socket import string HOST="irc.freenode.net" PORT=6667 NICK="MauBot" IDENT="maubot" REALNAME="MauritsBot" readbuffer="" s=socket.socket( ) s.connect((HOST, PORT)) s.send("NICK %s\r\n" % NICK) s.send("USER %s %s bla :%s\r\n" % (IDENT, HOST, REALNAME)) while 1: readbuffer=readbuffer+s.recv(1024) temp=string.split(readbuffer, "\n") readbuffer=temp.pop( ) for line in temp: line=string.rstrip(line) line=string.split(line) if(line[0]=="PING"): s.send("PONG %s\r\n" % line[1]) Remember that i'm starting in Python development. Here is where i have found this code: http://oreilly.com/pub/h/1968.Thanks. A: You already have the blueprint there for what you want it to do. You're doing: if(line[0]=="PING"): No reason you couldn't adapt that scheme to accept input of PORT, NICK, etc. Also, while 1 isn't very Pythonic. Yes it works, but really there is no reason not to use True. It's not a big deal, but it makes the code slightly more readable. A: So you want the user to control the exact connection information that the IRC client uses? In order to do this, you must collect input from the user before you start your connection using the raw_input function. NOTE: raw_input will strip a trailing newline character. HOST = raw_input('Enter Host: ') PORT = int(raw_input('Enter Port: ')) ...for all of the values that you want the user to be able to configure. Example: HOST = raw_input('Enter host: ') print HOST >>> Enter host: stackoverflow.com stackoverflow.com >>> A: Not a direct answer, but you should check the IRC implementation in twisted, an event-driven networking engine written in Python that includes support for irc in twisted.words.protocols.irc. A: If you're trying to carry out actions in response to user input, maybe the cmd module will help you out: cmd — Support for line-oriented command interpreters PyMOTW: cmd If you're interested in the IRC protocol itself, this tutorial on using sockets to write an IRC client in python may be of use: Python and IRC A: If you're brand new to Python, an IRC client is quite an undertaking, especially if you haven't worked with similar clients before in other languages. I would recommend you to look up on threading, so that you can put your IRC handler on a separate thread, and receive user input on another thread (If you do both on the same thread, one will block the other, making for a bad experience.) To answer your question though, the simplest way to get input from the user in the console is to use in = raw_input(), but as I said, it will not interact well with the socket on the same thread.
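Pulling the answers together, a sketch of the top of the script with user-supplied settings (the defaults are just the hard-coded values from the original code):
def ask(prompt, default):
    # raw_input strips the trailing newline; empty input keeps the default.
    value = raw_input("%s [%s]: " % (prompt, default))
    return value or default

HOST = ask("Host", "irc.freenode.net")
PORT = int(ask("Port", "6667"))
NICK = ask("Nick", "MauBot")
IDENT = ask("Ident", "maubot")
REALNAME = ask("Real name", "MauritsBot")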
Improve an IRC Client in Python
How can I improve my IRC client written in Python? The improvement: how can I let the user type in the HOST, PORT, NICK, IDENT and REALNAME strings and the message? And here is the code of the program:
simplebot.py
import sys
import socket
import string

HOST="irc.freenode.net"
PORT=6667
NICK="MauBot"
IDENT="maubot"
REALNAME="MauritsBot"
readbuffer=""

s=socket.socket( )
s.connect((HOST, PORT))
s.send("NICK %s\r\n" % NICK)
s.send("USER %s %s bla :%s\r\n" % (IDENT, HOST, REALNAME))

while 1:
    readbuffer=readbuffer+s.recv(1024)
    temp=string.split(readbuffer, "\n")
    readbuffer=temp.pop( )

    for line in temp:
        line=string.rstrip(line)
        line=string.split(line)

        if(line[0]=="PING"):
            s.send("PONG %s\r\n" % line[1])

Remember that I'm just starting out in Python development. Here is where I found this code: http://oreilly.com/pub/h/1968. Thanks.
[ "You already have the blueprint there for what you want it to do. You're doing:\nif(line[0]==\"PING\"):\n\nNo reason you couldn't adapt that scheme to accept input of PORT, NICK, etc.\nAlso, while 1 isn't very Pythonic. Yes it works, but really there is no reason not to use True. It's not a big deal, but it makes the code slightly more readable.\n", "So you want the user to control the exact connection information that the IRC client uses? In order to do this, you must collect input from the user before you start your connection using the raw_input function.\nNOTE: raw_input will strip a trailing newline character.\nHOST = raw_input('Enter Host: ')\nPORT = int(raw_input('Enter Port: '))\n\n...for all of the values that you want the user to be able to configure.\nExample:\nHOST = raw_input('Enter host: ')\nprint HOST\n\n>>> \nEnter host: stackoverflow.com\nstackoverflow.com\n>>> \n\n", "Not a direct answer, but you should check the IRC implementation in twisted, an event-driven networking engine written in Python that includes support for irc in twisted.words.protocols.irc.\n", "If you're trying to carry out actions in response to user input, maybe the cmd module will help you out:\n\ncmd — Support for line-oriented command interpreters\nPyMOTW: cmd\n\nIf you're interested in the IRC protocol itself, this tutorial on using sockets to write an IRC client in python may be of use:\n\nPython and IRC\n\n", "If you're brand new to Python, an IRC client is quite an undertaking, especially if you haven't worked with similar clients before in other languages.\nI would recommend you to look up on threading, so that you can put your IRC handler on a separate thread, and receive user input on another thread (If you do both on the same thread, one will block the other, making for a bad experience.)\nTo answer your question though, the simplest way to get input from the user in the console is to use in = raw_input(), but as I said, it will not interact well with the socket on the same thread.\n" ]
[ 2, 2, 1, 1, 0 ]
[]
[]
[ "client", "irc", "python", "python_3.x", "sockets" ]
stackoverflow_0001121002_client_irc_python_python_3.x_sockets.txt
Q: Eliminating certain Django Session Calls I was wondering if I could eliminate django session calls for specific views. For example, if I have a password reset form I don't want a call to the DB to check for a session or not. Thanks! A: Sessions are lazily loaded: if you don't use the session during a request, Django won't load it. This includes request.user: if you access it, it accesses the session to find the user. (It loads lazily, too--if you don't access request.user, it won't access the session, either.) So, figure out what's accessing the session and eliminate it--and if you can't, at least you'll know why the session is being pulled in.
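If you want to verify which views touch the session, the session object carries an accessed flag you can inspect after the view has run, for example from a throwaway debugging middleware (sketch only):
class SessionAccessLogger(object):
    def process_response(self, request, response):
        # True only if something during this request read or wrote the session.
        if hasattr(request, "session"):
            print request.path, "accessed session:", request.session.accessed
        return response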
Eliminating certain Django Session Calls
I was wondering if I could eliminate django session calls for specific views. For example, if I have a password reset form I don't want a call to the DB to check for a session or not. Thanks!
[ "Sessions are lazily loaded: if you don't use the session during a request, Django won't load it.\nThis includes request.user: if you access it, it accesses the session to find the user. (It loads lazily, too--if you don't access request.user, it won't access the session, either.)\nSo, figure out what's accessing the session and eliminate it--and if you can't, at least you'll know why the session is being pulled in.\n" ]
[ 1 ]
[]
[]
[ "django", "python", "session" ]
stackoverflow_0001121299_django_python_session.txt
Q: Python to drive Emacs; pymacs doesn't work I've got a python script that loops indefinitely waiting for input, and then does something when the input happens. My problem is then making python tell emacs to do something. I just need some way to send emacs input and make emacs evaluate that input. Here's some code to illustrate my problem... while(1): on_off = query_lightswitch if on_off == 0: send_text_to_emacs("(setq 'lightswitch t)") Ideally I'd send emacs a string that it evaluates in its elisp interpreter. I've tried pymacs, but it looks like pymacs is made to start stuff from emacs rather than python. When I try something like this in pymacs it locks up until the loop terminates. This looks like a problem I could solve with unix pipelines, if I knew enough. If anybody out there has any ideas on how to solve this problem I'd be much obliged, thanks. A: You can use gnuclient (shipped with Emacs 22) (or emacsclient for earlier Emacsen), to evaluate code from external programs and connect to a running Emacs. Getting Emacs to evaluate code by itself would look something like this: gnuclient -q -batch -eval "(setq 'lightswitch t)"
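To round this out on the Python side, the loop can simply shell out to the client whenever it wants Emacs to evaluate something. A sketch using subprocess (note that (setq 'lightswitch t) from the question is not actually valid elisp -- setq takes an unquoted symbol, so (setq lightswitch t) is presumably what is meant):
import subprocess

def send_text_to_emacs(elisp):
    # gnuclient as in the answer above; emacsclient with '--eval' is the
    # other common choice -- use whichever your Emacs setup provides.
    subprocess.call(["gnuclient", "-q", "-batch", "-eval", elisp])

send_text_to_emacs("(setq lightswitch t)")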
Python to drive Emacs; pymacs doesn't work
I've got a python script that loops indefinitely waiting for input, and then does something when the input happens. My problem is then making python tell emacs to do something. I just need some way to send emacs input and make emacs evaluate that input. Here's some code to illustrate my problem... while(1): on_off = query_lightswitch if on_off == 0: send_text_to_emacs("(setq 'lightswitch t)") Ideally I'd send emacs a string that it evaluates in its elisp interpreter. I've tried pymacs, but it looks like pymacs is made to start stuff from emacs rather than python. When I try something like this in pymacs it locks up until the loop terminates. This looks like a problem I could solve with unix pipelines, if I knew enough. If anybody out there has any ideas on how to solve this problem I'd be much obliged, thanks.
[ "You can use gnuclient (shipped with Emacs 22) (or emacsclient for earlier Emacsen), to evaluate code from external programs and connect to a running Emacs.\nGetting Emacs to evaluate code by itself would look something like this:\ngnuclient -q -batch -eval \"(setq 'lightswitch t)\"\n\n" ]
[ 4 ]
[]
[]
[ "emacs", "pymacs", "python" ]
stackoverflow_0001121759_emacs_pymacs_python.txt
Q: python windows directory mtime: how to detect package directory new file? I'm working on an auto-reload feature for WHIFF http://whiff.sourceforge.net (so you have to restart the HTTP server less often, ideally never). I have the following code to reload a package module "location" if a file is added to the package directory. It doesn't work on Windows XP. How can I fix it? I think the problem is that getmtime(dir) doesn't change on Windows when the directory content changes? I'd really rather not compare an os.listdir(dir) with the last directory content every time I access the package... if not do_reload and hasattr(location, "__path__"): path0 = location.__path__[0] if os.path.exists(path0): dir_mtime = int( os.path.getmtime(path0) ) if fn_mtime<dir_mtime: print "dir change: reloading package root", location do_reload = True md_mtime = dir_mtime In the code the "fn_mtime" is the recorded mtime from the last (re)load. ... added comment: I came up with the following work around, which I think may work, but I don't care for it too much since it involves code generation. I dynamically generate a code fragment to load a module and if it fails it tries again after a reload. Not tested yet. GET_MODULE_FUNCTION = """ def f(): import %(parent)s try: from %(parent)s import %(child)s except ImportError: # one more time... reload(%(parent)s) from %(parent)s import %(child)s return %(child)s """ def my_import(partname, parent): f = None # for pychecker parentname = parent.__name__ defn = GET_MODULE_FUNCTION % {"parent": parentname, "child": partname} #pr "executing" #pr defn try: exec(defn) # defines function f() except SyntaxError: raise ImportError, "bad function name "+repr(partname)+"?" partmodule = f() #pr "got", partmodule setattr(parent, partname, partmodule) #pr "setattr", parent, ".", partname, "=", getattr(parent, partname) return partmodule Other suggestions welcome. I'm not happy about this... A: long time no see. I'm not sure exactly what you're doing, but the equivalent of your code: GET_MODULE_FUNCTION = """ def f(): import %(parent)s try: from %(parent)s import %(child)s except ImportError: # one more time... reload(%(parent)s) from %(parent)s import %(child)s return %(child)s """ to be execed with: defn = GET_MODULE_FUNCTION % {"parent": parentname, "child": partname} exec(defn) is (per the docs), assuming parentname names a package and partname names a module in that package (if partname is a top-level name of the parentname package, such as a function or class, you'll have to use a getattr at the end): import sys def f(parentname, partname): name = '%s.%s' % (parentname, partname) try: __import__(name) except ImportError: parent = __import__(parentname) reload(parent) __import__(name) return sys.modules[name] without exec or anything weird, just call this f appropriately. A: you can try using getatime() instead. A: I'm not understanding your question completely... Are you calling getmtime() on a directory or an individual file? A: There are two things about your first code snippet that concern me: You cast the float from getmtime to int. Dependening on the frequency this code is run, you might get unreliable results. At the end of the code you assign dir_mtime to a variable md_mtime. fn_mtime, which you check against, seems not to be updated.
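For what it's worth, if the directory mtime really is unreliable on the target filesystem, the listdir comparison the question hopes to avoid is usually cheap in practice. A hedged sketch of the snapshot approach:
import os

_snapshots = {}

def dir_changed(path):
    # Compare the current set of entries against the last set we saw.
    current = set(os.listdir(path))
    previous = _snapshots.get(path)
    _snapshots[path] = current
    return previous is not None and current != previous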
python windows directory mtime: how to detect package directory new file?
I'm working on an auto-reload feature for WHIFF http://whiff.sourceforge.net (so you have to restart the HTTP server less often, ideally never). I have the following code to reload a package module "location" if a file is added to the package directory. It doesn't work on Windows XP. How can I fix it? I think the problem is that getmtime(dir) doesn't change on Windows when the directory content changes? I'd really rather not compare an os.listdir(dir) with the last directory content every time I access the package... if not do_reload and hasattr(location, "__path__"): path0 = location.__path__[0] if os.path.exists(path0): dir_mtime = int( os.path.getmtime(path0) ) if fn_mtime<dir_mtime: print "dir change: reloading package root", location do_reload = True md_mtime = dir_mtime In the code the "fn_mtime" is the recorded mtime from the last (re)load. ... added comment: I came up with the following work around, which I think may work, but I don't care for it too much since it involves code generation. I dynamically generate a code fragment to load a module and if it fails it tries again after a reload. Not tested yet. GET_MODULE_FUNCTION = """ def f(): import %(parent)s try: from %(parent)s import %(child)s except ImportError: # one more time... reload(%(parent)s) from %(parent)s import %(child)s return %(child)s """ def my_import(partname, parent): f = None # for pychecker parentname = parent.__name__ defn = GET_MODULE_FUNCTION % {"parent": parentname, "child": partname} #pr "executing" #pr defn try: exec(defn) # defines function f() except SyntaxError: raise ImportError, "bad function name "+repr(partname)+"?" partmodule = f() #pr "got", partmodule setattr(parent, partname, partmodule) #pr "setattr", parent, ".", partname, "=", getattr(parent, partname) return partmodule Other suggestions welcome. I'm not happy about this...
[ "long time no see. I'm not sure exactly what you're doing, but the equivalent of your code:\nGET_MODULE_FUNCTION = \"\"\"\ndef f():\n import %(parent)s\n try:\n from %(parent)s import %(child)s\n except ImportError:\n # one more time...\n reload(%(parent)s)\n from %(parent)s import %(child)s\n return %(child)s\n\"\"\"\n\nto be execed with:\ndefn = GET_MODULE_FUNCTION % {\"parent\": parentname, \"child\": partname}\nexec(defn)\n\nis (per the docs), assuming parentname names a package and partname names a module in that package (if partname is a top-level name of the parentname package, such as a function or class, you'll have to use a getattr at the end):\nimport sys\n\ndef f(parentname, partname):\n name = '%s.%s' % (parentname, partname)\n try:\n __import__(name)\n except ImportError:\n parent = __import__(parentname)\n reload(parent)\n __import__(name)\n return sys.modules[name]\n\nwithout exec or anything weird, just call this f appropriately.\n", "you can try using getatime() instead.\n", "I'm not understanding your question completely...\nAre you calling getmtime() on a directory or an individual file?\n", "There are two things about your first code snippet that concern me:\n\nYou cast the float from getmtime to int. Dependening on the frequency this code is run, you might get unreliable results.\nAt the end of the code you assign dir_mtime to a variable md_mtime. fn_mtime, which you check against, seems not to be updated.\n\n" ]
[ 2, 0, 0, 0 ]
[]
[]
[ "python", "windows_xp" ]
stackoverflow_0001116144_python_windows_xp.txt
Q: How do you extract a JAR in a UNIX filesystem with a single command and specify its target directory using the JAR command? I am creating a Python script within which I am executing UNIX system commands. I have a war archive named Binaries.war which is within an ear archive named Portal.ear The Portal ear file resides in, say /home/foo/bar/ jar xf /home/foo/bar/Portal.ear Binaries.war Will extract the Binaries.war file out of the /home/foo/bar/Portal.ear archive into the current directory I am running the script from. How do you specify a target directory to be extracted to using just one command? I would like to do something like this to extract Binaries.war into the directory /home/foo/bar/baz jar xf /home/foo/bar/Portal.ear Binaries.war [into target directory /home/foo/bar/baz] I searched the the JAR man page for options and can't seem to find a simple way to do this. Of course I can extract the archive into my current directory and then move it using mv but I'd like to do this in one shot and not shuffle directories and files around. A: If your jar file already has an absolute pathname as shown, it is particularly easy: cd /where/you/want/it; jar xf /path/to/jarfile.jar That is, you have the shell executed by Python change directory for you and then run the extraction. If your jar file does not already have an absolute pathname, then you have to convert the relative name to absolute (by prefixing it with the path of the current directory) so that jar can find it after the change of directory. The only issues left to worry about are things like blanks in the path names. A: I don't think the jar tool supports this natively, but you can just unzip a JAR file with "unzip" and specify the output directory with that with the "-d" option, so something like: $ unzip -d /home/foo/bar/baz /home/foo/bar/Portal.ear Binaries.war A: Can't you just change working directory within the python script using os.chdir(target)? I agree, I can't see any way of doing it from the jar command itself. If you don't want to permanently change directory, then store the current directory (using os.getcwd())in a variable and change back afterwards. A: If this is a personal script, rather than one you're planning on distributing, it might be simpler to write a shell function for this: function warextract { jar xf $1 $2 && mv $2 $3 } which you could then call from python like so: warextract /home/foo/bar/Portal.ear Binaries.war /home/foo/bar/baz/ If you really feel like it, you could use sed to parse out the filename from the path, so that you'd be able to call it with warextract /home/foo/bar/Portal.ear /home/foo/bar/baz/Binaries.war I'll leave that as an excercise to the reader, though. Of course, since this will extract the .war out into the current directory first, and then move it, it has the possibility of overwriting something with the same name where you are. Changing directory, extracting it, and cd-ing back is a bit cleaner, but I find myself using little one-line shell functions like this all the time when I want to reduce code clutter.
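Since the surrounding script is Python anyway, the cd-and-extract idea can be expressed without disturbing the script's own working directory, by handing subprocess a cwd (paths as in the question):
import subprocess

# Run jar with its working directory set to the target, in one shot.
subprocess.check_call(
    ["jar", "xf", "/home/foo/bar/Portal.ear", "Binaries.war"],
    cwd="/home/foo/bar/baz")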
How do you extract a JAR in a UNIX filesystem with a single command and specify its target directory using the JAR command?
I am creating a Python script within which I am executing UNIX system commands. I have a war archive named Binaries.war which is within an ear archive named Portal.ear
The Portal ear file resides in, say /home/foo/bar/
jar xf /home/foo/bar/Portal.ear Binaries.war

Will extract the Binaries.war file out of the /home/foo/bar/Portal.ear archive into the current directory I am running the script from.
How do you specify a target directory to be extracted to using just one command? I would like to do something like this to extract Binaries.war into the directory /home/foo/bar/baz
jar xf /home/foo/bar/Portal.ear Binaries.war [into target directory /home/foo/bar/baz]

I searched the JAR man page for options and can't seem to find a simple way to do this. Of course I can extract the archive into my current directory and then move it using mv but I'd like to do this in one shot and not shuffle directories and files around.
[ "If your jar file already has an absolute pathname as shown, it is particularly easy:\ncd /where/you/want/it; jar xf /path/to/jarfile.jar\n\nThat is, you have the shell executed by Python change directory for you and then run the extraction.\nIf your jar file does not already have an absolute pathname, then you have to convert the relative name to absolute (by prefixing it with the path of the current directory) so that jar can find it after the change of directory.\nThe only issues left to worry about are things like blanks in the path names.\n", "I don't think the jar tool supports this natively, but you can just unzip a JAR file with \"unzip\" and specify the output directory with that with the \"-d\" option, so something like:\n$ unzip -d /home/foo/bar/baz /home/foo/bar/Portal.ear Binaries.war\n\n", "Can't you just change working directory within the python script using os.chdir(target)? I agree, I can't see any way of doing it from the jar command itself.\nIf you don't want to permanently change directory, then store the current directory (using os.getcwd())in a variable and change back afterwards.\n", "If this is a personal script, rather than one you're planning on distributing, it might be simpler to write a shell function for this:\nfunction warextract { jar xf $1 $2 && mv $2 $3 }\n\nwhich you could then call from python like so:\nwarextract /home/foo/bar/Portal.ear Binaries.war /home/foo/bar/baz/\n\nIf you really feel like it, you could use sed to parse out the filename from the path, so that you'd be able to call it with\nwarextract /home/foo/bar/Portal.ear /home/foo/bar/baz/Binaries.war\n\nI'll leave that as an excercise to the reader, though.\nOf course, since this will extract the .war out into the current directory first, and then move it, it has the possibility of overwriting something with the same name where you are.\nChanging directory, extracting it, and cd-ing back is a bit cleaner, but I find myself using little one-line shell functions like this all the time when I want to reduce code clutter.\n" ]
[ 77, 66, 1, 1 ]
[]
[]
[ "jar", "java", "linux", "python", "unix" ]
stackoverflow_0001079693_jar_java_linux_python_unix.txt
Q: Per-session transactions in Django I'm making a Django web-app which allows a user to build up a set of changes over a series of GETs/POSTs before committing them to the database (or reverting) with a final POST.
I have to keep the updates isolated from any concurrent database users until they are confirmed (this is a configuration front-end), ruling out committing after each POST.
My preferred solution is to use a per-session transaction. This keeps all the problems of remembering what's changed (and how it affects subsequent queries), together with implementing commit/rollback, in the database where it belongs. Deadlock and long-held locks are not an issue, as due to external constraints there can only be one user configuring the system at any one time, and they are well-behaved.
However, I cannot find documentation on setting up Django's ORM to use this sort of transaction model. I have thrown together a minimal monkey-patch (ew!) to solve the problem, but dislike such a fragile solution.
Has anyone else done this before? Have I missed some documentation somewhere?
(My version of Django is 1.0.2 Final, and I am using an Oracle database.)
A: Multiple, concurrent, session-scale transactions will generally lead to deadlocks or worse (worse == livelock, long delays while locks are held by another session.)
This design is not the best policy, which is why Django discourages it.
The better solution is the following.

1. Design a Memento class that records the user's change. This could be a saved copy of their form input. You may need to record additional information if the state changes are complex. Otherwise, a copy of the form input may be enough.
2. Accumulate the sequence of Memento objects in their session. Note that each step in the transaction will involve fetches from the data and validation to see if the chain of mementos will still "work". Sometimes they won't work because someone else changed something in this chain of mementos. What now?
3. When you present the 'ready to commit?' page, you've replayed the sequence of Mementos and are pretty sure they'll work. When they submit "Commit", you have to replay the Mementos one last time, hoping they're still going to work. If they do, great. If they don't, someone changed something, and you're back at step 2: what now?

This seems complex.
Yes, it does. However it does not hold any locks, allowing blistering speed and little opportunity for deadlock. The transaction is confined to the "Commit" view function which actually applies the sequence of Mementos to the database, saves the results, and does a final commit to end the transaction.
The alternative -- holding locks while the user steps out for a quick cup of coffee on step n-1 out of n -- is unworkable.
For more information on Memento, see this.
A: In case anyone else ever has the exact same problem as me (I hope not), here is my monkeypatch. It's fragile and ugly, and changes private methods, but thankfully it's small. Please don't use it unless you really have to. As mentioned by others, any application using it effectively prevents multiple users doing updates at the same time, on penalty of deadlock. (In my application, there may be many readers, but multiple concurrent updates are deliberately excluded.)
I have a "user" object which persists across a user session, and contains a persistent connection object. When I validate a particular HTTP interaction is part of a session, I also store the user object on django.db.connection, which is thread-local.
def monkeyPatchDjangoDBConnection():
    import django.db
    def validConnection():
        if django.db.connection.connection is None:
            django.db.connection.connection = django.db.connection.user.connection
        return True
    def close():
        django.db.connection.connection = None
    django.db.connection._valid_connection = validConnection
    django.db.connection.close = close
monkeyPatchDjangoDBConnection()

def setUserOnThisThread(user):
    import django.db
    django.db.connection.user = user

This last is called automatically at the start of any method annotated with @login_required, so 99% of my code is insulated from the specifics of this hack.
A: I came up with something similar to the Memento pattern, but different enough that I think it bears posting. When a user starts an editing session, I duplicate the target object to a temporary object in the database. All subsequent editing operations affect the duplicate. Instead of saving the object state in a memento at each change, I store operation objects. When I apply an operation to an object, it returns the inverse operation, which I store.
Saving operations is much cheaper for me than mementos, since the operations can be described with a few small data items, while the object being edited is much bigger. Also I apply the operations as I go and save the undos, so that the temporary in the db always corresponds to the version in the user's browser. I never have to replay a collection of changes; the temporary is always only one operation away from the next version.
To implement "undo," I pop the last undo object off the stack (as it were--by retrieving the latest operation for the temporary object from the db) apply it to the temporary and return the transformed temporary. I could also push the resultant operation onto a redo stack if I cared to implement redo.
To implement "save changes," i.e. commit, I de-activate and time-stamp the original object and activate the temporary in its place.
To implement "cancel," i.e. rollback, I do nothing! I could delete the temporary, of course, because there's no way for the user to retrieve it once the editing session is over, but I like to keep the canceled edit sessions so I can run stats on them before clearing them out with a cron job.
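A bare-bones sketch of the memento accumulation described in the first answer, with every name here invented for illustration (ConfigForm and apply_memento are placeholders):
from django.db import transaction

def edit_step(request):
    form = ConfigForm(request.POST)  # hypothetical per-step form
    if form.is_valid():
        # Each memento is just the cleaned input for one step.
        mementos = request.session.get("mementos", [])
        mementos.append(form.cleaned_data)
        request.session["mementos"] = mementos  # reassign so it gets saved

def commit(request):
    @transaction.commit_on_success
    def apply_all():
        # Replay every memento inside one short transaction.
        for memento in request.session.pop("mementos", []):
            apply_memento(memento)
    apply_all()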
Per-session transactions in Django
I'm making a Django web-app which allows a user to build up a set of changes over a series of GETs/POSTs before committing them to the database (or reverting) with a final POST. I have to keep the updates isolated from any concurrent database users until they are confirmed (this is a configuration front-end), ruling out committing after each POST. My preferred solution is to use a per-session transaction. This keeps all the problems of remembering what's changed (and how it affects subsequent queries), together with implementing commit/rollback, in the database where it belongs. Deadlock and long-held locks are not an issue, as due to external constraints there can only be one user configuring the system at any one time, and they are well-behaved. However, I cannot find documentation on setting up Django's ORM to use this sort of transaction model. I have thrown together a minimal monkey-patch (ew!) to solve the problem, but dislike such a fragile solution. Has anyone else done this before? Have I missed some documentation somewhere? (My version of Django is 1.0.2 Final, and I am using an Oracle database.)
[ "Multiple, concurrent, session-scale transactions will generally lead to deadlocks or worse (worse == livelock, long delays while locks are held by another session.)\nThis design is not the best policy, which is why Django discourages it.\nThe better solution is the following.\n\nDesign a Memento class that records the user's change. This could be a saved copy of their form input. You may need to record additional information if the state changes are complex. Otherwise, a copy of the form input may be enough.\nAccumulate the sequence of Memento objects in their session. Note that each step in the transaction will involve fetches from the data and validation to see if the chain of mementos will still \"work\". Sometimes they won't work because someone else changed something in this chain of mementos. What now?\nWhen you present the 'ready to commit?' page, you've replayed the sequence of Mementos and are pretty sure they'll work. When the submit \"Commit\", you have to replay the Mementos one last time, hoping they're still going to work. If they do, great. If they don't, someone changed something, and you're back at step 2: what now?\n\nThis seems complex.\nYes, it does. However it does not hold any locks, allowing blistering speed and little opportunity for deadlock. The transaction is confined to the \"Commit\" view function which actually applies the sequence of Mementos to the database, saves the results, and does a final commit to end the transaction.\nThe alternative -- holding locks while the user steps out for a quick cup of coffee on step n-1 out of n -- is unworkable.\nFor more information on Memento, see this.\n", "In case anyone else ever has the exact same problem as me (I hope not), here is my monkeypatch. It's fragile and ugly, and changes private methods, but thankfully it's small. Please don't use it unless you really have to. As mentioned by others, any application using it effectively prevents multiple users doing updates at the same time, on penalty of deadlock. (In my application, there may be many readers, but multiple concurrent updates are deliberately excluded.)\nI have a \"user\" object which persists across a user session, and contains a persistent connection object. When I validate a particular HTTP interaction is part of a session, I also store the user object on django.db.connection, which is thread-local.\ndef monkeyPatchDjangoDBConnection():\n import django.db\n def validConnection():\n if django.db.connection.connection is None:\n django.db.connection.connection = django.db.connection.user.connection\n return True\n def close():\n django.db.connection.connection = None\n django.db.connection._valid_connection = validConnection\n django.db.connection.close = close\nmonkeyPatchDBConnection()\n\ndef setUserOnThisThread(user):\n import django.db\n django.db.connection.user = user\n\nThis last is called automatically at the start of any method annotated with @login_required, so 99% of my code is insulated from the specifics of this hack.\n", "I came up with something similar to the Memento pattern, but different enough that I think it bears posting. When a user starts an editing session, I duplicate the target object to a temporary object in the database. All subsequent editing operations affect the duplicate. Instead of saving the object state in a memento at each change, I store operation objects. When I apply an operation to an object, it returns the inverse operation, which I store. 
\nSaving operations is much cheaper for me than mementos, since the operations can be described with a few small data items, while the object being edited is much bigger. Also I apply the operations as I go and save the undos, so that the temporary in the db always corresponds to the version in the user's browser. I never have to replay a collection of changes; the temporary is always only one operation away from the next version.\nTo implement \"undo,\" I pop the last undo object off the stack (as it were--by retrieving the latest operation for the temporary object from the db) apply it to the temporary and return the transformed temporary. I could also push the resultant operation onto a redo stack if I cared to implement redo.\nTo implement \"save changes,\" i.e. commit, I de-activate and time-stamp the original object and activate the temporary in it's place.\nTo implement \"cancel,\" i.e. rollback, I do nothing! I could delete the temporary, of course, because there's no way for the user to retrieve it once the editing session is over, but I like to keep the canceled edit sessions so I can run stats on them before clearing them out with a cron job.\n" ]
[ 8, 2, 2 ]
[]
[]
[ "django", "python", "transactions" ]
stackoverflow_0001033934_django_python_transactions.txt
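To make the Memento suggestion in the record above concrete, here is a minimal sketch of the accumulate-and-replay flow. The Memento class, the 'mementos' session key, and the record_step/commit helper names are all illustrative assumptions, not APIs from Django or from the answers:

class Memento(object):
    def __init__(self, form_data):
        self.form_data = form_data      # a saved copy of the user's form input

    def apply(self, obj):
        # Re-validate against current DB state here, then apply this step.
        for field, value in self.form_data.items():
            setattr(obj, field, value)

def record_step(request, form_data):
    steps = request.session.get('mementos', [])
    steps.append(Memento(form_data))
    request.session['mementos'] = steps   # accumulate instead of locking

def commit(request, obj):
    # Replay every memento inside one short transaction; no lock was held
    # while the user was thinking.
    from django.db import transaction

    @transaction.commit_on_success       # Django 1.0-era decorator
    def _apply():
        for memento in request.session.get('mementos', []):
            memento.apply(obj)
        obj.save()

    _apply()
    request.session['mementos'] = []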
Q: What is the maximum simultaneous HTTP connections allowed on one machine (windows server 2008) using python To be more specific, I'm using python and making a pool of HTTPConnection (httplib) and was wondering if there is a limit on the number of concurrent HTTP connections on a windows server. A: AFAIK, the number of internet sockets (necessary to make TCP/IP connections) is naturally limited on every machine, but it's pretty high. 1000 simultaneous connections shouldn't be a problem for the client machine, as each socket uses only a little memory. If you start receiving data through all these channels, this might change though. I've heard of test setups that created a couple of thousand connections simultaneously from a single client. The story is usually different for the server, when it does heavy lifting for each incoming connection (like forking off a worker process etc.). 1000 incoming connections will impact its performance, and coming from the same client they can easily be taken for a DoS attack. I hope you're in charge of both the client and the server... or is it the same machine? A: Per the HTTP RFC, a client should not maintain more than 2 simultaneous connections to a webserver or proxy. However, most browsers don't honor that - firefox 3.5 allows 6 per server and 8 per proxy. In short, you should not be opening 1000 connections to a single server, unless your intent is to impact the performance of the server. Stress testing your server would be a good legitimate example. [Edit] If this is a proxy you're talking about, then that's a little different story. My suggestion is to use connection pooling. Figure out how many simultaneous connections give you the most requests per second and set a hard limit. Extra requests just have to wait in a queue until the pool frees up. Just be aware that a single process is usually capped at 1024 file descriptors by default. Take a look through apache's mod_proxy for ideas on how to handle this.
What is the maximum simultaneous HTTP connections allowed on one machine (windows server 2008) using python
To be more specific, I'm using python and making a pool of HTTPConnection (httplib) and was wondering if there is a limit on the number of concurrent HTTP connections on a windows server.
[ "AFAIK, the numbers of internet sockets (necessary to make TCP/IP connections) is naturally limited on every machine, but it's pretty high. 1000 simulatneous connections shouldn't be a problem for the client machine, as each socket uses only little memory. If you start receiving data through all these channels, this might change though. I've heard of test setups that created a couple of thousands connections simultaneously from a single client.\nThe story is usually different for the server, when it does heavy lifting for each incoming connection (like forking off a worker process etc.). 1000 incoming connections will impact its performance, and coming from the same client they can easily be taken for a DoS attack. I hope you're in charge of both the client and the server... or is it the same machine?\n", "Per the HTTP RFC, a client should not maintain more than 2 simultaneous connections to a webserver or proxy. However, most browsers don't honor that - firefox 3.5 allows 6 per server and 8 per proxy.\nIn short, you should not be opening 1000 connections to a single server, unless your intent is to impact the performance of the server. Stress testing your server would be a good legitimate example.\n\n[Edit]\nIf this is a proxy you're talking about, then that's a little different story. My suggestion is to use connection pooling. Figure out how many simultaneous connections give you the most requests per second and set a hard limit. Extra requests just have to wait in a queue until the pool frees up. Just be aware than a single process is usually capped at 1024 file descriptors by default.\nTake a look through apache's mod_proxy for ideas on how to handle this.\n" ]
[ 3, 3 ]
[]
[]
[ "python" ]
stackoverflow_0001121951_python.txt
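One possible sketch of the pooling advice from the record above, using a semaphore to cap the number of simultaneous httplib connections. The pool size, host, and function name are placeholders, not values from the question:

import httplib
import threading

POOL_SIZE = 50  # tune to whatever gives the best requests per second
_slots = threading.BoundedSemaphore(POOL_SIZE)

def fetch(host, path):
    # Block until a pool slot frees up, so we never exceed POOL_SIZE sockets.
    _slots.acquire()
    try:
        conn = httplib.HTTPConnection(host)
        conn.request('GET', path)
        response = conn.getresponse()
        body = response.read()
        conn.close()
        return response.status, body
    finally:
        _slots.release()

# e.g. status, body = fetch('www.example.com', '/')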
Q: Platform for developing all things google? I am interested in developing things for google apps and android using python and java. I am new to both and was wondering if an environment set up in windows or linux would be more productive for these tasks? A: Google has tools for Eclipse only for both Android and for Google Apps. They haven't made any other tools as far as I know. Oh yeah, so to answer your question, it doesn't matter that much. Windows, Unix, or Mac, all the same really (people in our office use all of them). A: I'd throw down another vote for Eclipse. I've been using it on the mac and I find it to be very buggy. Not sure if that's just the nature of the beast... My experiences with it on XP have been more stable. Haven't had time to check it out on Ubuntu. A: Internally, I believe Google uses Eclipse running on Ubuntu for Android development, so that'd be your best bet if you're completely paranoid about avoiding all potential issues. Of course, this is impossible, and really you should just use whatever you're comfortable in.
Platform for developing all things google?
I am interested in developing things for google apps and android using python and java. I am new to both and was wondering if an environment set up in windows or linux would be more productive for these tasks?
[ "Google has tools for Eclipse only for both Android and for Google Apps. They haven't made any other tools as far as I know.\nOh yeah, so to answer your question, it doesn't matter that much. Windows, Unix, or Mac, all the same really (people in our office use all of them).\n", "I'd throw down another vote for Eclipse. I've been using it on the mac and I find it to be very buggy. Not sure if that's just the nature of the beast... My experiences with it on XP have been more stable. Haven't had time to check it out on Ubuntu.\n", "Internally, I believe Google uses Eclipse running on Ubuntu for Android development, so that'd be your best bet if you're completely paranoid about avoiding all potential issues. Of course, this is impossible, and really you should just use whatever you're comfortable in.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "android", "java", "platform", "python" ]
stackoverflow_0001120297_android_java_platform_python.txt
Q: How to model one way one-to-one relationship in Django I want to model an article with revisions in Django: I have the following in my article's models.py: class Article(models.Model): title = models.CharField(blank=False, max_length=80) slug = models.SlugField(max_length=80) def __unicode__(self): return self.title class ArticleRevision(models.Model): article = models.ForeignKey(Article) revision_nr = models.PositiveSmallIntegerField(blank=True, null=True) body = models.TextField(blank=False) On the article model I want to have 2 direct references to a revision - one would point to a published revision and another to a revision that is being actively edited. However from what I understand, OneToOne and ForeignKey references generate a backreference on the other side of the model reference, so my question is, how do I create a one-way one-to-one reference in Django? Is there some special incantation for that or do I have to fake it by including state into revision and custom implementations of the fields that ask for a revision in specific state? Edit: I guess, I've done a somewhat poor job of explaining my intent. Let's try it on a higher abstraction level: My original intent was to implement a sort of revisioned article model, where each article may have multiple revisions, where one of those revisions may be "published" and one actively edited. This means that the article will have a one-to-many relationship to revisions (represented by ForeignKey(Article) reference in ArticleRevision class) and two one-way references from Article to revision: published_revision and edited_revision. My question is mainly, how can I model this with Django's ORM? A: The back-references that Django produces are programmatic, and do not affect the underlying Database schema. In other words, if you have a one-to-one or foreign key field on your Article pointing to your Revision, a column will be added to the Article table in the database, but not to the Revision table. Thus, removing the reverse relationship from the revision to the article is unnecessary. If you really feel strongly about it, and want to document in your code that the backlink is never used, a fairly common Django idiom is to give the fields a related_name attribute like _unused_1. So your Article model might look like the following: class Article(models.Model): title = models.CharField(blank=False, max_length=80) slug = models.SlugField(max_length=80) revision_1 = models.OneToOneField(ArticleRevision, related_name='_unused_1') revision_2 = models.OneToOneField(ArticleRevision, related_name='_unused_2') def __unicode__(self): return self.title That said, it's rare that a one-to-one relationship is actually useful in an application (unless you're optimizing for some reason) and I'd suggest carefully reviewing your DB schema to make sure this is really what you want. It may make sense to keep a single ForeignKey field on your ArticleRevision pointing back to an Article (since an ArticleRevision will, presumably, always need to be associated with an Article) and adding another column to Revision indicating whether it's published. A: What is wrong with the link going both ways? I would think that the OneToOneField would be the perfect choice here. Is there a specific reason why this will be a detriment to your application? If you don't need the backreference, why can't you just ignore it?
How to model one way one-to-one relationship in Django
I want to model an article with revisions in Django: I have the following in my article's models.py: class Article(models.Model): title = models.CharField(blank=False, max_length=80) slug = models.SlugField(max_length=80) def __unicode__(self): return self.title class ArticleRevision(models.Model): article = models.ForeignKey(Article) revision_nr = models.PositiveSmallIntegerField(blank=True, null=True) body = models.TextField(blank=False) On the article model I want to have 2 direct references to a revision - one would point to a published revision and another to a revision that is being actively edited. However from what I understand, OneToOne and ForeignKey references generate a backreference on the other side of the model reference, so my question is, how do I create a one-way one-to-one reference in Django? Is there some special incantation for that or do I have to fake it by including state into revision and custom implementations of the fields that ask for a revision in specific state? Edit: I guess, I've done a somewhat poor job of explaining my intent. Let's try it on a higher abstraction level: My original intent was to implement a sort of revisioned article model, where each article may have multiple revisions, where one of those revisions may be "published" and one actively edited. This means that the article will have a one-to-many relationship to revisions (represented by ForeignKey(Article) reference in ArticleRevision class) and two one-way references from Article to revision: published_revision and edited_revision. My question is mainly, how can I model this with Django's ORM?
[ "The back-references that Django produces are programatic, and do not affect the underlying Database schema. In other words, if you have a one-to-one or foreign key field on your Article pointing to your Revision, a column will be added to the Article table in the database, but not to the Revision table.\nThus, removing the reverse relationship from the revision to the article is unnecessary. If you really feel strongly about it, and want to document in your code that the backlink is never used, a fairly common Django idiom is to give the fields a related_name attribute like _unused_1. So your Article model might look like the following:\nclass Article(models.Model):\n title = models.CharField(blank=False, max_length=80)\n slug = models.SlugField(max_length=80)\n revision_1 = models.OneToOneField(ArticleRevision, related_name='_unused_1')\n revision_2 = models.OneToOneField(ArticleRevision, related_name='_unused_2')\n\n def __unicode__(self):\n return self.title\n\nThat said, it's rare that a one-to-one relationship is actually useful in an application (unless you're optimizing for some reason) and I'd suggest carefully reviewing your DB schema to make sure this is really what you want. It may make sense to keep a single ForeignKey field on your ArticleRevision pointing back to an Article (since an ArticleRevision will, presumably, always need to be associated with an Article) and adding another column to Revision indicating whether it's published.\n", "What is wrong with the link going both ways? I would think that the OneToOneField would be the perfect choice here. Is there a specific reason why this will be a detriment to your application? If you don't need the backreference why can't you just ignore it?\n" ]
[ 5, 4 ]
[]
[]
[ "django", "django_models", "python" ]
stackoverflow_0001121488_django_django_models_python.txt
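A sketch of what the question's published_revision/edited_revision layout could look like, combining the question's models with the related_name idiom from the first answer. ForeignKey is used here where the answer used OneToOneField (either works the same way for this purpose), and on newer Django versions related_name='+' suppresses the reverse accessor entirely, an option that postdates these answers:

from django.db import models

class Article(models.Model):
    title = models.CharField(blank=False, max_length=80)
    slug = models.SlugField(max_length=80)
    # One-way in spirit: the reverse accessors still exist, but are parked
    # under names that signal they are never meant to be used.
    published_revision = models.ForeignKey('ArticleRevision', null=True, blank=True,
                                           related_name='_unused_published')
    edited_revision = models.ForeignKey('ArticleRevision', null=True, blank=True,
                                        related_name='_unused_edited')

    def __unicode__(self):
        return self.title

class ArticleRevision(models.Model):
    article = models.ForeignKey(Article, related_name='revisions')
    revision_nr = models.PositiveSmallIntegerField(blank=True, null=True)
    body = models.TextField(blank=False)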
Q: How do I satisfy a 3rd-party shared library reference to stat when I'm creating a shared library shim rather than an executable? I am the new maintainer for an in-house Python system that uses a set of 3rd-party shared C libraries via a shared library shim that is created using a combination of swig and a setup.py script. This has been working well until recently. The 3rd-party shared C libraries were updated for new functionality and now I get the following run-time error, after a clean build, when I try to run our main Python program (which imports the generated shared library shim): -sh-3.00$ python ams.py ImportError: /usr/lib/libz4lnx.so: undefined symbol: stat I found a discussion thread from 1999 that explains that the problem is that stat is not present in libc.so.6, but rather in libc_nonshared.a, and provides a solution: Link against the c library, by adding -lc to your build command line. http://www.redhat.com/archives/pam-list/1999-February/msg00082.html I've added 'c' to the list of libraries in the setup.py script, but this doesn't change my results. I suspect that this is because I am creating a shared library shim rather than an executable. How can I satisfy the 3rd-party shared library's reference to stat, given my build environment? My build system is: -sh-3.00$ lsb_release -a LSB Version: :core-3.0-ia32:core-3.0-noarch:graphics-3.0-ia32:graphics-3.0-noarch Distributor ID: CentOS Description: CentOS release 4.6 (Final) Release: 4.6 Codename: Final My gcc version is: -sh-3.00$ gcc --version gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-10) My Python version is: -sh-3.00$ python -V Python 2.3.4 A: The solution was to create a new CentOS 5.3 VM and re-build and/or re-install components as needed. A: As it turns out, while moving to CentOS 5.3 was probably a good thing in the long run, the actual problem turns out to have been the way that libz4lnx was built on the DVD that I was originally using. In the process of moving to CentOS 5.3, I also moved to a newer build of the libz4lnx library. Today, while testing something else, I used the library from the original DVD and got the exact same undefined symbol error when running the Python program. Switching back to the newest DVD (some two months newer) solved the problem again.
How do I satisfy a 3rd-party shared library reference to stat when I'm creating a shared library shim rather than an executable?
I am the new maintainer for an in-house Python system that uses a set of 3rd-party shared C libraries via a shared library shim that is created using a combination of swig and a setup.py script. This has been working well until recently. The 3rd-party shared C libraries were updated for new functionality and now I get the following run-time error, after a clean build, when I try to run our main Python program (which imports the generated shared library shim): -sh-3.00$ python ams.py ImportError: /usr/lib/libz4lnx.so: undefined symbol: stat I found a discussion thread from 1999 that explains that the problem is that stat is not present in libc.so.6, but rather in libc_nonshared.a, and provides a solution: Link against the c library, by adding -lc to your build command line. http://www.redhat.com/archives/pam-list/1999-February/msg00082.html I've added 'c' to the list of libraries in the setup.py script, but this doesn't change my results. I suspect that this is because I am creating a shared library shim rather than an executable. How can I satisfy the 3rd-party shared library's reference to stat, given my build environment? My build system is: -sh-3.00$ lsb_release -a LSB Version: :core-3.0-ia32:core-3.0-noarch:graphics-3.0-ia32:graphics-3.0-noarch Distributor ID: CentOS Description: CentOS release 4.6 (Final) Release: 4.6 Codename: Final My gcc version is: -sh-3.00$ gcc --version gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-10) My Python version is: -sh-3.00$ python -V Python 2.3.4
[ "The solution was to create to a new Centos 5.3 VM and re-build and/or re-install components as needed.\n", "As it turns out, while moving to Centos 5.3 was probably a good thing in the long run, the actual problem turns out to have been the way that libz4lnx was built on the DVD that I was originally using. In the process of moving to Centos 5.3, I also moved to a newer build of the libz4lnx library. Today, while testing something else, I used the library from the original DVD and got the exact same undefined symbol error when running the Python program. Switching back to the newest DVD (some two months newer) solved the problem again.\n" ]
[ 1, 1 ]
[]
[]
[ "c", "python", "swig" ]
stackoverflow_0001072068_c_python_swig.txt
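For reference, this is roughly how the attempted -lc workaround from the question is expressed in a distutils setup.py. The module and source names here are hypothetical, and as the answers note, this route did not fix the author's problem; rebuilding against a newer libz4lnx did:

from distutils.core import setup, Extension

# Hypothetical module/source names; only the libraries list is the point here.
shim = Extension('myshim',
                 sources=['myshim_wrap.c'],   # swig-generated wrapper
                 libraries=['z4lnx', 'c'],    # 'c' adds -lc at link time
                 library_dirs=['/usr/lib'])

setup(name='myshim', version='0.1', ext_modules=[shim])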
Q: Join Records on Multiple Line File based on Criteria I am trying to write a python script that takes record data like this 6xxxxxxxx 7xxxxxxxx 6xxxxxxxx 7xxxxxxxx 7xxxxxxxx 6xxxxxxxx 6xxxxxxxx 6xxxxxxxx 7xxxxxxxx 7xxxxxxxx 7xxxxxxxx and performs the following logic newline = "" read in a record if the record starts with a 6 and newline = '' newline = record if the record starts with a 7 newline = newline + record if the record starts with a 6 and newline != '' print newline newline = record So it should print out like this: 6xxxxxx 7xxxxxxxx 6xxxxxx 7xxxxxxxx 7xxxxxxx 7xxxxxxx 6xxxxxx 6xxxxxx etc.. Here is my code: han1 = open("file","r") newline = "" for i in han1: if i[0] == "6" and newline == "": newline = i elif i[0] == "7": newline = newline + i elif i[0] == "6" and newline != "": print newline newline = "" newline = i han1.close() When I run my script the output looks untouched. Where do you think I'm going wrong? Is it because the newline variable won't store values between iterations of the loop? Any guidance would be appreciated. A: None of the branches in your if statement finish with newline set to "". Therefore, the first branch will never evaluate because newline is never "" except for the very first case. A: You can simplify this by simply appending a newline for a record that starts with 6, and not appending one if it doesn't. for line in open('infile'): if line[0] == '6': print '' print line.strip() , OK, this creates one empty line first in the file, and may not end the file with a newline. Still, that's easy to fix. Or a solution that doesn't have that problem and is closer to yours: newline = '' for line in open('infile'): if line[0] == '6': if newline: print newline newline = '' newline += ' ' + line.strip() if newline: print newline Also works, but is slightly longer. That said, I think your main problem is that you don't strip the records, so you preserve the line feed. A: if your file is not in GB, data=open("file").read().split() a = [n for n,l in enumerate(data) if l.startswith("6") ] for i,j in enumerate(a): if i+1 == len(a): r=data[a[i]:] else: r=data[a[i]:a[i+1]] print ' '.join(r)
Join Records on Multiple Line File based on Criteria
I am trying to write a python script that takes record data like this 6xxxxxxxx 7xxxxxxxx 6xxxxxxxx 7xxxxxxxx 7xxxxxxxx 6xxxxxxxx 6xxxxxxxx 6xxxxxxxx 7xxxxxxxx 7xxxxxxxx 7xxxxxxxx and performs the following logic newline = "" read in a record if the record starts with a 6 and newline = '' newline = record if the record starts with a 7 newline = newline + record if the record starts with a 6 and newline != '' print newline newline = record So it should print out like this: 6xxxxxx 7xxxxxxxx 6xxxxxx 7xxxxxxxx 7xxxxxxx 7xxxxxxx 6xxxxxx 6xxxxxx etc.. Here is my code: han1 = open("file","r") newline = "" for i in han1: if i[0] == "6" and newline == "": newline = i elif i[0] == "7": newline = newline + i elif i[0] == "6" and newline != "": print newline newline = "" newline = i han1.close() When I run my script the output looks untouched. Where do you think I'm going wrong? Is it because the newline variable won't store values between iterations of the loop? Any guidance would be appreciated.
[ "None of the branches in your if statement finish with newline set to \"\". Therefore, the first branch will never evaluate because newline is never \"\" except for the very first case.\n", "You can simplify this by simply appending a newline for a record that starts with 6, and not appending one if it doens't.\nfor line in open('infile'):\n if line[0] == '6':\n print ''\n print line.strip() ,\n\nOK, this creates one empty line first in the file, and may not end the file with an newline. Still, that's easy to fix.\nOr a solution that doens't have that problem and is closer to yours:\nnewline = ''\nfor line in open('infile'):\n if line[0] == '6':\n if newline:\n print newline\n newline = ''\n newline += ' ' + line.strip()\nif newline:\n print newline\n\nAlso works, but is slightly longer.\nThat said I think your main problem is that you don't strip the records, so you preserve the line feed.\n", "if you file is not in GB, \ndata=open(\"file\").read().split()\na = [n for n,l in enumerate(data) if l.startswith(\"6\") ] \nfor i,j in enumerate(a):\n if i+1 == len(a):\n r=data[a[i]:]\n else:\n r=data[a[i]:a[i+1]]\n print ' '.join(r)\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "file_io", "python" ]
stackoverflow_0001120555_file_io_python.txt
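Pulling the answers above together, one sketch that strips each record (the likely bug the second answer identifies) and prints one joined line per group; 'infile' is a placeholder filename:

groups = []
for line in open('infile'):
    line = line.strip()          # drop the trailing newline the OP was keeping
    if line.startswith('6'):
        groups.append([line])    # a '6' record starts a new group
    elif line.startswith('7') and groups:
        groups[-1].append(line)  # a '7' record joins the current group

for group in groups:
    print(' '.join(group))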
Q: bash/cygwin/$PATH: Do I really have to reboot to alter $PATH? I wanted to use the Python installed under cygwin rather than one installed under WinXP directly, so I edited ~/.bashrc and sourced it. Nothing changed. I tried other things, but nothing I did changed $PATH in any way. So I rebooted. Aha; now $PATH has changed to what I wanted. But, can anyone explain WHY this happened? When do changes to the environment (and its variables) made via cygwin (and bash) take effect only after a reboot? (Is this any way to run a railroad?) (This question is unlikely to win any points, but I'm curious, and I'm also tired of wading through docs which don't help on this point.) A: Try: PATH="${PATH}:${PYTHON}"; export PATH Or: export PATH="${PATH}:${PYTHON}" the quotes preserve the spaces and newlines that you don't have in your directory names. I repeat "don't". If you want to change the path for the current environment and any subsequent processes, use something similar to either of the commands above; they are equivalent. If you want to change the path for the next time you start Bash, edit ~/.bashrc and add one of the above (for example) or edit the existing PATH setting command that you find there. If you want to affect both the current environment and any subsequent ones (i.e. have an immediate and a "permanent" effect), edit ~/.bashrc and do one of the following: type one of the first two forms shown above or source the ~/.bashrc file. Sometimes, you may not want to do the sourcing if, for example, it would undo some temporary thing that you're making use of currently like have some other variables set differently than ~/.bashrc would set (reset) them to. I don't think you need to worry about hash unless you're either doing some serious rearranging or adding some local replacements for system utilities perhaps. A: If you want your changes to be permanent, you should modify the proper file (.bashrc in this case) and perform ONE of the following actions: Restart the cygwin window source .bashrc (This should work, even if it is not working for you) . .bashrc (that is dot <space> <filename>) However, .bashrc is used by default when using a BASH shell, so if you are using another shell (csh, ksh, zsh, etc) then your changes will not be reflected by modifying .bashrc. A: A couple of things to try and rule out at least: Do you get the same behavior as the following from the shell? (Pasted from my cygwin, which works as expected.) $ echo $PATH /usr/local/bin:/usr/bin:/bin $ export PATH=$PATH:/cygdrive/c/python/bin $ echo $PATH /usr/local/bin:/usr/bin:/bin:/cygdrive/c/python/bin Is your bashrc setting the PATH in a similar way to the above? (i.e. the second command). Does your bashrc contain a "source" or "." command anywhere? (Maybe it's sourcing another file which overwrites your PATH variable.) A: You may need to re-initialize bash's hashes after modifying the path variable: hash -r
bash/cygwin/$PATH: Do I really have to reboot to alter $PATH?
I wanted to use the Python installed under cygwin rather than one installed under WinXP directly, so I edited ~/.bashrc and sourced it. Nothing changed. I tried other things, but nothing I did changed $PATH in any way. So I rebooted. Aha; now $PATH has changed to what I wanted. But, can anyone explain WHY this happened? When do changes to the environment (and its variables) made via cygwin (and bash) take effect only after a reboot? (Is this any way to run a railroad?) (This question is unlikely to win any points, but I'm curious, and I'm also tired of wading through docs which don't help on this point.)
[ "Try:\nPATH=\"${PATH}:${PYTHON}\"; export PATH\n\nOr:\nexport PATH=\"${PATH}:${PYTHON}\"\n\nthe quotes preserve the spaces and newlines that you don't have in your directory names. I repeat \"don't\".\nIf you want to change the path for the current environment and any subsequent processes, use something similar to either of the commands above; they are equivalent.\nIf you want to change the path for the next time you start Bash, edit ~/.bashrc and add one of the above (for example) or edit the existing PATH setting command that you find there.\nIf you want to affect both the current environment and any subsequent ones (i.e. have an immediate and a \"permanent\" affect), edit ~/.bashrc and do one of the following: type one of the first two forms shown above or source the ~/.bashrc file. Sometimes, you may not want to do the sourcing if, for example, it would undo some temporary thing that you're making use of currently like have some other variables set differently than ~/.bashrc would set (reset) them to.\nI don't think you need to worry about hash unless you're either doing some serious rearranging or adding some local replacements for system utilities perhaps.\n", "If you want your changes to be permanent, you should modify the proper file (.bashrc in this case) and perform ONE of the following actions:\n\nRestart the cygwin window\nsource .bashrc (This should work, even if is not working for you)\n. .bashrc (that is dot <space> <filename>)\n\nHowever, .bashrc is used by default when using a BASH shell, so if you are using another shell (csh, ksh, zsh, etc) then your changes will not be reflected by modifying .bashrc.\n", "A couple of things to try and rule out at least:\n\nDo you get the same behavior as the following from the shell? (Pasted from my cygwin, which works as expected.)\n\n$ echo $PATH\n/usr/local/bin:/usr/bin:/bin\n\n$ export PATH=$PATH:/cygdrive/c/python/bin\n\n$ echo $PATH\n/usr/local/bin:/usr/bin:/bin:/cygdrive/c/python/bin\n\nIs your bashrc setting the PATH in a similar way to the above? (i.e. the second command).\nDoes your bashrc contain a \"source\" or \".\" command anywhere? (Maybe it's sourcing another file which overwrites your PATH variable.)\n\n", "You may need to re-initialize bash's hashes after modifying the path variable:\nhash -r\n\n" ]
[ 3, 2, 1, 0 ]
[]
[]
[ "bash", "cygwin", "path", "python", "reboot" ]
stackoverflow_0001122924_bash_cygwin_path_python_reboot.txt
Q: Migrating Django Application to Google App Engine? I'm developing a web application and considering Django, Google App Engine, and several other options. I wondered what kind of "penalty" I will incur if I develop a complete Django application assuming it runs on a dedicated server, and then later want to migrate it to Google App Engine. I have a basic understanding of Google's data store, so please assume I will choose a column based database for my "stand-alone" Django application rather than a relational database, so that the schema could remain mostly the same and will not be a major factor. Also, please assume my application does not maintain a huge amount of data, so that migration of tens of gigabytes is not required. I'm mainly interested in the effects on the code and software architecture. Thanks A: Most (all?) of Django is available in GAE, so your main task is to avoid basing your designs around a reliance on anything from Django or the Python standard libraries which is not available on GAE. You've identified the glaring difference, which is the database, so I'll assume you're on top of that. Another difference is the tie-in to Google Accounts and hence that if you want, you can do a fair amount of access control through the app.yaml file rather than in code. You don't have to use any of that, though, so if you don't envisage switching to Google Accounts when you switch to GAE, no problem. I think the differences in the standard libraries can mostly be deduced from the fact that GAE has no I/O and no C-accelerated libraries unless explicitly stated, and my experience so far is that things I've expected to be there, have been there. I don't know Django and haven't used it on GAE (apart from templates), so I can't comment on that. Personally I probably wouldn't target LAMP (where P = Django) with the intention of migrating to GAE later. I'd develop for both together, and try to ensure if possible that the differences are kept to the very top (configuration) and the very bottom (data model). The GAE version doesn't necessarily have to be perfect, as long as you know how to make it perfect should you need it. It's not guaranteed that this is faster than writing and then porting, but my guess is it normally will be. The easiest way to spot any differences is to run the code, rather than relying on not missing anything in the GAE docs, so you'll likely save some mistakes that need to be unpicked. The Python SDK is a fairly good approximation to the real App Engine, so all or most of your tests can be run locally most of the time. Of course if you eventually decide not to port then you've done unnecessary work, so you have to think about the probability of that happening, and whether you'd consider the GAE development to be a waste of your time if it's not needed. A: Basically, you will change the data model base class and some APIs if you use them (PIL, urllib2, etc). If your goal is app-engine, I would use the app engine helper http://code.google.com/appengine/articles/appengine_helper_for_django.html. It can run it on your server with a file based DB and then push it to app-engine with no changes. A: It sounds like you have awareness of the major limitation in building/migrating your app -- that AppEngine doesn't support Django's ORM. Keep in mind that this doesn't just affect the code you write yourself -- it also limits your ability to use a lot of existing Django code. That includes other applications (such as the built-in admin and auth apps) and ORM-based features such as generic views. A: There are a few things that you can't do on the App Engine that you can do on your own server like uploading of files. On the App Engine you kinda have to upload it and store it in the datastore, which can cause a few problems. Other than that it should be fine from the Presentation part. There are a number of other little things that are better on your own dedicated server but I think eventually a lot of those things will be in the App Engine
Migrating Django Application to Google App Engine?
I'm developing a web application and considering Django, Google App Engine, and several other options. I wondered what kind of "penalty" I will incur if I develop a complete Django application assuming it runs on a dedicated server, and then later want to migrate it to Google App Engine. I have a basic understanding of Google's data store, so please assume I will choose a column based database for my "stand-alone" Django application rather than a relational database, so that the schema could remain mostly the same and will not be a major factor. Also, please assume my application does not maintain a huge amount of data, so that migration of tens of gigabytes is not required. I'm mainly interested in the effects on the code and software architecture. Thanks
[ "Most (all?) of Django is available in GAE, so your main task is to avoid basing your designs around a reliance on anything from Django or the Python standard libraries which is not available on GAE.\nYou've identified the glaring difference, which is the database, so I'll assume you're on top of that. Another difference is the tie-in to Google Accounts and hence that if you want, you can do a fair amount of access control through the app.yaml file rather than in code. You don't have to use any of that, though, so if you don't envisage switching to Google Accounts when you switch to GAE, no problem.\nI think the differences in the standard libraries can mostly be deduced from the fact that GAE has no I/O and no C-accelerated libraries unless explicitly stated, and my experience so far is that things I've expected to be there, have been there. I don't know Django and haven't used it on GAE (apart from templates), so I can't comment on that.\nPersonally I probably wouldn't target LAMP (where P = Django) with the intention of migrating to GAE later. I'd develop for both together, and try to ensure if possible that the differences are kept to the very top (configuration) and the very bottom (data model). The GAE version doesn't necessarily have to be perfect, as long as you know how to make it perfect should you need it.\nIt's not guaranteed that this is faster than writing and then porting, but my guess is it normally will be. The easiest way to spot any differences is to run the code, rather than relying on not missing anything in the GAE docs, so you'll likely save some mistakes that need to be unpicked. The Python SDK is a fairly good approximation to the real App Engine, so all or most of your tests can be run locally most of the time.\nOf course if you eventually decide not to port then you've done unnecessary work, so you have to think about the probability of that happening, and whether you'd consider the GAE development to be a waste of your time if it's not needed.\n", "Basically, you will change the data model base class and some APIs if you use them (PIL, urllib2, etc). \nIf your goal is app-engine, I would use the app engine helper http://code.google.com/appengine/articles/appengine_helper_for_django.html. It can run it on your server with a file based DB and then push it to app-engine with no changes. \n", "It sounds like you have awareness of the major limitation in building/migrating your app -- that AppEngine doesn't support Django's ORM.\nKeep in mind that this doesn't just affect the code you write yourself -- it also limits your ability to use a lot of existing Django code. That includes other applications (such as the built-in admin and auth apps) and ORM-based features such as generic views.\n", "There are a few things that you can't do on the App Engine that you can do on your own server like uploading of files. On the App Engine you kinda have to upload it and store the datastore which can cause a few problems.\nOther than that it should be fine from the Presentation part. There are a number of other little things that are better on your own dedicated server but I think eventually a lot of those things will be in the App Engine\n" ]
[ 8, 2, 2, 1 ]
[]
[]
[ "django", "google_app_engine", "python" ]
stackoverflow_0001118761_django_google_app_engine_python.txt
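To illustrate the "change the data model base class" point from the second answer above, here is a hypothetical model written both ways; the Bookmark model is invented purely for illustration:

# Django ORM version
from django.db import models

class Bookmark(models.Model):
    title = models.CharField(max_length=80)
    created = models.DateTimeField(auto_now_add=True)

# Rough App Engine datastore equivalent of the same model
from google.appengine.ext import db

class Bookmark(db.Model):
    title = db.StringProperty()
    created = db.DateTimeProperty(auto_now_add=True)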
Q: Python Function Decorators in Google App Engine I'm having trouble using python function decorators in Google's AppEngine. I'm not that familiar with decorators, but they seem useful in web programming when you might want to force a user to login before executing certain functions. Anyway, I was following along with a flickr login example here that uses django and decorates a function to require the flickr login. I can't seem to get this type of decorator to work in AppEngine. I've boiled it down to this: def require_auth(func): def check_auth(*args, **kwargs): print "Authenticated." return func(*args, **kwargs) return check_auth @require_auth def content(): print "Release sensitive data!" content() This code works from the commandline, but when I run it in GoogleAppEngineLauncher (OS X), I get the following error: check_auth() takes at least 1 argument (0 given) And I'm not really sure why... EDIT to include actual code: @asperous.us I changed the definition of content() to include variable arguments, is that what you meant? @Alex Martelli, 'print' does work within AppEngine, but still a completely fair criticism. Like I said, I'm trying to use the flickr login from the link above. I tried to put it into my app like so: def require_flickr_auth(view): def protected_view(request,*args, **kwargs): if 'token' in request.session: token = request.session['token'] log.info('Getting token from session: %s' % token) else: token = None log.info('No token in session') f = flickrapi.FlickrAPI(api_key, api_secret, token=token, store_token=False) if token: # We have a token, but it might not be valid log.info('Verifying token') try: f.auth_checkToken() except flickrapi.FlickrError: token = None del request.session['token'] if not token: # No valid token, so redirect to Flickr log.info('Redirecting user to Flickr to get frob') url = f.web_login_url(perms='read') print "Redirect to %s" % url # If the token is valid, we can call the decorated view. log.info('Token is valid') return view(request,*args, **kwargs) return protected_view @require_flickr_auth def content(*args, **kwargs): print 'Welcome, oh authenticated user!' def main(): print 'Content-Type: text/plain' content() if __name__ == "__main__": main() When I remove the @require_flickr_auth decoration, the string 'Welcome ...' prints out just fine. Otherwise I get a big ugly AppEngine exception page with type 'exceptions.TypeError': protected_view() takes at least 1 argument (0 given) at the bottom. A: You're calling content() without any arguments, but the decorated version protected_view requires the request argument. Either add the argument to content or remove it from protected_view. If you're getting that error with your simple version then I'd suspect that content is a class method as Alex suggested. Otherwise it looks like you're telling it to expect at least one argument, then not supplying it any. A: @Owen, the "takes at least 1 argument" error suggests you're defining content within a class (i.e. as a method) and not as a bare function as you show -- indeed, how exactly are you trying to execute that code in GAE? I.e. what's your app.yaml &c? If you put your code exactly as you gave it in silly.py and in your app.yaml you have: handlers: - url: /silly script: silly.py then when you visit yourapp.appspot.com/silly you will see absolutely nothing on either the browser or the logs (besides the "GET /silly HTTP/1.1" 200 - in the latter of course;-): there is no error but the print doesn't DO anything in particular either. So I have to imagine you tried running code different from what you're showing us...!-) A: Decorators were added in python 2.4 (I think), maybe googleapp is using an older version? You can also do: def content(): print "Release sensitive data!" content = require_auth(content) it will do the same thing as the decorator, it is just a little more work. A: Why return func(*args, **kwargs) wouldn't that execute func and then return the result? If so, you shouldn't give it any arguments, since it doesn't take any. If you edited content() and added the (*args, **kwargs) arguments onto it, will it work?
Python Function Decorators in Google App Engine
I'm having trouble using python function decorators in Google's AppEngine. I'm not that familiar with decorators, but they seem useful in web programming when you might want to force a user to login before executing certain functions. Anyway, I was following along with a flickr login example here that uses django and decorates a function to require the flickr login. I can't seem to get this type of decorator to work in AppEngine. I've boiled it down to this: def require_auth(func): def check_auth(*args, **kwargs): print "Authenticated." return func(*args, **kwargs) return check_auth @require_auth def content(): print "Release sensitive data!" content() This code works from the commandline, but when I run it in GoogleAppEngineLauncher (OS X), I get the following error: check_auth() takes at least 1 argument (0 given) And I'm not really sure why... EDIT to include actual code: @asperous.us I changed the definition of content() to include variable arguments, is that what you meant? @Alex Martelli, 'print' does work within AppEngine, but still a completely fair criticism. Like I said, I'm trying to use the flickr login from the link above. I tried to put it into my app like so: def require_flickr_auth(view): def protected_view(request,*args, **kwargs): if 'token' in request.session: token = request.session['token'] log.info('Getting token from session: %s' % token) else: token = None log.info('No token in session') f = flickrapi.FlickrAPI(api_key, api_secret, token=token, store_token=False) if token: # We have a token, but it might not be valid log.info('Verifying token') try: f.auth_checkToken() except flickrapi.FlickrError: token = None del request.session['token'] if not token: # No valid token, so redirect to Flickr log.info('Redirecting user to Flickr to get frob') url = f.web_login_url(perms='read') print "Redirect to %s" % url # If the token is valid, we can call the decorated view. log.info('Token is valid') return view(request,*args, **kwargs) return protected_view @require_flickr_auth def content(*args, **kwargs): print 'Welcome, oh authenticated user!' def main(): print 'Content-Type: text/plain' content() if __name__ == "__main__": main() When I remove the @require_flickr_auth decoration, the string 'Welcome ...' prints out just fine. Otherwise I get a big ugly AppEngine exception page with type 'exceptions.TypeError': protected_view() takes at least 1 argument (0 given) at the bottom.
[ "You're calling content() without any arguments, but the decorated version protected_view requires the request argument. Either add the argument to content or remove it from protected_view.\nIf you're getting that error with your simple version then I'd suspect that content is a class method as Alex suggested. Otherwise it looks like you're telling it to expect at least one argument, then not supplying it any.\n", "@Owen, the \"takes at least 1 argument\" error suggests you're defining content within a class (i.e. as a method) and not as a bare function as you show -- indeed, how exactly are you trying to execute that code in GAE? I.e. what's your app.yaml &c? If you put your code exactly as you gave it in silly.py and in your app.yaml you have:\nhandlers:\n- url: /silly\n script: silly.py\n\nthen when you visit yourapp.appspot.com/silly you will see absolutely nothing on either the browser or the logs (besides the \"GET /silly HTTP/1.1\" 200 - in the latter of course;-): there is no error but the print doesn't DO anything in particular either. So I have to imagine you tried running code different from what you're showing us...!-)\n", "Decorators were added in python 2.4 (I think), maybe googleapp is using an older version?\nYou can also do:\ndef content():\n print \"Release sensitive data!\"\n content = require_auth(content)\n\nit will do the same thing as the decorator, it is just a little more work.\n", "Why\nreturn func(*args, **kwargs)\nwouldn't that execute func and then return the result?\nIf so, you shouldn't give it any arguments, since it doesn't take any. If you edited content() and added the (*args, **kwargs) arguments onto it, will it work?\n" ]
[ 3, 1, 0, 0 ]
[]
[]
[ "decorator", "google_app_engine", "python" ]
stackoverflow_0001123117_decorator_google_app_engine_python.txt
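A sketch illustrating the diagnosis in the answers above: the same *args/**kwargs wrapper works whether the decorated callable is a bare function or a method, so the "takes at least 1 argument" error points at a mismatch between the wrapper's signature and the call site. The functools.wraps call and the Handler class are additions for illustration, not from the original posts:

import functools

def require_auth(func):
    @functools.wraps(func)              # keep the wrapped function's name/docs
    def check_auth(*args, **kwargs):
        print('Authenticated.')
        return func(*args, **kwargs)
    return check_auth

class Handler(object):
    @require_auth
    def content(self):                  # as a method, 'self' arrives via *args
        print('Release sensitive data!')

Handler().content()   # works: check_auth receives self and passes it through

@require_auth
def content():                          # as a bare function, no arguments at all
    print('Release sensitive data!')

content()             # also works: *args is simply empty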
Q: Setting a timeout function in django So I'm creating a django app that allows a user to add a new line of text to an existing group of text lines. However I don't want multiple users adding lines to the same group of text lines concurrently. So I created a BoolField isBeingEdited that is set to True once a user decides to append a specific group. Once the Bool is True no one else can append the group until the edit is submitted, whereupon the Bool is set False again. Works alright, unless someone decides to make an edit then changes their mind or forgets about it, etc. I want isBeingEdited to flip back to False after 10 minutes or so. Is this a job for cron, or is there something easier out there? Any suggestions? A: Change the boolean to a "lock time" To lock the model, set the Lock time to the current time. To unlock the model, set the lock time to None Add an "is_locked" method. That method returns "not locked" if the current time is more than 10 minutes after the lock time. This gives you your timeout without Cron and without regular hits into the DB to check flags and unset them. Instead, the time is only checked if you are interested in whether this model is locked. A Cron would likely have to check all models. from django.db import models from datetime import datetime, timedelta # Create your models here. class yourTextLineGroup(models.Model): # fields go here lock_time = models.DateTimeField(null=True) locked_by = models.ForeignKey()#Point me to your user model def lock(self): if self.is_locked(): #and code here to see if current user is not locked_by user #exception / bad return value here pass self.lock_time = datetime.now() def unlock(self): self.lock_time = None def is_locked(self): return self.lock_time and datetime.now() - self.lock_time < timedelta(minutes=10) Code above assumes that the caller will call the save method after calling lock or unlock.
Setting a timeout function in django
So I'm creating a django app that allows a user to add a new line of text to an existing group of text lines. However I don't want multiple users adding lines to the same group of text lines concurrently. So I created a BoolField isBeingEdited that is set to True once a user decides to append a specific group. Once the Bool is True no one else can append the group until the edit is submitted, whereupon the Bool is set False again. Works alright, unless someone decides to make an edit then changes their mind or forgets about it, etc. I want isBeingEdited to flip back to False after 10 minutes or so. Is this a job for cron, or is there something easier out there? Any suggestions?
[ "Change the boolean to a \"lock time\"\n\nTo lock the model, set the Lock time to the current time. \nTo unlock the model, set the lock time to None\nAdd an \"is_locked\" method. That method returns \"not locked\" if the current time is more than 10 minutes after the lock time. \n\nThis gives you your timeout without Cron and without regular hits into the DB to check flags and unset them. Instead, the time is only checked if you are interest in wieither this model is locked. A Cron would likely have to check all models.\nfrom django.db import models\nfrom datetime import datetime, timedelta\n# Create your models here.\nclass yourTextLineGroup(models.Model):\n # fields go here \n lock_time = models.DateTimeField(null=True)\n locked_by = models.ForeignKey()#Point me to your user model\n\n def lock(self):\n if self.is_locked(): #and code here to see if current user is not locked_by user\n #exception / bad return value here\n pass\n\n self.lock_time = datetime.now()\n\n def unlock(self):\n self.lock_time = None\n\n def is_locked(self):\n return self.lock_time and datetime.now() - self.lock_time < timedelta(minutes=10)\n\nCode above assumes that the caller will call the save method after calling lock or unlock.\n" ]
[ 4 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001123367_django_python.txt
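A hedged sketch of how a view might use the lock-time model from the answer above. The view name, template, and response text are assumptions, and it presumes the yourTextLineGroup model from the answer is importable:

from django.http import HttpResponse
from django.shortcuts import render_to_response

def start_edit(request, group_id):
    group = yourTextLineGroup.objects.get(pk=group_id)
    if group.is_locked():
        return HttpResponse('Someone else is editing this group right now.')
    group.lock()
    group.save()   # the answer notes the caller must save after lock()/unlock()
    return render_to_response('edit.html', {'group': group})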
Q: Django ManyToMany Template rendering and performance issues I've got a django model that contains a manytomany relationship, of the type, class MyModel(models.Model): name = .. refby = models.ManyToManyField(MyModel2) .. class MyModel2(..): name = .. date = .. I need to render it in my template such that I am able to render all the mymodel2 objects that refer to mymodel. Currently, I do something like the following, {% for i in mymodel_obj_list %} {{i.name}} {% for m in i.refby.all|dictsortreversed:"date"|slice:"3" %} {{.. }} {% endfor %} <div> <!--This div toggles hidden/visible, shows next 12--> {% for n in i.refby.all|dictsortreversed:"date"|slice:"3:15" %} {{.. }} {% endfor %} </div> {% endfor %} As the code suggests, I only want to show the latest 3 mymodel2 objects, sorted in reverse order by date, although the next 12 do get loaded. Is this a very inefficient method of doing so? (Given that results for the refby.all could be a few 100s, and the total no of results in "mymodel_obj_list" is also in 100s - I use a paginator there). In which case, what's the best method to pre-compute these refby's and render them to the template? Should I do the sorting and computation in the view, and then pass it? I wasn't sure how to do this in order to maintain my pagination. View code looks something like, obj_list = Table.objects.filter(..) # Few 100 records pl = CustomPaginatorClass(obj_list...) And I pass the pl to the page as mymodel_obj_list. Thanks! A: I assume mymodel_obj_list is a QuerySet. You're accessing a foreign key field inside the loop, which means, by default, Django will look up each object's refby one at a time, when you access it. If you're displaying a lot of rows, this is extremely slow. Call select_related on the QuerySet, to pull in all of these foreign key fields in advance. https://docs.djangoproject.com/en/dev/ref/models/querysets/#select-related A: Should I do the sorting and computation in the view, and then pass it? Yes, definitely. It is not really a matter of performance (as Django's querysets are lazily evaluated, I suspect the final performance could be very similar in both cases) but of code organization. Templates should not contain any business logic, only presentation. Of course, sometimes this model breaks down, but in general you should try as much as possible to keep in that direction.
Django ManyToMany Template rendering and performance issues
I've got a django model that contains a manytomany relationship, of the type, class MyModel(models.Model): name = .. refby = models.ManyToManyField(MyModel2) .. class MyModel2(..): name = .. date = .. I need to render it in my template such that I am able to render all the mymodel2 objects that refer to mymodel. Currently, I do something like the following, {% for i in mymodel_obj_list %} {{i.name}} {% for m in i.refby.all|dictsortreversed:"date"|slice:"3" %} {{.. }} {% endfor %} <div> <!--This div toggles hidden/visible, shows next 12--> {% for n in i.refby.all|dictsortreversed:"date"|slice:"3:15" %} {{.. }} {% endfor %} </div> {% endfor %} As the code suggests, I only want to show the latest 3 mymodel2 objects, sorted in reverse order by date, although the next 12 do get loaded. Is this a very inefficient method of doing so? (Given that results for the refby.all could be a few 100s, and the total no of results in "mymodel_obj_list" is also in 100s - I use a paginator there). In which case, what's the best method to pre-compute these refby's and render them to the template? Should I do the sorting and computation in the view, and then pass it? I wasn't sure how to do this in order to maintain my pagination. View code looks something like, obj_list = Table.objects.filter(..) # Few 100 records pl = CustomPaginatorClass(obj_list...) And I pass the pl to the page as mymodel_obj_list. Thanks!
[ "I assume mymodel_obj_list is a QuerySet. You're accessing a foreign key field inside the loop, which means, by default, Django will look up each object's refby one at a time, when you access it. If you're displaying a lot of rows, this is extremely slow.\nCall select_related on the QuerySet, to pull in all of these foreign key fields in advance.\nhttps://docs.djangoproject.com/en/dev/ref/models/querysets/#select-related\n", "\nShould I do the sorting and\n computation in the view, and then pass\n it?\n\nYes, definitely.\nIt is not really a matter of performance (as Django's querysets are lazily evaluated, I suspect the final performance could be very similar in both cases) but of code organization.\nTemplates should not contain any business logic, only presentation. Of course, sometimes this model breaks down, but in general you should try as much as possible to keep into that direction.\n" ]
[ 4, 0 ]
[]
[]
[ "django", "django_templates", "many_to_many", "performance", "python" ]
stackoverflow_0001122605_django_django_templates_many_to_many_performance_python.txt
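A sketch of moving the sorting into the view, as the second answer above recommends. The view name, pl.object_list attribute, and the top_refby/more_refby names are assumptions built on the question's Table/CustomPaginatorClass description:

from django.shortcuts import render_to_response

def my_view(request):
    obj_list = Table.objects.filter()          # stand-in for the real filter(..)
    pl = CustomPaginatorClass(obj_list)
    for obj in pl.object_list:
        # One bounded query per object: the DB sorts by date and caps at 15,
        # replacing dictsortreversed/slice over refby.all in the template.
        revs = list(obj.refby.order_by('-date')[:15])
        obj.top_refby = revs[:3]               # shown immediately
        obj.more_refby = revs[3:]              # inside the toggled <div>
    return render_to_response('list.html', {'mymodel_obj_list': pl})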
Q: How do I grab an instance of a dynamic php script output? The following link outputs a different image every time you visit it: http://www.biglickmedia.com/art/random/index.php From a web browser, you can obviously right click it and save what you see. But if I were to visit this link from a command line (like through python+mechanize), how would I save the image that it would output? So basically, I need a command-line method to imitate right clicking and saving the image after initially visiting the site from a web browser. I can already use iMacro to do this, but I'd like a more elegant method. What can I use to accomplish this? Thanks! A: you might need something that creates a socket to the server and then issues an HTTP GET request for "art/random/index.php". save the payload from the HTTP response, and then you have your data what you would be creating is a simple HTTP client the unix command wget does this: $ wget http://www.biglickmedia.com/art/random/index.php A: <?php file_put_contents('C:\\random.gif', file_get_contents('http://www.biglickmedia.com/art/random/index.php')); ?> A: With Python and mechanize: import mechanize b = mechanize.Browser() t = b.open(url) image_type = t.info().typeheader # mime-type of image data = t.read() #bytes of image open(filename, 'wb').write(data)
How do I grab an instance of a dynamic php script output?
The following link outputs a different image every time you visit it: http://www.biglickmedia.com/art/random/index.php From a web browser, you can obviously right click it and save what you see. But if I were to visit this link from a command line (like through python+mechanize), how would I save the image that it would output? So basically, I need a command-line method to imitate right clicking and saving the image after initially visiting the site from a web browser. I can already use iMacro to do this, but I'd like a more elegant method. What can I use to accomplish this? Thanks!
[ "you might need something that creates a socket to the server and then issues a http GET request for \"art/random/index.php\". save the payload from the HTTP response, and then you have your data\nwhat you would be creating is a simple HTTP client\nthe unix command wget does this:\n$ wget http://www.biglickmedia.com/art/random/index.php\n\n", "<?php\n file_put_contents('C:\\\\random.gif', file_get_contents('http://www.biglickmedia.com/art/random/index.php'));\n?>\n\n", "With Python and mechanize:\nimport mechanize\n\nb = mechanize.Browser()\nt = b.open(url)\nimage_type = t.info().typeheader # mime-type of image\ndata = t.read() #bytes of image\nopen(filename, 'wb').write(data)\n\n" ]
[ 4, 1, 1 ]
[]
[]
[ "download", "image", "php", "python", "screen_scraping" ]
stackoverflow_0001123622_download_image_php_python_screen_scraping.txt
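For completeness, the same fetch-once-then-save idea with only the Python 2 standard library, no mechanize; the output filename is arbitrary:

import urllib2

url = 'http://www.biglickmedia.com/art/random/index.php'
data = urllib2.urlopen(url).read()  # one request returns one particular image
out = open('saved_image', 'wb')
out.write(data)                     # the saved copy stays fixed
out.close()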
Q: Modifying Collection when using a foreach loop in c# Basically, I would like to remove an item from a list whilst inside the foreach loop. I know that this is possible when using a for loop, but for other purposes, I would like to know if this is achievable using a foreach loop. In python we can achieve this by doing the following: a = [1, 2, 3, 4, 5, 6, 7, 8, 9] for i in a: print i if i == 1: a.pop(1) This gives the following Output >>>1 3 4 5 6 7 8 9 But when doing something similar in c#, I get an InvalidOperationException, I was wondering if there was a way of getting around this, without just simply using a for loop. The code in c# that I used when the exception was thrown: static void Main(string[] args) { List<string> MyList = new List<string>(new string[] { "1", "2", "3", "4", "5", "6", "7", "8", "9"}); foreach (string Item in MyList) { if (MyList.IndexOf(Item) == 0) { MyList.RemoveAt(1); } Console.WriteLine(Item); } } Thanks in advance A: You can't do this. From the docs for IEnumerator<T>: An enumerator remains valid as long as the collection remains unchanged. If changes are made to the collection, such as adding, modifying, or deleting elements, the enumerator is irrecoverably invalidated and its behavior is undefined. Alternatives are: Build up a new list of items to remove, then remove them all afterwards Use a normal "for" loop and make sure you're careful about not going over the same element twice or missing any out. (You've said you don't want to do this, but what you're trying to do just won't work.) Build a new collection containing only the elements you want to retain The last of these alternatives is the LINQ-like solution, where you'd typically write: var newList = oldList.Where(x => ShouldBeRetained(x)).ToList(); (Where ShouldBeRetained is whatever logic you want, of course.) The call to ToList() is only necessary if you actually want it in a list. This leads to more declarative code which is often easier to read. I can't easily guess what your original loop is meant to do (it seems pretty odd at the moment) whereas if you can express the logic purely in terms of the item, it can be a lot clearer. A: If all you need is to remove all items that satisfy a condition you could use the List<T>.RemoveAll method: List<string> MyList = new List<string>(new string[] { "1", "2", "3", "4", "5", "6", "7", "8", "9" }); MyList.RemoveAll(item => item == "1"); Note that this modifies the initial list. A: You definitely may not change a collection in any way when using a foreach loop on it. You can use a for loop and manage the index for yourself or make a copy of the collection and as you are looping the original, remove items from the copy that equal the item in the original. In both cases it's not quite as clear or convenient :).
Modifying Collection when using a foreach loop in c#
Basically, I would like to remove an item from a list whilst inside the foreach loop. I know that this is possible when using a for loop, but for other purposes, I would like to know if this is achievable using a foreach loop. In python we can achieve this by doing the following: a = [1, 2, 3, 4, 5, 6, 7, 8, 9] for i in a: print i if i == 1: a.pop(1) This gives the following Output >>>1 3 4 5 6 7 8 9 But when doing something similar in c#, I get an InvalidOperationException, I was wondering if there was a way of getting around this, without just simply using a for loop. The code in c# that I used when the exception was thrown: static void Main(string[] args) { List<string> MyList = new List<string>(new string[] { "1", "2", "3", "4", "5", "6", "7", "8", "9"}); foreach (string Item in MyList) { if (MyList.IndexOf(Item) == 0) { MyList.RemoveAt(1); } Console.WriteLine(Item); } } Thanks in advance
[ "You can't do this. From the docs for IEnumerator<T>:\n\nAn enumerator remains valid as long as\n the collection remains unchanged. If\n changes are made to the collection,\n such as adding, modifying, or deleting\n elements, the enumerator is\n irrecoverably invalidated and its\n behavior is undefined.\n\nAlternatives are:\n\nBuild up a new list of items to remove, then remove them all afterwards\nUse a normal \"for\" loop and make sure you're careful about not going over the same element twice or missing any out. (You've said you don't want to do this, but what you're trying to do just won't work.)\nBuild a new collection containing only the elements you want to retain\n\nThe last of these alternatives is the LINQ-like solution, where you'd typically write:\nvar newList = oldList.Where(x => ShouldBeRetained(x)).ToList();\n\n(Where ShouldBeRetained is whatever logic you want, of course.) The call to ToList() is only necessary if you actually want it in a list. This leads to more declarative code which is often easier to read. I can't easily guess what your original loop is meant to do (it seems pretty odd at the moment) whereas if you can express the logic purely in terms of the item, it can be a lot clearer.\n", "If all you need is to remove all items that satisfy a condition you could use the List<T>.RemoveAll method:\nList<string> MyList = new List<string>(new string[] { \"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\" });\nMyList.RemoveAll(item => item == \"1\");\n\nNote that this modifies the initial list.\n", "You definitely may not change a collection in any way when using a foreach loop on it.\nYou can use a for loop and manage the index for yourself or make a copy of the collection and as you are looping the original, remove items from the copy that equal the item in the original.\nIn both cases it's not quite as clear or convenient :).\n" ]
[ 26, 6, 1 ]
[]
[]
[ "c#", "invalidoperationexception", "python" ]
stackoverflow_0001124221_c#_invalidoperationexception_python.txt
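On the Python side of the comparison the same rule applies: don't mutate the list you are iterating over. Two safe idioms, sketched with the question's data:

a = [1, 2, 3, 4, 5, 6, 7, 8, 9]

# 1) Build a new list instead of mutating in place
kept = [x for x in a if x != 2]

# 2) Iterate over a copy, mutate the original
for x in a[:]:
    if x == 2:
        a.remove(x)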
Q: Socket? python -m SimpleHTTPServer Problem: to get the python -m SimpleHTTPServer command working. My domain is http://cs.edu.com/user/share_dir, but I cannot get the command working by typing this into a browser: http://cs.edu.com/user/share_dir:8000 Question: How can I get the command working? A: Your URL is incorrect. The port number should be specified after the domain name: http://cs.edu.com:8000/ Some other things you should keep in mind: If this is a shared host, port 8000 might already be in use by someone else The host might not be accessible from 'outside' of the network, due to firewall restrictions on non-standard ports The system you see internally could map to a different system outside, so the domain/hostname could be different from what you expect.
Socket? python -m SimpleHTTPServer
Problem: to get the python -m SimpleHTTPServer command working. My domain is http://cs.edu.com/user/share_dir, but I cannot get the command working by typing this into a browser: http://cs.edu.com/user/share_dir:8000 Question: How can I get the command working?
[ "Your URL is incorrect. The port number should be specified after the domain name:\nhttp://cs.edu.com:8000/\nSome other things you should keep in mind:\n\nIf this is a shared host, port 8000 might already be in use by someone else\nThe host might not be accessible from 'outside' of the network, due to firewall restrictions on non-standard ports\nThe system you see internally could map to a different system outside, so the domain/hostname could be different from what you expect.\n\n" ]
[ 5 ]
[]
[]
[ "python", "sockets" ]
stackoverflow_0001124420_python_sockets.txt
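For reference, a minimal way to reproduce the setup from the answer (Python 2 standard library); note the port goes right after the host in the URL:

# shell: run from the directory you want to serve
#   python -m SimpleHTTPServer 8000
# then browse to http://cs.edu.com:8000/ (not .../share_dir:8000)

import SimpleHTTPServer
import SocketServer

httpd = SocketServer.TCPServer(('', 8000),
                               SimpleHTTPServer.SimpleHTTPRequestHandler)
httpd.serve_forever()  # serves files from the current working directory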
Q: Python's subprocess.Popen returns the same stdout even though it shouldn't I'm having a very strange issue with Python's subprocess.Popen. I'm using it to call an external exe several times and keep the output in a list. Every time you call this external exe, it will return a different string. However, if I call it several times using Popen, it will always return the SAME string. =:-O It looks like Popen is always returning the same value from stdout, without re-running the exe. Maybe it is doing some sort of caching without actually calling the exe again. This is my code: def get_key(): from subprocess import Popen, PIPE args = [C_KEY_MAKER, '/26', USER_NAME, ENCRYPTION_TEMPLATE, '0', ] process = Popen(args, stdout=PIPE) output = process.communicate()[0].strip() return output if __name__ == '__main__': print get_key() # Returns a certain string print get_key() # Should return another string, but returns the same! What on Earth am I doing wrong?! A: It is possible (if C_KEY_MAKER's random behaviour is based on the current time in seconds, or similar) that when you run it twice on the command line, the time has changed in between runs and so you get a different output, but when python runs it, it runs it twice in such quick succession that the time hasn't changed and so it returns the same value twice in a row. A: Nothing. That works fine, on my own tests (aside from your indentation error at the bottom). The problem is either in your exe. or elsewhere. To clarify, I created a python program tfile.py cat > tfile.py #!/usr/bin/env python import random print random.random() And then altered the program to get rid of the indentation problem at the bottom, and to call tfile.py . It did give two different results. A: I don't know what is going wrong with your example, I cannot replicate this behaviour, however try a more by-the-book approach: def get_key(): from subprocess import Popen, PIPE args = [C_KEY_MAKER, '/26', USER_NAME, ENCRYPTION_TEMPLATE, '0', ] output = Popen(args, stdout=PIPE).stdout data = output.read().strip() output.close() return data A: Your code is not executable as is so it's hard to help you out much. Consider fixing indentation and syntax and making it self-contained, so that we can give it a try. On Linux, it seems to work fine according to Devin Jeanpierre.
Python's subprocess.Popen returns the same stdout even though it shouldn't
I'm having a very strange issue with Python's subprocess.Popen. I'm using it to call several times an external exe and keep the output in a list. Every time you call this external exe, it will return a different string. However, if I call it several times using Popen, it will always return the SAME string. =:-O It looks like Popen is returning always the same value from stdout, without recalling the exe. Maybe doing some sort of caching without actually calling again the exe. This is my code: def get_key(): from subprocess import Popen, PIPE args = [C_KEY_MAKER, '/26', USER_NAME, ENCRYPTION_TEMPLATE, '0', ] process = Popen(args, stdout=PIPE) output = process.communicate()[0].strip() return output if __name__ == '__main__': print get_key() # Returns a certain string print get_key() # Should return another string, but returns the same! What on Earth am I doing wrong?!
[ "It is possible (if C_KEY_MAKER's random behaviour is based on the current time in seconds, or similar) that when you run it twice on the command line, the time has changed in between runs and so you get a different output, but when python runs it, it runs it twice in such quick succession that the time hasn't changed and so it returns the same value twice in a row.\n", "Nothing. That works fine, on my own tests (aside from your indentation error at the bottom). The problem is either in your exe. or elsewhere.\nTo clarify, I created a python program tfile.py\ncat > tfile.py\n#!/usr/bin/env python\nimport random\nprint random.random()\n\nAnd then altered tthe program to get rid of the indentation problem at the bottom, and to call tfile.py . It did give two different results.\n", "I don't know what is going wrong with your example, I cannot replicate this behaviour, however try a more by-the-book approach:\ndef get_key():\n\n from subprocess import Popen, PIPE\n\n args = [C_KEY_MAKER, '/26', USER_NAME, ENCRYPTION_TEMPLATE, '0', ]\n output = Popen(args, stdout=PIPE).stdout\n data = output.read().strip()\n output.close()\n return data\n\n", "Your code is not executable as is so it's hard to help you out much. Consider fixing indentation and syntax and making it self-contained, so that we can give it a try.\nOn Linux, it seems to work fine according to Devin Jeanpierre.\n" ]
[ 3, 1, 1, 0 ]
[]
[]
[ "popen", "python", "stdout", "subprocess" ]
stackoverflow_0000717120_popen_python_stdout_subprocess.txt
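A small experiment supporting the accepted explanation: each Popen call really does start a fresh process, so identical outputs point at time-based seeding in the child. The date command is a Unix stand-in for the question's exe, not the original program:

import time
from subprocess import Popen, PIPE

def run_once(args):
    # a brand-new process per call; Popen itself caches nothing
    return Popen(args, stdout=PIPE).communicate()[0].strip()

print run_once(['date', '+%s'])  # seconds since the epoch
time.sleep(1.5)                  # let the clock move on
print run_once(['date', '+%s'])  # now the output differs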
Q: Problem deploying Python program (packaged with py2exe) I have a problem: I used py2exe for my program, and it worked on my computer. I packaged it with Inno Setup (still worked on my computer), but when I sent it to a different computer, I got the following error when trying to run the application: "CreateProcess failed; code 14001." The app won't run. (Note: I am using wxPython and the multiprocessing module in my program.) I googled for it a bit and found that the user should install some MS redistributable something, but I don't want to make life complicated for my users. Is there a solution? Versions: Python 2.6.2c1, py2exe 0.6.9, Windows XP Pro A: You need to include msvcr90.dll, Microsoft.VC90.CRT.manifest, and python.exe.manifest (renamed to [yourappname].exe.manifest) in your install directory. These files will be in the Python26 directory on your system if you installed Python with the "Just for me" option. Instructions for doing this can be found here. Don't forget to call multiprocessing.freeze_support() in your main function also, or you will have problems when you start a new process. While others have discussed including the MSVC runtime in your install package, the above solution works when you only want to distribute a single .zip file containing all your files. It avoids having to create a separate install package when you don't want that additional complication. A: You should be able to install that MS redistributable thingy as a part of your InnoSetup setup exe. A: When you run py2exe, look closely at the final messages when it's completed. It gives you a list of DLLs that it says are needed by the program, but that py2exe doesn't automatically bundle. Many in the list are reliably available on any Windows install, but there will be a few that you should manually bundle into your Inno Setup installation. Some are only needed if you want to deploy on older Windows installs e.g. Win 2000 or earlier. A: You can ship the runtime DLLs in question with your application as a "private assembly". This simply means putting a copy of a specially-named directory containing the runtime DLLs and their manifests alongside your executable. See my answer to this post.
Problem deploying Python program (packaged with py2exe)
I have a problem: I used py2exe for my program, and it worked on my computer. I packaged it with Inno Setup (still worked on my computer), but when I sent it to a different computer, I got the following error when trying to run the application: "CreateProcess failed; code 14001." The app won't run. (Note: I am using wxPython and the multiprocessing module in my program.) I googled for it a bit and found that the user should install some MS redistributable something, but I don't want to make life complicated for my users. Is there a solution? Versions: Python 2.6.2c1, py2exe 0.6.9, Windows XP Pro
[ "You need to include msvcr90.dll, Microsoft.VC90.CRT.manifest, and python.exe.manifest (renamed to [yourappname].exe.manifest) in your install directory. These files will be in the Python26 directory on your system if you installed Python with the \"Just for me\" option.\nInstructions for doing this can be found here. \nDon't forget to call multiprocessing.freeze_support() in your main function also, or you will have problems when you start a new process.\nWhile others have discussed including the MSVC runtime in your install package, the above solution works when you only want to distribute a single .zip file containing all your files. It avoids having to create a separate install package when you don't want that additional complication.\n", "You should be able to install that MS redistributable thingy as a part of your InnoSetup setup exe.\n", "When you run py2exe, look closely at the final messages when it's completed. It gives you a list of DLLs that it says are needed by the program, but that py2exe doesn't automatically bundle.\nMany in the list are reliably available on any Windows install, but there will be a few that you should manually bundle into your Inno Setup installation. Some are only needed if you want to deploy on older Windows installs e.g. Win 2000 or earlier.\n", "You can ship the runtime DLLs in question with your application as a \"private assembly\". This simply means putting a copy of a specially-named directory containing the runtime DLLs and their manifests alongside your executable.\nSee my answer to this post.\n" ]
[ 3, 1, 1, 0 ]
[]
[]
[ "deployment", "multiprocessing", "py2exe", "python", "wxpython" ]
stackoverflow_0001048651_deployment_multiprocessing_py2exe_python_wxpython.txt
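A sketch of a py2exe setup.py that bundles the CRT files named in the first answer next to the executable; the script name and file locations are illustrative only, not taken from the thread:

from distutils.core import setup
import py2exe  # registers the py2exe distutils command

setup(
    windows=['myapp.py'],  # hypothetical main script
    data_files=[('.', ['msvcr90.dll',
                       'Microsoft.VC90.CRT.manifest',
                       'myapp.exe.manifest'])],
)

# and in myapp.py, at the top of the __main__ block:
#   import multiprocessing
#   multiprocessing.freeze_support()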
Q: What's the simplest possible buildout.cfg to install Zope 2? I know that the recommended way to install Zope is with Buildout, but I can't seem to find a simple buildout.cfg to install a minimal Zope 2 environment. There are lots for installing Plone and other things. I've tried: [buildout] parts = zope [zope] recipe = plone.recipe.zope2install eggs = But I get: An internal error occured due to a bug in either zc.buildout or in a recipe being used: Traceback (most recent call last): File "/tmp/tmp2wqykW/zc.buildout-1.3.0-py2.4.egg/zc/buildout/buildout.py", line 1519, in main File "/tmp/tmp2wqykW/zc.buildout-1.3.0-py2.4.egg/zc/buildout/buildout.py", line 357, in install File "/tmp/tmp2wqykW/zc.buildout-1.3.0-py2.4.egg/zc/buildout/buildout.py", line 898, in __getitem__ File "/tmp/tmp2wqykW/zc.buildout-1.3.0-py2.4.egg/zc/buildout/buildout.py", line 982, in _initialize File "/home/analyser/site/eggs/plone.recipe.zope2install-3.1-py2.4.egg/plone/recipe/zope2install/__init__.py", line 73, in __init__ assert self.location or self.svn or self.url AssertionError A: You need to tell plone.recipe.zope2install where to download Zope. Also, you'll need a zope2instance section, to create a Zope instance for you. These recipes are only needed for Zope up to version 2.11, as of 2.12 Zope has been fully eggified. Here is a minimal Zope 2.11 buildout.cfg: [buildout] parts = instance [zope2] recipe = plone.recipe.zope2install url = http://www.zope.org/Products/Zope/2.11.3/Zope-2.11.3-final.tgz [instance] recipe = plone.recipe.zope2instance zope2-location = ${zope2:location} user = admin:admin http-address = 127.0.0.1:8080 Note that the instance part pulls in the zope2 part automatically as it depends on information provided by that part. As of Zope 2.12 installation is fully egg based. The following sample buildout.cfg is all you need to install the latest beta: [buildout] parts = scripts extends = http://svn.zope.org/*checkout*/Zope/tags/2.12.0b3/versions.cfg [versions] Zope2 = 2.12.0b3 [scripts] recipe = zc.recipe.egg:scripts eggs = Zope2 Note the extends; it pulls in a list of versions for all Zope2 egg dependencies from the Zope subversion tag for 2.12.0b3, to make sure you get a stable combination of eggs. Without it you may end up with newer egg versions that have introduced incompatibilities.
What's the simplest possible buildout.cfg to install Zope 2?
I know that the recommended way to install Zope is with Buildout, but I can't seem to find a simple buildout.cfg to install a minimal Zope 2 environment. There are lots for installing Plone and other things. I've tried: [buildout] parts = zope [zope] recipe = plone.recipe.zope2install eggs = But I get: An internal error occured due to a bug in either zc.buildout or in a recipe being used: Traceback (most recent call last): File "/tmp/tmp2wqykW/zc.buildout-1.3.0-py2.4.egg/zc/buildout/buildout.py", line 1519, in main File "/tmp/tmp2wqykW/zc.buildout-1.3.0-py2.4.egg/zc/buildout/buildout.py", line 357, in install File "/tmp/tmp2wqykW/zc.buildout-1.3.0-py2.4.egg/zc/buildout/buildout.py", line 898, in __getitem__ File "/tmp/tmp2wqykW/zc.buildout-1.3.0-py2.4.egg/zc/buildout/buildout.py", line 982, in _initialize File "/home/analyser/site/eggs/plone.recipe.zope2install-3.1-py2.4.egg/plone/recipe/zope2install/__init__.py", line 73, in __init__ assert self.location or self.svn or self.url AssertionError
[ "You need to tell plone.recipe.zope2install where to download Zope. Also, you'll need a zope2instance section, to create a Zope instance for you. These recipes are only needed for Zope up to version 2.11, as of 2.12 Zope has been fully eggified.\nHere is a minimal Zope 2.11 buildout.cfg:\n[buildout]\nparts = instance\n\n[zope2]\nrecipe = plone.recipe.zope2install\nurl = http://www.zope.org/Products/Zope/2.11.3/Zope-2.11.3-final.tgz\n\n[instance]\nrecipe = plone.recipe.zope2instance\nzope2-location = ${zope2:location}\nuser = admin:admin\nhttp-address = 127.0.0.1:8080\n\nNote that the instance part pulls in the zope2 part automatically as it depends on information provided by that part.\nAs of Zope 2.12 installation is fully egg based. The following sample buildout.cfg is all you need to install the latest beta:\n[buildout]\nparts = scripts\nextends = http://svn.zope.org/*checkout*/Zope/tags/2.12.0b3/versions.cfg\n\n[versions]\nZope2 = 2.12.0b3\n\n[scripts]\nrecipe = zc.recipe.egg:scripts\neggs = Zope2\n\nNote the extends; it pulls in a list of versions for all Zope2 egg dependencies from the Zope subversion tag for 2.12.0b3, to make sure you get a stable combination of eggs. Without it you may end up with newer egg versions that have introduced incompatibilities.\n" ]
[ 5 ]
[]
[]
[ "buildout", "python", "zope" ]
stackoverflow_0001120758_buildout_python_zope.txt
Q: What is the best practice in deploying application on Windows? I have an application that consists of several .dlls, .libs, .pyd (python library), .exe, and .class files. What is the best practice in the deployment process? I plan to put the .dlls - managed into the GAC and unmanaged into the WinSxS folder. What should I do with .libs, .exe, .class and .pyd? Is it OK to put them in /ProgramFiles/ApplicationName/bin /ProgramFiles/ApplicationName/lib /ProgramFiles/ApplicationName/java /ProgramFiles/ApplicationName/python ? Thanks Tamara A: The current convention seems to be "/ProgramFiles/YourCompany/YourApplication/..." As for how to structure things under that folder, it is really dependent on what your application is doing, and how it's structured. Do make sure to store per-user information in Isolated Storage. A: I agree that /ProgramFiles/CompanyName/AppName is the convention. But you might also have to look at who will install the application. More and more users are no longer given admin rights on their Windows box at work, so they can't install under ProgramFiles. So depending on your target users and how you envisage them getting your application, you might want to install it in a location they can write in (like the user's AppData).
What is the best practice in deploying application on Windows?
I have an application that consists of several .dlls, .libs, .pyd (python library), .exe, and .class files. What is the best practice in the deployment process? I plan to put the .dlls - managed into the GAC and unmanaged into the WinSxS folder. What should I do with .libs, .exe, .class and .pyd? Is it OK to put them in /ProgramFiles/ApplicationName/bin /ProgramFiles/ApplicationName/lib /ProgramFiles/ApplicationName/java /ProgramFiles/ApplicationName/python ? Thanks Tamara
[ "The current convention seems to be \n\"/ProgramFiles/YourCompany/YourApplication/...\" \nAs for how to structure things under that folder, it is really dependent on what your application is doing, and how it's structured. Do make sure to store per-user information in Isolated Storage.\n", "I agree that /ProgramFiles/CompanyName/AppName is the convention. But you might also have to look at who will install the application. More and more users are no longer given admin rights on their Windows box at work, so they can't install under ProgramFiles. So depending on your target users and how you envisage them getting your application, you might want to install it in a location they can write in (like the user's AppData). \n" ]
[ 2, 1 ]
[]
[]
[ "deployment", "python", "windows" ]
stackoverflow_0001124667_deployment_python_windows.txt
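A Python sketch of the two install locations discussed in the answers: the per-machine Program Files tree versus the per-user AppData tree that works without admin rights. The company/app names are placeholders:

import os

program_files = os.environ.get('PROGRAMFILES', r'C:\Program Files')
machine_dir = os.path.join(program_files, 'CompanyName', 'AppName')

appdata = os.environ.get('APPDATA', os.path.expanduser('~'))
user_dir = os.path.join(appdata, 'CompanyName', 'AppName')

print machine_dir  # needs admin rights to write
print user_dir     # writable by an ordinary user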
Q: Generic object "ownership" in Django Suppose I have the following models: class User(models.Model): pass class A(models.Model): user = models.ForeignKey(User) class B(models.Model): a = models.ForeignKey(A) That is, each user owns some objects of type A, and also some of type B. Now, I'm writing a generic interface that will allow the user to view any objects that it owns. In a view, of course I can't say something like "objects = model.objects.filter(user=user)", since B has no attribute 'user'. What's the best approach to take here? A: The way I would do it is to simply go through the object 'a' on class B. So in the view, I would do: objects = B.objects.get(user=a.user) objects += A.objects.get(user=user) The reason I would do it this way is because these are essentially two database queries, one to retrieve a bunch of object A's and one to retrieve a bunch of object B's. I'm not certain it's possible in Django to retrieve a list of both, simply because of the way database inheritance works. You could use model inheritance as well. This would be making a base class for both objects A and B that contains the common fields and then retrieving a list of the base classes, then convert to their proper types. Edit: In response to your comment, I suggest then making a base class that contains this line: user = models.ForeignKey(User) Class A and B can then inherit from that base class, and you can thus now just get all of the objects from that class. Say your base class was called 'C': objects = C.objects.get(user=user) That will obtain all of the C's, and you can then figure out their specific types by going through each object in objects and determining their type: for object in objects: if object.A: #code if object.B: #code
Generic object "ownership" in Django
Suppose I have the following models: class User(models.Model): pass class A(models.Model): user = models.ForeignKey(User) class B(models.Model): a = models.ForeignKey(A) That is, each user owns some objects of type A, and also some of type B. Now, I'm writing a generic interface that will allow the user to view any objects that it owns. In a view, of course I can't say something like "objects = model.objects.filter(user=user)", since B has no attribute 'user'. What's the best approach to take here?
[ "The way I would do it is to simply go through the object 'a' on class B. So in the view, I would do:\nobjects = B.objects.get(user=a.user)\nobjects += A.objects.get(user=user)\n\nThe reason I would do it this way is because these are essentially two database queries, one to retrieve a bunch of object A's and one to retrieve a bunch of object B's. I'm not certain it's possible in Django to retrieve a list of both, simply because of the way database inheritance works.\nYou could use model inheritance as well. This would be making a base class for both objects A and B that contains the common fields and then retrieving a list of the base classes, then convert to their proper types.\nEdit: In response to your comment, I suggest then making a base class that contains this line:\nuser = models.ForeignKey(User)\n\nClass A and B can then inherit from that base class, and you can thus now just get all of the objects from that class. Say your base class was called 'C':\nobjects = C.objects.get(user=user)\n\nThat will obtain all of the C's, and you can then figure out their specific types by going through each object in objects and determining their type:\nfor object in objects:\n if object.A:\n #code\n if object.B:\n #code \n\n" ]
[ 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001125006_django_python.txt
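One way to spell out the base-class suggestion from the answer: an abstract base keeps a single user field without creating an extra table, at the cost of querying each concrete model separately (the two-query pattern from the answer's first snippet). The class names here are illustrative:

from django.db import models
from django.contrib.auth.models import User

class Owned(models.Model):
    user = models.ForeignKey(User)

    class Meta:
        abstract = True  # no table for Owned itself

class A(Owned):
    pass

class B(Owned):
    pass

def owned_objects(user):
    # one query per concrete model, concatenated into a single list
    return list(A.objects.filter(user=user)) + list(B.objects.filter(user=user))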
Q: Caching PHP script outputs on the client side I have a php script that outputs a random image each time it's called. So when I open the script in a web browser, it shows one image and if I refresh, another image shows up. I'm trying to capture the correct image from visiting the web site through a command line (via mechanize). I used urllib2.urlopen(...) to grab the image, but each time I do that I get a different image. I want to be able to consistently grab the same image. How can I accomplish that? Thanks! UPDATE: Here's an example of what I'm talking about. If you reload this image in a web browser, a different image pops up each time. If you right click and save, you get the correct image. And if you keep doing that, you keep getting the correct image... BUT, how do you that from a command line? http://www.biglickmedia.com/art/random/index.php A: Most likely your image is cached by browser set this: <?php header("Cache-Control: no-cache, must-revalidate"); // HTTP/1.1 header("Expires: Sat, 26 Jul 1997 05:00:00 GMT"); // Date in the past ?> and each time you generating the image use another name for it (can be done by adding milliseconds to original one). A: You can try caching the image generated to disk and make it available to the user via a different link. After the image is generated, move it to a temp folder, where the user can download the generated static image. After it is downloaded, delete to make space. A: When you save it from the browser, it is not going back to the server to re-request the image, it is serving the one that it is displaying from its cache. Your command line client needs to do the same thing. You need to save the image every time you request it. Then when you find the one you want to keep you need to copy the already saved image to wherever it is you want to keep it permanently. If the server is always serving a new random image there's nothing else you can do. A: Grab it once and cache it on your side.
Caching PHP script outputs on the client side
I have a php script that outputs a random image each time it's called. So when I open the script in a web browser, it shows one image and if I refresh, another image shows up. I'm trying to capture the correct image from visiting the web site through a command line (via mechanize). I used urllib2.urlopen(...) to grab the image, but each time I do that I get a different image. I want to be able to consistently grab the same image. How can I accomplish that? Thanks! UPDATE: Here's an example of what I'm talking about. If you reload this image in a web browser, a different image pops up each time. If you right click and save, you get the correct image. And if you keep doing that, you keep getting the correct image... BUT, how do you that from a command line? http://www.biglickmedia.com/art/random/index.php
[ "Most likely your image is cached by browser set this:\n<?php\n header(\"Cache-Control: no-cache, must-revalidate\"); // HTTP/1.1\n header(\"Expires: Sat, 26 Jul 1997 05:00:00 GMT\"); // Date in the past\n?>\n\nand each time you generating the image use another name for it (can be done by adding milliseconds to original one).\n", "You can try caching the image generated to disk and make it available to the user via a different link.\nAfter the image is generated, move it to a temp folder, where the user can download the generated static image. After it is downloaded, delete to make space.\n", "When you save it from the browser, it is not going back to the server to re-request the image, it is serving the one that it is displaying from its cache.\nYour command line client needs to do the same thing. You need to save the image every time you request it. Then when you find the one you want to keep you need to copy the already saved image to wherever it is you want to keep it permanently.\nIf the server is always serving a new random image there's nothing else you can do.\n", "Grab it once and cache it on your side.\n" ]
[ 1, 1, 1, 0 ]
[]
[]
[ "caching", "mechanize", "php", "python" ]
stackoverflow_0001117890_caching_mechanize_php_python.txt
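The "grab it once and cache it on your side" suggestion, sketched for the command-line client (Python 2; the cache filename is arbitrary):

import os
import urllib2

URL = 'http://www.biglickmedia.com/art/random/index.php'
CACHE = 'random_image.cache'

def get_image():
    if not os.path.exists(CACHE):          # first call: fetch and store
        data = urllib2.urlopen(URL).read()
        f = open(CACHE, 'wb')
        f.write(data)
        f.close()
    return open(CACHE, 'rb').read()        # later calls: same bytes back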
Q: Question about paths in Python let's say i have directory paths looking like this: this/is/the/basedir/path/a/include this/is/the/basedir/path/b/include this/is/the/basedir/path/a this/is/the/basedir/path/b In Python, how can i split these paths up so they will look like this instead: a/include b/include a b If i run os.path.split(path)[1] it will display: include include a b What should i be trying out here, should i be looking at some regex command or can this be done without it? Thanks in advance. EDIT ALL: I solved it using regular expressions, damn handy tool :) A: Perhaps something like this, depends on how hardcoded your prefix is: def removePrefix(path, prefix): plist = path.split(os.sep) pflist = prefix.split(os.sep) rest = plist[len(pflist):] return os.path.join(*rest) Usage: print removePrefix("this/is/the/basedir/path/b/include", "this/is/the/basedir/path") b/include Assuming you're on a platform where the directory separator (os.sep) really is the forward slash). This code tries to handle paths as something a little more high-level than mere strings. It's not optimal though, you could (or should) do more cleaning and canonicalization to be safer. A: what about partition? It Split the string at the first occurrence of sep, and return a 3-tuple containing the part before the separator, the separator itself, and the part after the separator. If the separator is not found, return a 3-tuple containing the string itself, followed by two empty strings. data = """this/is/the/basedir/path/a/include this/is/the/basedir/path/b/include this/is/the/basedir/path/a this/is/the/basedir/path/b""" for line in data.splitlines(): print line.partition("this/is/the/basedir/path/")[2] #output a/include b/include a b Updated for the new comment by author: It looks like u need rsplit for different directories by whether the directory endswith "include" of not: import os.path data = """this/is/the/basedir/path/a/include this/is/the/basedir/path/b/include this/is/the/basedir/path/a this/is/the/basedir/path/b""" for line in data.splitlines(): if line.endswith('include'): print '/'.join(line.rsplit("/",2)[-2:]) else: print os.path.split(line)[1] #or just # print line.rsplit("/",1)[-1] #output a/include b/include a b A: Maybe something like this: result = [] prefix = os.path.commonprefix(list_of_paths) for path in list_of_paths: result.append(os.path.relpath(path, prefix)) This works only in 2.6. The relapath in 2.5 and before does the work only in case the path is the current working directory. A: While the criterion is not 100% clear, it seems from the OP's comment that the key issue is specifically whether the path's last component ends in "include". If that is the case, and to avoid going wrong when the last component is e.g. "dontinclude" (as another answer does by trying string matching instead of path matching), I suggest: def lastpart(apath): pieces = os.path.split(apath) final = -1 if pieces[-1] == 'include': final = -2 return '/'.join(pieces[final:])
Question about paths in Python
Let's say I have directory paths looking like this: this/is/the/basedir/path/a/include this/is/the/basedir/path/b/include this/is/the/basedir/path/a this/is/the/basedir/path/b In Python, how can I split these paths up so they will look like this instead: a/include b/include a b If I run os.path.split(path)[1] it will display: include include a b What should I be trying out here, should I be looking at some regex command or can this be done without it? Thanks in advance. EDIT ALL: I solved it using regular expressions, damn handy tool :)
[ "Perhaps something like this, depends on how hardcoded your prefix is:\ndef removePrefix(path, prefix):\n plist = path.split(os.sep)\n pflist = prefix.split(os.sep)\n rest = plist[len(pflist):]\n return os.path.join(*rest)\n\nUsage:\nprint removePrefix(\"this/is/the/basedir/path/b/include\", \"this/is/the/basedir/path\")\nb/include\n\nAssuming you're on a platform where the directory separator (os.sep) really is the forward slash).\nThis code tries to handle paths as something a little more high-level than mere strings. It's not optimal though, you could (or should) do more cleaning and canonicalization to be safer.\n", "what about partition?\nIt Split the string at the first occurrence of sep, and return a 3-tuple containing the part before the separator, the separator itself, and the part after the separator. If the separator is not found, return a 3-tuple containing the string itself, followed by two empty strings.\ndata = \"\"\"this/is/the/basedir/path/a/include\nthis/is/the/basedir/path/b/include\nthis/is/the/basedir/path/a\nthis/is/the/basedir/path/b\"\"\"\nfor line in data.splitlines():\n print line.partition(\"this/is/the/basedir/path/\")[2]\n\n#output\na/include\nb/include\na\nb\n\nUpdated for the new comment by author:\nIt looks like u need rsplit for different directories by whether the directory endswith \"include\" of not:\nimport os.path\ndata = \"\"\"this/is/the/basedir/path/a/include\nthis/is/the/basedir/path/b/include\nthis/is/the/basedir/path/a\nthis/is/the/basedir/path/b\"\"\"\nfor line in data.splitlines():\n if line.endswith('include'):\n print '/'.join(line.rsplit(\"/\",2)[-2:])\n else:\n print os.path.split(line)[1]\n #or just\n # print line.rsplit(\"/\",1)[-1]\n#output\na/include\nb/include\na\nb\n\n", "Maybe something like this:\nresult = []\n\nprefix = os.path.commonprefix(list_of_paths)\nfor path in list_of_paths:\n result.append(os.path.relpath(path, prefix))\n\nThis works only in 2.6. The relapath in 2.5 and before does the work only in case the path is the current working directory.\n", "While the criterion is not 100% clear, it seems from the OP's comment that the key issue is specifically whether the path's last component ends in \"include\". If that is the case, and to avoid going wrong when the last component is e.g. \"dontinclude\" (as another answer does by trying string matching instead of path matching), I suggest:\ndef lastpart(apath):\n pieces = os.path.split(apath)\n final = -1\n if pieces[-1] == 'include':\n final = -2\n return '/'.join(pieces[final:])\n\n" ]
[ 3, 1, 1, 0 ]
[]
[]
[ "operating_system", "python" ]
stackoverflow_0001125399_operating_system_python.txt
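A regex-free take on the final answer's idea, splitting on every separator so 'include' keeps its parent directory and anything else stands alone; the forward slash is assumed, as in the sample data:

paths = ['this/is/the/basedir/path/a/include',
         'this/is/the/basedir/path/b/include',
         'this/is/the/basedir/path/a',
         'this/is/the/basedir/path/b']

def last_part(path):
    pieces = path.split('/')
    if pieces[-1] == 'include':
        return '/'.join(pieces[-2:])  # keep parent + 'include'
    return pieces[-1]

for p in paths:
    print last_part(p)  # a/include, b/include, a, b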
Q: Running Python code from a server? Problem: to run one.py from a server. Error When I try to do it in Mac, I get errors: $python http://cs.edu.com/u/user/TEST/one.py ~ /Library/Frameworks/Python.framework/Versions/2.5/Resources/Python.app/Contents/MacOS/Python: can't open file 'http://cs.edu.com/u/user/TEST/one.py': [Errno 2] No such file or directory one.py is like: print 1 When I do it in Ubuntu, I get "file is not found". Question: How can I run Python code from a server? A: So far as I know, the standard Python shell doesn't know how to execute remote scripts. Try using curl or wget to retrieve the script and run it from the local copy. $ wget http://cs.edu.com/u/user/TEST/one.py $ python one.py UPDATE: Based on the question referenced in the comment to this answer, you need to execute one.py based on incoming HTTP requests from end users. The simplest solution is probably CGI, but depending on what else you need to do, a more robust solution might involve a framework of some sort. They each have there strengths and weaknesses, so you should probably consider your requirements carefully before jumping in. A: You can't do this. If you have SSH access to the server you can then run the python script located on the server using your SSH connection. If you want to write websites in python google python web frameworks for examples of how to set up and run websites with Python. A: wget http://cs.edu.com/u/user/TEST/one.py python one.py A: You can mount the remote servers directory with some sort of file networking, like NFS or something. That way it becomes local. But a better idea is that you explain why you are trying to do this, so we can solve the real usecase. There is most likely tons of better solutions, depending on what the real problem is. A: The python interpreter doesn't know how to read from a URL. The file needs to be local. However, if you are trying to get the server to execute the python code, you can use mod_python or various kinds of CGI. You can't do what you are trying to do the way you are trying to do it. A: Maybe something like this? python -c "import urllib; eval(urllib.urlopen(\"http://cs.edu.com/u/user/TEST/one.py").read())" A: OK, now when you explained, here is a new answer. You run that script with python one.py It's a server side-script. It's run on the server. It's also located on the server. Why you try to access it via http is beyond me. Run it from the file system. Although, you should probably look into running Grok or Django or something. This way you'll just end up writing your own Python web framework, you may just as well use one that exists instead. ;)
Running Python code from a server?
Problem: to run one.py from a server. Error When I try to do it in Mac, I get errors: $python http://cs.edu.com/u/user/TEST/one.py ~ /Library/Frameworks/Python.framework/Versions/2.5/Resources/Python.app/Contents/MacOS/Python: can't open file 'http://cs.edu.com/u/user/TEST/one.py': [Errno 2] No such file or directory one.py is like: print 1 When I do it in Ubuntu, I get "file is not found". Question: How can I run Python code from a server?
[ "So far as I know, the standard Python shell doesn't know how to execute remote scripts. Try using curl or wget to retrieve the script and run it from the local copy.\n$ wget http://cs.edu.com/u/user/TEST/one.py\n$ python one.py\n\nUPDATE: Based on the question referenced in the comment to this answer, you need to execute one.py based on incoming HTTP requests from end users. The simplest solution is probably CGI, but depending on what else you need to do, a more robust solution might involve a framework of some sort. They each have there strengths and weaknesses, so you should probably consider your requirements carefully before jumping in.\n", "You can't do this. If you have SSH access to the server you can then run the python script located on the server using your SSH connection. If you want to write websites in python google python web frameworks for examples of how to set up and run websites with Python.\n", "wget http://cs.edu.com/u/user/TEST/one.py\npython one.py\n\n", "You can mount the remote servers directory with some sort of file networking, like NFS or something. That way it becomes local.\nBut a better idea is that you explain why you are trying to do this, so we can solve the real usecase. There is most likely tons of better solutions, depending on what the real problem is.\n", "The python interpreter doesn't know how to read from a URL. The file needs to be local. \nHowever, if you are trying to get the server to execute the python code, you can use mod_python or various kinds of CGI. \nYou can't do what you are trying to do the way you are trying to do it.\n", "Maybe something like this?\npython -c \"import urllib; eval(urllib.urlopen(\\\"http://cs.edu.com/u/user/TEST/one.py\").read())\"\n\n", "OK, now when you explained, here is a new answer.\nYou run that script with \n python one.py\n\nIt's a server side-script. It's run on the server. It's also located on the server. Why you try to access it via http is beyond me. Run it from the file system.\nAlthough, you should probably look into running Grok or Django or something. This way you'll just end up writing your own Python web framework, you may just as well use one that exists instead. ;)\n" ]
[ 3, 3, 1, 1, 1, 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001125637_python.txt
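Putting the wget-then-run answers together in one script (Python 2; the URL is the one from the question). Note that execfile runs whatever the server sent, so only do this with a URL you trust:

import urllib2

url = 'http://cs.edu.com/u/user/TEST/one.py'
source = urllib2.urlopen(url).read()  # download the script text

f = open('one.py', 'w')
f.write(source)                       # local copy, like wget
f.close()

execfile('one.py')                    # run the local copy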
Q: List comprehension python What is the equivalent list comprehension in python of the following Common Lisp code: (loop for x = input then (if (evenp x) (/ x 2) (1+ (* 3 x))) collect x until (= x 1)) A: A list comprehension is used to take an existing sequence and perform some function and/or filter to it, resulting in a new list. So, in this case a list comprehension is not appropriate since you don't have a starting sequence. An example with a while loop: numbers = [] x=input() while x != 1: numbers.append(x) if x % 2 == 0: x /= 2 else: x = 3 * x + 1 A: I believe you are writing the hailstone sequence, although I could be wrong since I am not fluent in Lisp. As far as I know, you can't do this in only a list comprehension, since each element depends on the last. How I would do it would be this def hailstone(n): yield n while n!=1: if n%2 == 0: # even n = n / 2 else: # odd n = 3 * n + 1 yield n list = [ x for x in hailstone(input) ] Of course, input would hold whatever your input was. My hailstone function could probably be more concise. My goal was clarity. A: Python doesn't have this kind of control structure built in, but you can generalize this into a function like this: def unfold(evolve, initial, until): state = initial yield state while not until(state): state = evolve(state) yield state After this your expression can be written as: def is_even(n): return not n % 2 unfold(lambda x: x/2 if is_even(x) else 3*x + 1, initial=input, until=lambda x: x == 1) But the Pythonic way to do it is using a generator function: def produce(x): yield x while x != 1: x = x / 2 if is_even(x) else 3*x + 1 yield x A: The hackery referred to by Laurence: You can do it in one list comprehension, it just ends up being AWFUL python. Unreadable python. Terrible python. I only present the following as a curiosity, not as an actual answer. Don't do this in code you actually want to use, only if you fancy having a play with the inner workings of python. So, 3 approaches: Helping List 1 1: Using a helping list, answer ends up in the helping list. This appends values to the list being iterated over until you've reached the value you want to stop at. A = [10] print [None if A[-1] == 1 else A.append(A[-1]/2) if (A[-1]%2==0) else A.append(3*A[-1]+1) for i in A] print A result: [None, None, None, None, None, None, None] [10, 5, 16, 8, 4, 2, 1] Helping List 2 2: Using a helping list, but with the result being the output of the list comprehension. This mostly relies on list.append(...) returning None, not None evaluating as True and True being considered 1 for the purposes of arithmetic. Sigh. A=[10] print [A[0]*(not A.append(A[0])) if len(A) == 1 else 1 if A[-1] == 2 else (A[-1]/2)*(not A.append(A[-1]/2)) if (A[-1]%2==0) else (3*A[-1]+1)*(not A.append(3*A[-1]+1)) for i in A] result: [10, 5, 16, 8, 4, 2, 1] Referencing the List Comprehension from within 3: Not using a helping list, but referring back to the list comprehension as it's being built. This is a bit fragile, and probably won't work in all environments. If it doesn't work, try running the code on its own: from itertools import chain, takewhile initialValue = 10 print [i if len(locals()['_[1]']) == 0 else (locals()['_[1]'][-1]/2) if (locals()['_[1]'][-1]%2==0) else (3*locals()['_[1]'][-1]+1) for i in takewhile(lambda x:x>1, chain([initialValue],locals()['_[1]']))] result: [10, 5, 16, 8, 4, 2, 1] So, now forget that you read this. This is dark, dark and dingy python. Evil python. And we all know python isn't evil. Python is lovely and nice. So you can't have read this, because this sort of thing can't exist. Good good. A: As Kiv said, a list comprehension requires a known sequence to iterate over. Having said that, if you had a sequence and were fixated on using a list comprehension, your solution would probably include something like this: [not (x % 2) and (x / 2) or (3 * x + 1) for x in sequence] Mike Cooper's answer is a better solution because it both retains the x != 1 termination, and this line doesn't read cleanly. A: 1 I have discovered a truly marvelous proof of this, which this margin is too narrow to contain. In all seriousness though, I don't believe you can do this with Python list comprehensions. They have basically the same power as map and filter, so you can't break out or look at previous values without resorting to hackery.
List comprehension python
What is the equivalent list comprehension in python of the following Common Lisp code: (loop for x = input then (if (evenp x) (/ x 2) (1+ (* 3 x))) collect x until (= x 1))
[ "A list comprehension is used to take an existing sequence and perform some function and/or filter to it, resulting in a new list. So, in this case a list comprehension is not appropriate since you don't have a starting sequence. An example with a while loop:\nnumbers = []\nx=input()\nwhile x != 1:\n numbers.append(x)\n if x % 2 == 0: x /= 2\n else: x = 3 * x + 1\n\n", "I believe you are writing the hailstone sequence, although I could be wrong since I am not fluent in Lisp.\nAs far as I know, you can't do this in only a list comprehension, since each element depends on the last.\nHow I would do it would be this\ndef hailstone(n):\n yield n\n while n!=1\n if n%2 == 0: # even\n n = n / 2\n else: # odd\n n = 3 * n + 1\n yield n\n\nlist = [ x for x in hailstone(input) ]\n\nOf course, input would hold whatever your input was.\nMy hailstone function could probably be more concise. My goal was clarity.\n", "Python doesn't have this kind of control structure built in, but you can generalize this into a function like this:\ndef unfold(evolve, initial, until):\n state = initial\n yield state\n while not until(state):\n state = evolve(state)\n yield state\n\nAfter this your expression can be written as:\ndef is_even(n): return not n % 2\nunfold(lambda x: x/2 if is_even(x) else 3*x + 1,\n initial=input, until=lambda x: x == 1)\n\nBut the Pythonic way to do it is using a generator function:\ndef produce(x):\n yield x\n while x != 1:\n x = x / 2 if is_even(x) else 3*x + 1\n yield x\n\n", "The hackery referred to by Laurence:\nYou can do it in one list comprehension, it just ends up being AWFUL python. Unreadable python. Terrible python. I only present the following as a curiosity, not as an actual answer. Don't do this in code you actually want to use, only if you fancy having a play with the inner workings on python. \nSo, 3 approaches: \n\nHelping List 1\n1: Using a helping list, answer ends up in the helping list. This appends values to the list being iterated over until you've reached the value you want to stop at.\nA = [10]\nprint [None if A[-1] == 1\n else A.append(A[-1]/2) if (A[-1]%2==0) \n else A.append(3*A[-1]+1) \n for i in A]\nprint A\n\nresult:\n[None, None, None, None, None, None, None]\n[10, 5, 16, 8, 4, 2, 1]\n\n\nHelping List 2\n2: Using a helping list, but with the result being the output of the list comprehension. This mostly relies on list.append(...) returning None, not None evaluating as True and True being considered 1 for the purposes of arithmetic. Sigh.\nA=[10]\nprint [A[0]*(not A.append(A[0])) if len(A) == 1 \n else 1 if A[-1] == 2 else (A[-1]/2)*(not A.append(A[-1]/2)) if (A[-1]%2==0) \n else (3*A[-1]+1)*(not A.append(3*A[-1]+1)) \n for i in A]\n\nresult:\n[10, 5, 16, 8, 4, 2, 1]\n\n\nReferencing the List Comprehension from within\n3: Not using a helping list, but referring back to the list comprehension as it's being built. This is a bit fragile, and probably wont work in all environments. If it doesn't work, try running the code on its own:\nfrom itertools import chain, takewhile\ninitialValue = 10\nprint [i if len(locals()['_[1]']) == 0\n else (locals()['_[1]'][-1]/2) if (locals()['_[1]'][-1]%2==0)\n else (3*locals()['_[1]'][-1]+1) \n for i in takewhile(lambda x:x>1, chain([initialValue],locals()['_[1]']))]\n\nresult:\n[10, 5, 16, 8, 4, 2, 1]\n\n\nSo, now forget that you read this. This is dark, dark and dingy python. Evil python. And we all know python isn't evil. Python is lovely and nice. So you can't have read this, because this sort of thing can't exist. Good good. 
\n", "As Kiv said, a list comprehension requires a known sequence to iterate over.\nHaving said that, if you had a sequence and were fixated on using a list comprehension, your solution would probably include something like this:\n[not (x % 2) and (x / 2) or (3 * x + 1) for x in sequence]\n\nMike Cooper's answer is a better solution because it both retains the x != 1 termination, and this line doesn't read cleanly.\n", "1\nI have discovered a truly marvelous proof of this, which this margin is too narrow to contain.\nIn all seriousness though, I don't believe you can do this with Python list comprehensions. They have basically the same power as map and filter, so you can't break out or look at previous values without resorting to hackery.\n" ]
[ 10, 9, 4, 4, 3, 0 ]
[]
[]
[ "lisp", "list_comprehension", "python" ]
stackoverflow_0001122612_lisp_list_comprehension_python.txt
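A compact, readable middle ground between the generator answers and the plain loop: an infinite generator cut off when the sequence reaches 1 (27 is just a sample starting value):

def hailstone(x):
    while True:
        yield x
        x = x // 2 if x % 2 == 0 else 3 * x + 1

seq = []
for n in hailstone(27):
    seq.append(n)
    if n == 1:
        break

print seq[:6]  # [27, 82, 41, 124, 62, 31]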
Q: webdav for wsgi/python? I want to add WebDAV to whiff. This would be easy if I could find a simple WSGI component that implements WebDAV. I found http://pyfilesync.berlios.de/pyfileserver.html, but it seems to insist on using an external configuration file. I want to control everything via a Python API. Any ideas? Thanks! A: I recently picked up PyFileServer for further development: http://code.google.com/p/wsgidav/ After the config file is read, it's only a plain dictionary, that is passed to the WSGI Application object's constructor. So it should be pretty easy to do what you want. I didn't use whiff yet, but you are invited to contact me or join the project :-)
webdav for wsgi/python?
I want to add WebDAV to whiff. This would be easy if I could find a simple WSGI component that implements WebDAV. I found http://pyfilesync.berlios.de/pyfileserver.html, but it seems to insist on using an external configuration file. I want to control everything via a Python API. Any ideas? Thanks!
[ "I recently picked up PyFileServer for further development:\n http://code.google.com/p/wsgidav/\nAfter the config file is read, it's only a plain dictionary, that is passed to the WSGI Application object's constructor.\nSo it should be pretty easy to do what you want.\nI didn't use whiff yet, but you are invited to contact me or join the project :-)\n" ]
[ 3 ]
[]
[]
[ "python", "webdav", "wsgi" ]
stackoverflow_0001050217_python_webdav_wsgi.txt
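A sketch of the all-in-Python configuration described in the answer: a plain dict passed to the WSGI application object. The config keys shown here are assumptions; WsgiDAV's exact schema has changed between releases, so check the project docs before relying on them:

from wsgidav.wsgidav_app import WsgiDAVApp

config = {
    'host': '127.0.0.1',
    'port': 8080,
    'provider_mapping': {'/': '/path/to/shared/folder'},
}
application = WsgiDAVApp(config)  # an ordinary WSGI app, mountable anywhere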
Q: Platform-independent version of /var/lib and ~/.config We see that programs like apt-get store information in several places: /var/cache/apt <- cache /var/lib/apt <- keyrings, package db, states, locks, mirrors /etc/apt <- configuration file ~/.aptitude/config <- user configuration file So we see four kinds of paths here: Cache path Data path System-wide configuration User configuration Perhaps (1) can be made part of (2) for simplicity's sake. Can anyone think of ways to get such appropriate paths in a platform-independent way? Is there a library that does this, or does one have to invent this wheel? A: For Linux, check out the Filesystem Hierarchy Standard (but be aware that these standards are for software that is part of a distribution; software installed locally should not interfere with the distribution's package management and should stay in /usr/local/ and /var/local/). If you want to be truly cross-platform, IMO the best way would be to leave these things configurable for the packager, defaulting to running in the current directory (so that users without administrative privileges can simply unpack and run the program). This way, people packaging for a particular OS/distribution will set sensible values for a system-wide installation, and users will be able to use it locally without administrative rights for the machine.
Platform-independent version of /var/lib and ~/.config
We see that programs like apt-get store information in several places: /var/cache/apt <- cache /var/lib/apt <- keyrings, package db, states, locks, mirrors /etc/apt <- configuration file ~/.aptitude/config <- user configuration file So we see four kinds of paths here: Cache path Data path System-wide configuration User configuration Perhaps (1) can be made part of (2) for simplicity's sake. Can anyone think of ways to get such appropriate paths in a platform-independent way? Is there a library that does this, or does one have to invent this wheel?
[ "For Linux, check out the Filesystem Hierarchy Standard (but be aware that these standards are for software being part of distribution, software installed locally should not interfere with distribution's package management and stay in /usr/local/ and /var/local/).\nIf you want to be truly cross-platform, IMO best way would be to leave this things configurable for packager, defaulting to run in current directory (so that users without administrative privileges can simply unpack and run program). This way, people packaging for particular OS/distribution will set sensible values for system-wide installation, and users will be able to use it locally without administrative rights for the machine.\n" ]
[ 1 ]
[ "Do you mean something like virtualenv?\n" ]
[ -1 ]
[ "configuration", "cross_platform", "path", "python" ]
stackoverflow_0001107213_configuration_cross_platform_path_python.txt
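As a rough illustration of the per-user path kinds from the question, here is a stdlib-only sketch (the Windows branch leans on the APPDATA convention and the Unix branch on the XDG base-directory spec; treat the fallbacks as reasonable defaults, not a standard):

# Sketch: derive per-user config/cache directories with only the stdlib.
import os
import sys

def user_config_dir(appname):
    if sys.platform.startswith("win"):
        base = os.environ.get("APPDATA", os.path.expanduser("~"))
    else:
        base = os.environ.get("XDG_CONFIG_HOME", os.path.expanduser("~/.config"))
    return os.path.join(base, appname)

def user_cache_dir(appname):
    if sys.platform.startswith("win"):
        base = os.environ.get("LOCALAPPDATA", os.path.expanduser("~"))
    else:
        base = os.environ.get("XDG_CACHE_HOME", os.path.expanduser("~/.cache"))
    return os.path.join(base, appname)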
Q: writeline problem in python I have a very basic problem. I am learning my first steps with python & scripting in general and so even this makes me wonder: I want to read & write lines to new file: ifile=open("C:\Python24\OtsakkeillaSPSS.csv", "r") ofile = open("C:\Python24\OtsakkeillaSPSSout.csv", "w") #read first line with headers line1 = ifile.readline() print line1 #read following lines which contain data & write it to ofile for line in ifile: if not line: break ofile.write(line) if i print this to the screen i get all my lines done nicely: 0,26100,568,8636 0,11130,555,3570 0,57100,77,2405 0,60120,116,1193 0,33540,166,5007 0,95420,318,2310 0,20320,560,7607 0,4300,692,3969 0,65610,341,2073 0,1720,0,0 0,78850,228,1515 0,78870,118,1222 If i write it to ofile i end up missing some 15 lines: 0,6100,948,8332 0,26100,568,8636 0,11130,555 I would appreciate if someone could point out to me what is it that i don't understand? Reg, Jaani A: #read following lines which contain data & write it to ofile for line in ifile: if not line: continue #break stops the loop, you should use continue ofile.write(line) A: You should be calling ofile.close() according to python docs. I'm not sure that writes are fully flushed out to a file without an explicit close. Also, as SilentGhost mentioned, check for empty lines in your input file. And as mentioned by stefanw below, that "if.. break" statement isn't necessary in a for in. A: The "if not line:" - Check is unnecessary. for line in ifile: ofile.write(line) A: Why are you using python2.4? The latest is python2.6 and then you can use from contextlib import nested ipath = "C:\Python24\OtsakkeillaSPSS.csv" opath = "C:\Python24\OtsakkeillaSPSSout.csv" with nested(open(ipath,'r'), open(opath,'w')) as (ifile, ofile): #read first line with headers line1 = ifile.readline() print line1 #read following lines which contain data & write it to ofile for line in ifile: ofile.write(line) A: oh, why does it look so foolish, the formation i mean? Give it another go... hi, I have a very basic problem. I am learning my first steps with python & scripting in general and so even this makes me wonder: I want to read & write lines to new file: ifile=open("C:\Python24\OtsakkeillaSPSS.csv", "r") ofile = open("C:\Python24\OtsakkeillaSPSSout.csv", "w") read first line with headers line1 = ifile.readline() print line1 read following lines which contain data & write it to ofile for line in ifile: if not line: break ofile.write(line) if i print this to the screen i get all my lines done nicely: 0,26100,568,8636 0,11130,555,3570 0,57100,77,2405 0,60120,116,1193 0,33540,166,5007 0,95420,318,2310 0,20320,560,7607 0,4300,692,3969 0,65610,341,2073 0,1720,0,0 0,78850,228,1515 0,78870,118,1222 If i write it to ofile i end up missing some 15 lines: 0,6100,948,8332 0,26100,568,8636 0,11130,555 I would appreciate if someone could point out to me what is it that i don't understand? Reg, Jaani
writeline problem in python
I have a very basic problem. I am learning my first steps with python & scripting in general and so even this makes me wonder: I want to read & write lines to new file: ifile=open("C:\Python24\OtsakkeillaSPSS.csv", "r") ofile = open("C:\Python24\OtsakkeillaSPSSout.csv", "w") #read first line with headers line1 = ifile.readline() print line1 #read following lines which contain data & write it to ofile for line in ifile: if not line: break ofile.write(line) if i print this to the screen i get all my lines done nicely: 0,26100,568,8636 0,11130,555,3570 0,57100,77,2405 0,60120,116,1193 0,33540,166,5007 0,95420,318,2310 0,20320,560,7607 0,4300,692,3969 0,65610,341,2073 0,1720,0,0 0,78850,228,1515 0,78870,118,1222 If i write it to ofile i end up missing some 15 lines: 0,6100,948,8332 0,26100,568,8636 0,11130,555 I would appreciate if someone could point out to me what is it that i don't understand? Reg, Jaani
[ "#read following lines which contain data & write it to ofile\nfor line in ifile:\n if not line:\n continue #break stops the loop, you should use continue\n ofile.write(line)\n\n", "You should be calling ofile.close() according to python docs.\nI'm not sure that writes are fully flushed out to a file without an explicit close. \nAlso, as SilentGhost mentioned, check for empty lines in your input file.\nAnd as mentioned by stefanw below, that \"if.. break\" statement isn't necessary in a for in.\n", "The \"if not line:\" - Check is unnecessary.\nfor line in ifile:\n ofile.write(line)\n\n", "Why are you using python2.4? the latest is python2.6 \nand then you can use \nfrom contextlib import nested\nipath = \"C:\\Python24\\OtsakkeillaSPSS.csv\"\nopath = \"C:\\Python24\\OtsakkeillaSPSSout.csv\"\nwith nested(open(ipath,'r'), open(opath,'w') as ifile, ofile:\n\n #read first line with headers\n line1 = ifile.readline()\n print line1\n\n #read following lines which contain data & write it to ofile\n for line in ifile:\n ofile.write(line)\n\n", "oh, why does it look so foolish, the formation i mean?\nGive it another go...\nhi,\nI have a very basic problem. I am learning my first steps with python & scripting in general and so even this makes me wonder:\nI want to read & write lines to new file:\nifile=open(\"C:\\Python24\\OtsakkeillaSPSS.csv\", \"r\") \nofile = open(\"C:\\Python24\\OtsakkeillaSPSSout.csv\", \"w\")\nread first line with headers\nline1 = ifile.readline() \nprint line1\nread following lines which contain data & write it to ofile\nfor line in ifile: \n if not line: \n break \n ofile.write(line)\nif i print this to the screen i get all my lines done nicely:\n0,26100,568,8636 \n0,11130,555,3570 \n0,57100,77,2405 \n0,60120,116,1193 \n0,33540,166,5007 \n0,95420,318,2310 \n0,20320,560,7607 \n0,4300,692,3969 \n0,65610,341,2073 \n0,1720,0,0 \n0,78850,228,1515 \n0,78870,118,1222\nIf i write it to ofile i end up missing some 15 lines:\n0,6100,948,8332 \n0,26100,568,8636 \n0,11130,555\nI would appreciate if someone could point out to me what is it that i don´t understand?\nReg,\nJaani\n" ]
[ 4, 4, 3, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001126477_python.txt
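The buffering point the second answer raises can be made concrete; here is a sketch of the same copy with an explicit close (try/finally keeps it Python 2.4-friendly):

# Sketch: copy lines and close explicitly so buffered writes get flushed.
ifile = open("C:\\Python24\\OtsakkeillaSPSS.csv", "r")
ofile = open("C:\\Python24\\OtsakkeillaSPSSout.csv", "w")
try:
    ofile.write(ifile.readline())  # header line
    for line in ifile:
        ofile.write(line)
finally:
    ofile.close()  # without this, the tail of the output can sit in the write buffer
    ifile.close()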
Q: Special chars in Python i have to use special chars in my python-application. For example: ƃ I have information like this: U+0183 LATIN SMALL LETTER B WITH TOPBAR General Character Properties In Unicode since: 1.1 Unicode category: Letter, Lowercase Various Useful Representations UTF-8: 0xC6 0x83 UTF-16: 0x0183 C octal escaped UTF-8: \306\203 XML decimal entity: &# 387; But when i just put symbols into python-script i get an error: Non-ASCII character '\xc8' ... How can i use it in strings for my application? A: You should tell the interpreter which encoding you're using, because apparently on your system it defaults to ascii. See PEP 263. In your case, place the following at the top of your file: # -*- coding: utf-8 -*- Note that you don't have to write exactly that; PEP 263 allows more freedom, to accommodate several popular editors which use slightly different conventions for the same purpose. Additionally, this string may also be placed on the second line, e.g. when the first is used for the shebang. A: http://docs.python.org/tutorial/interpreter.html#source-code-encoding A: While the answers so far are all correct, I thought I'd provide a more complete treatment: The simplest way to represent a non-ASCII character in a script literal is to use the u prefix and u or U escapes, like so: print u"Look \u0411\u043e\u0440\u0438\u0441, a G-clef: \U0001d11e" This illustrates: using the u prefix to make sure the string is a unicode object using the u escape for characters in the basic multi-lingual plane (U+FFFF and below) using the U escape for characters in other planes (U+10000 and above) that Ƃ (U+0182 LATIN CAPITAL LETTER B WITH TOPBAR) and Б (U+0411 CYRILLIC CAPITAL LETTER BE) just one example of many confusingly similar Unicode codepoints The default script encoding for Python that works everywhere is ASCII. As such, you'd have to use the above escapes to encode literals of non-ASCII characters. You can inform the Python interpreter of the encoding of your script with a line like: # -*- coding: utf-8 -*- This only changes the encoding of your script. But then you could write: print u"Look Борис, a G-clef: " Note that you still have to use the u prefix to obtain a unicode object, not a str object. Lastly, it is possible to change the default encoding used for str... but this is not recommended, as it is a global change and may not play well with other python code. A: Do you store the Python file as UTF-8? Does your editor support UTF-8? Are you using unicode strings like so? foo = u'ƃƃƃƃƃ' A: Declare Unicode strings. somestring = u'æøå' A: In python it should be u"\u0183" The u before the String start indicates that the String contains Unicode characters. See here for reference: http://www.fileformat.info/info/unicode/char/0183/index.htm http://docs.python.org/tutorial/introduction.html#unicode-strings
Special chars in Python
i have to use special chars in my python-application. For example: ƃ I have information like this: U+0183 LATIN SMALL LETTER B WITH TOPBAR General Character Properties In Unicode since: 1.1 Unicode category: Letter, Lowercase Various Useful Representations UTF-8: 0xC6 0x83 UTF-16: 0x0183 C octal escaped UTF-8: \306\203 XML decimal entity: &# 387; But when i just put symbols into python-script i get an error: Non-ASCII character '\xc8' ... How can i use it in strings for my application?
[ "You should tell the interpreter which encoding you're using, because apparently on your system it defaults to ascii. See PEP 263. In your case, place the following at the top of your file:\n# -*- coding: utf-8 -*-\n\nNote that you don't have to write exactly that; PEP 263 allows more freedom, to accommodate several popular editors which use slightly different conventions for the same purpose. Additionally, this string may also be placed on the second line, e.g. when the first is used for the shebang.\n", "http://docs.python.org/tutorial/interpreter.html#source-code-encoding\n", "While the answers so fare are all correct, I thought I'd provide a more complete treatment:\nThe simplest way to represent a non-ASCII character in a script literal is to use the u prefix and u or U escapes, like so:\nprint u\"Look \\u0411\\u043e\\u0440\\u0438\\u0441, a G-clef: \\U0001d11e\"\n\nThis illustrates:\n\nusing the u prefix to make sure the string is a unicode object\nusing the u escape for characters in the basic multi-lingual plane (U+FFFD and below)\nusing the U escape for characters in other planes (U+10000 and above)\nthat Ƃ (U+0182 LATIN CAPITAL LETTER B WITH TOPBAR) and Б (U+0411 CYRILLIC CAPTIAL LETTER BE) just one example of many confusingly similar Unicode codepoints\n\nThe default script encoding for Python that works everywhere is ASCII. As such, you'd have to use the above escapes to encode literals of non-ASCII characters. You can inform the Python interpreter of the encoding of your script with a line like:\n# -*- coding: utf-8 -*-\n\nThis only changes the encoding of your script. But then you could write:\nprint u\"Look Борис, a G-clef: \"\n\nNote that you still have to use the u prefix to obtain a unicode object, not a str object.\nLastly, it is possible to change the default encoding used for str... but this not recommended, as it is a global change and may not play well with other python code.\n", "Do you store the Python file as UTF-8? Does your editor support UTF-8? Are you using unicode strings like so?\nfoo = u'ƃƃƃƃƃ'\n\n", "Declare Unicode strings.\nsomestring = u'æøå'\n", "In python it should be\nu\"\\u0183\"\n\nThe u before the String start indicates that the String contains Unicode characters.\nSee here for reference: \nhttp://www.fileformat.info/info/unicode/char/0183/index.htm\nhttp://docs.python.org/tutorial/introduction.html#unicode-strings\n" ]
[ 11, 3, 3, 1, 0, 0 ]
[]
[]
[ "chars", "python" ]
stackoverflow_0001127786_chars_python.txt
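To tie the answers above together, a short sketch showing three equivalent ways to build U+0183, assuming the source file itself is saved as UTF-8:

# -*- coding: utf-8 -*-
# Sketch: three ways to obtain U+0183 as a unicode object.
import unicodedata

a = u"\u0183"                                               # escape form
b = unicodedata.lookup("LATIN SMALL LETTER B WITH TOPBAR")  # by character name
c = "\xc6\x83".decode("utf-8")                              # from the UTF-8 bytes
assert a == b == c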
Q: Unable to find an internet page blocked by robots.txt Problem: to find answers and exercises of lectures in Mathematics at Uni. Helsinki Practical problems to make a list of sites with .com which has Disallow in robots.txt to make a list of sites at (1) which contain files with *.pdf to make a list of sites at (2) which contain the word "analyysi" in pdf-files Suggestions for practical problems Problem 3: to make a compiler which scrapes data from pdf-files Questions How can you search .com -sites which are registered? How would you solve the practical problems 1 & 2 by Python's defaultdict and BeautifulSoup? A: I am trying to find every web site on the internet that has a pdf-file which has the word "Analyysi" Not an answer to your question, but: PLEASE respect the site owner's wish to NOT be indexed. A: Your questions are faulty. With respect to (2), you are making the faulty assumption that you can find all PDF files on a webserver. This is not possible, for multiple reasons. The first reason is that not all documents may be referenced. The second reason is that even if they are referenced, the reference itself may be invisible to you. Finally, there are PDF resources which are generated on the fly. That means they do not exist until you ask for them. And since they depend on your input, there's an infinite amount of them. Question 3 is faulty for pretty much the same reasons. In particular, the generated PDF may contain the word "analyysi" only if you used it in the query. E.g. http://example.com/makePDF.cgi?analyysi A: If I understand your requirements, you'd essentially have to spider every possible site in order to see which one(s) match your criteria. I don't see any faster or more efficient solution, regardless of what tools you use. A: If I understand you correctly then I don't see how this is possible without, as mentioned already, scanning the entire internet. You are looking for pages on the internet which are not on Google? There is not a database of every site on the net and if they are indexed by a search engine or not... You would literally need to index the entire web and then go through each site and check if they are on google. I am also confused if this relates to one site or the web since your question seems to switch between both. A: Do you mean that you have your lectures on a web page of your University's intranet and that you would like to be able to access this page from outside your University's intranet? I assume that in order to access your Uni's intranet you must enter a password, and that Google does not index any of the Uni's intranet pages -- which is the nature of an intranet. If all the above assumptions are correct then you simply need to host your pdf files on a website outside your University's intranet. Simplest way is to start a blog (no cost involved and very easy and quick to do) and then post your pdf files there. Google will then index your pages and also "scrape data" from your pdf's as you put it, which means that the text within your pdf files will be searchable. A: I outline: 1. Law "The problem comes with enforcing that law! In principal it is easy, in practice it is expensive!" source "There is no law stating that /robots.txt must be obeyed, nor does it constitute a binding contract between site owner and user, but having a /robots.txt can be relevant in legal cases." source 2. Practise disallow filetype:txt 3. Theoretically Possible?
Unable to find an internet page blocked by robots.txt
Problem: to find answers and exercises of lectures in Mathematics at Uni. Helsinki Practical problems to make a list of sites with .com which has Disallow in robots.txt to make a list of sites at (1) which contain files with *.pdf to make a list of sites at (2) which contain the word "analyysi" in pdf-files Suggestions for practical problems Problem 3: to make a compiler which scrapes data from pdf-files Questions How can you search .com -sites which are registered? How would you solve the practical problems 1 & 2 by Python's defaultdict and BeautifulSoup?
[ "\nI am trying to find every web site on the internet that has a pdf-file which has the word \"Analyysi\"\n\nNot an answer to your question, but: PLEASE respect the site owner's wish to NOT be indexed.\n", "Your questions are faulty.\nWith respect to (2), you are making the faulty assumption that you can find all PDF files on a webserver. This is not possible, for multiple reasons. The first reason is that not all documents may be referenced. The second reason is that even if they are referenced, the reference itself may be invisible to you. Finally, there are PDF resources which are generated on the fly. That means they do not exist until you ask for them. And since they depend on your input, there's an infinite amount of them.\nQuestion 3 is faulty for pretty much the same reasons. In particular, the generated PDF may contain the word \"analyysi\" only if you used it in the query. E.g. http://example.com/makePDF.cgi?analyysi\n", "If I understand your requirements, you'd essentially have to spider every possible site in order to see which one(s) match your criteria. I don't see any faster or more efficient solution, regardless of what tools you use.\n", "If I understand you correctly then I don't see how this is possible without, as mentioned already, scanning the entire internet. You are looking for pages on the internet which are not on Google? There is not a database of every site on the net and if they are indexed by a search engine or not...\nYou would literally need to index the entire web and then go though each site and check if they are on google. \nI am also confused if this relates in one site or the web since your question seems to switch between both.\n", "Do you mean that you have your lectures on a web page of your University's intranet and that you would like to be able to access this page from outside your University's intranet?\nI assume that in order to access your Uni's intranet you must enter a password, and that Google does not index any of the Uni's intranet pages -- which is the nature of an intranet.\nIf all the above assumptions are correct then you simply need to host your pdf files on a website outside your University's intranet. Simplest way is to start a blog (no cost involved and very easy and quick to do) and then post your pdf files there.\nGoogle will then index your pages and also \"scrape data\" from your pdf's as you put it, which means that the text within your pdf files will be searchable.\n", "I outline:\n1. Law\n\"The problem comes with enforcing that law! In principal it is easy, in practice it is expensive!\" source\n\"There is no law stating that /robots.txt must be obeyed, nor does it constitute a binding contract between site owner and user, but having a /robots.txt can be relevant in legal cases.\" source\n2. Practise\ndisallow filetype:txt\n\n3. Theoretically Possible?\n" ]
[ 6, 4, 3, 1, 0, 0 ]
[]
[]
[ "data_mining", "python", "web_crawler" ]
stackoverflow_0001009686_data_mining_python_web_crawler.txt
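For the robots.txt half of practical problem (1), the standard library already ships a parser, so the file never needs scraping by hand; a sketch (the module is named robotparser in Python 2 and urllib.robotparser in Python 3):

# Sketch: check whether a site's robots.txt disallows fetching a URL.
import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("http://example.com/robots.txt")
rp.read()
print rp.can_fetch("*", "http://example.com/lectures/analyysi.pdf")  # False if disallowed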
Q: django-timezones I am trying to setup django-timezones but am unfamiliar on how to go about this. The only info that I have found is here: http://www.ohloh.net/p/django-timezones class MyModel(Model): timezone = TimeZoneField() datetime = LocalizedDateTime('timezone') I also tried looking through the pinax code or any other projects that use this app but haven't found anything. Does anyone have an example or some info that they can share on how to use this? Thanks, Justin A: Well, the first thing you need to do when installing any Django app is to add it to your INSTALLED_APPS in settings.py. This particular app doesn't do too much other than give you some handy fields and things that you can use in other parts of your Django project. Your best bet to understand it is reading the source, I would say.
django-timezones
I am trying to setup django-timezones but am unfamiliar on how to go about this. The only info that I have found is here: http://www.ohloh.net/p/django-timezones class MyModel(Model): timezone = TimeZoneField() datetime = LocalizedDateTime('timezone') I also tried looking through the pinax code or any other projects that use this app but haven't found anything. Does anyone have an example or some info that they can share on how to use this? Thanks, Justin
[ "Well, the firs thing you need to do when installing any Django app, is add it to your INSTALLED_APPS in settings.py. This particular app doesn't do to much other then give you some handy fields and things that you can use in other parts of your Django project. Your best bet to understand it is reading the source, I would say.\n" ]
[ 4 ]
[]
[]
[ "django", "python", "timezone" ]
stackoverflow_0001126811_django_python_timezone.txt
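Once a TimeZoneField value is stored, the usual companion step is localizing naive datetimes with pytz; a hedged sketch (obj and its attribute names are hypothetical placeholders, not django-timezones' documented API):

# Sketch: interpret a stored naive datetime in the row's own timezone.
import pytz

def localized(obj):
    tz = pytz.timezone(str(obj.timezone))  # obj.timezone/obj.datetime are assumed attributes
    return tz.localize(obj.datetime)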
Q: How can I obtain the full AST in Python? I like the options offered by the _ast module, it's really powerful. Is there a way of getting the full AST from it? For example, if I get the AST of the following code : import os os.listdir(".") by using : ast = compile(source_string,"&lt;string&gt;","exec",_ast.PyCF_ONLY_AST) the body of the ast object will have two elements, an import object, and an expr object. However, I'd like to go further, and obtain the AST of import and listdir, in other words, I'd like to make _ast descend to the lowest level possible. I think it's logical that this sort of thing should be possible. The question is how? EDIT: by the lowest level possible, I didn't mean accessing what's "visible". I'd like to get the AST for the implementation of listdir as well: like stat and other function calls that may be executed for it. A: You do get the whole tree this way -- all the way to the bottom -- but, it IS held as a tree, exactly... so at each level to get the children you have to explicitly visit the needed attributes. For example (I'm naming the compile result cf rather than ast because that would hide the standard library ast module -- I assume you only have 2.5 rather than 2.6, which is why you're using the lower-level _ast module instead?)...: >>> cf.body[0].names[0].name 'os' This is what tells you that the import statement is importing name os (and that one only because 1 is the length of the .names field of .body[0] which is the import). In Python 2.6's module ast you also get helpers to let you navigate more easily on a tree (e.g. by the Visitor design pattern) -- but the whole tree is there in either 2.5 (with _ast) or 2.6 (with ast), and in either case is represented in exactly the same way. To handily visit all the nodes in the tree, in 2.6, use module ast (no leading underscore) and subclass ast.NodeVisitor as appropriate (or equivalently use ast.iter_child_nodes recursively and ast.iter_fields as needed). Of course these helpers can be implemented in pure Python on top of _ast if you're stuck in 2.5 for some reason. A: py> ast._fields ('body',) py> ast.body [<_ast.Import object at 0xb7978e8c>, <_ast.Expr object at 0xb7978f0c>] py> ast.body[1] <_ast.Expr object at 0xb7978f0c> py> ast.body[1]._fields ('value',) py> ast.body[1].value <_ast.Call object at 0xb7978f2c> py> ast.body[1].value._fields ('func', 'args', 'keywords', 'starargs', 'kwargs') py> ast.body[1].value.args [<_ast.Str object at 0xb7978fac>] py> ast.body[1].value.args[0] <_ast.Str object at 0xb7978fac> py> ast.body[1].value.args[0]._fields ('s',) py> ast.body[1].value.args[0].s '.' HTH
How can I obtain the full AST in Python?
I like the options offered by the _ast module, it's really powerful. Is there a way of getting the full AST from it? For example, if I get the AST of the following code : import os os.listdir(".") by using : ast = compile(source_string,"&lt;string&gt;","exec",_ast.PyCF_ONLY_AST) the body of the ast object will have two elements, an import object, and an expr object. However, I'd like to go further, and obtain the AST of import and listdir, in other words, I'd like to make _ast descend to the lowest level possible. I think it's logical that this sort of thing should be possible. The question is how? EDIT: by the lowest level possible, I didn't mean accessing what's "visible". I'd like to get the AST for the implementation of listdir as well: like stat and other function calls that may be executed for it.
[ "You do get the whole tree this way -- all the way to the bottom -- but, it IS held as a tree, exactly... so at each level to get the children you have to explicitly visit the needed attributes. For example (i'm naming the compile result cf rather than ast because that would hide the standard library ast module -- I assume you only have 2.5 rather than 2.6, which is why you're using the lower-level _ast module instead?)...:\n>>> cf.body[0].names[0].name\n'os'\n\nThis is what tells you that the import statement is importing name os (and that one only because 1 is the lengths of the .names field of .body[0] which is the import).\nIn Python 2.6's module ast you also get helpers to let you navigate more easily on a tree (e.g. by the Visitor design pattern) -- but the whole tree is there in either 2.5 (with _ast) or 2.5 (with ast), and in either case is represented in exactly the same way.\nTo handily visit all the nodes in the tree, in 2.6, use module ast (no leading underscore) and subclass ast.NodeVisitor as appropriate (or equivalently use ast.iter_child_nodes recursively and ast.iter_fields as needed). Of course these helpers can be implemented in pure Python on top of _ast if you're stuck in 2.5 for some reason.\n", "py> ast._fields\n('body',)\npy> ast.body\n[<_ast.Import object at 0xb7978e8c>, <_ast.Expr object at 0xb7978f0c>]\npy> ast.body[1]\n<_ast.Expr object at 0xb7978f0c>\npy> ast.body[1]._fields\n('value',)\npy> ast.body[1].value\n<_ast.Call object at 0xb7978f2c>\npy> ast.body[1].value._fields\n('func', 'args', 'keywords', 'starargs', 'kwargs')\npy> ast.body[1].value.args\n[<_ast.Str object at 0xb7978fac>]\npy> ast.body[1].value.args[0]\n<_ast.Str object at 0xb7978fac>\npy> ast.body[1].value.args[0]._fields\n('s',)\npy> ast.body[1].value.args[0].s\n'.'\n\nHTH\n" ]
[ 8, 5 ]
[]
[]
[ "abstract_syntax_tree", "python" ]
stackoverflow_0001128234_abstract_syntax_tree_python.txt
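Building on the second answer, the _fields attribute alone is enough to walk the whole tree generically; a sketch that should work against the 2.5-era _ast module:

# Sketch: recursively dump every node of a compiled AST.
import _ast

def dump(node, depth=0):
    print "  " * depth + type(node).__name__
    for name in getattr(node, "_fields", None) or ():
        value = getattr(node, name)
        for child in (value if isinstance(value, list) else [value]):
            if isinstance(child, _ast.AST):
                dump(child, depth + 1)

tree = compile("import os\nos.listdir('.')", "<string>", "exec", _ast.PyCF_ONLY_AST)
dump(tree)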
Q: To understand Typeset for PythonPath Someone recommended the following code to me, apparently only for .zshrc, without explaining its purpose clearly. typeset -U PYTHONPATH I am interested in how you can use the code in .bashrc. My Bash complains about the command. How can you use the command in Bash? A: That zsh command is useful because zsh can treat the environment variable PYTHONPATH as an actual array of paths. The -U argument to typeset says that, when the array is represented in the environment value passed to the program (Python, in this case), only the first instance of each unique value is included. In bash, since array variables aren't exported, PYTHONPATH would be just a normal string variable, containing paths separated by colons. Hence, there is no need to tell bash to use only unique values.
To understand Typeset for PythonPath
Someone recommended the following code to me, apparently only for .zshrc, without explaining its purpose clearly. typeset -U PYTHONPATH I am interested in how you can use the code in .bashrc. My Bash complains about the command. How can you use the command in Bash?
[ "That zsh command is useful because zsh can treat the environment variable PYTHONPATH as an actual array of paths. The -U argument to typeset says, then when representing the array in the environment value passed to the program (Python, in this case), only include the first instance of each unique value.\nIn bash, since array variables aren't exported, PYTHONPATH would be just a normal string variable, containing paths separated by colons. Hence, there is no need to tell bash to use only unique values. \n" ]
[ 3 ]
[]
[]
[ "bash", "path", "python", "typeset", "zsh" ]
stackoverflow_0001128366_bash_path_python_typeset_zsh.txt
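Since bash has no typeset -U, one workaround is to deduplicate the variable from Python itself, for example in a sitecustomize.py or at the top of a script; a sketch:

# Sketch: order-preserving dedupe of a colon-separated PYTHONPATH.
import os

def dedupe(value):
    seen, out = set(), []
    for p in value.split(os.pathsep):
        if p and p not in seen:
            seen.add(p)
            out.append(p)
    return os.pathsep.join(out)

os.environ["PYTHONPATH"] = dedupe(os.environ.get("PYTHONPATH", ""))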
Q: Need some help with python string / slicing operations This is a very newbie question and i will probably get downvoted for it, but i quite honestly couldn't find the answer after at least an hour googling. I learned how to slice strings based on "exact locations" where you have to know exactly where the word ends. But i did not find any article that explained how to do it on "non static" strings that could change. Also i do not want to use string.split() in this case as its a little overkill for what i need. I basically have a string like this: myString = "!save python Python is a high-level object oriented language created by Guido van Rossum." # the format is !save [singleword] [definition] i need to "slice" this string but i cant figure out a proper way to do it. i need to save to a variable the title (python in this case) and the definition of this string. Something like: title = myString[1] definition = myString[everything after string[1] I'm not exactly sure how to do this when you have a dynamic string where you dont know where each word ends. I would greatly appreciate some pointers on what functions/methods should i read on to achieve this. Thank you in advance. A: Why is split overkill? verb, title, definition = myString.split (' ', 2) A: If you have spaces between your command, title, and definition you could: wordList = myString.split() cmd = wordList[0] # !save title = wordList[1] # python definition = ' '.join(wordList[2:]) # Python is a high-level object oriented language created by Guido van Rossum. If you really would rather not use split you could use regular expressions: import re m = re.match(r'(\S+)\s+(\S+)\s+(.*)', myString) cmd = m.group(1) title = m.group(2) definition = m.group(3) A: The selected answer (after PEP8ing): verb, title, definition = my_string.split(' ', 2) splits on a single space. It's likely a better choice to split on runs of whitespace, just in case there are tabs or multiple spaces on either side of the title: verb, title, definition = my_string.split(None, 2) Also consider normalising the whitespace in the definition: definition = ' '.join(definition.split())
Need some help with python string / slicing operations
This is a very newbie question and i will probably get downvoted for it, but i quite honestly couldn't find the answer after at least an hour googling. I learned how to slice strings based on "exact locations" where you have to know exactly where the word ends. But i did not find any article that explained how to do it on "non static" strings that could change. Also i do not want to use string.split() in this case as its a little overkill for what i need. I basically have a string like this: myString = "!save python Python is a high-level object oriented language created by Guido van Rossum." # the format is !save [singleword] [definition] i need to "slice" this string but i cant figure out a proper way to do it. i need to save to a variable the title (python in this case) and the definition of this string. Something like: title = myString[1] definition = myString[everything after string[1] I'm not exactly sure how to do this when you have a dynamic string where you dont know where each word ends. I would greatly appreciate some pointers on what functions/methods should i read on to achieve this. Thank you in advance.
[ "Why is split overkill?\nverb, title, definition = myString.split (' ', 2)\n\n", "If you have spaces between your command, title, and definition you could:\nwordList = myString.split()\ncmd = wordList[0] # !save\ntitle = wordList[1] # python\ndefinition = ' '.join(wordList[2:]) # Python is a high-level object oriented language created by Guido van Rossum.\n\nIf you really would rather not use split you could use regular expressions:\nimport re\nm = re.match('(/S+)/s*(/S+)/s*(.*)')\ncmd = m.group(1)\ntitle = m.group(2)\ndefinition = m.group(3)\n\n", "The selected answer (after PEP8ing):\nverb, title, definition = my_string.split(' ', 2)\n\nsplits on a single space. It's likely a better choice to split on runs of whitespace, just in case there are tabs or multiple spaces on either side of the title:\nverb, title, definition = my_string.split(None, 2)\n\nAlso consider normalising the whitespace in the definition:\ndefinition = ' '.join(definition.split())\n\n" ]
[ 12, 2, 1 ]
[]
[]
[ "python", "slice", "string" ]
stackoverflow_0001128431_python_slice_string.txt
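One caveat worth adding to the split answers: tuple unpacking raises ValueError when the input has fewer than three fields, so defensive code might look like this sketch:

# Sketch: tolerate malformed "!save <word> <definition>" input.
parts = myString.split(None, 2)  # split on whitespace runs, at most twice
if len(parts) == 3:
    verb, title, definition = parts
else:
    verb, title, definition = (parts + ["", "", ""])[:3]  # pad missing fields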
Q: What can I attach to pylons.request in Pylons? I want keep track of a unique identifier for each browser that connects to my web application (that is written in Pylons.) I keep a cookie on the client to keep track of this, but if the cookie isn't present, then I want to generate a new unique identifier that will be sent back to the client with the response, but I also may want to access this value from other code used to generate the response. Is attaching this value to pylons.request safe? Or do I need to do something like use threading_local to make a thread local that I reset when each new request is handled? A: Why do you want a unique identifier? Basically every visitor already gets a unique identifier, his Session. Beaker, Pylons session and caching middleware, does all the work and tracks visitors, usually with a Session cookie. So don't care about tracking users, just use the Session for what it's made for, to store whatever user specific stuff you have . from pylons import session session["something"] = whatever() session.save() # somewhen later something = session["something"] A: Whatever you were to set on the request will only survive for the duration of the request. the problem you are describing is more appropriately handled with a Session as TCH4k has said. It's already enabled in the middleware, so go ahead.
What can I attach to pylons.request in Pylons?
I want keep track of a unique identifier for each browser that connects to my web application (that is written in Pylons.) I keep a cookie on the client to keep track of this, but if the cookie isn't present, then I want to generate a new unique identifier that will be sent back to the client with the response, but I also may want to access this value from other code used to generate the response. Is attaching this value to pylons.request safe? Or do I need to do something like use threading_local to make a thread local that I reset when each new request is handled?
[ "Why do you want a unique identifier? Basically every visitor already gets a unique identifier, his Session. Beaker, Pylons session and caching middleware, does all the work and tracks visitors, usually with a Session cookie. So don't care about tracking users, just use the Session for what it's made for, to store whatever user specific stuff you have .\nfrom pylons import session\nsession[\"something\"] = whatever()\nsession.save()\n\n# somewhen later\nsomething = session[\"something\"]\n\n", "Whatever you were to set on the request will only survive for the duration of the request. the problem you are describing is more appropriately handled with a Session as TCH4k has said. It's already enabled in the middleware, so go ahead.\n" ]
[ 3, 0 ]
[]
[]
[ "pylons", "python", "thread_safety" ]
stackoverflow_0001122537_pylons_python_thread_safety.txt
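Combining the session answer with the original goal, a hedged sketch of handing each browser a lasting identifier (the "browser_id" key name and the uuid4 choice are arbitrary assumptions):

# Sketch: lazily assign a unique per-browser id via the Beaker session.
import uuid
from pylons import session

def browser_id():
    if "browser_id" not in session:
        session["browser_id"] = uuid.uuid4().hex
        session.save()
    return session["browser_id"]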
Q: wxPython, how do I fire events? I am making my own button class, subclass of a panel where I draw with a DC, and I need to fire wx.EVT_BUTTON when my custom button is pressed. How do I do it? A: The Wiki is pretty nice for reference. Andrea Gavana has a pretty complete recipe for building your own custom controls. The following is taken directly from there and extends what FogleBird answered with (note self is referring to a subclass of wx.PyControl): def SendCheckBoxEvent(self): """ Actually sends the wx.wxEVT_COMMAND_CHECKBOX_CLICKED event. """ # This part of the code may be reduced to a 3-liner code # but it is kept for better understanding the event handling. # If you can, however, avoid code duplication; in this case, # I could have done: # # self._checked = not self.IsChecked() # checkEvent = wx.CommandEvent(wx.wxEVT_COMMAND_CHECKBOX_CLICKED, # self.GetId()) # checkEvent.SetInt(int(self._checked)) if self.IsChecked(): # We were checked, so we should become unchecked self._checked = False # Fire a wx.CommandEvent: this generates a # wx.wxEVT_COMMAND_CHECKBOX_CLICKED event that can be caught by the # developer by doing something like: # MyCheckBox.Bind(wx.EVT_CHECKBOX, self.OnCheckBox) checkEvent = wx.CommandEvent(wx.wxEVT_COMMAND_CHECKBOX_CLICKED, self.GetId()) # Set the integer event value to 0 (we are switching to unchecked state) checkEvent.SetInt(0) else: # We were unchecked, so we should become checked self._checked = True checkEvent = wx.CommandEvent(wx.wxEVT_COMMAND_CHECKBOX_CLICKED, self.GetId()) # Set the integer event value to 1 (we are switching to checked state) checkEvent.SetInt(1) # Set the originating object for the event (ourselves) checkEvent.SetEventObject(self) # Watch for a possible listener of this event that will catch it and # eventually process it self.GetEventHandler().ProcessEvent(checkEvent) # Refresh ourselves: the bitmap has changed self.Refresh() A: Create a wx.CommandEvent object, call its setters to set the appropriate attributes, and pass it to wx.PostEvent. http://docs.wxwidgets.org/stable/wx_wxcommandevent.html#wxcommandeventctor http://docs.wxwidgets.org/stable/wx_miscellany.html#wxpostevent This is a duplicate, there is more information here on constructing these objects: wxPython: Calling an event manually
wxPython, how do I fire events?
I am making my own button class, subclass of a panel where I draw with a DC, and I need to fire wx.EVT_BUTTON when my custom button is pressed. How do I do it?
[ "The Wiki is pretty nice for reference. Andrea Gavana has a pretty complete recipe for building your own custom controls. The following is taken directly from there and extends what FogleBird answered with (note self is referring to a subclass of wx.PyControl):\ndef SendCheckBoxEvent(self):\n \"\"\" Actually sends the wx.wxEVT_COMMAND_CHECKBOX_CLICKED event. \"\"\"\n\n # This part of the code may be reduced to a 3-liner code\n # but it is kept for better understanding the event handling.\n # If you can, however, avoid code duplication; in this case,\n # I could have done:\n #\n # self._checked = not self.IsChecked()\n # checkEvent = wx.CommandEvent(wx.wxEVT_COMMAND_CHECKBOX_CLICKED,\n # self.GetId())\n # checkEvent.SetInt(int(self._checked))\n if self.IsChecked():\n\n # We were checked, so we should become unchecked\n self._checked = False\n\n # Fire a wx.CommandEvent: this generates a\n # wx.wxEVT_COMMAND_CHECKBOX_CLICKED event that can be caught by the\n # developer by doing something like:\n # MyCheckBox.Bind(wx.EVT_CHECKBOX, self.OnCheckBox)\n checkEvent = wx.CommandEvent(wx.wxEVT_COMMAND_CHECKBOX_CLICKED,\n self.GetId())\n\n # Set the integer event value to 0 (we are switching to unchecked state)\n checkEvent.SetInt(0)\n\n else:\n\n # We were unchecked, so we should become checked\n self._checked = True\n\n checkEvent = wx.CommandEvent(wx.wxEVT_COMMAND_CHECKBOX_CLICKED,\n self.GetId())\n\n # Set the integer event value to 1 (we are switching to checked state)\n checkEvent.SetInt(1)\n\n # Set the originating object for the event (ourselves)\n checkEvent.SetEventObject(self)\n\n # Watch for a possible listener of this event that will catch it and\n # eventually process it\n self.GetEventHandler().ProcessEvent(checkEvent)\n\n # Refresh ourselves: the bitmap has changed\n self.Refresh()\n\n", "Create a wx.CommandEvent object, call its setters to set the appropriate attributes, and pass it to wx.PostEvent.\nhttp://docs.wxwidgets.org/stable/wx_wxcommandevent.html#wxcommandeventctor\nhttp://docs.wxwidgets.org/stable/wx_miscellany.html#wxpostevent\nThis is a duplicate, there is more information here on constructing these objects:\nwxPython: Calling an event manually\n" ]
[ 9, 6 ]
[]
[]
[ "events", "python", "wxpython" ]
stackoverflow_0001128074_events_python_wxpython.txt
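For the exact case in the question (a panel acting as a button), the recipe above trims down to a few lines; a sketch using classic wxPython names:

# Sketch: emit wx.EVT_BUTTON from inside a custom panel-based button.
import wx

def fire_button_event(self):
    evt = wx.CommandEvent(wx.wxEVT_COMMAND_BUTTON_CLICKED, self.GetId())
    evt.SetEventObject(self)
    self.GetEventHandler().ProcessEvent(evt)  # or wx.PostEvent(...) for queued delivery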
Q: how do i filter an itertools chain() result? in my views, if i import an itertools module: from itertools import chain and i chain some objects with it: franktags = Frank.objects.order_by('date_added').reverse().filter(topic__exact='art') amytags = Amy.objects.order_by('date_added').reverse().filter(topic__exact='art') timtags = Tim.objects.order_by('date_added').reverse().filter(topic__exact='art') erictags = Eric.objects.order_by('date_added').reverse().filter(topic__exact='art') ourtags = list(chain(franktags, amytags, timtags, erictags)) how do i then order "ourtags" by the "date_added"? not surprisingly, ourtags = list(chain(franktags, amytags, timtags, erictags)).order_by('date_added') returns an "'list' object has no attribute 'order_by'" error. A: import operator ourtags = sorted(ourtags, key=operator.attrgetter('date_added')) A: By this point in the code, you've already loaded up all of the objects into memory and into a list. Just sort the list like you would any old Python list. >>> import operator >>> ourtags.sort(key=operator.attrgetter('date_added'))
how do i filter an itertools chain() result?
in my views, if i import an itertools module: from itertools import chain and i chain some objects with it: franktags = Frank.objects.order_by('date_added').reverse().filter(topic__exact='art') amytags = Amy.objects.order_by('date_added').reverse().filter(topic__exact='art') timtags = Tim.objects.order_by('date_added').reverse().filter(topic__exact='art') erictags = Eric.objects.order_by('date_added').reverse().filter(topic__exact='art') ourtags = list(chain(franktags, amytags, timtags, erictags)) how do i then order "ourtags" by the "date_added"? not surprisingly, ourtags = list(chain(franktags, amytags, timtags, erictags)).order_by('date_added') returns an "'list' object has no attribute 'order_by'" error.
[ "import operator\n\nourtags = sorted(ourtags, key=operator.attrgetter('date_added'))\n\n", "By this point in the code, you've already loaded up all of the objects into memory and into a list. Just sort the list like you would any old Python list.\n>>> import operator\n>>> ourtags.sort(key=operator.attrgetter('date_added'))\n\n" ]
[ 14, 5 ]
[]
[]
[ "django", "django_views", "python", "python_itertools" ]
stackoverflow_0001129344_django_django_views_python_python_itertools.txt
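For completeness, the chain and the sort collapse into one expression, and reverse=True gives newest-first ordering (using the queryset names from the question above):

# Sketch: merge the four querysets and sort by date_added, newest first.
from itertools import chain
from operator import attrgetter

ourtags = sorted(chain(franktags, amytags, timtags, erictags),
                 key=attrgetter('date_added'), reverse=True)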
Q: How to split two nested lists and combine the parts to create two new nested lists I'm trying to code a simple genetic programming utility in python. But right now I'm stuck at the crossover/mate function for my trees. The trees are built by nested lists and look something like this: # f = internal node (a function), c = leaf node (a constant) tree1 = [f, [f, [f, c, c], [f, c, c]], [f, [f, c, c], [f, c, c]]] tree2 = [f, [f, [f, c, c], c], [f, [f, c, c], c]] I want to randomly select a point in each tree to split at and then I want one part from each tree to be combined into a new tree. There is also a max depth that shouldn't be exceeded so the selects can't really take place anywhere in the tree as it might create a too large tree. Below is an example on how it should work: # f:n, where n is the number of arguments the function take # + split here tree1 = [f:2, [f:3, a, a, a], a] # + split here tree2 = [f:2, [f:2, a, a], [f:1, a] tree_child1 = [f:2, [f:1, a], a] tree_child2 = [f:2, [f:2, a, a], [f:3, a, a, a]] I have no idea (at the moment) on how to solve this. Any tips or solutions are more than welcome! (Added my parse function as it might help someone to understand the structure better.) # My recursive code to parse the tree. def parse(self, node=None): if not node: node = self.root if isinstance(node, list): function = node[0] res = [] for child in node[1:function.arity+1]: res.append(self.parse(child)) value = function.parse(*res) # function else: value = node.parse() # constant return value A: I ended up implementing most of this as an exercise. First, find the number of possible locations to split: the number of non-function nodes. def count(obj): total = 0 for o in obj[1:]: # Add the node itself. total += 1 if isinstance(o, list): total += count(o) return total Then, a helper: given an index in the above range, figure out where it is. def find_idx(tree, idx): """ Return the node containing the idx'th function parameter, and the index of that parameter. If the tree contains fewer than idx parameters, return (None, None). """ if not isinstance(idx, list): # Stash this in a list, so recursive calls share the same value. idx = [idx] for i, o in enumerate(tree): # Skip the function itself. if i == 0: continue if idx[0] == 0: return tree, i idx[0] -= 1 if isinstance(o, list): container, result_index = find_idx(o, idx) if container is not None: return container, result_index return None, None Doing the swap is pretty simple now: def random_swap(tree1, tree2): from random import randrange pos_in_1 = randrange(0, count(tree1)) pos_in_2 = randrange(0, count(tree2)) parent1, idx1 = find_idx(tree1, pos_in_1) parent2, idx2 = find_idx(tree2, pos_in_2) # Swap: parent1[idx1], parent2[idx2] = parent2[idx2], parent1[idx1] c = 1 tree1 = ["f:2", c, ["f:1", c]] tree2 = ["f:2", ["f:2", ["f:2", c, c], ["f:2", c, c]], ["f:3", ["f:4", c, c, c, c], ["f:2", c, c], c]] while True: random_swap(tree1, tree2) print tree1 print tree2 This doesn't implement a max depth, but it's a start. This will also never replace the root node, where a node in tree1 becomes the new tree2 and all of tree2 becomes a node in tree1. A workaround would be to wrap the whole thing in eg. [lambda a: a, tree], so editable nodes always have a parent node. This isn't very efficient. Maintaining node counts could make it faster, but then you'd need to store a reference to the parent, too, in order to update the counts efficiently. If you go that route, you'll really want to find or implement a real tree class. A: If you store in each internal node a count of the children in each branch, then you could pick a split point by generating a random number from 0 to 1+total children. If the answer is 1, split at that node, otherwise use the number to figure out which subtree to descend to, and repeat the process.
How to split two nested lists and combine the parts to create two new nested lists
I'm trying to code a simple genetic programming utility in python. But right now I'm stuck at the crossover/mate function for my trees. The trees are built by nested lists and look something like this: # f = internal node (a function), c = leaf node (a constant) tree1 = [f, [f, [f, c, c], [f, c, c]], [f, [f, c, c], [f, c, c]]] tree2 = [f, [f, [f, c, c], c], [f, [f, c, c], c]] I want to randomly select a point in each tree to split at and then I want one part from each tree to be combined into a new tree. There is also a max depth that shouldn't be exceeded so the selects can't really take place anywhere in the tree as it might create a too large tree. Below is an example on how it should work: # f:n, where n is the number of arguments the function take # + split here tree1 = [f:2, [f:3, a, a, a], a] # + split here tree2 = [f:2, [f:2, a, a], [f:1, a] tree_child1 = [f:2, [f:1, a], a] tree_child2 = [f:2, [f:2, a, a], [f:3, a, a, a]] I have no idea (at the moment) on how to solve this. Any tips or solutions are more than welcome! (Added my parse function as it might help someone to understand the structure better.) # My recursive code to parse the tree. def parse(self, node=None): if not node: node = self.root if isinstance(node, list): function = node[0] res = [] for child in node[1:function.arity+1]: res.append(self.parse(child)) value = function.parse(*res) # function else: value = node.parse() # constant return value
[ "I ended up implementing most of this as an exercise.\nFirst, find the number of possible locations to split: the number of non-function nodes.\ndef count(obj):\n total = 0\n for o in obj[1:]:\n # Add the node itself.\n total += 1\n\n if isinstance(o, list):\n total += count(o)\n return total\n\nThen, a helper: given an index in the above range, figure out where it is.\ndef find_idx(tree, idx):\n \"\"\"\n Return the node containing the idx'th function parameter, and the index of that\n parameter. If the tree contains fewer than idx parameters, return (None, None).\n \"\"\"\n if not isinstance(idx, list):\n # Stash this in a list, so recursive calls share the same value.\n idx = [idx]\n\n for i, o in enumerate(tree):\n # Skip the function itself.\n if i == 0:\n continue\n\n if idx[0] == 0:\n return tree, i\n\n idx[0] -= 1\n if isinstance(o, list):\n container, result_index = find_idx(o, idx)\n if container is not None:\n return container, result_index\n\n return None, None\n\nDoing the swap is pretty simple now:\ndef random_swap(tree1, tree2):\n from random import randrange\n pos_in_1 = randrange(0, count(tree1))\n pos_in_2 = randrange(0, count(tree2))\n\n parent1, idx1 = find_idx(tree1, pos_in_1)\n parent2, idx2 = find_idx(tree2, pos_in_2)\n\n # Swap:\n parent1[idx1], parent2[idx2] = parent2[idx2], parent1[idx1]\n\nc = 1\ntree1 = [\"f:2\", c, [\"f:1\", c]]\ntree2 = [\"f:2\", [\"f:2\", [\"f:2\", c, c], [\"f:2\", c, c]], [\"f:3\", [\"f:4\", c, c, c, c], [\"f:2\", c, c], c]]\n\nwhile True:\n random_swap(tree1, tree2)\n print tree1\n print tree2\n\nThis doesn't implement a max depth, but it's a start.\nThis will also never replace the root node, where a node in tree1 becomes the new tree2 and all of tree2 becomes a node in tree1. A workaround would be to wrap the whole thing in eg. [lambda a: a, tree], so editable nodes always have a parent node.\nThis isn't very efficient. Maintaining node counts could make it faster, but then you'd need to store a reference to the parent, too, in order to update the counts efficiently. If you go that route, you'll really want to find or implement a real tree class.\n", "If you store in each internal node a count of the children in each branch, then you could pick a split point by generating a random number from 0 to 1+total children. If the answer is 1, split at that node, otherwise use the number to figure out which subtree to descend to, and repeat the process.\n" ]
[ 2, 0 ]
[]
[]
[ "genetic_programming", "list", "python" ]
stackoverflow_0001128924_genetic_programming_list_python.txt
Q: key=operator.attrgetter sort order? in my django view, if i import operator, and use the following code: multitags = sorted(multitags, key=operator.attrgetter('date_added')) is there an easy way to reverse the order – such that i get the dates in descending order (today at top; last week underneath)? A: This should work: sorted(multitags, key=operator.attrgetter('date_added'), reverse=True) This document on the python wiki is worth reading through at least once to get an idea of other things worth knowing: Sorting Mini HOWTO A: Sure, just add reverse=True to the keyword arguments with which you're calling sorted!
key=operator.attrgetter sort order?
in my django view, if i import operator, and use the following code: multitags = sorted(multitags, key=operator.attrgetter('date_added')) is there an easy way to reverse the order – such that i get the dates in descending order (today at top; last week underneath)?
[ "This should work:\nsorted(multitags, key=operator.attrgetter('date_added'), reverse=True)\n\nThis document on the python wiki is worth reading through at least once to get an idea of other things worth knowing:\n\nSorting Mini HOWTO\n\n", "Sure, just add reverse=True to the keyword arguments with which you're calling .sorted!\n" ]
[ 10, 3 ]
[]
[]
[ "django", "python", "python_itertools" ]
stackoverflow_0001129548_django_python_python_itertools.txt
Q: creating non-reloading dynamic webapps using Django As far as I know, for a new request coming from a webapp, you need to reload the page to process and respond to that request. For example, if you want to show a comment on a post, you need to reload the page, process the comment, and then show it. What I want, however, is I want to be able to add comments (something like facebook, where the comment gets added and shown without having to reload the whole page, for example) without having to reload the web-page. Is it possible to do with only Django and Python with no Javascript/AJAX knowledge? I have heard it's possible with AJAX (I don't know how), but I was wondering if it was possible to do with Django. Thanks, A: You want to do that without any client side code (javascript and ajax are just examples) and without reloading your page (or at least part of it)? If that is your question, then the answer unfortunately is you can't. You need to either have client side code or reload your page. Think about it, once the client gets the page it will not change unless The client requests the same page from the server and the server returns an updated one the page has some client side code (eg: javascript) that updates the page. A: You definitely want to use AJAX. Which means the client will need to run some javascript code. If you don't want to learn javascript you can always try something like pyjamas. You can check out an example of its HttpRequest here But I always feel that using straight javascript via a library (like jQuery) is easier to understand than trying to force one language into another one. A: To do it right, ajax would be the way to go BUT in a limited sense you can achieve the same thing by using an iframe; an iframe is like another page embedded inside the main page, so instead of refreshing the whole page you may just refresh the inner iframe page and that may give the same effect. More about iframe patterns you can read at http://ajaxpatterns.org/IFrame_Call A: Maybe a few iFrames and some Comet/long-polling? Have the comment submission in an iFrame (so the whole page doesn't reload), and then show the result in the long-polled iFrame... Having said that, it's a pretty bad design idea, and you probably don't want to be doing this. AJAX/JavaScript is pretty much the way to go for things like this. I have heard it's possible with AJAX...but I was wondering if it was possible to do with Django. There's no reason you can't use both - specifically, AJAX within a Django web application. Django provides your organization and framework needs (and a page that will respond to AJAX requests) and then use some JavaScript on the client side to make AJAX calls to your Django-backed page that will respond correctly. I suggest you go find a basic jQuery tutorial which should explain enough basic JavaScript to get this working.
creating non-reloading dynamic webapps using Django
As far as I know, for a new request coming from a webapp, you need to reload the page to process and respond to that request. For example, if you want to show a comment on a post, you need to reload the page, process the comment, and then show it. What I want, however, is I want to be able to add comments (something like facebook, where the comment gets added and shown without having to reload the whole page, for example) without having to reload the web-page. Is it possible to do with only Django and Python with no Javascript/AJAX knowledge? I have heard it's possible with AJAX (I don't know how), but I was wondering if it was possible to do with Django. Thanks,
[ "You want to do that with out any client side code (javascript and ajax are just examples) and with out reloading your page (or at least part of it)?\nIf that is your question, then the answer unfortunately is you can't. You need to either have client side code or reload your page.\nThink about it, once the client get's the page it will not change unless\n\nThe client requests the same page from the server and the server returns and updated one\nthe page has some client side code (eg: javascript) that updates the page.\n\n", "You definitely want to use AJAX. Which means the client will need to run some javascript code.\nIf you don't want to learn javascript you can always try something like pyjamas. You can check out an example of it's HttpRequest here\nBut I always feel that using straight javascript via a library (like jQuery) is easier to understand than trying to force one language into another one.\n", "To do it right, ajax would be the way to go BUT in a limited sense you can achieve the same thing by using a iframe, iframe is like another page embedded inside main page, so instead of refreshing whole page you may just refresh the inner iframe page and that may give the same effect.\nMore about iframe patterns you can read at\nhttp://ajaxpatterns.org/IFrame_Call\n", "Maybe a few iFrames and some Comet/long-polling? Have the comment submission in an iFrame (so the whole page doesn't reload), and then show the result in the long-polled iFrame...\nHaving said that, it's a pretty bad design idea, and you probably don't want to be doing this. AJAX/JavaScript is pretty much the way to go for things like this.\n\nI have heard it's possible with AJAX...but I was\n wondering if it was possible to do\n with Django.\n\nThere's no reason you can't use both - specifically, AJAX within a Django web application. Django provides your organization and framework needs (and a page that will respond to AJAX requests) and then use some JavaScript on the client side to make AJAX calls to your Django-backed page that will respond correctly.\nI suggest you go find a basic jQuery tutorial which should explain enough basic JavaScript to get this working.\n" ]
[ 8, 3, 1, 1 ]
[]
[]
[ "ajax", "django", "python" ]
stackoverflow_0001129210_ajax_django_python.txt
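A minimal sketch of the Django side of the AJAX approach the answers recommend. The view name, URL, and JSON payload shape are illustrative assumptions, not part of any answer above; the client would hit it with something like jQuery's $.post('/posts/1/comment/', {comment: text}, callback).

# Hypothetical Django view that accepts an XHR comment post and returns
# the saved comment as JSON, so the page can append it without reloading.
from django.http import HttpResponse
from django.utils import simplejson  # bundled with Django 1.0

def post_comment(request, post_id):
    text = request.POST.get('comment', '')
    # ... validate and save the comment against post_id here ...
    payload = simplejson.dumps({'post': post_id, 'comment': text})
    return HttpResponse(payload, mimetype='application/json')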
Q: Is it possible to encode (asdf) in python Textile? I'm using python Textile to store markup in the database. I would like to yield the following HTML snippet: (<em>asdf</em>) The obvious doesn't get encoded: (_asdf_) -> <p>(_asdf_)</p> The following works, but yields an ugly space: ( _asdf_) -> <p>( <em>asdf</em>) Am I missing something obvious or is this just not possible using python Textile? A: It's hard to say if this is a bug or not; in the form on the Textile website, (_foo_) works as you want, but in the downloadable PHP implementation, it doesn't. You should be able to do this: ([_asdf_]) -> <p>(<em>asdf</em>)</p> However, this doesn't work, which is a bug in py-textile. You either need to use this: (]_asdf_]) or patch textile.py by changing line 918 (in the Textile.span() method) to: (?:^|(?<=[\s>%(pnct)s])|([{[])) (the difference is in the final group; the brackets are incorrectly reversed.) You could also change the line to: (?:^|(?<=[\s>(%(pnct)s])|([{[])) (note the added parenthesis) to get the behavior you desire for (_foo_), but I'm not sure if that would break anything else. Follow up: the latest version of the PHP Textile class does indeed make a similar change to the one I suggested.
Is it possible to encode (asdf) in python Textile?
I'm using python Textile to store markup in the database. I would like to yield the following HTML snippet: (<em>asdf</em>) The obvious doesn't get encoded: (_asdf_) -> <p>(_asdf_)</p> The following works, but yields an ugly space: ( _asdf_) -> <p>( <em>asdf</em>) Am I missing something obvious or is this just not possible using python Textile?
[ "It's hard to say if this is a bug or not; in the form on the Textile website, (_foo_) works as you want, but in the downloadable PHP implementation, it doesn't.\nYou should be able to do this:\n([_asdf_]) -> <p>(<em>asdf</em>)</p>\n\nHowever, this doesn't work, which is a bug in py-textile. You either need to use this:\n(]_asdf_])\n\nor patch textile.py by changing line 918 (in the Textile.span() method) to:\n (?:^|(?<=[\\s>%(pnct)s])|([{[]))\n\n(the difference is in the final group; the brackets are incorrectly reversed.)\nYou could also change the line to:\n (?:^|(?<=[\\s>(%(pnct)s])|([{[]))\n\n(note the added parenthesis) to get the behavior you desire for (_foo_), but I'm not sure if that would break anything else.\n\nFollow up: the latest version of the PHP Textile class does indeed make a similar change to the one I suggested.\n" ]
[ 1 ]
[]
[]
[ "markup", "python", "textile" ]
stackoverflow_0001128951_markup_python_textile.txt
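To see the workaround from the answer in action, here is a small sketch using py-textile's standard textile() entry point; if the answer's diagnosis is right, the second call is the one that renders the emphasis.

import textile

print textile.textile('(_asdf_)')    # emphasis is not recognised here
print textile.textile('(]_asdf_])')  # the workaround described above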
Q: Django: reverse function fails with an exception I'm following the Django tutorial and got stuck with an error at part 4 of the tutorial. I got to the part where I'm writing the vote view, which uses reverse to redirect to another view. For some reason, reverse fails with the following exception: __import__() argument 1 must be string, not instancemethod Currently my project's urls.py looks like this: from django.conf.urls.defaults import * from django.contrib import admin admin.autodiscover() urlpatterns = patterns('', (r'^polls/', include('mysite.polls.urls')), (r'^admin/(.*)', include(admin.site.root)), ) and the app urls.py is: from django.conf.urls.defaults import * urlpatterns = patterns('mysite.polls.views', (r'^$', 'index'), (r'^(?P<poll_id>\d+)/$', 'details'), (r'^(?P<poll_id>\d+)/results/$', 'results'), (r'^(?P<poll_id>\d+)/vote/$', 'vote'), ) And the vote view is: (I've simplified it to have only the row with the error) def vote(request, poll_id): return HttpResponseRedirect(reverse('mysite.polls.views.results', args=(1,))) When I remove the admin urls include from the project's urls.py, i.e. making it into: urlpatterns = patterns('', (r'^polls/', include('mysite.polls.urls')), #(r'^admin/(.*)', include(admin.site.root)), ) it works. I've tried so many things and can't understand what I'm doing wrong. A: The way you include the admin URLs has changed a few times over the last couple of versions. It's likely that you are using the wrong instructions for the version of Django you have installed. If you are using the current trunk - i.e. not an official release - then the documentation at http://docs.djangoproject.com/en/dev/ is correct. However, if you are using 1.0.2 then you should follow the link at the top of the page to http://docs.djangoproject.com/en/1.0/.
Django: reverse function fails with an exception
I'm following the Django tutorial and got stuck with an error at part 4 of the tutorial. I got to the part where I'm writing the vote view, which uses reverse to redirect to another view. For some reason, reverse fails with the following exception: __import__() argument 1 must be string, not instancemethod Currently my project's urls.py looks like this: from django.conf.urls.defaults import * from django.contrib import admin admin.autodiscover() urlpatterns = patterns('', (r'^polls/', include('mysite.polls.urls')), (r'^admin/(.*)', include(admin.site.root)), ) and the app urls.py is: from django.conf.urls.defaults import * urlpatterns = patterns('mysite.polls.views', (r'^$', 'index'), (r'^(?P<poll_id>\d+)/$', 'details'), (r'^(?P<poll_id>\d+)/results/$', 'results'), (r'^(?P<poll_id>\d+)/vote/$', 'vote'), ) And the vote view is: (I've simplified it to have only the row with the error) def vote(request, poll_id): return HttpResponseRedirect(reverse('mysite.polls.views.results', args=(1,))) When I remove the admin urls include from the project's urls.py, i.e. making it into: urlpatterns = patterns('', (r'^polls/', include('mysite.polls.urls')), #(r'^admin/(.*)', include(admin.site.root)), ) it works. I've tried so many things and can't understand what I'm doing wrong.
[ "The way you include the admin URLs has changed a few times over the last couple of versions. It's likely that you are using the wrong instructions for the version of Django you have installed.\nIf you are using the current trunk - ie not an official release - then the documentation at http://docs.djangoproject.com/en/dev/ is correct.\nHowever, if you are using 1.0.2 then you should follow the link at the top of the page to http://docs.djangoproject.com/en/1.0/.\n" ]
[ 6 ]
[]
[]
[ "admin", "django", "python", "reverse" ]
stackoverflow_0001129769_admin_django_python_reverse.txt
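As a concrete follow-up to the answer: on Django 1.0 the admin is wired up by passing admin.site.root directly, without wrapping it in include() (which is what makes __import__ choke on the instancemethod). A sketch of the corrected project urls.py, keeping the rest of the file as in the question:

from django.conf.urls.defaults import *
from django.contrib import admin

admin.autodiscover()

urlpatterns = patterns('',
    (r'^polls/', include('mysite.polls.urls')),
    (r'^admin/(.*)', admin.site.root),  # no include() on Django 1.0
)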
Q: Is there an option to configure a priority in memcached? (Similar to Expiry) A hashtable in memcached will be discarded either when it's Expired or when there's not enough memory and it's chosen to die based on the Least Recently Used algorithm. Can we put a Priority to hint or influence the LRU algorithm? I want to use memcached to store Web Sessions so I can use the cheap round-robin. I need to give Sessions Top Priority and nothing can kill them (not even if it's the Least Recently Used) except their own Max_Expiry. A: Not that I know of. memcached is designed to be very fast and very straightforward; it has no fancy weights or priorities, which keeps it simple. You should not rely on memcache for persistent session storage. You should keep your sessions in the DB, but you can cache them in memcache. This way you can enjoy both worlds.
Is there an option to configure a priority in memcached? (Similar to Expiry)
A hashtable in memcached will be discarded either when it's Expired or when there's not enough memory and it's chosen to die based on the Least Recently Used algorithm. Can we put a Priority to hint or influence the LRU algorithm? I want to use memcached to store Web Sessions so I can use the cheap round-robin. I need to give Sessions Top Priority and nothing can kill them (not even if it's the Least Recently Used) except their own Max_Expiry.
[ "Not that I know of.\nmemcached is designed to be very fast and very straightforward, no fancy weights and priorities keep it simple.\nYou should not rely on memcache for persistent session storage. You should keep your sessions in the DB, but you can cache them in memcache. This way you can enjoy both worlds.\n" ]
[ 1 ]
[]
[]
[ "caching", "database", "memcached", "python", "session" ]
stackoverflow_0001000540_caching_database_memcached_python_session.txt
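A sketch of the "both worlds" pattern the answer describes, using the python-memcached client. The db.load_session helper is a hypothetical stand-in for whatever database layer holds the authoritative copy:

import memcache

mc = memcache.Client(['127.0.0.1:11211'])

def get_session(session_id, db):
    key = 'session:' + session_id
    data = mc.get(key)                      # fast path: memcached
    if data is None:
        data = db.load_session(session_id)  # authoritative DB copy
        if data is not None:
            mc.set(key, data, time=3600)    # repopulate the cache
    return data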
Q: virtualenv with all Python libraries I need to get Python code, which relies on Python 2.6, running on a machine with only Python 2.3 (I have no root access). This is a typical scenario for virtualenv. The only problem is that I cannot convince it to copy all libraries to the new environment as well. virtualenv --no-site-packages my_py26 does not do what I need. The library files are still only links to the /usr/lib/python2.6 directory. Now I'm wondering whether virtualenv is the right solution for this scenario at all. From what I understand it is only targeted to run on machines with exactly the same Python version. Tools like cx_Freeze and the like do not work for me, as I start the Python file after some environment variable tweaking. Is there maybe a hidden virtualenv option that copies all the Python library files into the new environment? Or some other tool that can help here? A: No, I think you completely misunderstood what virtualenv does. Virtualenv is to create a new environment on the same machine that is isolated from the main environment. In such an environment you can install packages that do not get installed in the main environment, and with --no-site-packages you can also isolate yourself from the main environment's installed modules. If you need to run a program that requires Python 2.6 on a machine that does not have 2.6, you need to install Python 2.6 on that machine. A: I can't help you with your virtualenv problem as I have never used it. But I will just point something out for future use. You can install software from sources into your home folder and run them without root access. For example, to install Python 2.6: ~/src/Python-2.6.2 $ ./configure --prefix=$HOME/local ~/src/Python-2.6.2 $ make ... ~/src/Python-2.6.2 $ make install ... export PATH=$HOME/local/bin:$PATH export LD_LIBRARY_PATH=$HOME/local/lib:$LD_LIBRARY_PATH ~/src/Python-2.6.2 $ which python /home/name/local/bin/python This is what I have used at Uni to install software where I don't have root access. A: You haven't clearly explained why cx_Freeze and the like wouldn't work for you. The normal approach to distributing Python applications to machines which have an older version of Python, or even no Python at all, is a tool like PyInstaller (in the same class of tools as cx_Freeze). PyInstaller makes copies of all your dependencies and allows you to create a single executable file which contains all your Python dependencies. You mention tweaking environment variables as a reason why you can't use such tools; if you expand on exactly why this is, you may be able to get a more helpful answer.
virtualenv with all Python libraries
I need to get Python code, which relies on Python 2.6, running on a machine with only Python 2.3 (I have no root access). This is a typical scenario for virtualenv. The only problem is that I cannot convince it to copy all libraries to the new environment as well. virtualenv --no-site-packages my_py26 does not do what I need. The library files are still only links to the /usr/lib/python2.6 directory. Now I'm wondering whether virtualenv is the right solution for this scenario at all. From what I understand it is only targeted to run on machines with exactly the same Python version. Tools like cx_Freeze and the like do not work for me, as I start the Python file after some environment variable tweaking. Is there maybe a hidden virtualenv option that copies all the Python library files into the new environment? Or some other tool that can help here?
[ "No, I think you completely misunderstood what virtualenv does. Virtualenv is to create a new environment on the same machine that is isolated from the main environment. In such an environment you can install packages that do not get installed in the main environment, and with --no-site-packages you can also isolate you from the main environments installed modules.\nIf you need to run a program that requires Python 2.6 on a machine that does not have 2.6, you need to install Python 2.6 on that machine. \n", "I can't help you with your virtualenv problem as I have never used it. But I will just point something out for future use.\nYou can install software from sources into your home folder and run them without root access. for example to install python 2.6:\n~/src/Python-2.6.2 $ ./configure --prefix=$HOME/local\n~/src/Python-2.6.2 $ make\n ...\n~/src/Python-2.6.2 $ make install\n ...\nexport PATH=$HOME/local/bin:$PATH\nexport LD_LIBRARY_PATH=$HOME/local/lib:$LD_LIBRARY_PATH\n\n~/src/Python-2.6.2 $ which python\n/home/name/local/bin/python\n\nThis is what I have used at Uni to install software where I don't have root access.\n", "You haven't clearly explained why cx_Freeze and the like wouldn't work for you. The normal approach to distributing Python applications to machines which have an older version of Python, or even no Python at all, is a tool like PyInstaller (in the same class of tools as cx_Freeze). PyInstaller makes copies of all your dependencies and allows you to create a single executable file which contains all your Python dependencies.\nYou mention tweaking environment variables as a reason why you can't use such tools; if you expand on exactly why this is, you may be able to get a more helpful answer.\n" ]
[ 4, 4, 0 ]
[]
[]
[ "linux", "python", "virtualenv" ]
stackoverflow_0001130402_linux_python_virtualenv.txt
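On the last answer's point about the environment-variable tweaks: one common pattern is to do the tweaking inside the program itself and re-exec once, so a freezing tool can still produce a single binary. A hypothetical sketch; the MYAPP_READY guard name and the library path are invented for illustration:

import os
import sys

# Re-exec once with the tweaked environment, then run normally.
if os.environ.get('MYAPP_READY') != '1':
    os.environ['MYAPP_READY'] = '1'
    os.environ['LD_LIBRARY_PATH'] = os.path.expanduser('~/local/lib')
    os.execv(sys.executable, [sys.executable] + sys.argv)

# ... the rest of the program runs with the environment already set ...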
Q: Get POST data from a complex Django form? I have a Django form that uses a different number of fields based on the year/month. So I create the fields in the form like this: for entry in entry_list: self.fields[entry] = forms.DecimalField([stuffhere]) but now I don't know how to get the submitted data from the form. Normally I would do something like: form.cleaned_data["fieldname"] but I don't know what the names of the fields are. The debug screen shows my POST data as simply "Entry Object" with a value of "u''". Calling POST.lists() doesn't show anything. I am sure I am missing something obvious, but I've been stuck on this for a few days too many. Is there a better way to do this? Is all of the data in the request object, but I just don't know how to use it? Here is the code for the model/form/view: http://pastebin.com/f28d92c0e Much Thanks! EDIT: I've tried out both of the suggestions below. Using formsets was definitely easier and nicer. A: I think you might be better off using formsets here. They're designed for exactly what you seem to be trying to do - dealing with a variable number of items within a form. A: In this line: self.fields[entry] = forms.DecimalField(max_digits=4, decimal_places=1, label=nice_label) entry is a model instance. But fields are keyed by field names (strings). Try something like: self.fields[entry.entry_name] = forms.DecimalField(...) (substitute the appropriate attribute for "entry_name").
Get POST data from a complex Django form?
I have a Django form that uses a different number of fields based on the year/month. So I create the fields in the form like this: for entry in entry_list: self.fields[entry] = forms.DecimalField([stuffhere]) but now I don't know how to get the submitted data from the form. Normally I would do something like: form.cleaned_data["fieldname"] but I don't know what the names of the fields are. The debug screen shows my POST data as simply "Entry Object" with a value of "u''". Calling POST.lists() doesn't show anything. I am sure I am missing something obvious, but I've been stuck on this for a few days too many. Is there a better way to do this? Is all of the data in the request object, but I just don't know how to use it? Here is the code for the model/form/view: http://pastebin.com/f28d92c0e Much Thanks! EDIT: I've tried out both of the suggestions below. Using formsets was definitely easier and nicer.
[ "I think you might be better off using formsets here. They're designed for exactly what you seem to be trying to do - dealing with a variable number of items within a form.\n", "In this line:\n\nself.fields[entry] = forms.DecimalField(max_digits=4, decimal_places=1, label=nice_label)\n\nentry is a model instance. But fields are keyed by field names (strings). Try something like:\n\nself.fields[entry.entry_name] = forms.Decimal(...)\n\n(substitute appropriate for \"entry_name\").\n" ]
[ 5, 0 ]
[]
[]
[ "django", "django_forms", "python" ]
stackoverflow_0001130575_django_django_forms_python.txt
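To flesh out the second answer, here is a sketch of keying the dynamic fields by strings and reading them back from cleaned_data. The 'entry_%d' naming scheme and the unicode(entry) label are illustrative choices, not from the original post:

from django import forms

class EntryForm(forms.Form):
    def __init__(self, entry_list, *args, **kwargs):
        super(EntryForm, self).__init__(*args, **kwargs)
        for entry in entry_list:
            name = 'entry_%d' % entry.id
            self.fields[name] = forms.DecimalField(
                max_digits=4, decimal_places=1, label=unicode(entry))

# After form.is_valid(), recover each value and its entry id:
# for name, value in form.cleaned_data.items():
#     entry_id = int(name.split('_')[1])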
Q: which exception catches xxxx error in python Given a traceback error log, I don't always know how to catch a particular exception. My question is, in general, how do I determine which "except" clause to write in order to handle a certain exception. Example 1: File "c:\programs\python\lib\httplib.py", line 683, in connect raise socket.error, msg error: (10065, 'No route to host') Example 2: return codecs.charmap_encode(input,errors,encoding_table) UnicodeDecodeError: 'ascii' codec can't decode byte 0xd7 in position(...) Catching the 2nd example is obvious: try: ... except UnicodeDecodeError: ... How do I specifically catch the first error? A: Look at the stack trace. The code that raises the exception is: raise socket.error, msg. So the answer to your question is: You have to catch socket.error. import socket ... try: ... except socket.error: ... A: The first one is also obvious, just like the second one, e.g. >>> try: ... socket.socket().connect(("0.0.0.0", 0)) ... except socket.error: ... print "socket error!!!" ... socket error!!! >>> A: When you have an exception that is unique to a module you simply have to use the same class used to raise it. Here you have the advantage of already knowing where the exception class is, because you're raising it yourself. try: raise socket.error, msg except socket.error, (value, message): # O no! But for other such exceptions you either have to wait until one gets thrown to find where the class is, or you have to read through the documentation to find out where it is coming from.
which exception catches xxxx error in python
Given a traceback error log, I don't always know how to catch a particular exception. My question is, in general, how do I determine which "except" clause to write in order to handle a certain exception. Example 1: File "c:\programs\python\lib\httplib.py", line 683, in connect raise socket.error, msg error: (10065, 'No route to host') Example 2: return codecs.charmap_encode(input,errors,encoding_table) UnicodeDecodeError: 'ascii' codec can't decode byte 0xd7 in position(...) Catching the 2nd example is obvious: try: ... except UnicodeDecodeError: ... How do I specifically catch the first error?
[ "Look at the stack trace. The code that raises the exception is: raise socket.error, msg.\nSo the answer to your question is: You have to catch socket.error.\nimport socket\n...\ntry:\n ...\nexcept socket.error:\n ...\n\n", "First one is also obvious, as second one e.g.\n>>> try:\n... socket.socket().connect((\"0.0.0.0\", 0))\n... except socket.error:\n... print \"socket error!!!\"\n... \nsocket error!!!\n>>> \n\n", "When you have an exception that unique to a module you simply have to use the same class used to raise it. Here you have the advantage because you already know where the exception class is because you're raising it.\ntry:\n raise socket.error, msg\nexcept socket.error, (value, message):\n # O no!\n\nBut for other such exception you either have to wait until it gets thrown to find where the class is, or you have to read through the documentation to find out where it is coming from.\n" ]
[ 4, 3, 0 ]
[]
[]
[ "exception", "python" ]
stackoverflow_0001130764_exception_python.txt
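A small, general trick implied by these answers: while investigating, catch broadly once and print the exception's fully qualified class name, so you know exactly what to put in the except clause afterwards. A sketch in Python 2 syntax to match the question:

import socket

try:
    socket.socket().connect(('0.0.0.0', 0))
except Exception, e:  # broad catch only while investigating
    # e.g. prints 'socket.error', telling you what to catch for real
    print '%s.%s' % (e.__class__.__module__, e.__class__.__name__)
    raise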
Q: most efficient data structure for a read-only list of strings (about 100,000) with fast prefix search I'm writing an application that needs to read a list of strings from a file, save them in a data structure, and then look up those strings by prefixes. The list of strings is simply a list of words in a given language. For example, if the search function gets "stup" as a parameter, it should return ["stupid", "stupidity", "stupor"...]. It should do so in O(log(n)*m) time, where n is the size of the data structure and m is the number of results and should be as fast as possible. Memory consumption is not a big issue right now. I'm writing this in python, so it would be great if you could point me to a suitable data structure (preferably) implemented in c with python wrappers. A: You want a trie. http://en.wikipedia.org/wiki/Trie I've used them in Scrabble and Boggle programs. They're perfect for the use case you described (fast prefix lookup). Here's some sample code for building up a trie in Python. This is from a Boggle program I whipped together a few months ago. The rest is left as an exercise to the reader. But for prefix checking you basically need a method that starts at the root node (the variable words), follows the letters of the prefix to successive child nodes, and returns True if such a path is found and False otherwise. class Node(object): def __init__(self, letter='', final=False): self.letter = letter self.final = final self.children = {} def __contains__(self, letter): return letter in self.children def get(self, letter): return self.children[letter] def add(self, letters, n=-1, index=0): if n < 0: n = len(letters) if index >= n: return letter = letters[index] if letter in self.children: child = self.children[letter] else: child = Node(letter, index==n-1) self.children[letter] = child child.add(letters, n, index+1) def load_dictionary(path): result = Node() for line in open(path, 'r'): word = line.strip().lower() result.add(word) return result words = load_dictionary('dictionary.txt') A: Some Python implementations of tries: http://jtauber.com/2005/02/trie.py http://www.koders.com/python/fid7B6BC1651A9E8BBA547552FE3F039479A4DECC45.aspx http://filoxus.blogspot.com/2007/11/trie-in-python.html A: A trie (or prefix tree) sounds right up your alley. It can do the search on a prefix string of length m in O(m) I believe.
most efficient data structure for a read-only list of strings (about 100,000) with fast prefix search
I'm writing an application that needs to read a list of strings from a file, save them in a data structure, and then look up those strings by prefixes. The list of strings is simply a list of words in a given language. For example, if the search function gets "stup" as a parameter, it should return ["stupid", "stupidity", "stupor"...]. It should do so in O(log(n)*m) time, where n is the size of the data structure and m is the number of results and should be as fast as possible. Memory consumption is not a big issue right now. I'm writing this in python, so it would be great if you could point me to a suitable data structure (preferably) implemented in c with python wrappers.
[ "You want a trie.\nhttp://en.wikipedia.org/wiki/Trie\nI've used them in Scrabble and Boggle programs. They're perfect for the use case you described (fast prefix lookup).\nHere's some sample code for building up a trie in Python. This is from a Boggle program I whipped together a few months ago. The rest is left as an exercise to the reader. But for prefix checking you basically need a method that starts at the root node (the variable words), follows the letters of the prefix to successive child nodes, and returns True if such a path is found and False otherwise.\nclass Node(object):\n def __init__(self, letter='', final=False):\n self.letter = letter\n self.final = final\n self.children = {}\n def __contains__(self, letter):\n return letter in self.children\n def get(self, letter):\n return self.children[letter]\n def add(self, letters, n=-1, index=0):\n if n < 0: n = len(letters)\n if index >= n: return\n letter = letters[index]\n if letter in self.children:\n child = self.children[letter]\n else:\n child = Node(letter, index==n-1)\n self.children[letter] = child\n child.add(letters, n, index+1)\n\ndef load_dictionary(path):\n result = Node()\n for line in open(path, 'r'):\n word = line.strip().lower()\n result.add(word)\n return result\n\nwords = load_dictionary('dictionary.txt')\n\n", "Some Python implementations of tries:\n\nhttp://jtauber.com/2005/02/trie.py\nhttp://www.koders.com/python/fid7B6BC1651A9E8BBA547552FE3F039479A4DECC45.aspx\nhttp://filoxus.blogspot.com/2007/11/trie-in-python.html\n\n", "A trie (or prefix tree) sounds right up your alley. It can do the search on a prefix string of length m in O(m) I believe.\n" ]
[ 15, 4, 2 ]
[ "string array.\nthen binary search through it to search the first match\nthen step one by one through it for all subsequent matches\n(i originally had linked list here too... but of course this doesn't have random access so this was 'bs' (which probably explains why I was downvoted). My binary search algorithm still is the fastest way to go though\n" ]
[ -1 ]
[ "data_structures", "dictionary", "lookup", "python" ]
stackoverflow_0001130992_data_structures_dictionary_lookup_python.txt
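The trie answer leaves prefix search "as an exercise"; here is one possible completion, written against the Node class above. The function name and the recursive collector are my own choices:

def find_prefix(root, prefix):
    # Walk down the trie following the prefix letters.
    node = root
    for letter in prefix:
        if letter not in node:
            return []
        node = node.get(letter)
    # Collect every completed word below the node we landed on.
    results = []
    def collect(n, sofar):
        if n.final:
            results.append(sofar)
        for letter, child in n.children.items():
            collect(child, sofar + letter)
    collect(node, prefix)
    return results

# find_prefix(words, 'stup') -> ['stupid', 'stupidity', 'stupor', ...]
# (in no particular order, since children is a plain dict)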
Q: Following a javascript postback using COM + IE automation to save text file I want to automate the archiving of the data on this page http://energywatch.natgrid.co.uk/EDP-PublicUI/Public/InstantaneousFlowsIntoNTS.aspx, and upload into a database. I have been using python and win32com (behind a corporate proxy, so no direct net access, hence I am using IE to do so) on other pages to do this. My question is: is there any way to extract and save the CSV data that is returned when clicking the "Click here to download data" link at the bottom? This link is a javascript postback, and would be much easier than reformatting the page itself into CSV. Of course, I'm not necessarily committed to using Python if a simpler alternative can be suggested? Thanks A: Here's a better way, using the mechanize library. import mechanize b = mechanize.Browser() b.set_proxies({'http': 'yourproxy.corporation.com:3128' }) b.addheaders = [('User-agent', 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)')] b.open("http://energywatch.natgrid.co.uk/EDP-PublicUI/Public/InstantaneousFlowsIntoNTS.aspx") b.select_form(name="form1") b.form.find_control(name='__EVENTTARGET').readonly = False b.form['__EVENTTARGET'] = 'a1' print b.submit().read() Note how you can specify that mechanize should use a proxy server (also possible using plain urllib). Also note how ASP.NETs javascript postback is simulated. Edit: If your proxy server is using NTLM authentication, that could be the problem. AFAIK urllib2 does not handle NTLM authentication. You could try the NTLM Authorization Proxy Server. From the readme file: WHAT IS 'NTLM Authorization Proxy Server'? 'NTLM Authorization Proxy Server' is a proxy-like software, that will authorize you at MS proxy server and at web servers (ISS especially) using MS proprietary NTLM authorization method and it can change some values in your client's request header so that those requests will look like ones made by MS IE. It is written in Python language. See www.python.org.
Following a javascript postback using COM + IE automation to save text file
I want to automate the archiving of the data on this page http://energywatch.natgrid.co.uk/EDP-PublicUI/Public/InstantaneousFlowsIntoNTS.aspx, and upload into a database. I have been using python and win32com (behind a corporate proxy, so no direct net access, hence I am using IE to do so) on other pages to do this. My question is: is there any way to extract and save the CSV data that is returned when clicking the "Click here to download data" link at the bottom? This link is a javascript postback, and would be much easier than reformatting the page itself into CSV. Of course, I'm not necessarily committed to using Python if a simpler alternative can be suggested? Thanks
[ "Here's a better way, using the mechanize library.\n\nimport mechanize\n\nb = mechanize.Browser()\nb.set_proxies({'http': 'yourproxy.corporation.com:3128' })\n\nb.addheaders = [('User-agent', 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)')]\nb.open(\"http://energywatch.natgrid.co.uk/EDP-PublicUI/Public/InstantaneousFlowsIntoNTS.aspx\")\n\nb.select_form(name=\"form1\")\nb.form.find_control(name='__EVENTTARGET').readonly = False\nb.form['__EVENTTARGET'] = 'a1'\n\nprint b.submit().read()\n\nNote how you can specify that mechanize should use a proxy server (also possible using plain urllib). Also note how ASP.NETs javascript postback is simulated.\nEdit:\nIf your proxy server is using NTLM authentication, that could be the problem. AFAIK urllib2 does not handle NTLM authentication. You could try the NTLM Authorization Proxy Server. From the readme file:\n\nWHAT IS 'NTLM Authorization Proxy Server'?\n'NTLM Authorization Proxy Server' is a proxy-like software, that will authorize you\nat MS proxy server and at web servers (ISS especially) using MS proprietary NTLM\nauthorization method and it can change some values in your client's request header\nso that those requests will look like ones made by MS IE. It is written in Python\nlanguage. See www.python.org.\n\n" ]
[ 1 ]
[]
[]
[ "com", "javascript", "python", "web", "webpage" ]
stackoverflow_0001130857_com_javascript_python_web_webpage.txt
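The answer notes the proxy setup is "also possible using plain urllib"; for completeness, a sketch of the same proxied, user-agent-spoofed fetch with urllib2, using the same placeholder proxy host:

import urllib2

proxy = urllib2.ProxyHandler({'http': 'http://yourproxy.corporation.com:3128'})
opener = urllib2.build_opener(proxy)
opener.addheaders = [('User-agent', 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)')]

url = 'http://energywatch.natgrid.co.uk/EDP-PublicUI/Public/InstantaneousFlowsIntoNTS.aspx'
print opener.open(url).read()

Note this only fetches the page itself; simulating the __EVENTTARGET postback still needs the form handling that mechanize provides above.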
Q: Getting "Comment post not allowed (400)" when using Django Comments I'm going through a Django book and I seem to be stuck. The code base used in the book is .96 and I'm using 1.0 for my Django install. The portion I'm stuck at is related to Django comments (django.contrib.comments). When I submit my comments I get "Comment post not allowed (400) Why: Missing content_type or object_pk field". I've found the Django documentation to be a bit lacking in this area and I'm hoping to get some help. The comment box is displayed just fine, it's when I submit the comment that I get the above error (or security warning as it truly appears). My call to the comment form: {% render_comment_form for bookmarks.sharedbookmark shared_bookmark.id %} My form.html code: {% if user.is_authenticated %} <form action="/comments/post/" method="post"> <p><label>Post a comment:</label><br /> <textarea name="comment" rows="10" cols="60"></textarea></p> <input type="hidden" name="options" value="{{ options }}" /> <input type="hidden" name="target" value="{{ target }}" /> <input type="hidden" name="gonzo" value="{{ hash }}" /> <input type="submit" name="post" value="submit comment" /> </form> {% else %} <p>Please <a href="/login/">log in</a> to post comments.</p> {% endif %} Any help would be much appreciated. My view as requested: def bookmark_page(request, bookmark_id): shared_bookmark = get_object_or_404( SharedBookmark, id=bookmark_id ) variables = RequestContext(request, { 'shared_bookmark': shared_bookmark }) return render_to_response('bookmark_page.html', variables) A: Django underwent a huge amount of change between 0.96 and 1.0, so it's not surprising you're having problems. For your specific issue, see here. However I would suggest you find a more up-to-date book. It's not just the comments, but whole areas of Django are completely different from 0.96 - in particular the admin. If it's the official 'Django book', you can find the draft of version 2 (which targets Django 1.0) here. A: It's not perfect, but I've worked around this. I used the form.html included with Django itself and that got me past the "Comment post not allowed (400)" message and posted my comment successfully. It includes a few other fields but since I didn't define my own form in forms.py that's to be expected I suppose. At any rate, I seem to have worked around it. Thanks for looking at my question.
Getting "Comment post not allowed (400)" when using Django Comments
I'm going through a Django book and I seem to be stuck. The code base used in the book is .96 and I'm using 1.0 for my Django install. The portion I'm stuck at is related to Django comments (django.contrib.comments). When I submit my comments I get "Comment post not allowed (400) Why: Missing content_type or object_pk field". I've found the Django documentation to be a bit lacking in this area and I'm hoping to get some help. The comment box is displayed just fine, it's when I submit the comment that I get the above error (or security warning as it truly appears). My call to the comment form: {% render_comment_form for bookmarks.sharedbookmark shared_bookmark.id %} My form.html code: {% if user.is_authenticated %} <form action="/comments/post/" method="post"> <p><label>Post a comment:</label><br /> <textarea name="comment" rows="10" cols="60"></textarea></p> <input type="hidden" name="options" value="{{ options }}" /> <input type="hidden" name="target" value="{{ target }}" /> <input type="hidden" name="gonzo" value="{{ hash }}" /> <input type="submit" name="post" value="submit comment" /> </form> {% else %} <p>Please <a href="/login/">log in</a> to post comments.</p> {% endif %} Any help would be much appreciated. My view as requested: def bookmark_page(request, bookmark_id): shared_bookmark = get_object_or_404( SharedBookmark, id=bookmark_id ) variables = RequestContext(request, { 'shared_bookmark': shared_bookmark }) return render_to_response('bookmark_page.html', variables)
[ "Django underwent a huge amount of change between 0.96 and 1.0, so it's not surprising you're having problems.\nFor your specific issue, see here.\nHowever I would suggest you find a more up-to-date book. It's not just the comments, but whole areas of Django are completely different from 0.96 - in particular the admin. If it's the official 'Django book', you can find the draft of version 2 (which targets Django 1.0) here.\n", "It's not perfect, but I've worked around this. I used the form.html included with Django itself and that got me past the \"Comment post not allowed (400)\" message and posted my comment successfully. It includes a few other fields but since I didn't define my own form in forms.py that's to be expected I suppose. At any rate, I seem to have worked around it. Thanks for looking at my question.\n" ]
[ 0, 0 ]
[]
[]
[ "comments", "django", "python" ]
stackoverflow_0001120139_comments_django_python.txt
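For reference, a quick way to see why the hand-written form triggers the 400: Django 1.0's stock comment form carries security fields the custom template never posts. A hypothetical inspection sketch; some_shared_bookmark stands in for any model instance:

from django.contrib.comments.forms import CommentForm

form = CommentForm(some_shared_bookmark)  # the comment's target object
print form.fields.keys()
# expect names like 'content_type', 'object_pk', 'timestamp',
# 'security_hash', 'comment', ... - the missing fields from the error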
Q: Eliminating multiple inheritance I have the following problem and I'm wondering if there's a nice way to model these objects without using multiple inheritance. If it makes any difference, I am using Python. Students need contact information plus student information. Adults need contact information plus billing information. Students can be adult students, in which case I need contact/student/billing info, or they can be children, in which case I need contact/student/parent info. Just to be clear on how the system will be used, I need to be able to ask for a list of all adults (and I will get adult students plus parents), or a list of all students (and I will get child students plus adult students). Also, all of these objects need to have a common base class. A: What you have is an example of Role -- it's a common trap to model Role by inheritance, but Roles can change, and changing an object's inheritance structure (even in languages where it's possible, like Python) is not recommended. Children grow and become adults, and some adults will also be parents of children students as well as adult students themselves -- they might then drop either role but need to keep the other (their child changes schools but they don't, or vice versa). Just have a class Person with mandatory fields and optional ones, and the latter, representing Roles, can change. "Asking for a list" (quite independently of inheritance or otherwise) can be done either by building the list on the fly (walking through all objects to check for each whether it meets requirements) or maintaining lists corresponding to the possible requirements (or a mix of the two strategies for both frequent and ad-hoc queries). A database of some sort is likely to help here (and most DBs work much better without inheritance in the way;-). A: As I'm sure someone else will comment soon (if they haven't already), one good OO principle is "Favor composition over inheritance". From your description, it sounds suspiciously like you're breaking the Single Responsibility Principle, and should be breaking down the functionality into separate objects. It also occurs to me that Python supports duck typing, which raises the question "Why is it so important that all the classes have a common base class?" A: Very simple solution: Use composition rather than inheritance. Rather than having Student inherit from Contact and Billing, make Contact a field/attribute of Person and inherit from that. Make Billing a field of Student. Make Parent a self-reference field of Person. A: It doesn't sound like you really need multiple inheritance. In fact, you don't ever really need multiple inheritance. It's just a question of whether multiple inheritance simplifies things (which I couldn't see as being the case here). I would create a Person class that has all the code that the adult and student would share. Then, you can have an Adult class that has all of the things that only the adult needs and a Child class that has the code only the child needs. A: This sounds like something that could be done quite nicely and flexibly with a component architecture, like zope.components. Components are in a way a sort of super-flexible composition pattern. In this case I'd probably end up doing something when you load the data to also set marker interfaces on it depending on some information, like if age >= 18 you set the IAdult interface, etc. You can then get the adult information by doing adultschema = IAdultSchema(person) or something like that. (Edit: Actually I'd probably use queryAdapters(person, ISchema) to get all schemas in one go. :) A component architecture may be overkill, but once you get used to thinking like that, many problems get trivial. :) Check out Brandon's excellent PyCon talk about it: http://www.youtube.com/watch?v=UF77e2TeeQo And my intro blog post: http://regebro.wordpress.com/2007/11/16/a-python-component-architecture/ A: I think your requirements are over-simplified, since in a real situation, you might have students with their own accounts to handle billing even if they are minors who need parent contact information. Also, you might have parental contact information be different from billing information in an actual situation. You might also have adult students with someone else to bill. BUT, that aside - looking at your requirements, here is one way: classes: Person, BillingInfo, StudentInfo. All people are instances of class Person... class Person: # Will have contact fields all people have - or you could split these off into an # object. parent # Will be set to None for adults or else point to their parent's # Person object. billing_info # Set to None for non-adults, else to their BillingInfo object. student_info # Set to None for non-student parents, else to their StudentInfo # object. Checking the fields will allow you to create lists as you desire. A: One solution is to create a base Info class/interface that the classes ContactInfo, StudentInfo, and BillingInfo inherit from. Have some sort of Person object that contains a list of Info objects, and then you can populate the list of Info objects with ContactInfo, StudentInfo, etc. A: In pseudocode, you could do something like this: Class Student Inherits WhateverBase Private m_StudentType as EnumStudentTypes 'an enum containing: Adult, Child Private m_Billing as Billing Private m_Contact as Contact Private m_Parent as Parent Public Sub Constructor(studentType, billing, contact, parent) ...logic to make sure we have the right combination depending on studentType. ...throw an exception if we try to assign a parent to an adult, etc. ...maybe you could have separate constructors, one for each studentType. End Sub Public Property StudentType as EnumStudentTypes Get Return m_StudentType End Get End Sub Public Property Parent Get ...code to make sure we're using a studentType that has a parent, ...and throws an exception if not. Otherwise it returns m_Parent End Get End Sub [more properties] End Class Student Then you could create a class called StudentManager: Public Class StudentManager Public Function GetAdults(studentCollection(Of Students)) as StudentCollection(Of Students) Dim ResultCollection(Of Students) ...Loop through studentCollection, adding all students where Student.StudentType=Adult Return ResultCollection End Function [Other Functions] End Class Public Enum StudentType Adult=0 Child=1 End Enum
Eliminating multiple inheritance
I have the following problem and I'm wondering if there's a nice way to model these objects without using multiple inheritance. If it makes any difference, I am using Python. Students need contact information plus student information. Adults need contact information plus billing information. Students can be adult students, in which case I need contact/student/billing info, or they can be children, in which case I need contact/student/parent info. Just to be clear on how the system will be used, I need to be able to ask for a list of all adults (and I will get adult students plus parents), or a list of all students (and I will get child students plus adult students). Also, all of these objects need to have a common base class.
[ "What you have is an example of Role -- it's a common trap to model Role by inheritance, but Roles can change, and changing an object's inheritance structure (even in languages where it's possible, like Python) is not recommended. Children grow and become adults, and some adults will also be parents of children students as well as adult students themselves -- they might then drop either role but need to keep the other (their child changes schools but they don't, or viceversa).\nJust have a class Person with mandatory fields and optional ones, and the latter, representing Roles, can change. \"Asking for a list\" (quite independently of inheritance or otherwise) can be done either by building the list on the fly (walking through all objects to check for each whether it meets requirements) or maintaining lists corresponding to the possible requirements (or a mix of the two strategies for both frequent and ad-hoc queries). A database of some sort is likely to help here (and most DBs work much better without inheritance in the way;-).\n", "As I'm sure someone else will comment soon (if they haven't already), one good OO principle is \"Favor composition over inheritance\". From your description, it sounds suspiciously like you're breaking the Single Responsibility Principle, and should be breaking down the functionality into separate objects.\nIt also occurs to me that Python supports duck typing, which begs the question \"Why is it so important that all the classes have a common base class?\"\n", "Very simple solution: Use composition rather than inheritance. Rather than having Student inherit from Contact and Billing, make Contact a field/attribute of Person and inherit from that. Make Billing a field of Student. Make Parent a self-reference field of Person. \n", "It doesn't sound like you really need multiple inheritance. In fact, you don't ever really need multiple inheritance. It's just a question of whether multiple inheritance simplifies things (which I couldn't see as being the case here).\nI would create a Person class that has all the code that the adult and student would share. Then, you can have an Adult class that has all of the things that only the adult needs and a Child class that has the code only the child needs.\n", "This sounds like something that could be done quite nicely and flexibly with a component architecture, like zope.components. Components are in a way a sort of super-flexible composition patterns.\nIn this case I'd probably end up doing something when you load the data to also set marker interfaces on it depending on some information, like if age >= 18 you set the IAdult interface, etc. You can then get the adult information by doing \nadultschema = IAdultSchema(person)\n\nor something like that.\n(Edit: Actually I'd probably use\nqueryAdapters(person, ISchema)\n\nto get all schemas in one go. :)\nA component architecture may be overkill, but once you got used to thinking like that, many problems get trivial. :)\nCheck out Brandons excellent PyCon talk about it: http://www.youtube.com/watch?v=UF77e2TeeQo\nAnd my intro blog post: http://regebro.wordpress.com/2007/11/16/a-python-component-architecture/\n", "I think your requirements are over-simplified, since in a real situation, you might have students with their own accounts to handle billing even if they are minors who need parent contact information. Also, you might have parental contact information be different from billing information in an actual situation. 
You might also have adult students with someone else to bill. BUT, that aside - looking at your requirements, here is one way:\nclasses: Person, BillingInfo, StudentInfo.\nAll people are instances of class Person...\nclass Person:\n # Will have contact fields all people have - or you could split these off into an\n # object.\n parent # Will be set to None for adults or else point to their parent's\n # Person object.\n billing_info # Set to None for non-adults, else to their BillingInfo object.\n student_info # Set to None for non-student parents, else to their StudentInfo\n # object. \n\nChecking the fields will allow you to create lists as you desire.\n", "One solution is to create a base Info class/interface that the classes ContactInfo, StudentInfo, and BillingInfo inherit from. Have some sort of Person object that contains a list of Info objects, and then you can populate the list of Info objects with ContactInfo, StudentInfo, etc.\n", "In pseudocode, you could do something like this:\nClass Student\n Inherits WhateverBase\n\n Private m_StudentType as EnumStudentTypes 'an enum containing: Adult, Child\n Private m_Billing as Billing\n Private m_Contact as Contact\n Private m_Parent as Parent\n\n Public Sub Constructor(studentType, billing, contact, parent)\n ...logic to make sure we have the right combination depending on studentType.\n ...throw an exception if we try to assign a a parent to an adult, etc.\n ...maybe you could have seperate constructors, one for each studenttype. \n End Sub\n\n\n Public Property StudentType as EnumStudentTypes\n Get\n Return m_StudentType\n End Get\n End Sub\n\n Public Property Parent \n Get \n ...code to make sure we're using a studentType that has a parent,\n ...and throws an exception if not. Otherwise it returns m_Parent\n End Get\n End Sub\n\n\n [more properties]\nEnd Class Student\n\nThen you could create a class called StudentManager:\nPublic Class StudentManager\n Public Function GetAdults(studentCollection(Of Students)) as StudentCollection(Of Students)\n Dim ResultCollection(Of Students)\n\n ...Loop through studentCollection, adding all students where Student.StudentType=Adult \n\n Return ResultCollection\n End Function\n\n\n [Other Functions]\nEnd Class\n\nPublic Enum StudentType\n Adult=0\n Child=1 \nEnd Enum\n\n" ]
[ 8, 5, 2, 2, 1, 1, 0, 0 ]
[]
[]
[ "multiple_inheritance", "oop", "python" ]
stackoverflow_0001131599_multiple_inheritance_oop_python.txt
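Since several answers argue for composition with optional role objects, here is a small Python sketch of that idea. The class and helper names are invented for illustration:

class Person(object):
    def __init__(self, name, contact):
        self.name = name
        self.contact = contact       # everyone has contact info
        self.student_info = None     # set for students only
        self.billing_info = None     # set for adults only
        self.parent = None           # set for child students only

def adults(people):
    return [p for p in people if p.billing_info is not None]

def students(people):
    return [p for p in people if p.student_info is not None]

A child student gets parent and student_info; an adult student gets billing_info and student_info; the two list helpers then fall out naturally, matching the "building the list on the fly" strategy from the first answer.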
Q: Write unit tests for restish in Python I'm writing a RESTful API in Python using the restish framework. I would like to write some unit tests (using the unittest package, for example), that will make different requests to my application and validate the results. The unit tests should be able to run as-is, without needing to start a separate web-server process. How do I set up a mock environment using restish to do this? Thanks A: I test everything using WebTest and NoseTests and I can strongly recommend it. It's fast, flexible and easy to set up. Just pass it your WSGI application and you're good to go. A: Since restish is a WSGI framework, you can take advantage of any one of a number of WSGI testing tools: http://wsgi.org/wsgi/Testing At least a few of those tools, such as Twill, should be able to test your application without starting a separate web server. (For example, see the "Testing WSGI Apps with Twill" link for more details.) You might want to ask on the restish forum/list if they have a preferred tool for this type of thing. A: Restish has a built-in TestApp class that can be used to test restish apps. Assuming you have a "test" dir in your root restish project called "restest", created with paster. import os import unittest from paste.fixture import TestApp class RootTest (unittest.TestCase): def setUp(self): self.app = TestApp('config:%s/../development.ini' % os.path.dirname(os.path.abspath(__file__))) def tearDown(self): self.app = None def test_html(self): res = self.app.get('/') res.mustcontain('Hello from restest!') if __name__ == '__main__': unittest.main()
Write unit tests for restish in Python
I'm writing a RESTful API in Python using the restish framework. I would like to write some unit tests (using the unittest package, for example), that will make different requests to my application and validate the results. The unit tests should be able to run as-is, without needing to start a separate web-server process. How do I set up a mock environment using restish to do this? Thanks
[ "I test everything using WebTest and NoseTests and I can strongly recommend it. It's fast, flexible and easy to set up. Just pass it your wsgi function and you're good to go.\n", "Since restish is a WSGI framework, you can take advantage of any one of a number of WSGI testing tools:\n\nhttp://wsgi.org/wsgi/Testing\n\nAt least a few of those tools, such as Twill, should be able to test your application without starting a separate web server. (For example, see the \"Testing WSGI Apps with Twill\" link for more details.)\nYou might want to ask on the restish forum/list if they have a preferred tool for this type of thing.\n", "Restish has a built in TestApp class that can be used to test restish apps.\nAssuming you have a \"test\" dir in your root restish project callte \"restest\" created with paster.\nimport os\nimport unittest\nfrom paste.fixture import TestApp\n\nclass RootTest (unittest.TestCase):\n\n def setUp(self):\n self.app = TestApp('config:%s/../development.ini' % os.path.dirname(os.path.abspath(__file__)))\n\n def tearDown(self):\n self.app = None\n\n def test_html(self):\n res = self.app.get('/')\n res.mustcontain('Hello from restest!')\n\nif __name__ == '__main__':\n unittest.main()\n\n" ]
[ 1, 1, 1 ]
[]
[]
[ "python", "unit_testing" ]
stackoverflow_0001122192_python_unit_testing.txt
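Since the first answer recommends WebTest without showing code, here is a minimal sketch of what that looks like. The make_app factory import is hypothetical; substitute however your project builds its WSGI application:

from webtest import TestApp
from mysite.wsgiapp import make_app  # hypothetical WSGI app factory

def test_root():
    app = TestApp(make_app())
    response = app.get('/')
    assert response.status.startswith('200')
    response.mustcontain('Hello')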