Q: Changing the Python interpreter on Windows I have two Python installations, 2.5 and 2.6. I want to change the default Python interpreter from 2.5 to 2.6. Anyone know how? A: PYTHONPATH is NOT what you are looking for. That is for varying where Python's "import" looks for packages and modules. You need to change the PATH variable in your environment so that it contains e.g. "....;c:\python26;...." instead of "....;c:\python25;....". Click on Start > Control Panel > System > Advanced > Environment Variables. Select "Path". Edit it. Click on OK enough times to get out of there. A: Just FYI, since both c:\python25 and c:\python26 are on PATH, I copy C:\Python25\python.exe to C:\Python25\py25.exe, and copy C:\Python26\python.exe to C:\Python26\py26.exe. Then just type py25 (or py26) to get the specific version.
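Either way, a quick sanity check is to ask the interpreter itself which binary actually ran; this is a minimal sketch (the script name is arbitrary):

    # check_interp.py -- confirm which interpreter PATH resolves to.
    # After moving c:\python26 ahead of c:\python25, running
    # `python check_interp.py` should report 2.6.
    import sys

    print sys.version      # interpreter version string, e.g. "2.6 ..."
    print sys.executable   # full path of the python.exe that ran this script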
[ 10, 1 ]
[ "python", "windows" ]
stackoverflow_0001053794_python_windows.txt
Q: Regex and a sequence of patterns? Is there a way to match a pattern (e\d\d) several times, capturing each one into a group? For example, given the string.. blah.s01e24e25 ..I wish to get four groups: 1 -> blah 2 -> 01 3 -> 24 4 -> 25 The obvious regex to use (in Python) is: import re re.match("(\w+).s(\d+)e(\d+)e(\d+)", "blah.s01e24e25").groups() ..but I also want to match either of the following: blah.s01e24 blah.s01e24e25e26 You can't seem to do (e\d\d)+, or rather you can, but it only captures the last occurrence: >>> re.match("(\w+).s(\d+)(e\d\d){2}", "blah.s01e24e25e26").groups() ('blah', '01', 'e25') >>> re.match("(\w+).s(\d+)(e\d\d){3}", "blah.s01e24e25e26").groups() ('blah', '01', 'e26') I want to do this in a single regex because I have multiple patterns to match TV episode filenames, and do not want to duplicate each expression to handle multiple episodes: \w+\.s(\d+)\.e(\d+) # matches blah.s01e01 \w+\.s(\d+)\.e(\d+)\.e(\d+) # matches blah.s01e01e02 \w+\.s(\d+)\.e(\d+)\.e(\d+)\.e(\d+) # matches blah.s01e01e02e03 \w - \d+x\d+ # matches blah - 01x01 \w - \d+x\d+x\d+ # matches blah - 01x01x02 \w - \d+x\d+x\d+x\d+ # matches blah - 01x01x02x03 ..and so on for numerous other patterns. Another thing to complicate matters - I wish to store these regexes in a config file, so a solution using multiple regexes and function calls is not desired - but if this proves impossible I'll just allow the user to add simple regexes. Basically, is there a way to capture a repeating pattern using regex? A: Do it in two steps, one to find all the numbers, then one to split them: import re def get_pieces(s): # Error checking omitted! whole_match = re.search(r'\w+\.(s\d+(?:e\d+)+)', s) return re.findall(r'\d+', whole_match.group(1)) print get_pieces(r"blah.s01e01") print get_pieces(r"blah.s01e01e02") print get_pieces(r"blah.s01e01e02e03") # prints: # ['01', '01'] # ['01', '01', '02'] # ['01', '01', '02', '03'] A: The number of captured groups equals the number of parenthesized groups. Look at findall or finditer for solving your problem. A: non-grouping parentheses: (?:asdfasdg) which do not have to appear: (?:adsfasdf)? c = re.compile(r"""(\w+).s(\d+) (?: e(\d+) (?: e(\d+) )? )? """, re.X) or c = re.compile(r"""(\w+).s(\d+)(?:e(\d+)(?:e(\d+))?)?""", re.X) A: After thinking about the problem, I think I have a simpler solution, using named groups. The simplest regex a user (or I) could use is: (\w+)\.s(\d+)\.e(\d+) The filename parsing class will take the first group as the show name, second as season number, third as episode number. This covers a majority of files. I'll allow a few different named groups for these: (?P<showname>\w+)\.s(?P<seasonnumber>\d+)\.e(?P<episodenumber>\d+) To support multiple episodes, I'll support two named groups, something like startingepisodenumber and endingepisodenumber to support things like showname.s01e01-03: (?P<showname>\w+)\.s(?P<seasonnumber>\d+)\.e(?P<startingepisodenumber>\d+)-(?P<endingepisodenumber>\d+) And finally, allow named groups with names matching episodenumber\d+ (episodenumber1, episodenumber2 etc): (?P<showname>\w+)\. s(?P<seasonnumber>\d+)\. 
e(?P<episodenumber1>\d+) e(?P<episodenumber2>\d+) e(?P<episodenumber3>\d+) It still requires possibly duplicating the patterns for different numbers of e01s, but there will never be a file with two non-consecutive episodes (like show.s01e01e03e04), so using the starting/endingepisodenumber groups should solve this, and for weird cases users come across, they can use the episodenumber\d+ group names. This doesn't really answer the sequence-of-patterns question, but it solves the problem that led me to ask it! (I'll still accept another answer that shows how to match s01e23e24...e27 in one regex - if someone works this out!) A: Perhaps something like this? def episode_matcher(filename): m1= re.match(r"(?i)(.*?)\.s(\d+)((?:e\d+)+)", filename) if m1: m2= re.findall(r"\d+", m1.group(3)) return m1.group(1), m1.group(2), m2 # auto return None here >>> episode_matcher("blah.s01e02") ('blah', '01', ['02']) >>> episode_matcher("blah.S01e02E03") ('blah', '01', ['02', '03'])
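As an aside: the standard re module can never hold more than one capture per group, but the third-party regex module (a drop-in replacement for re, not part of the standard library) keeps every capture of a repeated group and exposes them via .captures(). A minimal sketch, assuming that module is installed:

    import regex  # third-party drop-in replacement for the stdlib re module

    m = regex.match(r"(\w+)\.s(\d+)(?:e(\d+))+", "blah.s01e24e25e26")
    print m.group(1), m.group(2)  # blah 01
    print m.captures(3)           # ['24', '25', '26'] -- every capture, not just the last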
[ 5, 1, 1, 0, 0 ]
[ "python", "regex", "sequences" ]
stackoverflow_0001053481_python_regex_sequences.txt
Q: Why is the print statement not Pythonic? This question was bugging me for quite a while (as evidenced by my previous question): why exactly is print(x) better (which is defined as being more Pythonic) than print x? For those who don't know, the print statement was changed into a function in Python 3.0. The formal documentation is in PEP 3105 and the motivation is in Guido van Rossum's email. To those points I would like to make a counterpoint: There are other operators, such as import, which we write as a statement, though their functionality is actually duplicated with a function __import__ To beginners, the operator print does not belong to the general application logic. To them it's the mysterious operator which is a culmination of their program. They expect it to look different. All the beginner books which were describing basic Python 2.x are now guaranteed to be broken from the first example. Certainly, languages sometimes change, but those changes are usually less visible to novices. It's not immediately obvious to me that the functionality of print can be duplicated on an application level. For example, sometimes I would like to redirect print from a console as a modal OS dialog. While people say it's hard to rewrite all print statements to a function, they have forced every Python 2.x developer to do exactly that for all their projects. Good, it's not hard with the automatic converter. Everyone who enjoys having an ability to manipulate the function print would be just as well-served if print was a statement wrapping a function __print__. So, can we please have a canonical answer to this question on the pages of Stack Overflow? A: Looks to me like yours is a debate, not a question -- are you really going to accept an answer that shows how deeply and badly wrong you were in your assertions?! On to your debating points: There are other operators, such as import, which we write as a statement, though their functionality is actually duplicated with a function __import__ Absolutely wrong: function __import__ (like every other function -- and operator, for that matter) binds no names in the scope of "caller" (code containing it) -- any "thingie" that binds names in the "caller's scope" must be a statement (just like assignment, def, and class). Your "point" appears to totally miss the extremely deep and crucial distinction that Python draws between statements and expressions -- one may reasonably dislike this distinction, but ignoring it is, most obviously, simply wrong. Python statements are things the Python compiler must be specifically aware of -- they may alter the binding of names, may alter control flow, and/or may need to be entirely removed from the generated bytecode in certain conditions (the latter applies to assert). print was the only exception to this assertion in Python 2; by removing it from the roster of statements, Python 3 removes an exception, makes the general assertion "just hold", and therefore is a more regular language. Special cases are not special enough to break the rules has long been a Pythonic tenet (do import this at an interactive interpreter's >>> prompt to see "the Zen of Python" displayed), and this change to the language removes a violation of this tenet that had to remain for many years due to an early, erroneous design decision. To beginners, the operator print does not belong to the general application logic. To them it's the mysterious operator which is a culmination of their program. They expect it to look different. 
Curing beginners of their misconceptions as early as feasible is a very good thing. All the beginner books which were describing basic Python 2.x are now guaranteed to be broken from the first example. Certainly, languages sometimes change, but changes are usually less visible to novices. Languages rarely change in deep and backwards-incompatible ways (Python does it about once a decade) and few language features are "highly visible to novices", so the total number of observations is small -- yet even within that tiny compass we can easily find counter-examples, where a feature highly visible to beginners was just so badly designed that removing it was well worth the disruption. For example, modern dialects of Basic, such as Microsoft's Visual Basic, don't use explicit user-entered line numbers, a "feature" that was both terrible and highly visible to absolutely everybody since it was mandatory in early dialects of Basic. Modern variants of Lisp (from Scheme onwards) don't use dynamic scoping, a misfeature that was sadly highly visible (usually manifesting as hard-to-understand bugs in their code) to beginners, basically as soon as they started writing functions in Lisp 1.5 (I once was a beginner in that and can testify to how badly it bit me). It's not immediately obvious to me that the functionality of print can be duplicated on an application level. For example, sometimes I would like to redirect print from a console as a modal OS dialog. Not sure I follow this "point". Just change sys.stdout to your favorite pseudo-file object and redirect to your heart's content -- you have the option of monkey patching the built-in function print (which you never had in Python 2), but nobody's twisting your arm and forcing you to do so. While people say it's hard to rewrite all print statements to a function, they have forced every Python 2.x developer to do exactly that for all their projects. Good, it's not hard with the automatic converter. The 2to3 tool does indeed take care of all such easy surface incompatibilities -- no sweat (and it needs to be run anyway to take care of quite a few more besides print, so people do use it extensively). So, what's your "point" here? Everyone who enjoys having an ability to manipulate the function print would be just as well-served if print was a statement wrapping a function __print__. Such an arrangement would not, per se, remove an unnecessary keyword (and most especially, an unjustified irregularity, as I explained above: a statement that has no good reason to be a statement because there is absolutely no need for the compiler to be specially aware of it in any way, shape, or form!). It's far from clear to me that having such an underlying function would add any real value, but if you have real use cases you can certainly propose the case in the Python Ideas mailing list -- such an underlying function, if proven to be precious indeed, could be retrofitted to be used by the print statement in Python 2.7 as well as by the print function in Python 3.2. However, consider a typical case in which one might want to monkey-patch the built-in print: adding keyword arguments to allow fancy tweaks. How would the __print__ function you're apparently proposing ever get those KW arguments from a __print__ statement? Some funkier syntax yet than the horrors of >> myfile and the trailing comma...?! With print as a function, keyword arguments follow just the perfectly normal and ordinary rules that apply to every function and function call -- bliss! 
So, in summary, it's more Pythonic for print to be a function because it removes anomalies, special cases, and any need for weird exceptional syntax -- simplicity, regularity, and uniformity are Python's trademark. A: Here's the reason I hate the print statement in 2.x. >>> something() <something instance at 0xdeadbeef> >>> print something() <something instance at 0xdeadbeef> The worthless object has no useful __str__. Fine, I can deal, look at it some more. >>> dir(something()) ['foo', 'bar', 'baz', 'wonderful'] >>> help(something().foo) "foo(self, callable)" hmm.. so does that callable take arguments? >>> something().foo(print) something().foo(print) ^ SyntaxError: invalid syntax >>> something().foo(lambda *args: print(*args)) something().foo(lambda *args: print(*args)) ^ SyntaxError: invalid syntax So... I have to either define a function to use >>> def myPrint(*args): print *args def myPrint(*args): print *args ^ SyntaxError: invalid syntax >>> def myPrint(*args): print args ... >>> myPrint(1) (1,) Shudder, or use sys.stdout.write, which is almost as kludgy, since it has very different behavior from print. It also looks different, which means I'll almost never remember that it exists. Using print statements in a short, one-off type facility and then improving it to use logging or something better is just inelegant. If print worked like those things, and especially could be used with higher-order functions, then it would be better than just the thing you use when you don't use real logging or real debuggers. A: The print statement also carries the unusual >> syntax for printing to a specific file. There is no other statement in Python that has this syntax, so it is unusual in that way. I believe you are right though, most of the problems with the print statement could have been solved by the introduction of a __print__ function. A: I found GvR's "print is the only application-level functionality that has a statement dedicated to it" convincing. Python is a general-purpose language, and shouldn't have a statement for outputting to a stream as an operator or keyword. A: It is not pythonic because the syntax should be: stdout.append("Hello World") or stdout += "hello world" Disclaimer: I like Python really. On a serious note ... I think that Python's object model and 'Implement it yourself' approach to things like attribute visibility is great. I think that this 'everything is an object' approach to OOP, and even the objects defined as a collection of objects structure is very clear-minded. What I fear Python will do is become a language that doesn't present its intentions in a clear way ... and I would hate to see the beauty of the principles get bogged down in over-thinking the already unconventional syntax presentation. Sort of like Lisp, beautiful in its structure, grim, imho, in its syntax.
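For the asker's "redirect print to a modal dialog" scenario, the sys.stdout swap suggested above is enough even in Python 2; here is a minimal sketch, where DialogWriter is a hypothetical stand-in for real dialog code:

    import sys

    class DialogWriter(object):
        # Hypothetical pseudo-file object: it just buffers writes here,
        # but a real version would hand the text to a modal OS dialog.
        def __init__(self):
            self.chunks = []
        def write(self, text):
            self.chunks.append(text)
        def flush(self):
            pass

    writer = DialogWriter()
    old_stdout, sys.stdout = sys.stdout, writer
    print "hello, dialog"   # the print statement now writes to `writer`
    sys.stdout = old_stdout
    print "captured:", repr("".join(writer.chunks))  # captured: 'hello, dialog\n'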
[ 58, 11, 8, 6, 3 ]
[ "python", "python_3.x" ]
stackoverflow_0001053849_python_python_3.x.txt
Q: formencode invalid return type If an exception occurs in FormEncode, what will be the return type? Suppose: if(request.POST): formvalidate = ValidationRule() try: new = formvalidate.to_python(request.POST) data = Users1( n_date = new['n_date'], heading = new['heading'], desc = new['desc'], link = new['link'], module_name = new['module_name'] ) session.add(data) session.commit() except formencode.Invalid, e: errors = e How can we find the field-wise errors? A: I assume you are using formencode (http://formencode.org). You can use unpack_errors to get per-field errors, e.g. import formencode from formencode import validators class UserForm(formencode.Schema): first_name = validators.String(not_empty=True) last_name = validators.String(not_empty=True) form = UserForm() try: form.to_python({}) except formencode.Invalid,e: print e.unpack_errors() It will print a dict of errors per field. You can use formencode.htmlfill.render to render all errors in different ways; read http://formencode.org/htmlfill.html#errors
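Applied to the asker's own view, the same idea looks roughly like this -- a sketch reusing the ValidationRule schema and request object from the question (the example error dict is illustrative only):

    try:
        new = formvalidate.to_python(request.POST)
    except formencode.Invalid, e:
        field_errors = e.unpack_errors()   # e.g. {'heading': u'Missing value', ...}
        for field, message in field_errors.items():
            print field, "->", message     # or pass the dict to your template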
[ 3 ]
[ "error_handling", "formencode", "python", "webforms" ]
stackoverflow_0001054210_error_handling_formencode_python_webforms.txt
Q: compound sorting in python I have a python script which outputs lots of data, a sample is below. The first of the 4 fields always consists of two letters, one digit, a slash and one or two digits Gi3/2 --.--.--.-- 0024.e89b.c10e Dell Inc. Gi5/4 --.--.--.-- 0030.c1cd.f038 HEWLETTPACKARD Gi4/3 --.--.--.-- 0020.ac00.6703 INTERFLEX DATENSYSTEME GMBH Gi3/7 --.--.--.-- 0009.4392.34f2 Cisco Systems Gi6/6 --.--.--.-- 001c.2333.bd5a Dell Inc Gi3/16 --.--.--.-- 0009.7c92.7af2 Cisco Systems Gi5/12 --.--.--.-- 0020.ac00.3fb0 INTERFLEX DATENSYSTEME GMBH Gi4/5 --.--.--.-- 0009.4392.6db2 Cisco Systems Gi4/6 --.--.--.-- 000b.cd39.c7c8 Hewlett Packard Gi6/4 --.--.--.-- 0021.70d7.8d33 Dell Inc Gi6/14 --.--.--.-- 0009.7c91.fa71 Cisco Systems What would be the best way to sort this correctly on the first field, so that this sample would read Gi3/2 --.--.--.-- 0024.e89b.c10e Dell Inc. Gi3/7 --.--.--.-- 0009.4392.34f2 Cisco Systems Gi3/16 --.--.--.-- 0009.7c92.7af2 Cisco Systems Gi4/3 --.--.--.-- 0020.ac00.6703 INTERFLEX DATENSYSTEME GMBH Gi4/5 --.--.--.-- 0009.4392.6db2 Cisco Systems Gi4/6 --.--.--.-- 000b.cd39.c7c8 Hewlett Packard Gi5/4 --.--.--.-- 0030.c1cd.f038 HEWLETTPACKARD Gi5/12 --.--.--.-- 0020.ac00.3fb0 INTERFLEX DATENSYSTEME GMBH Gi6/4 --.--.--.-- 0021.70d7.8d33 Dell Inc Gi6/6 --.--.--.-- 001c.2333.bd5a Dell Inc Gi6/14 --.--.--.-- 0009.7c91.fa71 Cisco Systems My efforts have been very messy, and resulted in numbers such as 12 coming before 5! As ever, many thanks for your patience. A: def lineKey (line): keyStr, rest = line.split(' ', 1) a, b = keyStr.split('/', 1) return (a, int(b)) sorted(lines, key=lineKey) A: To sort, split each line so that you have a two-tuple: the part before the / and the integer part after it; each line is then sorted on something like ('Gi6', 12). See the example below s="""Gi3/2 --.--.--.-- 0024.e89b.c10e Dell Inc. Gi5/4 --.--.--.-- 0030.c1cd.f038 HEWLETTPACKARD Gi4/3 --.--.--.-- 0020.ac00.6703 INTERFLEX DATENSYSTEME GMBH Gi3/7 --.--.--.-- 0009.4392.34f2 Cisco Systems Gi6/6 --.--.--.-- 001c.2333.bd5a Dell Inc Gi3/16 --.--.--.-- 0009.7c92.7af2 Cisco Systems Gi5/12 --.--.--.-- 0020.ac00.3fb0 INTERFLEX DATENSYSTEME GMBH Gi4/5 --.--.--.-- 0009.4392.6db2 Cisco Systems Gi4/6 --.--.--.-- 000b.cd39.c7c8 Hewlett Packard Gi6/4 --.--.--.-- 0021.70d7.8d33 Dell Inc Gi6/14 --.--.--.-- 0009.7c91.fa71 Cisco Systems""" lines = s.split("\n") def sortKey(l): a,b = l.split("/") b=int(b[:2].strip()) return (a,b) lines.sort(key=sortKey) for l in lines: print l A: You can define a cmp() comparison function, for .sort([cmp[, key[, reverse]]]) calls: The sort() method takes optional arguments for controlling the comparisons. cmp specifies a custom comparison function of two arguments (list items) which should return a negative, zero or positive number depending on whether the first argument is considered smaller than, equal to, or larger than the second argument: cmp=lambda x,y: cmp(x.lower(), y.lower()). The default value is None. In the cmp() function, retrieve the numeric key and use int(field) to ensure numeric (not textual) comparison. Alternately, a key() function can be defined (thanks, @Anurag Uniyal): key specifies a function of one argument that is used to extract a comparison key from each list element: (e.g. key=str.lower). The default value is None. A: If you are working in a unix environment, you can use "sort" to sort such lists. Another possibility is to use some kind of bucket sort in your python script, which should be a lot faster.
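To see concretely why 12 lands before 5 with a plain sort, and how the tuple key used in the answers fixes it, here is a minimal sketch on the first field alone:

    # Plain string sort compares character by character, so "1" < "4"
    # makes "Gi5/12" sort before "Gi5/4".
    ports = ["Gi5/12", "Gi5/4", "Gi3/16", "Gi3/2"]
    print sorted(ports)
    # ['Gi3/16', 'Gi3/2', 'Gi5/12', 'Gi5/4'] -- lexicographic, wrong

    # Splitting each entry into a (prefix, int) tuple restores numeric order.
    print sorted(ports, key=lambda p: (p.split("/")[0], int(p.split("/")[1])))
    # ['Gi3/2', 'Gi3/16', 'Gi5/4', 'Gi5/12']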
[ 5, 4, 1, 0 ]
[ "python", "sorting" ]
stackoverflow_0001054454_python_sorting.txt
Q: Django official tutorial for the absolute beginner, absolutely failed! Not that level of failure indeed. I just completed the 4-part tutorial from djangoproject.com, my administration app works fine and my entry point url (/polls/) works well, with the exception that I get this http response: No polls are available. Even if the database has one registry. Entering with the admin app, the entry shows up the way it should be. At the end of the tutorial, you change all your hard-coded views by replacing them with generic views in your URLconf. It's supposed that after all the modifications your urls.py ends up like this: from django.conf.urls.defaults import * from mysite.polls.models import Poll info_dict = { 'queryset': Poll.objects.all(), } urlpatterns = patterns('', (r'^$', 'django.views.generic.list_detail.object_list', info_dict), (r'^(?P<object_id>\d+)/$', 'django.views.generic.list_detail.object_detail', info_dict), url(r'^(?P<object_id>\d+)/results/$', 'django.views.generic.list_detail.object_detail', dict(info_dict, template_name='polls/results.html'), 'poll_results'), (r'^(?P<poll_id>\d+)/vote/$', 'mysite.polls.views.vote'), ) Using these generic views, it'll be pointless to copy/paste my views.py file, I'll only mention that there's just a vote function (since django generic views do all the magic). My supposition is that the urls.py file needs some tweak, or is wrong somewhere, in order to send that "No polls are available." output at the /polls/ url. My poll_list.html file looks like this: {% if latest_poll_list %} <ul> {% for poll in latest_poll_list %} <li>{{ poll.question }}</li> {% endfor %} </ul> {% else %} <p>No polls are available.</p> {% endif %} It evaluates latest_poll_list to false, and that's why the else block is executed. Can you give me a hand at this? (I searched at stackoverflow for duplicate questions, and even at google for this issue, but I couldn't find anything). Why do I get this message when I enter at http://127.0.0.1:8000/polls? A: You overlooked this paragraph in part 4 of the tutorial: In previous parts of the tutorial, the templates have been provided with a context that contains the poll and latest_poll_list context variables. However, the generic views provide the variables object and object_list as context. Therefore, you need to change your templates to match the new context variables. Go through your templates, and modify any reference to latest_poll_list to object_list, and change any reference to poll to object.
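Concretely, applying that answer to the template from the question means changing only the context-variable name; the for-loop variable is local to the loop, so it can keep the name poll:

    {% if object_list %}
        <ul>
        {% for poll in object_list %}
            <li>{{ poll.question }}</li>
        {% endfor %}
        </ul>
    {% else %}
        <p>No polls are available.</p>
    {% endif %}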
[ 14 ]
[ "django", "python" ]
stackoverflow_0001054494_django_python.txt
Q: Weighted slope one algorithm? (porting from Python to R) I was reading about the Weighted slope one algorithm ( and more formally here (PDF)) which is supposed to take item ratings from different users and, given a user vector containing at least 1 rating and 1 missing value, predict the missing ratings. I found a Python implementation of the algorithm, but I'm having a hard time porting it to R (which I'm more comfortable with). Below is my attempt. Any suggestions on how to make it work? Thanks in advance, folks. # take a 'training' set, tr.set and a vector with some missing ratings, d pred=function(tr.set,d) { tr.set=rbind(tr.set,d) n.items=ncol(tr.set) # tally frequencies to use as weights freqs=sapply(1:n.items, function(i) { unlist(lapply(1:n.items, function(j) { sum(!(i==j)&!is.na(tr.set[,i])&!is.na(tr.set[,j])) })) }) # estimate product-by-product mean differences in ratings diffs=array(NA, dim=c(n.items,n.items)) diffs=sapply(1:n.items, function(i) { unlist(lapply(1:n.items, function(j) { diffs[j,i]=mean(tr.set[,i]-tr.set[,j],na.rm=T) })) }) # create an output vector with NAs for all the items the user has already rated pred.out=as.numeric(is.na(d)) pred.out[!is.na(d)]=NA a=which(!is.na(pred.out)) b=which(is.na(pred.out)) # calculated the weighted slope one estimate pred.out[a]=sapply(a, function(i) { sum(unlist(lapply(b,function (j) { sum((d[j]+diffs[j,i])*freqs[j,i])/rowSums(freqs)[i] }))) }) names(pred.out)=colnames(tr.set) return(pred.out) } # end function # test, using example from [3] alice=c(squid=1.0, octopus=0.2, cuttlefish=0.5, nautilus=NA) bob=c(squid=1.0, octopus=0.5, cuttlefish=NA, nautilus=0.2) carole=c(squid=0.2, octopus=1.0, cuttlefish=0.4, nautilus=0.4) dave=c(squid=NA, octopus=0.4, cuttlefish=0.9, nautilus=0.5) tr.set2=rbind(alice,bob,carole,dave) lucy2=c(squid=0.4, octopus=NA, cuttlefish=NA, nautilus=NA) pred(tr.set2,lucy2) # not correct # correct(?): {'nautilus': 0.10, 'octopus': 0.23, 'cuttlefish': 0.25} A: I used the same reference (Bryan O'Sullivan's python code) to write an R version of Slope One a while back. I'm pasting the code below in case it helps. predict <- function(userprefs, data.freqs, data.diffs) { seen <- names(userprefs) preds <- sweep(data.diffs[ , seen, drop=FALSE], 2, userprefs, '+') preds <- preds * data.freqs[ , seen] preds <- apply(preds, 1, sum) freqs <- apply(data.freqs[ , seen, drop=FALSE], 1, sum) unseen <- setdiff(names(preds), seen) result <- preds[unseen] / freqs[unseen] return(result[is.finite(result)]) } update <- function(userdata, freqs, diffs) { for (ratings in userdata) { items <- names(ratings) n <- length(ratings) ratdiff <- rep(ratings, n) - rep(ratings, rep(n, n)) diffs[items, items] <- diffs[items, items] + ratdiff freqs[items, items] <- freqs[items, items] + 1 } diffs <- diffs / freqs return(list(freqs=freqs, diffs=diffs)) } userdata <- list(alice=c(squid=1.0, cuttlefish=0.5, octopus=0.2), bob=c(squid=1.0, octopus=0.5, nautilus=0.2), carole=c(squid=0.2, octopus=1.0, cuttlefish=0.4, nautilus=0.4), dave=c(cuttlefish=0.9, octopus=0.4, nautilus=0.5)) items <- c('squid', 'cuttlefish', 'nautilus', 'octopus') n.items <- length(items) freqs <- diffs <- matrix(0, nrow=n.items, ncol=n.items, dimnames=list(items, items)) result <- update(userdata, freqs, diffs) print(result$freqs) print(result$diffs) userprefs <- c(squid=.4) predresult <- predict(userprefs, result$freqs, result$diffs) print(predresult)
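For reference while debugging the R port, here is a compact Python sketch of the weighted Slope One prediction step -- my own reduction for illustration, not the original implementation -- assuming precomputed mean differences diffs[j][i] (mean of item i's rating minus item j's) and co-rating counts freqs[j][i]:

    def predict_slope_one(user, diffs, freqs, items):
        # `user` maps each rated item -> rating; predictions are returned
        # only for items the user has not rated.
        preds = {}
        for i in items:
            if i in user:
                continue
            num = sum((user[j] + diffs[j][i]) * freqs[j][i] for j in user)
            den = sum(freqs[j][i] for j in user)
            if den:
                preds[i] = num / den   # frequency-weighted average estimate
        return preds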
[ 9 ]
[ "prediction", "python", "r", "recommendation_engine" ]
stackoverflow_0001022649_prediction_python_r_recommendation_engine.txt
Q: How frequently should Python decorators be used? I recently started experimenting with Python decorators (and higher-order functions) because it looked like they might make my Django unit tests more concise. e.g., instead of writing: def visit1(): login() do_stuff() logout() I could instead do @handle_login def visit1(): do_stuff() However, after some experimenting, I have found that decorators are not as simple as I had hoped. First, I was confused by the different decorator syntax I found in different examples, until I learned that decorators behave very differently when they take arguments. Then I tried decorating a method, and eventually learned that it wasn't working because I first have to turn my decorator into a descriptor by adding a __get__ method. During this whole process I've ended up confused more than a few times and still find that debugging this "decorated" code is more complicated than it normally is for Python. I'm now re-evaluating whether I really need decorators in my code, since my initial motivation was to save a bit of typing, not because there was anything that really required higher-order functions. So my question is: should decorators be used liberally or sparingly? Is it ever more Pythonic to avoid using them? A: Decorators are fine in their place and definitely not to be avoided -- when appropriate;-). I see your question as meaning essentially "OK so when are they appropriate"? Adding some prefix and/or postfix code around some but not all methods of some classes is a good example. Were it all methods, a class decorator to wrap all methods would be better than repeating @thisonetoo endlessly;-). If it's once in a blue moon then it's not worth refactoring out to wrappers (decorators or otherwise). In the middle, there's a large ground where decorators are quite suitable indeed. It boils down to one of the golden rules of programming -- DRY, for Don't Repeat Yourself. When you see your code becoming repetitious, you should refactor the repetition out -- and decorators are an excellent tool for that, although they're far from the only one (auxiliary methods and functions, custom metaclasses, generators and other iterators, context managers... many of the features we added to Python over the last few years can best be thought of as DRY-helpers, easier and smoother ways to factor out this or that frequent form of repetition!). If there's no repetition, there's no real call for refactoring, hence (in particular) no real need for decorators -- in such cases, YAGNI (Y'Ain't Gonna Need It) can trump DRY;-). A: Alex already answered your question pretty well; what I would add is that decorators make your code MUCH easier to understand (sometimes, even if you are doing it only once). For example, initially I write my Django views without thinking about authorisation at all. And when I am done writing them, I can see which need authorised users and just put a @login_required on them. So anyone coming after me can see at one glance which views are auth-protected. And of course, they are much more DRY than putting this everywhere: if not request.user.is_authenticated(): return HttpResponseRedirect(..) A: Decorators are a way to hoist a common Aspect out of your code. Aspect-Oriented Programming proponents will tell you that there are so many common aspects that AOP is essential and central. Indeed, you can read a silly debate on this topic here: Aspect Oriented Programming vs. Object-Oriented Programming There are a few common use cases for AOP. 
You can read a few here: Do you use AOP (Aspect Oriented Programming) in production software? There are a few cross-cutting concerns for which decorators are helpful. Access Controls ("security") Authentication, Authorization, Permissions, Ownership Logging (including Debugging aids and Auditing) Caching (often an implementation of Memoization) Some error handling might be a common aspect and therefore suitable for decorator implementation. There are very few other design patterns that are truly cross-cutting and deserve an AOP decorator. A: If you have the same code at the beginning and end of many functions, I think that would justify the added complexity of using a decorator. Rather like using a nice (but perhaps complex) template for a website with a lot of pages, it really saves time and adds clarity in the end.
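Returning to the handle_login decorator from the original question, a minimal sketch might look like this; login, logout, and do_stuff are the test helpers the question assumes, and functools.wraps preserves the wrapped test's name for the test runner:

    import functools

    def handle_login(test_func):
        # Wrap a test in login()/logout() calls, per the question's example.
        @functools.wraps(test_func)
        def wrapper(*args, **kwargs):
            login()
            try:
                return test_func(*args, **kwargs)
            finally:
                logout()
        return wrapper

    @handle_login
    def visit1():
        do_stuff()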
[ 12, 3, 3, 0 ]
[]
[]
[ "decorator", "django", "python" ]
stackoverflow_0001054249_decorator_django_python.txt
Q: Django and Python 2.6 I'm just starting to get into Django, and of course as of last night one of the two new Python versions went final (2.6 obviously ;)) so I'm wondering if 2.6 plus Django is ready for actual use or do the Django team need more time to finish with tweaks/cleanup? All the google searches I did were inconclusive, I saw bits about some initial test runs on beta 2 but nothing more recent seemed to show up. Edit: http://groups.google.com/group/django-developers/browse_thread/thread/a48f81d916f24a04 They've confirmed here 1.0 w/2.6 works fine as far as they know. A: The impression I get is that 2.6 should work fine with Django 1.0. As found here: http://simonwillison.net/2008/Oct/2/whatus/ A: Note that there is currently no python-mysql adapter for python2.6. If you need MySQL, stick with 2.5 for now. A: There is an unofficial build for mysqldb 1.2.2 win32 python 2.6 @ http://www.technicalbard.com/files/MySQL-python-1.2.2.win32-py2.6.exe
Django and Python 2.6
I'm just starting to get into Django, and of course as of last night one of the two new Python versions went final (2.6 obviously ;)) so I'm wondering if 2.6 plus Django is ready for actual use or do the Django team need more time to finish with tweaks/cleanup? All the google searches I did were inconclusive, I saw bits about some initial test runs on beta 2 but nothing more recent seemed to show up. Edit: http://groups.google.com/group/django-developers/browse_thread/thread/a48f81d916f24a04 They've confirmed here 1.0 w/2.6 works fine as far as they know.
[ "The impression I get is that 2.6 should work fine with Django 1.0. As found here: http://simonwillison.net/2008/Oct/2/whatus/ \n", "Note that there is currently no python-mysql adapter for python2.6. If you need MySQL, stick with 2.5 for now.\n", "There is an unofficial build for mysqldb 1.2.2 win32 python 2.6 @ http://www.technicalbard.com/files/MySQL-python-1.2.2.win32-py2.6.exe\n" ]
[ 7, 5, 2 ]
[]
[]
[ "django", "python" ]
stackoverflow_0000162808_django_python.txt
Q: Consuming Python COM Server from .NET I wanted to implement a Python COM server using win32com extensions, then consume the server from within .NET. I used the following example to implement the COM server, and it runs without a problem, but when I try to consume it using C# I get a FileNotFoundException with the following message: "Retrieving the COM class factory for component with CLSID {676E38A6-7FA7-4BFF-9179-AE959734DEBB} failed due to the following error: 8007007e." I posted the C# code as well. I wonder if I'm missing something; I would appreciate any help. Thanks, Sarah #PythonCOMServer.py import pythoncom class PythonUtilities: _public_methods_ = [ 'SplitString' ] _reg_progid_ = "PythonDemos.Utilities" # NEVER copy the following ID # Use "print pythoncom.CreateGuid()" to make a new one. _reg_clsid_ = pythoncom.CreateGuid() print _reg_clsid_ def SplitString(self, val, item=None): import string if item != None: item = str(item) return string.split(str(val), item) # Add code so that when this script is run by # Python.exe, it self-registers. if __name__=='__main__': print 'Registering Com Server' import win32com.server.register win32com.server.register.UseCommandLine(PythonUtilities) // the C# code using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Reflection; namespace ConsoleApplication2 { class Program { static void Main(string[] args) { Type pythonServer; object pythonObject; pythonServer = Type.GetTypeFromProgID("PythonDemos.Utilities"); pythonObject = Activator.CreateInstance(pythonServer); } } } A: A COM server is just a piece of software (a DLL or an executable) that will accept remote procedure calls (RPC) through a defined protocol. Part of the protocol says that the server must have a unique ID, stored in the Windows registry. In our case, this means that you have "registered" a server that does not exist. Thus the error (component not found). So, it should be something like this (as usual, this is untested code!): import sys import pythoncom class HelloWorld: _reg_clsctx_ = pythoncom.CLSCTX_LOCAL_SERVER _reg_clsid_ = "{B83DD222-7750-413D-A9AD-01B37021B24B}" _reg_desc_ = "Python Test COM Server" _reg_progid_ = "Python.TestServer" _public_methods_ = ['Hello'] _public_attrs_ = ['softspace', 'noCalls'] _readonly_attrs_ = ['noCalls'] # for Python 3.7+ _reg_verprogid_ = "Python.TestServer.1" _reg_class_spec_ = "HelloWorldCOM.HelloWorld" def __init__(self): self.softspace = 1 self.noCalls = 0 def Hello(self, who): self.noCalls = self.noCalls + 1 # insert "softspace" number of spaces return "Hello" + " " * self.softspace + str(who) if __name__ == '__main__': if '--register' in sys.argv[1:] or '--unregister' in sys.argv[1:]: import win32com.server.register win32com.server.register.UseCommandLine(HelloWorld) else: # start the server. from win32com.server import localserver localserver.serve(['{B83DD222-7750-413D-A9AD-01B37021B24B}']) Then you should run from the command line (assuming the script is called HelloWorldCOM.py): HelloWorldCOM.py --register HelloWorldCOM.py Class HelloWorld is the actual implementation of the server. It exposes one method (Hello) and a couple of attributes, one of the two read-only. With the first command, you register the server; with the second one, you run it and then it becomes available for use from other applications. A: You need to run Process Monitor on your C# executable to track down the file that is not found.
Consuming Python COM Server from .NET
I wanted to implement python com server using win32com extensions. Then consume the server from within the .NET. I used the following example to implement the com server and it runs without a problem but when I try to consume it using C# I got FileNotFoundException with the following message "Retrieving the COM class factory for component with CLSID {676E38A6-7FA7-4BFF-9179-AE959734DEBB} failed due to the following error: 8007007e." . I posted the C# code as well.I wonder if I'm missing something I would appreciate any help. Thanks, Sarah #PythonCOMServer.py import pythoncom class PythonUtilities: _public_methods_ = [ 'SplitString' ] _reg_progid_ = "PythonDemos.Utilities" # NEVER copy the following ID # Use"print pythoncom.CreateGuid()" to make a new one. _reg_clsid_ = pythoncom.CreateGuid() print _reg_clsid_ def SplitString(self, val, item=None): import string if item != None: item = str(item) return string.split(str(val), item) # Add code so that when this script is run by # Python.exe,.it self-registers. if __name__=='__main__': print 'Registering Com Server' import win32com.server.register win32com.server.register.UseCommandLine(PythonUtilities) // the C# code using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Reflection; namespace ConsoleApplication2 { class Program { static void Main(string[] args) { Type pythonServer; object pythonObject; pythonServer = Type.GetTypeFromProgID("PythonDemos.Utilities"); pythonObject = Activator.CreateInstance(pythonServer); } } }
[ "A COM server is just a piece of software (a DLL or an executable) that will accept remote procedure calls (RPC) through a defined protocol. Part of the protocol says that the server must have a unique ID, stored in the Windows' registry.\nIn our case, this means that you have \"registered\" a server that is not existing. Thus the error (component not found).\nSo, it should be something like this (as usual, this is untested code!):\nimport pythoncom\n\nclass HelloWorld:\n _reg_clsctx_ = pythoncom.CLSCTX_LOCAL_SERVER\n _reg_clsid_ = \"{B83DD222-7750-413D-A9AD-01B37021B24B}\"\n _reg_desc_ = \"Python Test COM Server\"\n _reg_progid_ = \"Python.TestServer\"\n _public_methods_ = ['Hello']\n _public_attrs_ = ['softspace', 'noCalls']\n _readonly_attrs_ = ['noCalls']\n # for Python 3.7+\n _reg_verprogid_ = \"Python.TestServer.1\"\n _reg_class_spec_ = \"HelloWorldCOM.HelloWorld\"\n\n def __init__(self):\n self.softspace = 1\n self.noCalls = 0\n\n def Hello(self, who):\n self.noCalls = self.noCalls + 1\n # insert \"softspace\" number of spaces\n return \"Hello\" + \" \" * self.softspace + str(who)\n\nif __name__ == '__main__':\n if '--register' in sys.argv[1:] or '--unregister' in sys.argv[1:]:\n import win32com.server.register\n win32com.server.register.UseCommandLine(HelloWorld)\n else:\n # start the server.\n from win32com.server import localserver\n localserver.serve(['{B83DD222-7750-413D-A9AD-01B37021B24B}'])\n\n\nThen you should run from the command line (assuming the script is called HelloWorldCOM.py):\nHelloWorldCOM.py --register\nHelloWorldCOM.py\n\nClass HelloWorld is the actual implementation of the server. It expose one method (Hello) and a couple of attributes, one of the two is read-only.\nWith the first command, you register the server; with the second one, you run it and then it becomes available to usage from other applications.\n", "You need run Process Monitor on your C# Executable to track down the file that is not found.\n" ]
[ 11, 0 ]
[]
[]
[ ".net", "com", "python" ]
stackoverflow_0001054849_.net_com_python.txt
Q: Unable to make a MySQL database of SO questions by Python Brent's answer suggests that he has made a database of SO questions such that he can quickly analyze the questions. I am interested in making a similar database in MySQL so that I can practice MySQL with queries similar to Brent's. The database should include at least the following fields (I am guessing here, since SO's API seems to be secret). I aim to list only the relevant variables which would allow me to make a similar analysis to Brent's. Questions Question_id (primary key) Question_time Comments Comment_id (primary key) Comment_time User_id (primary key) User_name We apparently need to scrape the data with Python's Beautiful Soup because Brent's database is apparently hidden. How can you make such a MySQL database with Python's Beautiful Soup? A: I don't know the details of how to import the data into MySQL, but the raw data of Stack Overflow is freely available: https://blog.stackoverflow.com/2009/06/stack-overflow-creative-commons-data-dump/ There's no secret API, nor any need to use Beautiful Soup. A: I'm sure it's possible to work directly with the XML data dump @RichieHindle mentions, but I was much happier with @nobody_'s sqlite version -- especially after adding the indices as the README file in that sqlite version says. If you have the complete, indexed sqlite version and want to load the Python-tagged subset into a MySQL database, that can be seen as a simple but neat exercise in using two DB API instances, reading from the sqlite one and writing to the MySQL one (personally I found the sqlite performance entirely satisfactory once the index-building is done, so I did no subset extraction nor any moving to other DB engines) -- no Soup nor Soap needed for the purpose. In any case, it was much simpler and faster for me than loading from XML directly, despite lxml and all. Of course if you do still want to perform the subset-load, and if you experience any trouble at all coding it up, ask (with schema and code samples, error messages if any, etc) and SOers will try to answer, as usual!-)
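A minimal sketch of the two-connection copy described in the second answer; the table name, column list, file name and credentials are all assumptions for illustration, not the real dump schema:

import sqlite3
import MySQLdb  # the python-mysql driver

src = sqlite3.connect('so-dump.db')                     # assumed filename
dst = MySQLdb.connect(user='me', passwd='pw', db='so')  # assumed credentials

read, write = src.cursor(), dst.cursor()
# assumed schema: pull only the python-tagged questions out of the sqlite dump
read.execute("SELECT id, creation_date FROM posts WHERE tags LIKE '%python%'")
for row in read:
    write.execute("INSERT INTO posts (id, creation_date) VALUES (%s, %s)", row)
dst.commit()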
Unable to make a MySQL database of SO questions by Python
Brent's answer suggests that he has made a database of SO questions such that he can quickly analyze the questions. I am interested in making a similar database in MySQL so that I can practice MySQL with queries similar to Brent's. The database should include at least the following fields (I am guessing here, since SO's API seems to be secret). I aim to list only the relevant variables which would allow me to make a similar analysis to Brent's. Questions Question_id (primary key) Question_time Comments Comment_id (primary key) Comment_time User_id (primary key) User_name We apparently need to scrape the data with Python's Beautiful Soup because Brent's database is apparently hidden. How can you make such a MySQL database with Python's Beautiful Soup?
[ "I don't know the details of how to import the data into MySQL, but the raw data of Stack Overflow is freely available: https://blog.stackoverflow.com/2009/06/stack-overflow-creative-commons-data-dump/\nThere's no secret API, nor any need to use Beautiful Soup.\n", "I'm sure it's possible to work directly with the XML data dump @RichieHindle mentions, but I was much happier with @nobody_'s sqlite version -- especially after adding the indices as the README file in that sqlite version says.\nIf you have the complete, indexed sqlite version and want to load the Python-tagged subset into a MySQL database, that can be seen as a simple but neat exercise in using two DB API instances, reading from the sqlite one and writing to the MySQL one (personally I found the sqlite performance entirely satisfactory once the index-building is done, so I did no subset extraction nor any moving to other DB engines) -- no Soup nor Soap needed for the purpose. In any case, it was much simpler and faster for me than loading from XML directly, despite lxml and all.\nOf course if you do still want to perform the subset-load, and if you experience any trouble at all coding it up, ask (with schema and code samples, error messages if any, etc) and SOers will try to answer, as usual!-)\n" ]
[ 1, 1 ]
[]
[]
[ "database", "mysql", "python" ]
stackoverflow_0001054964_database_mysql_python.txt
Q: fast and easy way to template xml files in python Right now I've hard-coded the whole xml file in my python script and am just doing out.write(), but now it's getting harder to manage because I have multiple types of xml file. What is the easiest and quickest way to set up templating so that I can just give the variable names and filename? A: Short answer is: You should be focusing on, and dealing with, the data (i.e., python object) and not the raw XML Basic story: XML is supposed to be a representation of some data, or data set. You don't have a lot of detail in your question about the type of data, what it represents, etc, etc -- so I'll give you some basic answers. Python choices: BeautifulSoup, lxml and other python libraries (ElementTree, etc.) make dealing with XML much easier. They let me read in, or write out, XML data much more easily than if I'd tried to work directly with the XML in raw form. In the middle of those 2 (input, output) activities, my python program is dealing with a nice python object or some kind of parse tree I can walk. You can read data in, create an object from that string, manipulate it and write out XML. Other choice, Templates: OK -- maybe you like XML and just want to "template" it so you can populate it with the data. You might be more comfortable with this, if you aren't really manipulating the data -- but just representing it for output. And, this is similar to the XML strings you are currently using -- so may be more familiar. Use Cheetah, Jinja, or other template libraries to help. Make a template for the XML file, using that template language. For example, you just read a list of books from a file or database table. You would pass this list of book objects to the template engine, with a template, and then tell it to write out your XML output. Example template for these book objects: <?xml version="1.0"?> <catalog> {% for object in object_list %} <book id="{{ object.bookID }}"> <author>{{ object.author_name }}</author> <title>{{ object.title }}</title> <genre>{{ object.genre }}</genre> <price>{{ object.price }}</price> <publish_date>{{ object.pub_date }}</publish_date> <description>{{ object.description }}</description> </book> {% endfor %} </catalog> The template engine would loop through the "object_list" and output a long XML file with all your books. That would be much better than storing raw XML strings, as you currently are. This makes the update & modification of the display of XML separate from the data, data storage, and data manipulation -- making your life easier. A: Two choices. A template tool, for example Jinja2. Build the DOM object. Not as bad as it sounds. ElementTree has a pleasant factory for building XML tags and creating the necessary structure. A: A lightweight option is xml.dom.minidom xml.dom.minidom is a light-weight implementation of the Document Object Model interface. It is intended to be simpler than the full DOM and also significantly smaller. You can create DOM objects using the xml.dom API, for example DOM Element objects, and generate the XML using Node.writexml. Note that this requires building DOM hierarchies, which may not be what you are after. A more pythonic option is ElementTree. The Element type is a flexible container object, designed to store hierarchical data structures in memory. The type can be described as a cross between a list and a dictionary.
ElementTree objects are easier to create and handle in Python, and can be serialized to XML with ElementTree.dump() or ElementTree.tostring() A: My ancient YAPTU and Palmer's yaptoo variant on it should be usable if you want something very simple and lightweight -- but there are many, many other general and powerful templating engines to choose among, these days. A pretty complete list is here. A: You asked for the easiest and quickest, so see this post: http://blog.simonwillison.net/post/58096201893/simpletemplates If you want something smarter, take a look here.
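As a sketch of the ElementTree route mentioned above (the book fields mirror the template example, and object_list and out are assumed to already exist):

from xml.etree import ElementTree as ET  # on 2.4, the standalone elementtree package

catalog = ET.Element('catalog')
for obj in object_list:                       # assumed list of book objects
    book = ET.SubElement(catalog, 'book', id=str(obj.bookID))
    ET.SubElement(book, 'author').text = obj.author_name
    ET.SubElement(book, 'title').text = obj.title
out.write(ET.tostring(catalog))               # serialize the whole tree in one go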
fast and easy way to template xml files in python
Right now I've hard-coded the whole xml file in my python script and am just doing out.write(), but now it's getting harder to manage because I have multiple types of xml file. What is the easiest and quickest way to set up templating so that I can just give the variable names and filename?
[ "Short answer is: You should be focusing, and dealing with, the data (i.e., python object) and not the raw XML\nBasic story:\nXML is supposed to be a representation of some data, or data set.\nYou don't have a lot of detail in your question about the type of data, what it represents, etc, etc -- so I'll give you some basic answers.\nPython choices:\nBeautifulSoup, lxml and other python libraries (ElementTree, etc.), make dealing with XML more easy. They let me read in, or write out, XML data much more easily than if I'd tried to work directly with the XML in raw form.\nIn the middle of those 2 (input,output) activities, my python program is dealing with a nice python object or some kind of parse tree I can walk. You can read data in, create an object from that string, manipulate it and write out XML.\nOther choice, Templates:\nOK -- maybe you like XML and just want to \"template\" it so you can populate it with the data.\nYou might be more comfortable with this, if you aren't really manipulating the data -- but just representing it for output. And, this is similar to the XML strings you are currently using -- so may be more familiar.\nUse Cheetah, Jinja, or other template libraries to help.\nMake a template for the XML file, using that template language. \nFor example, you just read a list of books from a file or database table.\nYou would pass this list of book objects to the template engine, with a template, and then tell it to write out your XML output. \nExample template for these book objects:\n<?xml version=\"1.0\"?>\n<catalog>\n {% for object in object_list %}\n <book id=\"{{ object.bookID }}\">\n <author>{{ object.author_name }}</author>\n <title>{{ object.title }}</title>\n <genre>{{ object.genre }}</genre>\n <price>{{ object.price }}</price>\n <publish_date>{{ object.pub_date }}</publish_date>\n <description>{{ object.description }}</description>\n </book>\n {% endfor %}\n </catalog>\n </xml>\n\nThe template engine would loop through the \"object_list\" and output a long XML file with all your books. That would be much better than storing raw XML strings, as you currently are.\nThis makes the update & modification of the display of XML separate from the data, data storage, and data manipulation -- making your life easier.\n", "Two choices.\n\nA template tool, for example Jinja2.\nBuild the DOM object. Not as bad as it sounds. ElementTree has a pleasant factory for building XML tags and creating the necessary structure. \n\n", "A lightweight option is xml.dom.minidom\n\nxml.dom.minidom is a light-weight implementation of the Document Object Model interface. It is intended to be simpler than the full DOM and also significantly smaller.\n\nYou can create DOM object using the xml.dom API, for example DOM Element objects, and generate the XML using Node.writexml. Note that this requires building DOM hierarchies, which may not be what you are after.\nmore pythonic option is ElementTree.\n\nThe Element type is a flexible container object, designed to store hierarchical data structures in memory. The type can be described as a cross between a list and a dictionary.\n\nElementTree objects are easier to create and handle in Python, and can be serialized to XML with ElementTree.dump() or ElementTree.tostring()\n", "My ancient YAPTU and Palmer's yaptoo variant on it should be usable if you want something very simple and lightweight -- but there are many, many other general and powerful templating engines to chose among, these days. 
A pretty complete list is here.\n", "You asked for the easiest and quickest, so see this post: http://blog.simonwillison.net/post/58096201893/simpletemplates\nIf you want something smarter, take a look here.\n" ]
[ 6, 4, 4, 1, 1 ]
[]
[]
[ "python", "xml" ]
stackoverflow_0001055108_python_xml.txt
Q: Unicode friendly alphabetic pattern for python regex? I'm looking for a pattern equivalent to \w that doesn't match numeric characters. I cannot use [a-zA-Z] because I would like it to match Japanese kanji as well. Is there a way to write something like [\w^[0-9]] ? Is there an equivalent of [:alpha:] in python regex? A: [^\W\d] Throw out non-word characters and throw out digits. Keep the rest.
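A quick demonstration of that character class (the sample string is made up); on Python 2 the re.UNICODE flag is needed for \w to cover non-ASCII letters:

import re

alpha = re.compile(r'[^\W\d]', re.UNICODE)   # \w minus the digits (note: '_' still matches)
print alpha.findall(u'abc123\u6f22\u5b57')   # -> [u'a', u'b', u'c', u'\u6f22', u'\u5b57']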
Unicode friendly alphabetic pattern for python regex?
I'm looking for a pattern equivalent to \w that doesn't match numeric characters. I cannot use [a-zA-Z] because I would like it to match Japanese kanji as well. Is there a way to write something like [\w^[0-9]] ? Is there an equivalent of [:alpha:] in python regex?
[ "[^\\W\\d]\n\nThrow out non-word characters and throw out digits. Keep the rest.\n" ]
[ 11 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0001055160_python_regex.txt
Q: How to modify a NumPy.recarray using its two views I am new to Python and Numpy, and I am facing a problem: I cannot modify a numpy.recarray when writing to it through masked views. I read a recarray from a file, then create two masked views, then try to modify the values in a for loop. Here is some example code. import numpy as np import matplotlib.mlab as mlab dat = mlab.csv2rec(args[0], delimiter=' ') m_Obsr = dat.is_observed == 1 m_ZeroScale = dat[m_Obsr].scale_mean < 0.01 for d in dat[m_Obsr][m_ZeroScale]: d.scale_mean = 1.0 But when I print the result newFile = args[0] + ".no-zero-scale" mlab.rec2csv(dat[m_Obsr][m_ZeroScale], newFile, delimiter=' ') all the scale_means in the files are still zero. I must be doing something wrong. Is there a proper way of modifying values of the view? Is it because I am applying two views one by one? Thank you.
How to modify a NumPy.recarray using its two views
I am new to Python and Numpy, and I am facing a problem: I cannot modify a numpy.recarray when writing to it through masked views. I read a recarray from a file, then create two masked views, then try to modify the values in a for loop. Here is some example code. import numpy as np import matplotlib.mlab as mlab dat = mlab.csv2rec(args[0], delimiter=' ') m_Obsr = dat.is_observed == 1 m_ZeroScale = dat[m_Obsr].scale_mean < 0.01 for d in dat[m_Obsr][m_ZeroScale]: d.scale_mean = 1.0 But when I print the result newFile = args[0] + ".no-zero-scale" mlab.rec2csv(dat[m_Obsr][m_ZeroScale], newFile, delimiter=' ') all the scale_means in the files are still zero. I must be doing something wrong. Is there a proper way of modifying values of the view? Is it because I am applying two views one by one? Thank you.
[ "I think you have a misconception in this term \"masked views\" and should (re-)read The Book (now freely downloadable) to clarify your understanding.\nI quote from section 3.4.2:\n\nAdvanced selection is triggered when\n the selection object, obj, is a\n non-tuple sequence object, an ndarray\n (of data type integer or bool), or a\n tuple with at least one sequence\n object or ndarray (of data type\n integer or bool). There are two types \n of advanced indexing: integer and\n Boolean. Advanced selection always\n returns a copy of the data (contrast\n with basic slicing that returns a\n view).\n\nWhat you're doing here is advanced selection (of the Boolean kind) so you're getting a copy and never binding it anywhere -- you make your changes on the copy and then just let it go away, then write a new fresh copy from the original.\nOnce you understand the issue the solution should be simple: make your copy once, make your changes on that copy, and write that same copy. I.e.:\ndat = mlab.csv2rec(args[0], delimiter=' ')\nm_Obsr = dat.is_observed == 1\nm_ZeroScale = dat[m_Obsr].scale_mean < 0.01\nthe_copy = dat[m_Obsr][m_ZeroScale]\n\nfor d in the_copy:\n d.scale_mean = 1.0\n\nnewFile = args[0] + \".no-zero-scale\"\nmlab.rec2csv(the_copy, newFile, delimiter=' ')\n\n" ]
[ 3 ]
[]
[]
[ "matplotlib", "numpy", "python" ]
stackoverflow_0001055131_matplotlib_numpy_python.txt
Q: What does `@` mean in Python? What does @ mean in Python? Example: @login_required, etc. A: It is decorator syntax. A function definition may be wrapped by one or more decorator expressions. Decorator expressions are evaluated when the function is defined, in the scope that contains the function definition. The result must be a callable, which is invoked with the function object as the only argument. The returned value is bound to the function name instead of the function object. Multiple decorators are applied in nested fashion. So doing something like this: @login_required def my_function(): pass Is just a fancy way of doing this: def my_function(): pass my_function = login_required(my_function) For more, check out the documentation. A: It's a decorator. More here: http://www.ibm.com/developerworks/linux/library/l-cpdecor.html A: A decorator, also called pie syntax. It allows you to "decorate" a function with another function. You already had decoration with staticmethod() and classmethod(). The pie syntax makes it more easy to access and extend. A: If you ask this type of question you will probably be interested in the other hidden features of Python. A: That specific decorator looks like it comes from Django. It might help you get a better understanding by reading the Django documentation about that decorator. A: Some resources for decorator: decorator, PEP 318: Decorators for Functions and Methods, PythonDecorators and PythonDecoratorLibrary. A decorator article on DDJ and another article (blog post).
What does `@` mean in Python?
What does @ mean in Python? Example: @login_required, etc.
[ "It is decorator syntax.\n\nA function definition may be wrapped by one or more decorator expressions. Decorator expressions are evaluated when the function is defined, in the scope that contains the function definition. The result must be a callable, which is invoked with the function object as the only argument. The returned value is bound to the function name instead of the function object. Multiple decorators are applied in nested fashion.\n\nSo doing something like this:\n@login_required\ndef my_function():\n pass\n\nIs just a fancy way of doing this:\ndef my_function():\n pass\nmy_function = login_required(my_function)\n\nFor more, check out the documentation.\n", "It's a decorator.\nMore here: http://www.ibm.com/developerworks/linux/library/l-cpdecor.html\n", "A decorator, also called pie syntax. It allows you to \"decorate\" a function with another function. You already had decoration with staticmethod() and classmethod(). The pie syntax makes it more easy to access and extend.\n", "If you ask this type of question you will probably be interested in the other hidden features of Python.\n", "That specific decorator looks like it comes from Django.\nIt might help you get a better understanding by reading the Django documentation about that decorator.\n", "Some resources for decorator:\ndecorator,\nPEP 318: Decorators for Functions and Methods,\nPythonDecorators and \nPythonDecoratorLibrary.\nA decorator article on DDJ and\nanother article (blog post).\n" ]
[ 31, 1, 1, 1, 1, 0 ]
[]
[]
[ "python", "syntax" ]
stackoverflow_0001053732_python_syntax.txt
Q: item frequency in a python list of dictionaries Ok, so I have a list of dicts: [{'name': 'johnny', 'surname': 'smith', 'age': 53}, {'name': 'johnny', 'surname': 'ryan', 'age': 13}, {'name': 'jakob', 'surname': 'smith', 'age': 27}, {'name': 'aaron', 'surname': 'specter', 'age': 22}, {'name': 'max', 'surname': 'headroom', 'age': 108}, ] and I want the 'frequency' of the items within each column. So for this I'd get something like: {'name': {'johnny': 2, 'jakob': 1, 'aaron': 1, 'max': 1}, 'surname': {'smith': 2, 'ryan': 1, 'specter': 1, 'headroom': 1}, 'age': {53: 1, 13: 1, 27: 1, 22: 1, 108: 1}} Any modules out there that can do stuff like this?
item frequency in a python list of dictionaries
Ok, so I have a list of dicts: [{'name': 'johnny', 'surname': 'smith', 'age': 53}, {'name': 'johnny', 'surname': 'ryan', 'age': 13}, {'name': 'jakob', 'surname': 'smith', 'age': 27}, {'name': 'aaron', 'surname': 'specter', 'age': 22}, {'name': 'max', 'surname': 'headroom', 'age': 108}, ] and I want the 'frequency' of the items within each column. So for this I'd get something like: {'name': {'johnny': 2, 'jakob': 1, 'aaron': 1, 'max': 1}, 'surname': {'smith': 2, 'ryan': 1, 'specter': 1, 'headroom': 1}, 'age': {53: 1, 13: 1, 27: 1, 22: 1, 108: 1}} Any modules out there that can do stuff like this?
[ "collections.defaultdict from the standard library to the rescue:\nfrom collections import defaultdict\n\nLofD = [{'name': 'johnny', 'surname': 'smith', 'age': 53},\n {'name': 'johnny', 'surname': 'ryan', 'age': 13},\n {'name': 'jakob', 'surname': 'smith', 'age': 27},\n {'name': 'aaron', 'surname': 'specter', 'age': 22},\n {'name': 'max', 'surname': 'headroom', 'age': 108},\n]\n\ndef counters():\n return defaultdict(int)\n\ndef freqs(LofD):\n r = defaultdict(counters)\n for d in LofD:\n for k, v in d.items():\n r[k][v] += 1\n return dict((k, dict(v)) for k, v in r.items())\n\nprint freqs(LofD)\n\nemits\n{'age': {27: 1, 108: 1, 53: 1, 22: 1, 13: 1}, 'surname': {'headroom': 1, 'smith': 2, 'specter': 1, 'ryan': 1}, 'name': {'jakob': 1, 'max': 1, 'aaron': 1, 'johnny': 2}}\n\nas desired (order of keys apart, of course -- it's irrelevant in a dict).\n", "items = [{'name': 'johnny', 'surname': 'smith', 'age': 53}, {'name': 'johnny', 'surname': 'ryan', 'age': 13}, {'name': 'jakob', 'surname': 'smith', 'age': 27}, {'name': 'aaron', 'surname': 'specter', 'age': 22}, {'name': 'max', 'surname': 'headroom', 'age': 108}]\n\nglobal_dict = {}\n\nfor item in items:\n for key, value in item.items():\n if not global_dict.has_key(key):\n global_dict[key] = {}\n\n if not global_dict[key].has_key(value):\n global_dict[key][value] = 0\n\n global_dict[key][value] += 1\n\nprint global_dict\n\nSimplest solution and actually tested.\n", "New in Python 3.1: The collections.Counter class:\nmydict=[{'name': 'johnny', 'surname': 'smith', 'age': 53},\n {'name': 'johnny', 'surname': 'ryan', 'age': 13},\n {'name': 'jakob', 'surname': 'smith', 'age': 27},\n {'name': 'aaron', 'surname': 'specter', 'age': 22},\n {'name': 'max', 'surname': 'headroom', 'age': 108},\n]\n\nimport collections\nnewdict = {}\n\nfor key in mydict[0].keys():\n l = [value[key] for value in mydict]\n newdict[key] = dict(collections.Counter(l))\n\nprint(newdict)\n\noutputs:\n{'age': {27: 1, 108: 1, 53: 1, 22: 1, 13: 1}, \n'surname': {'headroom': 1, 'smith': 2, 'specter': 1, 'ryan': 1}, \n'name': {'jakob': 1, 'max': 1, 'aaron': 1, 'johnny': 2}}\n\n", "This?\nfrom collections import defaultdict\nfq = { 'name': defaultdict(int), 'surname': defaultdict(int), 'age': defaultdict(int) }\nfor row in listOfDicts:\n for field in fq:\n fq[field][row[field]] += 1\nprint fq\n\n" ]
[ 14, 2, 2, 1 ]
[]
[]
[ "dictionary", "python" ]
stackoverflow_0001055646_dictionary_python.txt
Q: Can two versions of the same library coexist in the same Python install? The C libraries have a nice form of late binding, where the exact version of the library that was used during linking is recorded, and thus an executable can find the correct file, even when several versions of the same library are installed. Can the same be done in Python? To be more specific, I work on a Python project that uses some 3rd-party libraries, such as paramiko. Paramiko is now version 1.7.4, but some distributions carry an older version of it, while supplying about the same version of the Python interpreter. Naturally, I would like to support as many configurations as possible, and not just the latest distros. But if I upgrade the installed version of paramiko from what an old distro provides, I 1) make life hard for the package manager 2) might break some existing apps due to incompatibilities in the library version and 3) might get broken if the package manager decides to overwrite my custom installation. Is it possible to resolve this problem cleanly in Python? (i.e., how would i do the setup, and what should the code look like). Ideally, it would just install several versions of a library in site_libraries and let my script select the right one, rather than maintaining a private directory with a set of manually installed libraries.. P.S.: I could compile the Python program to a binary, carrying all the necessary dependencies with it, but it kind of works against the idea of using the interpreter provided by the distro. I do it on Windows though. A: You may want to take a look at virtualenv
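If installing virtualenv on old distros is not an option, the usual script-level fallback is to put a bundled copy of the library ahead of the system one on sys.path before importing. A sketch -- the 'deps' directory name is an assumption, and it does mean keeping a private directory, which the question hoped to avoid:

import os
import sys

here = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(here, 'deps'))   # bundled libraries win over system ones

import paramiko                                  # resolves to deps/paramiko if present
print paramiko.__version__                       # confirm which copy was picked up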
Can two versions of the same library coexist in the same Python install?
The C libraries have a nice form of late binding, where the exact version of the library that was used during linking is recorded, and thus an executable can find the correct file, even when several versions of the same library are installed. Can the same be done in Python? To be more specific, I work on a Python project that uses some 3rd-party libraries, such as paramiko. Paramiko is now version 1.7.4, but some distributions carry an older version of it, while supplying about the same version of the Python interpreter. Naturally, I would like to support as many configurations as possible, and not just the latest distros. But if I upgrade the installed version of paramiko from what an old distro provides, I 1) make life hard for the package manager 2) might break some existing apps due to incompatibilities in the library version and 3) might get broken if the package manager decides to overwrite my custom installation. Is it possible to resolve this problem cleanly in Python? (i.e., how would i do the setup, and what should the code look like). Ideally, it would just install several versions of a library in site_libraries and let my script select the right one, rather than maintaining a private directory with a set of manually installed libraries.. P.S.: I could compile the Python program to a binary, carrying all the necessary dependencies with it, but it kind of works against the idea of using the interpreter provided by the distro. I do it on Windows though.
[ "You may want to take a look at virtualenv\n" ]
[ 8 ]
[]
[]
[ "python", "shared_libraries" ]
stackoverflow_0001055926_python_shared_libraries.txt
Q: Ruby on Rails vs. Django Possible Duplicate: Rails or Django? (or something else?) These are two web frameworks that are becoming (or have been, in many circles) popular. I was wondering what the advantages and disadvantages of each are. Feel free to comment on Ruby and Python pros and cons also. Two disadvantages I am speculating about for RoR are its scalability, since that still seems to be a disputed topic, and how turbulent the 'in' libraries are. A: Watch the Google Talk "Snakes and Rubies" on video.google.com. Core developers from Django and Ruby on Rails comparing these two frameworks. In better quality here
Ruby on Rails vs. Django
Possible Duplicate: Rails or Django? (or something else?) These are two web frameworks that are becoming (or have been, in many circles) popular. I was wondering what the advantages and disadvantages of each are. Feel free to comment on Ruby and Python pros and cons also. Two disadvantages I am speculating about for RoR are its scalability, since that still seems to be a disputed topic, and how turbulent the 'in' libraries are.
[ "Watch Google Talk \"Snakes and Rubies\" on video.google.com. Core developers from Django and Ruby on Rails comparing these two frameworks. In better quality here\n" ]
[ 9 ]
[]
[]
[ "django", "python", "ruby", "ruby_on_rails" ]
stackoverflow_0001056278_django_python_ruby_ruby_on_rails.txt
Q: Python 2.4 plistlib on Linux According to http://docs.python.org/dev/library/plistlib.html, plistlib is available to non-Mac platforms only since 2.6, but I'm wondering if there's a way to get it to work on 2.4 on Linux. A: Download it and give it a try: http://svn.python.org/projects/python/trunk/Lib/plistlib.py (If that doesn't work, you may have more luck with the 2.4 version.)
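Once the trunk module is on your path, usage is the same as on 2.6. A sketch -- the file path and key are made up, and note the older 2.4 module exposes a Plist class rather than these functions:

import plistlib   # the copied-in trunk module

data = plistlib.readPlist('/tmp/example.plist')   # parses into a dict-like object
data['CFBundleVersion'] = '1.0.1'                 # assumed key, for illustration
plistlib.writePlist(data, '/tmp/example.plist')   # write it back out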
Python 2.4 plistlib on Linux
According to http://docs.python.org/dev/library/plistlib.html, plistlib is available to non-Mac platforms only since 2.6, but I'm wondering if there's a way to get it to work on 2.4 on Linux.
[ "Download it and give it a try:\nhttp://svn.python.org/projects/python/trunk/Lib/plistlib.py\n(If that doesn't work, you may have more luck with the 2.4 version.)\n" ]
[ 3 ]
[]
[]
[ "linux", "python" ]
stackoverflow_0001056593_linux_python.txt
Q: JSON serialization in Spidermonkey I'm using python-spidermonkey to run JavaScript code. In order to pass objects (instead of just strings) to Python, I'm thinking of returning a JSON string. This seems like a common issue, so I wonder whether there are any facilities for this built into either Spidermonkey or python-spidermonkey. (I do know about uneval but that is not meant to be used for JSON serialization - and I'd rather avoid injecting a block of JavaScript to do this.) A: I would use JSON.stringify. It's part of the ECMAScript 5 standard, and it's implemented in the current version of spidermonkey. I don't know if it's in the version used by python-spidermonkey, but if it isn't, you can get a JavaScript implementation from http://www.json.org/js.html.
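On the Python side, the returned string then only needs a JSON parser (simplejson on pre-2.6 interpreters). A sketch of the round trip -- run_js here is a stand-in for however your python-spidermonkey context evaluates code, not a real API:

try:
    import json                  # 2.6+
except ImportError:
    import simplejson as json    # pre-2.6

js_result = run_js('JSON.stringify({a: 1, b: [2, 3]})')  # stand-in evaluation call
obj = json.loads(js_result)                              # back to a Python dict
print obj['b']                                           # -> [2, 3]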
JSON serialization in Spidermonkey
I'm using python-spidermonkey to run JavaScript code. In order to pass objects (instead of just strings) to Python, I'm thinking of returning a JSON string. This seems like a common issue, so I wonder whether there are any facilities for this built into either Spidermonkey or python-spidermonkey. (I do know about uneval but that is not meant to be used for JSON serialization - and I'd rather avoid injecting a block of JavaScript to do this.)
[ "I would use JSON.stringify. It's part of the ECMAScript 5 standard, and it's implemented in the current version of spidermonkey. I don't know if it's in the version used by python-spidermonkey, but if it isn't, you can get a JavaScript implementation from http://www.json.org/js.html.\n" ]
[ 7 ]
[]
[]
[ "javascript", "json", "python", "spidermonkey" ]
stackoverflow_0001055805_javascript_json_python_spidermonkey.txt
Q: How to use dict in python? 10 5 -1 -1 -1 1 1 0 2 ... If I want to count the number of occurrences of each number in a file, how do I use python to do it? A: This is almost the exact same algorithm described in Anurag Uniyal's answer, except using the file as an iterator instead of readline(): from collections import defaultdict try: from io import StringIO # 2.6+, 3.x except ImportError: from StringIO import StringIO # 2.5 data = defaultdict(int) #with open("filename", "r") as f: # if a real file with StringIO("10\n5\n-1\n-1\n-1\n1\n1\n0\n2") as f: for line in f: data[int(line)] += 1 for number, count in data.iteritems(): print number, "was found", count, "times" A: Counter is your best friend:) http://docs.python.org/dev/library/collections.html#counter-objects for(Python2.5 and 2.6) http://code.activestate.com/recipes/576611/ >>> cnt = Counter() >>> for word in ['red', 'blue', 'red', 'green', 'blue', 'blue']: ... cnt[word] += 1 >>> cnt Counter({'blue': 3, 'red': 2, 'green': 1}) # or just cnt = Counter(['red', 'blue', 'red', 'green', 'blue', 'blue']) for this : print Counter(int(line.strip()) for line in open("foo.txt", "rb")) ##output Counter({-1: 3, 1: 2, 0: 1, 2: 1, 5: 1, 10: 1}) A: Read the lines of the file into a list l, e.g.: l = [int(line) for line in open('filename','r')] Starting with a list of values l, you can create a dictionary d that gives you for each value in the list the number of occurrences like this: >>> l = [10,5,-1,-1,-1,1,1,0,2] >>> d = dict((x,l.count(x)) for x in l) >>> d[1] 2 EDIT: as Matthew rightly points out, this is hardly optimal. Here is a version using defaultdict: from collections import defaultdict d = defaultdict(int) for line in open('filename','r'): d[int(line)] += 1 A: I think what you call map is, in python, a dictionary. Here is some useful link on how to use it: http://docs.python.org/tutorial/datastructures.html#dictionaries For a good solution, see the answer from Stephan or Matthew - but take also some time to understand what that code does :-) A: New in Python 3.1: from collections import Counter with open("filename","r") as lines: print(Counter(lines)) A: Use collections.defaultdict so that by deafult count for anything is zero After that loop thru lines in file using file.readline and convert each line to int increment counter for each value in your countDict at last go thru dict using for intV, count in countDict.iteritems() and print values A: Use dictionary where every line is a key, and count is value. Increment count for every line, and if there is no dictionary entry for line initialize it with 1 in except clause -- this should work with older versions of Python. def count_same_lines(fname): line_counts = {} for l in file(fname): l = l.rstrip() if l: try: line_counts[l] += 1 except KeyError: line_counts[l] = 1 print('cnt\ttxt') for k in line_counts.keys(): print('%d\t%s' % (line_counts[k], k)) A: l = [10,5,-1,-1,-1,1,1,0,2] d = {} for x in l: d[x] = (d[x] + 1) if (x in d) else 1 There will be a key in d for every distinct value in the original list, and the values of d will be the number of occurrences. A: counter.py #!/usr/bin/env python import fileinput from collections import defaultdict frequencies = defaultdict(int) for line in fileinput.input(): frequencies[line.strip()] += 1 print frequencies Example: $ perl -E'say 1*(rand() < 0.5) for (1..100)' | python counter.py defaultdict(<type 'int'>, {'1': 52, '0': 48})
How to use dict in python?
10 5 -1 -1 -1 1 1 0 2 ... If I want to count the number of occurrences of each number in a file, how do I use python to do it?
[ "This is almost the exact same algorithm described in Anurag Uniyal's answer, except using the file as an iterator instead of readline():\nfrom collections import defaultdict\ntry:\n from io import StringIO # 2.6+, 3.x\nexcept ImportError:\n from StringIO import StringIO # 2.5\n\ndata = defaultdict(int)\n\n#with open(\"filename\", \"r\") as f: # if a real file\nwith StringIO(\"10\\n5\\n-1\\n-1\\n-1\\n1\\n1\\n0\\n2\") as f:\n for line in f:\n data[int(line)] += 1\n\nfor number, count in data.iteritems():\n print number, \"was found\", count, \"times\"\n\n", "Counter is your best friend:)\nhttp://docs.python.org/dev/library/collections.html#counter-objects\nfor(Python2.5 and 2.6) http://code.activestate.com/recipes/576611/\n>>> cnt = Counter()\n>>> for word in ['red', 'blue', 'red', 'green', 'blue', 'blue']:\n... cnt[word] += 1\n>>> cnt\nCounter({'blue': 3, 'red': 2, 'green': 1})\n# or just cnt = Counter(['red', 'blue', 'red', 'green', 'blue', 'blue'])\n\nfor this :\nprint Counter(int(line.strip()) for line in open(\"foo.txt\", \"rb\"))\n##output\nCounter({-1: 3, 1: 2, 0: 1, 2: 1, 5: 1, 10: 1})\n\n", "Read the lines of the file into a list l, e.g.:\nl = [int(line) for line in open('filename','r')]\n\nStarting with a list of values l, you can create a dictionary d that gives you for each value in the list the number of occurrences like this:\n>>> l = [10,5,-1,-1,-1,1,1,0,2]\n>>> d = dict((x,l.count(x)) for x in l)\n>>> d[1]\n2\n\nEDIT: as Matthew rightly points out, this is hardly optimal. Here is a version using defaultdict:\nfrom collections import defaultdict\nd = defaultdict(int)\nfor line in open('filename','r'):\n d[int(line)] += 1\n\n", "I think what you call map is, in python, a dictionary.\nHere is some useful link on how to use it: http://docs.python.org/tutorial/datastructures.html#dictionaries\nFor a good solution, see the answer from Stephan or Matthew - but take also some time to understand what that code does :-)\n", "New in Python 3.1:\nfrom collections import Counter\nwith open(\"filename\",\"r\") as lines:\n print(Counter(lines))\n\n", "\nUse collections.defaultdict so that\nby deafult count for anything is\nzero\nAfter that loop thru lines in file\nusing file.readline and convert\neach line to int\nincrement counter for each value in\nyour countDict\nat last go thru dict using for intV,\ncount in countDict.iteritems() and\nprint values\n\n", "Use dictionary where every line is a key, and count is value. Increment count for every line, and if there is no dictionary entry for line initialize it with 1 in except clause -- this should work with older versions of Python.\ndef count_same_lines(fname):\n line_counts = {}\n for l in file(fname):\n l = l.rstrip()\n if l:\n try:\n line_counts[l] += 1\n except KeyError:\n line_counts[l] = 1\n print('cnt\\ttxt')\n for k in line_counts.keys():\n print('%d\\t%s' % (line_counts[k], k))\n\n", "l = [10,5,-1,-1,-1,1,1,0,2]\nd = {}\nfor x in l:\n d[x] = (d[x] + 1) if (x in d) else 1\n\nThere will be a key in d for every distinct value in the original list, and the values of d will be the number of occurrences.\n", "counter.py\n#!/usr/bin/env python\nimport fileinput\nfrom collections import defaultdict\n\nfrequencies = defaultdict(int)\nfor line in fileinput.input():\n frequencies[line.strip()] += 1\n\nprint frequencies\n\nExample: \n$ perl -E'say 1*(rand() < 0.5) for (1..100)' | python counter.py\ndefaultdict(<type 'int'>, {'1': 52, '0': 48})\n\n" ]
[ 7, 5, 2, 2, 2, 1, 1, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001047614_python.txt
Q: How to analyze IE activity when opening a specific web page I'd like to retrieve data from a specific webpage by using the urllib library. The problem is that in order to open this page some data has to be sent to the server first. If I do it with IE, I first need to tick some checkboxes and then press the "display data" button, which opens the desired page. Looking into the source code, I see that pressing "display data" submits some kind of form - there is no specific url address there. I cannot figure out by looking at the code what parameters are sent to the server... I think that maybe the simplest way to do this would be to analyze the communication between IE and the web server after pressing the "display data" button. If I could see explicitly what IE does, I could mimic it with urllib. What is the easiest way to do that? A: An HTML debugging proxy would be the best tool to use in this situation. As you're using IE, I recommend Fiddler, as it is developed by Microsoft and automatically integrates with Internet Explorer through a plugin. I personally use Fiddler all the time, and it is a really helpful tool, as I'm building an app that mimics a user's browsing session with a website. Fiddler has really good debugging of request parameters, responses, and can even decode encrypted packets. A: You can use a web debugging proxy (e.g. Fiddler, Charles) or a browser addon (e.g. HttpFox, TamperData) or a packet sniffer (e.g. Wireshark).
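Once one of those tools shows the form's target URL and field names, mimicking the POST from Python is short. A sketch -- the URL and field names below are stand-ins for whatever the capture reveals:

import urllib
import urllib2

params = urllib.urlencode({
    'checkbox1': 'on',            # assumed field names, copied from the captured request
    'action': 'display data',
})
response = urllib2.urlopen('http://example.com/report', params)  # passing data makes it a POST
print response.read()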
How to analyze IE activity when opening a specific web page
I'd like to retrieve data from a specific webpage by using the urllib library. The problem is that in order to open this page some data has to be sent to the server first. If I do it with IE, I first need to tick some checkboxes and then press the "display data" button, which opens the desired page. Looking into the source code, I see that pressing "display data" submits some kind of form - there is no specific url address there. I cannot figure out by looking at the code what parameters are sent to the server... I think that maybe the simplest way to do this would be to analyze the communication between IE and the web server after pressing the "display data" button. If I could see explicitly what IE does, I could mimic it with urllib. What is the easiest way to do that?
[ "An HTML debugging proxy would be the best tool to use in this situation. As you're using IE, I recommend Fiddler, as it is developed by Microsoft and automatically integrates with Internet Explorer through a plugin. I personally use Fiddler all the time, and it is a really helpful tool, as I'm building an app that mimics a user's browsing session with a website. Fiddler has really good debugging of request parameters, responses, and can even decode encrypted packets. \n", "You can use a web debugging proxy (e.g. Fiddler, Charles) or a browser addon (e.g. HttpFox, TamperData) or a packet sniffer (e.g. Wireshark).\n" ]
[ 3, 0 ]
[]
[]
[ "html", "information_retrieval", "internet_explorer", "python" ]
stackoverflow_0001056739_html_information_retrieval_internet_explorer_python.txt
Q: Python, Django, datetime In my model, I have 2 datetime properties: start_date end_date I would like to compute the end date as one week after the start_date. How can I accomplish this? A: If you always want your end_date to be one week after the start_date, what you could do is make a custom save method for your model. Another option would be to use signals instead. The result would be the same, but since you are dealing with the model's data, I would suggest that you go for the custom save method. The code for it would look something like this: class ModelName(models.Model): ... def save(self): # Place code here, which is executed the same # time the ``pre_save``-signal would be self.end_date = self.start_date + datetime.timedelta(days=7) # Call parent's ``save`` function super(ModelName, self).save() You can read a bit about how the save method/signals are called in the django docs at: http://docs.djangoproject.com/en/dev/ref/models/instances/ A: >>> import datetime >>> start_date = datetime.datetime.now() >>> end_date = start_date + datetime.timedelta(7) >>> print end_date
Python, Django, datetime
In my model, I have 2 datetime properties: start_date end_date I would like to compute the end date as one week after the start_date. How can I accomplish this?
[ "If you always want your end_date to be one week after the start_date, what you could do, is to make a custom save method for your model.\nAnother option would be to use signals instead. The result would be the same, but since you are dealing with the models data, I would suggest that you go for the custom save method. The code for it would look something like this:\nclass ModelName(models.Model):\n ...\n\n def save(self):\n # Place code here, which is excecuted the same\n # time the ``pre_save``-signal would be\n self.end_date = self.start_date + datetime.timedelta(days=7)\n\n # Call parent's ``save`` function\n super(ModelName, self).save()\n\nYou can read about a bit about how the save method/signals is called in the django docs at: http://docs.djangoproject.com/en/dev/ref/models/instances/\n", ">>> import datetime\n>>> start_date = datetime.datetime.now()\n>>> end_date = start_date + datetime.timedelta(7)\n>>> print end_date\n\n" ]
[ 8, 5 ]
[]
[]
[ "datetime", "django", "python" ]
stackoverflow_0001056934_datetime_django_python.txt
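A self-contained sketch combining the two answers above; the model name and fields are illustrative, not taken from the question:

    import datetime
    from django.db import models

    class Booking(models.Model):  # hypothetical model
        start_date = models.DateTimeField()
        end_date = models.DateTimeField(blank=True, null=True)

        def save(self, *args, **kwargs):
            # end_date is always derived: exactly one week after start_date.
            self.end_date = self.start_date + datetime.timedelta(weeks=1)
            super(Booking, self).save(*args, **kwargs)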
Q: Django failing to find apps I have been working on a Django app on my local computer for some time now, and I am trying to move it to a Media Temple container, but I'm having a problem when I try to start up Django. It gives me this traceback: application failed to start, starting manage.py fastcgi failed:Traceback (most recent call last): File "manage.py", line 11, in ? execute_manager(settings) File "/home/58626/data/python/lib/django/core/management/__init__.py", line 340, in execute_manager utility.execute() File "/home/58626/data/python/lib/django/core/management/__init__.py", line 295, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/home/58626/data/python/lib/django/core/management/base.py", line 192, in run_from_argv self.execute(*args, **options.__dict__) File "/home/58626/data/python/lib/django/core/management/base.py", line 210, in execute translation.activate('en-us') File "/home/58626/data/python/lib/django/utils/translation/__init__.py", line 73, in activate return real_activate(language) File "/home/58626/data/python/lib/django/utils/translation/__init__.py", line 43, in delayed_loader return g['real_%s' % caller](*args, **kwargs) File "/home/58626/data/python/lib/django/utils/translation/trans_real.py", line 209, in activate _active[currentThread()] = translation(language) File "/home/58626/data/python/lib/django/utils/translation/trans_real.py", line 198, in translation default_translation = _fetch(settings.LANGUAGE_CODE) File "/home/58626/data/python/lib/django/utils/translation/trans_real.py", line 181, in _fetch app = getattr(__import__(appname[:p], {}, {}, [appname[p+1:]]), appname[p+1:]) AttributeError: 'module' object has no attribute 'web' The name of the first app is "web". A: Steps I would take would be Run the dev server on your Media Temple instance. If that runs successfully, it obviously is an error with your apache/nginx/whatever setup. I don't have experience running apps as FCGI, which it looks to me you are trying to do. It looks to me that somehow when FCGI runs, it is unable to find your apps. So this is possibly a PYTHONPATH issue. Log/Print sys.path from your fcgi script and look there.
Django failing to find apps
I have been working on a Django app on my local computer for some time now, and I am trying to move it to a Media Temple container, but I'm having a problem when I try to start up Django. It gives me this traceback: application failed to start, starting manage.py fastcgi failed:Traceback (most recent call last): File "manage.py", line 11, in ? execute_manager(settings) File "/home/58626/data/python/lib/django/core/management/__init__.py", line 340, in execute_manager utility.execute() File "/home/58626/data/python/lib/django/core/management/__init__.py", line 295, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/home/58626/data/python/lib/django/core/management/base.py", line 192, in run_from_argv self.execute(*args, **options.__dict__) File "/home/58626/data/python/lib/django/core/management/base.py", line 210, in execute translation.activate('en-us') File "/home/58626/data/python/lib/django/utils/translation/__init__.py", line 73, in activate return real_activate(language) File "/home/58626/data/python/lib/django/utils/translation/__init__.py", line 43, in delayed_loader return g['real_%s' % caller](*args, **kwargs) File "/home/58626/data/python/lib/django/utils/translation/trans_real.py", line 209, in activate _active[currentThread()] = translation(language) File "/home/58626/data/python/lib/django/utils/translation/trans_real.py", line 198, in translation default_translation = _fetch(settings.LANGUAGE_CODE) File "/home/58626/data/python/lib/django/utils/translation/trans_real.py", line 181, in _fetch app = getattr(__import__(appname[:p], {}, {}, [appname[p+1:]]), appname[p+1:]) AttributeError: 'module' object has no attribute 'web' The name of the first app is "web".
[ "Steps I would take would be \n\nRun the dev server on your Media Template instance. If that runs successfully, it obviously is an error with your apache/nginx/whaever setup.\nI dont have experience running apps as FCGI, which it looks to em you are trying to do. It looks to me that somehow when Fcgi runs, it is unable to find your apps. So this is possibly a PYTHONPATH issue. Log/Print sys.path from your fcgi script and look there.\n\n" ]
[ 3 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001056675_django_python.txt
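A quick way to act on the sys.path suggestion -- append the interpreter's search path to a file from the FastCGI entry script, before Django starts, and compare it against where the "web" app actually lives (the log path is an assumption):

    import sys

    # Record every directory Python will search for the "web" app.
    log = open("/home/58626/data/pythonpath.log", "a")
    for p in sys.path:
        log.write(p + "\n")
    log.close()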
Q: Browser automation: Python + Firefox using PyXPCOM I have tried Pamie a browser automation library for internet explorer. It interfaces IE using COM, pretty neat: import PAM30 ie = PAM30.PAMIE("http://user-agent-string.info/") ie.clickButton("Analyze my UA") Now I would like to do the same thing using PyXPCOM with similar flexibility on Firefox. How can I do this? Can you provide sample code? update: please only pyxpcom A: I've used webdriver with firefox. I was very pleased with it. As for the code examples, this will get you started. A: My understanding of PyXPCOM is that it's meant to let you create and access XPCOM components, not control existing ones. You may not be able to do this using PyXPCOM at all, per Mark Hammond, the original author: It simply isn't what XPCOM is trying to do. I'm not sure if Mozilla/Firefox now has or is developing a COM or any other "automation" mechanism. and: If by "automating", you mean "controlling Mozilla via a remote process via xpcom", then as far as I know, that is not possible You may instead want to take a look at the previously-suggested Webdriver project, Windmill, or MozMill, both of which support automating Firefox/Gecko/XULRunner via Python. A: If you're testing a webapp and want to write Python to do it, check out Selenium RC so you can use the same API for all browsers.
Browser automation: Python + Firefox using PyXPCOM
I have tried Pamie a browser automation library for internet explorer. It interfaces IE using COM, pretty neat: import PAM30 ie = PAM30.PAMIE("http://user-agent-string.info/") ie.clickButton("Analyze my UA") Now I would like to do the same thing using PyXPCOM with similar flexibility on Firefox. How can I do this? Can you provide sample code? update: please only pyxpcom
[ "I've used webdriver with firefox. I was very pleased with it.\nAs for the code examples, this will get you started.\n", "My understanding of PyXPCOM is that it's meant to let you create and access XPCOM components, not control existing ones. You may not be able to do this using PyXPCOM at all, per Mark Hammond, the original author:\n\nIt simply isn't what XPCOM is trying to do. I'm not sure if Mozilla/Firefox now has or is developing a COM or any other \"automation\" mechanism.\n\nand:\n\nIf by \"automating\", you mean \"controlling Mozilla via a remote process via xpcom\", then as far as I know, that is not possible\n\nYou may instead want to take a look at the previously-suggested Webdriver project, Windmill, or MozMill, both of which support automating Firefox/Gecko/XULRunner via Python.\n", "If you're testing a webapp and want to write Python to do it, check out Selenium RC so you can use the same API for all browsers. \n" ]
[ 10, 4, 2 ]
[]
[]
[ "automation", "firefox", "python" ]
stackoverflow_0001020524_automation_firefox_python.txt
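To make the Selenium RC suggestion concrete, here is a minimal driving script; it assumes a Selenium RC server is already running on localhost:4444, and the button locator is a guess at the page's markup:

    from selenium import selenium  # the classic Selenium RC Python client

    s = selenium("localhost", 4444, "*firefox", "http://user-agent-string.info/")
    s.start()
    s.open("/")
    s.click("//input[@value='Analyze my UA']")  # locator is an assumption
    s.wait_for_page_to_load(30000)  # wait up to 30 seconds
    s.stop()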
Q: Django/Python: How do I transfer a class's attributes to another via a for loop? (Form->Model Instance) I wish to update a model instance from a form I have. The form is a ModelForm, so it has the same attributes as the model instance; how do I transfer the attributes from the form instance to the model instance instead of doing this: modelinstance.name = form.name . . . . A for loop perhaps? :) Thanks! A: Call the save() method of the form. Specifically instantiate the form with keyword argument instance like this: >>> a = Article.objects.get(pk=1) >>> f = ArticleForm(instance=a) >>> f.save() Taken from here: http://docs.djangoproject.com/en/dev/topics/forms/modelforms/#the-save-method
Django/Python: How do I transfer a class's attributes to another via a for loop? (Form->Model Instance)
I wish to update a model instance from a form I have. The form is a ModelForm, so it has the same attributes as the model instance; how do I transfer the attributes from the form instance to the model instance instead of doing this: modelinstance.name = form.name . . . . A for loop perhaps? :) Thanks!
[ "Call the save() method of the form.\nSpecifically instantiate the form with keyword argument instance like this:\n>>> a = Article.objects.get(pk=1)\n>>> f = ArticleForm(instance=a)\n>>> f.save()\n\nTaken from here: http://docs.djangoproject.com/en/dev/topics/forms/modelforms/#the-save-method\n" ]
[ 6 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001057477_django_python.txt
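The same pattern when the new values come from a submitted form: binding request.POST together with instance= copies every field onto the existing object on save(), with no manual loop needed:

    a = Article.objects.get(pk=1)
    f = ArticleForm(request.POST, instance=a)  # bound form over an existing row
    if f.is_valid():
        f.save()  # updates the existing Article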
Q: Using pipes to communicate data between two anonymous python scripts Consider this at the windows commandline. scriptA.py | scriptB.py I want to send a dictionary object from scriptA.py to scriptB.py by pickling it and sending it over a pipe. But I don't know how to accomplish this. I've read some posts about this subject here, but usually there are answers along these lines: Popen( "scriptA.py", ..., and so on ) But I don't actually know the name of "scriptA.py". I just want to get hold of the ready pipe object and send/receive the data buffer. I've tried sys.stdout/stdin, but I get file-descriptor errors and basically haven't tried that track very far. The process is simple: scriptA.py: (1) Pickle/Serialize dictionary into stringbuffer (2) Send stringbuffer over pipe scriptB.py (3) Receive stringbuffer from pipe (4) Unpickle/Deserialize stringbuffer into dictionary A: When you say this to a shell scriptA.py | scriptB.py The shell connects them with a pipe. You do NOTHING and it works perfectly. Everything that scriptA.py writes to sys.stdout goes to scriptB.py Everything that scriptB.py reads from sys.stdin came from scriptA.py They're already connected. So, how do you pass a dictionary from stdout in A to stdin in B? Pickle. scriptA.py dumps the dictionary to stdout. scriptB.py loads the dictionary from stdin. JSON. scriptA.py dumps the dictionary to stdout. scriptB.py loads the dictionary from stdin. This is already built-in to Python and takes very, very little code. In scriptA, json.dump( {}, sys.stdout ) or pickle.dump( {}, sys.stdout ) In scriptB, json.load( sys.stdin ) or pickle.load( sys.stdin ) A: The pipe just puts stdout of A to stdin of B. A does: import sys sys.stdout.writelines(output) B just does: import sys input = sys.stdin.readlines() A: When you are piping something, you are (generally) feeding the standard output of one program into the standard input of another. I think you should keep trying this path. If you're having trouble just being able to read the output of your first script with your second, check out this question.
Using pipes to communicate data between two anonymous python scripts
Consider this at the windows commandline. scriptA.py | scriptB.py I want to send a dictionary object from scriptA.py to scriptB.py by pickling it and sending it over a pipe. But I don't know how to accomplish this. I've read some posts about this subject here, but usually there are answers along these lines: Popen( "scriptA.py", ..., and so on ) But I don't actually know the name of "scriptA.py". I just want to get hold of the ready pipe object and send/receive the data buffer. I've tried sys.stdout/stdin, but I get file-descriptor errors and basically haven't tried that track very far. The process is simple: scriptA.py: (1) Pickle/Serialize dictionary into stringbuffer (2) Send stringbuffer over pipe scriptB.py (3) Receive stringbuffer from pipe (4) Unpickle/Deserialize stringbuffer into dictionary
[ "When you say this to a shell\nscriptA.py | scriptB.py\n\nThe shell connects them with a pipe. You do NOTHING and it works perfectly.\nEverything that scriptA.py writes to sys.stdout goes to scriptB.py\nEverything that scriptB.py reads from sys.stdin came from scriptA.py\nThey're already connected. \nSo, how do you pass a dictionary from stdout in A to stdin in B?\n\nPickle. scriptA.py dumps the dictionary to stdout. scriptB.py loads the dictionary from stdin.\nJSON. scriptA.py dumps the dictionary to stdout. scriptB.py loads the dictionary from stdin.\n\nThis is already built-in to Python and takes very, very little code.\nIn scriptA, json.dump( {}, sys.stdout ) or pickle.dump( {}, sys.stdout )\nIn scriptB, json.load( sys.stdin ) or pickle.load( sys.stdin )\n", "The pipe just puts stdout of A to stdin of B.\nA does:\nimport sys\nsys.stdout.writelines(output)\n\nB just does:\nimport sys\ninput = sys.stdin.readlines()\n\n", "When you are piping something, you are (generally) feeding the standard output of one program into the standard input of another. I think you should keep trying this path.\nIf you're having trouble just being able to read the output of your first script with your second, check out this question.\n" ]
[ 7, 2, 0 ]
[]
[]
[ "pipe", "python", "windows" ]
stackoverflow_0001057576_pipe_python_windows.txt
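The complete two-script version of the pickle approach described above:

    # scriptA.py
    import pickle
    import sys

    data = {"name": "example", "count": 3}
    pickle.dump(data, sys.stdout)  # serialized dict travels down the pipe

    # scriptB.py
    import pickle
    import sys

    data = pickle.load(sys.stdin)  # rebuild the dict from the pipe
    print data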
Q: Bad pipe filedescriptor when reading from stdin in python Duplicate of this question. Vote to close. Consider this at the windows commandline. scriptA.py | scriptB.py In scriptA.py: sys.stdout.write( "hello" ) In scriptB.py: print sys.stdin.read() This generates the following error: c:\> scriptA.py | scriptB.py close failed: [Errno 22] Invalid argument Traceback (most recent call last): File "c:\scriptB.py", line 20, in <module> print sys.stdin.read() IOError: [Errno 9] Bad file descriptor The "close failed" message seems to come from execution of scriptA.py. It doesn't matter if I use sys.stdin.read(), sys.stdin.read(1), sys.stdin.readlines() etc etc. What's wrong? Duplicate of this question. Vote to close. A: It seems that stdin/stdout redirect does not work when starting from a file association. This is not specific to python, but a problem caused by win32 cmd.exe. See: http://mail.python.org/pipermail/python-bugs-list/2004-August/024920.html
Bad pipe filedescriptor when reading from stdin in python
Duplicate of this question. Vote to close. Consider this at the windows commandline. scriptA.py | scriptB.py In scriptA.py: sys.stdout.write( "hello" ) In scriptB.py: print sys.stdin.read() This generates the following error: c:\> scriptA.py | scriptB.py close failed: [Errno 22] Invalid argument Traceback (most recent call last): File "c:\scriptB.py", line 20, in <module> print sys.stdin.read() IOError: [Errno 9] Bad file descriptor The "close failed" message seems to come from execution of scriptA.py. It doesn't matter if I use sys.stdin.read(), sys.stdin.read(1), sys.stdin.readlines() etc etc. What's wrong? Duplicate of this question. Vote to close.
[ "It seems that stdin/stdout redirect does not work when starting from a file association.\nThis is not specific to python, but a problem caused by win32 cmd.exe.\nSee: http://mail.python.org/pipermail/python-bugs-list/2004-August/024920.html\n" ]
[ 7 ]
[]
[]
[ "pipe", "python", "windows" ]
stackoverflow_0001057638_pipe_python_windows.txt
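The practical workaround that follows from the linked bug: invoke the interpreter explicitly instead of relying on the .py file association, so cmd.exe wires the pipe to the real process:

    c:\> python scriptA.py | python scriptB.py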
Q: How to do windows API calls in Python 3.1? Has anyone found a version of pywin32 for python 3.x? The latest available appears to be for 2.6. Alternatively, how would I "roll my own" windows API calls in Python 3.1? A: You should be able to do everything with ctypes, if a bit cumbersomely. Here's an example of getting the "common application data" folder: from ctypes import windll, create_unicode_buffer _SHGetFolderPath = windll.shell32.SHGetFolderPathW path_buf = create_unicode_buffer(255) csidl = 35 _SHGetFolderPath(0, csidl, 0, 0, path_buf) print(path_buf.value) Result: C:\Documents and Settings\All Users\Application Data A: There are pywin32 builds available for 3.0. Python 3.1 was released two days ago, so if you need pywin32 for that you either need to wait a bit, or compile them from source. http://sourceforge.net/project/showfiles.php?group_id=78018&package_id=79063
How to do windows API calls in Python 3.1?
Has anyone found a version of pywin32 for python 3.x? The latest available appears to be for 2.6. Alternatively, how would I "roll my own" windows API calls in Python 3.1?
[ "You should be able to do everything with ctypes, if a bit cumbersomely.\nHere's an example of getting the \"common application data\" folder:\nfrom ctypes import windll, wintypes\n\n_SHGetFolderPath = windll.shell32.SHGetFolderPathW\npath_buf = wintypes.create_unicode_buffer(255)\ncsidl = 35\n_SHGetFolderPath(0, csidl, 0, 0, path_buf)\nprint(path_buf.value)\n\nResult:\nC:\\Documents and Settings\\All Users\\Application Data\n\n", "There are pywin32 available for 3.0. Python 3.1 was release two days ago, so if you need pywin32 for that you either need to wait a bit, or compile them from source.\nhttp://sourceforge.net/project/showfiles.php?group_id=78018&package_id=79063\n" ]
[ 10, 6 ]
[]
[]
[ "python", "python_3.x", "winapi" ]
stackoverflow_0001057496_python_python_3.x_winapi.txt
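Another small ctypes call that works unchanged on Python 3.1, as a second template for rolling your own API calls:

    from ctypes import windll

    # MessageBoxW(hwnd, text, caption, type); the final 0 means MB_OK
    windll.user32.MessageBoxW(0, "Hello from Python 3.1", "ctypes demo", 0)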
Q: Execute a prepared statement in sqlalchemy I have to run 40K requests against a username: SELECT * from user WHERE login = :login It's slow, so I figured I would just use a prepared statement. So I do e = sqlalchemy.create_engine(...) c = e.connect() c.execute("PREPARE userinfo(text) AS SELECT * from user WHERE login = $1") r = c.execute("EXECUTE userinfo('bob')") for x in r: do_foo() But I have a: InterfaceError: (InterfaceError) cursor already closed None None I don't understand why I get an exception A: Not sure how to solve your cursor related error message, but I don't think a prepared statement will solve your performance issue - as long as you're using SQL Server 2005 or later, the execution plan for SELECT * from user WHERE login = $login will already be re-used and there will be no performance gain from the prepared statement. I don't know about MySQL or other SQL database servers, but I suspect they too have similar optimisations for ad-hoc queries that make the prepared statement redundant. It sounds like the cause of the performance hit is more down to the fact that you are making 40,000 round trips to the database - you should try and rewrite the query so that you are only executing one SQL statement with a list of the login names. Am I right in thinking that MySQL supports an array data type? If it doesn't (or you are using Microsoft SQL) you should look into passing in some sort of delimited list of usernames. A: From this discussion, it might be a good idea to check your paster debug logs in case there is a better error message there.
Execute a prepared statement in sqlalchemy
I have to run 40K requests against a username: SELECT * from user WHERE login = :login It's slow, so I figured I would just use a prepared statement. So I do e = sqlalchemy.create_engine(...) c = e.connect() c.execute("PREPARE userinfo(text) AS SELECT * from user WHERE login = $1") r = c.execute("EXECUTE userinfo('bob')") for x in r: do_foo() But I have a: InterfaceError: (InterfaceError) cursor already closed None None I don't understand why I get an exception
[ "Not sure how to solve your cursor related error message, but I dont think a prepared staement will solve your performance issue - as long as your using SQL server 2005 or later the execution plan for SELECT * from user WHERE login = $login will already be re-used and there will be no performance gain from the prepared statement. I dont know about MySql or other SQL database servers, but I suspect they too have similar optimisations for Ad-Hoc queries that make the prepared statement redundant.\nIt sounds like the cause of the performance hit is more down to the fact that you are making 40,000 round trips to the database - you should try and rewrite the query so that you are only executing one SQL statement with a list of the login names. Am I right in thinking that MySql supports an aray data type? If it doesnt (or you are using Microsoft SQL) you should look into passing in some sort of delimited list of usernames.\n", "From this discussion, it might be a good idea to check your paster debug logs in case there is a better error message there.\n" ]
[ 2, 1 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0001058037_python_sqlalchemy.txt
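One way to act on the "single round trip" advice with SQLAlchemy: send the whole list of logins in one IN (...) query instead of 40K separate statements (table and column names follow the question; chunk the list if the backend caps the number of bind parameters):

    from sqlalchemy import create_engine, text

    e = create_engine(...)  # same engine URL as in the question
    c = e.connect()
    logins = ["bob", "alice"]  # ... the full 40K list
    names = ", ".join(":p%d" % i for i in range(len(logins)))
    params = dict(("p%d" % i, n) for i, n in enumerate(logins))
    r = c.execute(text("SELECT * FROM user WHERE login IN (%s)" % names), params)
    for x in r:
        do_foo()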
Q: Flags in Python I'm working with a large matrix (250x250x30 = 1,875,000 cells), and I'd like a way to set an arbitrary number of flags for each cell in this matrix, in some manner that's easy to use and reasonably space efficient. My original plan was a 250x250x30 list array, where each element was something like: ["FLAG1","FLAG8","FLAG12"]. I then changed it to storing just integers instead: [1,8,12]. These integers are mapped internally by getter/setter functions to the original flag strings. This only uses 250mb with 8 flags per point, which is fine in terms of memory. My question is: am I missing another obvious way to structure this sort of data? Thanks all for your suggestions. I ended up rolling a few suggestions into one, sadly I can only pick one answer and have to live with upvoting the others: EDIT: erm the initial code I had here (using sets as the base element of a 3d numpy array) used A LOT of memory. This new version uses around 500mb when filled with randint(0,2**1000). import numpy FLAG1=2**0 FLAG2=2**1 FLAG3=2**2 FLAG4=2**3 (x,y,z) = (250,250,30) array = numpy.zeros((x,y,z), dtype=object) def setFlag(location,flag): array[location] |= flag def unsetFlag(location,flag): array[location] &= ~flag A: Your solution is fine if every single cell is going to have a flag. However, if you are working with a sparse dataset where only a small subsection of your cells will have flags, what you really want is a dictionary. You would want to set up the dictionary so the key is a tuple for the location of the cell and the value is a list of flags like you have in your solution. allFlags = {(1,1,1):[1,2,3], (250,250,30):[4,5,6]} Here the (1,1,1) cell has flags 1, 2, and 3, and the (250,250,30) cell has flags 4, 5, and 6 edit- fixed key tuples, thanks Andre, and dictionary syntax. A: You can define some constants with different, power of two values as: FLAG1 = 0x01 FLAG8 = 0x02 FLAG12 = 0x04 ... And use them with boolean logic to store the flags in only one integer, e.g.: flags = FLAG1 | FLAG8 To check if a flag is enabled, you can use the & operator: flag1_enabled = flags & FLAG1 If the flag is enabled, this expression will return a non-zero value, that will be evaluated as True in any boolean operation. If the flag is disabled, the expression will return 0, that is evaluated as False in boolean operations. A: I would generally use a numpy array (presumably of short ints, 2 bytes each, since you may need more than 256 distinct values) -- that would take less than 4MB for the <2 million cells. If for some reason I couldn't afford the numpy dependency (e.g on App Engine, which doesn't support numpy), I'd use the standard library array module - it only supports 1-dimensional arrays, but it's just as space-efficient as numpy for large homogeneous arrays, and the getter/setter routines you mention can perfectly well "linearize" a 3-item tuple that's your natural index into the single integer index into the 1-D array. In general, consider numpy (or array) any time you have large homogeneous, dense vectors or matrices of numbers -- Python built-in lists are highly wasteful of space in this use case (due to their generality which you're not using and don't need here!-), and saving memory indirectly translates to saving time too (better caching, fewer levels of indirection, etc, etc). 
A: Consider using the Flyweight pattern to share cell properties: http://en.wikipedia.org/wiki/Flyweight_pattern A: BitSet is what you want, since it allows you to store many flags at once using only a fixed-size integer (Int type) A: Taking Robbie's suggestion one step further... flags = set() x, y, flag = 34, 201, 3 flags.add((x, y, flag)) # set flag 3 at position (34, 201) if (3, 2, 1) in flags: # check if flag 1 is at position (3, 2) # do something else: # do something else You can also create a helper class. class Flags(object): def __init__(self): self.data = set() def add(self, x, y, flag): self.data.add((x, y, flag)) def remove(self, x, y, flag): self.data.remove((x, y, flag)) def contains(self, x, y, flag): return (x, y, flag) in self.data You could also implement Python's special methods like __contains__ to make it easier to work with.
Flags in Python
I'm working with a large matrix (250x250x30 = 1,875,000 cells), and I'd like a way to set an arbitrary number of flags for each cell in this matrix, in some manner that's easy to use and reasonably space efficient. My original plan was a 250x250x30 list array, where each element was something like: ["FLAG1","FLAG8","FLAG12"]. I then changed it to storing just integers instead: [1,8,12]. These integers are mapped internally by getter/setter functions to the original flag strings. This only uses 250mb with 8 flags per point, which is fine in terms of memory. My question is: am I missing another obvious way to structure this sort of data? Thanks all for your suggestions. I ended up rolling a few suggestions into one, sadly I can only pick one answer and have to live with upvoting the others: EDIT: erm the initial code I had here (using sets as the base element of a 3d numpy array) used A LOT of memory. This new version uses around 500mb when filled with randint(0,2**1000). import numpy FLAG1=2**0 FLAG2=2**1 FLAG3=2**2 FLAG4=2**3 (x,y,z) = (250,250,30) array = numpy.zeros((x,y,z), dtype=object) def setFlag(location,flag): array[location] |= flag def unsetFlag(location,flag): array[location] &= ~flag
[ "Your solution is fine if every single cell is going to have a flag. However if you are working with a sparse dataset where only a small subsection of your cells will have flags what you really want is a dictionary. You would want to set up the dictonary so the key is a tuple for the location of the cell and the value is a list of flags like you have in your solution.\nallFlags = {(1,1,1):[1,2,3], (250,250,30):[4,5,6]}\n\nHere we have the 1,1,1 cell have the flags 1,2, and 3 and the cell 250,250,30 have the flags 4,5, and 6\nedit- fixed key tuples, thanks Andre, and dictionary syntax.\n", "You can define some constants with different, power of two values as:\nFLAG1 = 0x01\nFLAG8 = 0x02\nFLAG12 = 0x04\n...\n\nAnd use them with boolean logic to store the flags in only one integer, p.e.:\nflags = FLAG1 | FLAG8\n\nTo check if a flag is enabled, you can use the & operator:\nflag1_enabled = flags & FLAG1\n\nIf the flag is enabled, this expression will return a non-zero value, that will be evaluated as True in any boolean operation. If the flag is disabled, the expression will return 0, that is evaluated as False in boolean operations.\n", "I would generally use a numpy array (presumably of short ints, 2 bytes each, since you may need more than 256 distinct values) -- that would take less than 4MB for the <2 million cells.\nIf for some reason I couldn't afford the numpy dependency (e.g on App Engine, which doesn't support numpy), I'd use the standard library array module - it only supports 1-dimensional arrays, but it's just as space-efficient as numpy for large homogeneous arrays, and the getter/setter routines you mention can perfectly well \"linearize\" a 3-items tuple that's your natural index into the single integer index into the 1-D array.\nIn general, consider numpy (or array) any time you have large homogeneous, dense vectors or matrices of numbers -- Python built-in lists are highly wasteful of space in this use case (due to their generality which you're not using and don't need here!-), and saving memory indirectly translates to saving time too (better caching, fewer levels of indirection, etc, etc).\n", "Consider using Flyweight pattern to share cell properties:\nhttp://en.wikipedia.org/wiki/Flyweight_pattern\n", "BitSet is what you want, since it allows you to store many flags at once using only an fixed size integer (Int type)\n", "Taking Robbie's suggestion one step further...\nflags = set()\nx, y, flag = 34, 201, 3\nflags.add((x, y, flag)) # set flag 3 at position (34, 201)\nif (3, 2, 1) in flags: # check if flag 1 is at position (3, 2)\n # do something\nelse:\n # do something else\n\nYou can also create a helper class.\nclass Flags(object):\n def __init__(self):\n self.data = set()\n def add(self, x, y, flag):\n self.data.add((x, y, flag))\n def remove(self, x, y, flag):\n self.data.remove((x, y, flag))\n def contains(self, x, y, flag):\n return (x, y, flag) in self.data\n\nYou could also implement Python's special methods like __contains__ to make it easier to work with.\n" ]
[ 7, 5, 5, 3, 1, 1 ]
[]
[]
[ "flags", "matrix", "numpy", "python" ]
stackoverflow_0001058434_flags_matrix_numpy_python.txt
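A variation on the accepted approach that avoids dtype=object: with at most 16 distinct flags, a uint16 array stores one bitmask per cell in roughly 3.75 MB and keeps numpy's vectorized operators (flag values as in the question):

    import numpy

    FLAG1, FLAG2, FLAG3, FLAG4 = 2**0, 2**1, 2**2, 2**3
    array = numpy.zeros((250, 250, 30), dtype=numpy.uint16)

    array[10, 20, 5] |= FLAG1 | FLAG3          # set two flags on one cell
    array[..., 0] |= FLAG2                     # set a flag on a whole slice
    mask = (array & FLAG3) != 0                # boolean array: where FLAG3 is set
    array[10, 20, 5] &= ~numpy.uint16(FLAG3)   # clear one flag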
Q: Coding a coroutine in Python to display "odd" and "even" numbers infinitely I have scratchy ideas of Generators, Iterators and Coroutines. (from PEPs and other tutorials). I want to implement a coroutine in which routine1 will print odd and routine2 will print even numbers infinitely, in a fashion such as: routine1: print odd yield to routine2 routine2: print even yield to routine1 I have some rough ideas of the code that should do this, but am not able to get it in shape. I do not want the code. But, I would appreciate pseudo code style pointers for the same. Thank you. A: PEP 342, "Coroutines via Enhanced Generators", gives as its example 3 'A simple co-routine scheduler or "trampoline" that lets coroutines "call" other coroutines by yielding the coroutine they wish to invoke.' -- you don't need that much generality (or any of the generality aspects PEP 342 first introduced), for this very specific task, given that the coroutines are not communicating anything to each other, there's only two of them, their order of succession is perfectly regular, there is no termination, etc, etc... but a small subset of that code is still worth implementing as it shows you more about coroutines than this extremely simple example could on its own. The two coroutines should probably be two instances from the same generator function differing just in starting point (no point in writing that while True: loop twice after all, given how simple its body will be;-). As you'll see, the interesting part is the trampoline, even though you can and should make it vastly simpler than the general one in PEP 342. A: You yield back to the method that called you. Hence you can't yield to routine 1. You just yield. You could let routine 1 call routine 2, and routine 2 can yield and hence return back to routine 1.
Coding a coroutine in Python to display "odd" and "even" numbers infinitely
I have scratchy ideas of Generators, Iterators and Coroutines. (from PEPs and other tutorials). I want to implement a coroutine in which routine1 will print odd and routine2 will print even numbers infinitely, in a fashion such as: routine1: print odd yield to routine2 routine2: print even yield to routine1 I have some rough ideas of the code that should do this, but am not able to get it in shape. I do not want the code. But, I would appreciate pseudo code style pointers for the same. Thank you.
[ "PEP 342, \"Coroutines via Enhanced Generators\", gives as its example 3 'A simple co-routine scheduler or \"trampoline\" that lets coroutines \"call\" other coroutines by yielding the coroutine they wish to invoke.' -- you don't need that much generality (or any of the generality aspects PEP 342 first introduced), for this very specific task, given that the coroutines are not communicating anything to each other, there's only two of them, their order of succession is perfectly regular, there is no termination, etc, etc... but a small subset of that code is still worth implementing as it shows you more about coroutines than this extremely simple example could on its own.\nThe two coroutines should probably be two instances from the same generator function differing just in starting point (no point in writing that while True: loop twice after all, given how simple its body will be;-). As you'll see, the interesting part is the trampoline, even though you can and should make it vastly simpler than the general one in PEP 342.\n", "You yield back to the method that called you. Hence you can't yield to routine 1. You just yield. You could let routine 1 call routine 2, and routine 2 can yield and hence return back to routine 1.\n" ]
[ 2, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001058383_python.txt
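A minimal concrete version of the trampoline idea: one generator function instantiated twice, plus a tiny scheduler that alternates between the two instances (Python 2 syntax; bounded here so the demo terminates):

    def counter(start):
        n = start
        while True:
            print n      # odd numbers if start=1, even if start=2
            n += 2
            yield        # suspend; the trampoline resumes the other routine

    def trampoline(routines, rounds):
        for _ in range(rounds):
            for r in routines:
                r.next()  # resume each coroutine in turn

    odd, even = counter(1), counter(2)
    trampoline([odd, even], 5)  # prints 1 2 3 4 5 6 7 8 9 10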
Q: How to get a nested element in beautiful soup I am struggling with the syntax required to grab some hrefs in a td. The table, tr and td elements don't have any classes or ids. If I wanted to grab the anchor in this example, what would I need? < tr > < td > < a >... Thanks A: As per the docs, you first make a parse tree: import BeautifulSoup html = "<html><body><tr><td><a href='foo'/></td></tr></body></html>" soup = BeautifulSoup.BeautifulSoup(html) and then you search in it, for example for <a> tags whose immediate parent is a <td>: for ana in soup.findAll('a'): if ana.parent.name == 'td': print ana["href"] A: Something like this? from BeautifulSoup import BeautifulSoup soup = BeautifulSoup(html) anchors = [td.find('a') for td in soup.findAll('td')] That should find the first "a" inside each "td" in the html you provide. You can tweak td.find to be more specific or else use findAll if you have several links inside each td. UPDATE: re Daniele's comment, if you want to make sure you don't have any None's in the list, then you could modify the list comprehension thus: from BeautifulSoup import BeautifulSoup soup = BeautifulSoup(html) anchors = [a for a in (td.find('a') for td in soup.findAll('td')) if a] Which basically just adds a check to see if you have an actual element returned by td.find('a').
How to get a nested element in beautiful soup
I am struggling with the syntax required to grab some hrefs in a td. The table, tr and td elements don't have any classes or ids. If I wanted to grab the anchor in this example, what would I need? < tr > < td > < a >... Thanks
[ "As per the docs, you first make a parse tree:\nimport BeautifulSoup\nhtml = \"<html><body><tr><td><a href='foo'/></td></tr></body></html>\"\nsoup = BeautifulSoup.BeautifulSoup(html)\n\nand then you search in it, for example for <a> tags whose immediate parent is a <td>:\nfor ana in soup.findAll('a'):\n if ana.parent.name == 'td':\n print ana[\"href\"]\n\n", "Something like this?\nfrom BeautifulSoup import BeautifulSoup\nsoup = BeautifulSoup(html)\nanchors = [td.find('a') for td in soup.findAll('td')]\n\nThat should find the first \"a\" inside each \"td\" in the html you provide. You can tweak td.find to be more specific or else use findAll if you have several links inside each td.\nUPDATE: re Daniele's comment, if you want to make sure you don't have any None's in the list, then you could modify the list comprehension thus:\nfrom BeautifulSoup import BeautifulSoup\nsoup = BeautifulSoup(html)\nanchors = [a for a in (td.find('a') for td in soup.findAll('td')) if a]\n\nWhich basically just adds a check to see if you have an actual element returned by td.find('a').\n" ]
[ 34, 30 ]
[]
[]
[ "beautifulsoup", "python" ]
stackoverflow_0001058599_beautifulsoup_python.txt
Q: How do I select a random element from an array in Python? The first examples that I googled didn't work. This should be trivial, right? A: import random random.choice (mylist) A: import random random.choice([1, 2, 3])
How do I select a random element from an array in Python?
The first examples that I googled didn't work. This should be trivial, right?
[ "import random\nrandom.choice (mylist)\n\n", "import random\nrandom.choice([1, 2, 3])\n\n" ]
[ 234, 58 ]
[]
[]
[ "arrays", "python", "random" ]
stackoverflow_0001058712_arrays_python_random.txt
Q: Unicode problems with web pages in Python's urllib I seem to have the all-familiar problem of correctly reading and viewing a web page. It looks like Python reads the page in UTF-8 but when I try to convert it to something more viewable (iso-8859-1) I get this error: UnicodeEncodeError: 'ascii' codec can't encode character u'\xe4' in position 2: ordinal not in range(128) The code looks like this: #!/usr/bin/python from urllib import urlopen import re url_address = 'http://www.eurohockey.net/players/show_player.cgi?serial=4722' finished = 0 begin_record = 0 col = 0 str = '' for line in urlopen(url_address): if '</tr' in line: begin_record = 0 print str str = '' continue if begin_record == 1: col = col + 1 tmp_match = re.search('<td>(.+)</td>', line.strip()) str = str + ';' + unicode(tmp_match.group(1), 'iso-8859-1') if '<tr class=\"even\"' in line or '<tr class=\"odd\"' in line: begin_record = 1 col = 0 continue How should I handle the contents? Firefox at least thinks it's iso-8859-1 and it would make sense looking at the contents of that page. The error comes from the 'ä' character clearly. And if I was to save that data to a database, should I not bother with changing the codec and then converting when showing it? A: As noted by Lennart, your problem is not the decoding. It is trying to encode into "ascii", which is often a problem with print statements. I suspect the line print str is your problem. You need to encode the str into whatever your console is using to have that line work. A: It doesn't look like Python is "reading it in UTF-8" at all. As already pointed out, you have an encoding problem, NOT a decoding problem. It is impossible for that error to have arisen from that line that you say. When asking a question like this, always give the full traceback and error message. Kathy's suspicion is correct; in fact the print str line is the only possible source of that error, and that can only happen when sys.stdout.encoding is not set so Python punts on 'ascii'. Variables that may affect the outcome are what version of Python you are using, what platform you are running on and exactly how you run your script -- none of which you have told us; please do. Example: I'm using Python 2.6.2 on Windows XP and I'm running your script with some diagnostic additions: (1) import sys; print sys.stdout.encoding up near the front (2) print repr(str) before print str so that I can see what you've got before it crashes. In a Command Prompt window, if I do \python26\python hockey.py it prints cp850 as the encoding and just works. However if I do \python26\python hockey.py | more or \python26\python hockey.py >hockey.txt it prints None as the encoding and crashes with your error message on the first line with the a-with-diaeresis: C:\junk>\python26\python hockey.py >hockey.txt Traceback (most recent call last): File "hockey.py", line 18, in <module> print str UnicodeEncodeError: 'ascii' codec can't encode character u'\xe4' in position 2: ordinal not in range(128) If that fits your case, the fix in general is to explicitly encode your output with an encoding suited to the display mechanism you plan to use. A: That text is indeed iso-8859-1, and I can decode it without a problem, and indeed your code runs without a hitch. Your error, however, is an ENCODE error, not a decode error. And you don't do any encoding in your code, so. Possibly you have gotten encoding and decoding confused; it's a common problem. You DECODE from Latin1 to Unicode. You ENCODE the other way. 
Remember that Latin1, UTF8 etc are called "encodings".
Unicode problems with web pages in Python's urllib
I seem to have the all-familiar problem of correctly reading and viewing a web page. It looks like Python reads the page in UTF-8 but when I try to convert it to something more viewable (iso-8859-1) I get this error: UnicodeEncodeError: 'ascii' codec can't encode character u'\xe4' in position 2: ordinal not in range(128) The code looks like this: #!/usr/bin/python from urllib import urlopen import re url_address = 'http://www.eurohockey.net/players/show_player.cgi?serial=4722' finished = 0 begin_record = 0 col = 0 str = '' for line in urlopen(url_address): if '</tr' in line: begin_record = 0 print str str = '' continue if begin_record == 1: col = col + 1 tmp_match = re.search('<td>(.+)</td>', line.strip()) str = str + ';' + unicode(tmp_match.group(1), 'iso-8859-1') if '<tr class=\"even\"' in line or '<tr class=\"odd\"' in line: begin_record = 1 col = 0 continue How should I handle the contents? Firefox at least thinks it's iso-8859-1 and it would make sense looking at the contents of that page. The error comes from the 'ä' character clearly. And if I was to save that data to a database, should I not bother with changing the codec and then converting when showing it?
[ "As noted by Lennart, your problem is not the decoding. It is trying to encode into \"ascii\", which is often a problem with print statements. I suspect the line\nprint str\n\nis your problem. You need to encode the str into whatever your console is using to have that line work.\n", "It doesn't look like Python is \"reading it in UTF-8\" at all. As already pointed out, you have an encoding problem, NOT a decoding problem. It is impossible for that error to have arisen from that line that you say. When asking a question like this, always give the full traceback and error message.\nKathy's suspicion is correct; in fact the print str line is the only possible source of that error, and that can only happen when sys.stdout.encoding is not set so Python punts on 'ascii'.\nVariables that may affect the outcome are what version of Python you are using, what platform you are running on and exactly how you run your script -- none of which you have told us; please do.\nExample: I'm using Python 2.6.2 on Windows XP and I'm running your script with some diagnostic additions:\n(1) import sys; print sys.stdout.encoding up near the front\n(2) print repr(str) before print str so that I can see what you've got before it crashes.\nIn a Command Prompt window, if I do \\python26\\python hockey.py it prints cp850 as the encoding and just works.\nHowever if I do\n\\python26\\python hockey.py | more\n\nor\n\\python26\\python hockey.py >hockey.txt\n\nit prints None as the encoding and crashes with your error message on the first line with the a-with-diaeresis:\nC:\\junk>\\python26\\python hockey.py >hockey.txt\nTraceback (most recent call last):\n File \"hockey.py\", line 18, in <module>\n print str\nUnicodeEncodeError: 'ascii' codec can't encode character u'\\xe4' in position 2: ordinal not in range(128)\n\nIf that fits your case, the fix in general is to explicitly encode your output with an encoding suited to the display mechanism you plan to use.\n", "That text is indeed iso-88591-1, and I can decode it without a problem, and indeed your code runs without a hitch.\nYour error, however, is an ENCODE error, not a decode error. And you don't do any encoding in your code, so. Possibly you have gotten encoding and decoding confused, it's a common problem.\nYou DECODE from Latin1 to Unicode. You ENCODE the other way. Remember that Latin1, UTF8 etc are called \"encodings\".\n" ]
[ 3, 2, 1 ]
[]
[]
[ "python", "unicode" ]
stackoverflow_0001058302_python_unicode.txt
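The general fix described above, made concrete: decode bytes to unicode once on input, and encode explicitly on output so the result no longer depends on whether sys.stdout has an encoding (the target codec is an assumption -- use whatever your console or downstream consumer expects):

    text = unicode(tmp_match.group(1), 'iso-8859-1')  # bytes -> unicode
    print text.encode('utf-8')                        # unicode -> bytes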
Q: How to append two strings in Python? I have done this operation millions of times, just using the + operator! I have no idea why it is not working this time, it is overwriting the first part of the string with the new one! I have a list of strings and just want to concatenate them in one single string! If I run the program from Eclipse it works, from the command-line it doesn't! The list is: ["UNH+1+XYZ:08:2:1A+%CONVID%'&\r", "ORG+1A+77499505:ABC+++A+FR:EUR++123+1A'&\r", "DUM'&\r"] I want to discard the first and the last elements, the code is: ediMsg = "" count = 1 print "extract_the_info, lineList ",lineList print "extract_the_info, len(lineList) ",len(lineList) while (count < (len(lineList)-1)): temp = "" # ediMsg = ediMsg+str(lineList[count]) # print "Count "+str(count)+" ediMsg ",ediMsg print "line value : ",lineList[count] temp = lineList[count] ediMsg += " "+temp print "ediMsg : ",ediMsg count += 1 print "count ",count Look at the output: extract_the_info, lineList ["UNH+1+XYZ:08:2:1A+%CONVID%'&\r", "ORG+1A+77499505:ABC+++A+FR:EUR++123+1A'&\r", "DUM'&\r"] extract_the_info, len(lineList) 8 line value : ORG+1A+77499505:ABC+++A+FR:EUR++123+1A'& ediMsg : ORG+1A+77499505:ABC+++A+FR:EUR++123+1A'& count 2 line value : DUM'& DUM'& : ORG+1A+77499505:ABC+++A+FR:EUR++123+1A'& count 3 Why is it doing so!? A: While the two answers are correct (use " ".join()), your problem (besides very ugly python code) is this: Your strings end in "\r", which is a carriage return. Everything is fine, but when you print to the console, "\r" will make printing continue from the start of the same line, hence overwrite what was written on that line so far. A: You should use the following and forget about this nightmare: ''.join(list_of_strings) A: The problem is not with the concatenation of the strings (although that could use some cleaning up), but in your printing. The \r in your string has a special meaning and will overwrite previously printed strings. Use repr(), as such: ... print "line value : ", repr(lineList[count]) temp = lineList[count] ediMsg += " "+temp print "ediMsg : ", repr(ediMsg) ... to print out your result, that will make sure any special characters doesn't mess up the output. A: '\r' is the carriage return character. When you're printing out a string, a '\r' will cause the next characters to go at the start of the line. Change this: print "ediMsg : ",ediMsg to: print "ediMsg : ",repr(ediMsg) and you will see the embedded \r values. And while your code works, please change it to the one-liner: ediMsg = ' '.join(lineList[1:-1]) A: Your problem is printing, and it is not string manipulation. Try using '\n' as last char instead of '\r' in each string in: lineList = [ "UNH+1+TCCARQ:08:2:1A+%CONVID%'&\r", "ORG+1A+77499505:PARAF0103+++A+FR:EUR++11730788+1A'&\r", "DUM'&\r", "FPT+CC::::::::N'&\r", "CCD+CA:5132839000000027:0450'&\r", "CPY+++AF'&\r", "MON+712:1.00:EUR'&\r", "UNT+8+1'\r" ] A: I just gave it a quick look. It seems your problem arises when you are printing the text. I haven't done such things for a long time, but probably you only get the last line when you print. If you check the actual variable, I'm sure you'll find that the value is correct. By last line, I'm talking about the \r you got in the text strings.
How to append two strings in Python?
I have done this operation millions of times, just using the + operator! I have no idea why it is not working this time, it is overwriting the first part of the string with the new one! I have a list of strings and just want to concatenate them in one single string! If I run the program from Eclipse it works, from the command-line it doesn't! The list is: ["UNH+1+XYZ:08:2:1A+%CONVID%'&\r", "ORG+1A+77499505:ABC+++A+FR:EUR++123+1A'&\r", "DUM'&\r"] I want to discard the first and the last elements, the code is: ediMsg = "" count = 1 print "extract_the_info, lineList ",lineList print "extract_the_info, len(lineList) ",len(lineList) while (count < (len(lineList)-1)): temp = "" # ediMsg = ediMsg+str(lineList[count]) # print "Count "+str(count)+" ediMsg ",ediMsg print "line value : ",lineList[count] temp = lineList[count] ediMsg += " "+temp print "ediMsg : ",ediMsg count += 1 print "count ",count Look at the output: extract_the_info, lineList ["UNH+1+XYZ:08:2:1A+%CONVID%'&\r", "ORG+1A+77499505:ABC+++A+FR:EUR++123+1A'&\r", "DUM'&\r"] extract_the_info, len(lineList) 8 line value : ORG+1A+77499505:ABC+++A+FR:EUR++123+1A'& ediMsg : ORG+1A+77499505:ABC+++A+FR:EUR++123+1A'& count 2 line value : DUM'& DUM'& : ORG+1A+77499505:ABC+++A+FR:EUR++123+1A'& count 3 Why is it doing so!?
[ "While the two answers are correct (use \" \".join()), your problem (besides very ugly python code) is this:\nYour strings end in \"\\r\", which is a carriage return. Everything is fine, but when you print to the console, \"\\r\" will make printing continue from the start of the same line, hence overwrite what was written on that line so far.\n", "You should use the following and forget about this nightmare:\n''.join(list_of_strings)\n\n", "The problem is not with the concatenation of the strings (although that could use some cleaning up), but in your printing. The \\r in your string has a special meaning and will overwrite previously printed strings.\nUse repr(), as such:\n...\nprint \"line value : \", repr(lineList[count])\ntemp = lineList[count]\nediMsg += \" \"+temp\nprint \"ediMsg : \", repr(ediMsg)\n...\n\nto print out your result, that will make sure any special characters doesn't mess up the output.\n", "'\\r' is the carriage return character. When you're printing out a string, a '\\r' will cause the next characters to go at the start of the line.\nChange this:\nprint \"ediMsg : \",ediMsg\n\nto:\nprint \"ediMsg : \",repr(ediMsg)\n\nand you will see the embedded \\r values.\nAnd while your code works, please change it to the one-liner:\nediMsg = ' '.join(lineList[1:-1])\n\n", "Your problem is printing, and it is not string manipulation. Try using '\\n' as last char instead of '\\r' in each string in:\nlineList = [\n \"UNH+1+TCCARQ:08:2:1A+%CONVID%'&\\r\",\n \"ORG+1A+77499505:PARAF0103+++A+FR:EUR++11730788+1A'&\\r\",\n \"DUM'&\\r\",\n \"FPT+CC::::::::N'&\\r\",\n \"CCD+CA:5132839000000027:0450'&\\r\",\n \"CPY+++AF'&\\r\",\n \"MON+712:1.00:EUR'&\\r\",\n \"UNT+8+1'\\r\"\n]\n\n", "I just gave it a quick look. It seems your problem arises when you are printing the text. I haven't done such things for a long time, but probably you only get the last line when you print. If you check the actual variable, I'm sure you'll find that the value is correct.\nBy last line, I'm talking about the \\r you got in the text strings.\n" ]
[ 32, 21, 11, 8, 7, 5 ]
[]
[]
[ "python", "string" ]
stackoverflow_0001058902_python_string.txt
Q: Drawing a Dragons curve in Python I am trying to work out how to draw the dragon curve with Python's turtle, using an L-system (Lindenmayer system). I know the code is something like the following for the dragon curve: initial state = ‘F’, replacement rule – replace ‘F’ with ‘F+F-F’, number of replacements = 8, length = 5, angle = 60. But I have no idea how to put that into code. A: First hit on Google for "dragons curve python": http://www.pynokio.org/dragon.py.htm You can probably modify that to work with your plotting program of choice. I'd try matplotlib. A: Draw the dragon curve using turtle module (suggested by @John Fouhy): #!/usr/bin/env python import turtle from functools import partial nreplacements = 8 angle = 60 step = 5 # generate command cmd = 'f' for _ in range(nreplacements): cmd = cmd.replace('f', 'f+f-f') # draw t = turtle.Turtle() i2c = {'f': partial(t.forward, step), '+': partial(t.left, angle), '-': partial(t.right, angle), } for c in cmd: i2c[c]() A: Well, presumably, you could start by defining: def replace(s): return s.replace('F', 'F+F-F') Then you can generate your sequence as: code = 'F' for i in range(8): code = replace(code) I'm not familiar with turtle so I can't help you there.
Drawing a Dragons curve in Python
I am trying to work out how to draw the dragon curve with Python's turtle, using an L-system (Lindenmayer system). I know the code is something like the following for the dragon curve: initial state = ‘F’, replacement rule – replace ‘F’ with ‘F+F-F’, number of replacements = 8, length = 5, angle = 60. But I have no idea how to put that into code.
[ "First hit on Google for \"dragons curve python\": \nhttp://www.pynokio.org/dragon.py.htm\nYou can probably modify that to work with your plotting program of choice. I'd try matplotlib. \n", "Draw the dragon curve using turtle module (suggested by @John Fouhy):\n#!/usr/bin/env python\nimport turtle\nfrom functools import partial\n\nnreplacements = 8\nangle = 60\nstep = 5\n\n# generate command\ncmd = 'f'\nfor _ in range(nreplacements):\n cmd = cmd.replace('f', 'f+f-f')\n\n# draw\nt = turtle.Turtle()\ni2c = {'f': partial(t.forward, step),\n '+': partial(t.left, angle),\n '-': partial(t.right, angle),\n}\nfor c in cmd: i2c[c]()\n\n", "Well, presumably, you could start by defining:\ndef replace(s):\n return s.replace('F', 'F+F-F')\n\nThen you can generate your sequence as:\ncode = 'F'\nfor i in range(8):\n code = replace(code)\n\nI'm not familiar with turtle so I can't help you there.\n" ]
[ 3, 3, 0 ]
[]
[]
[ "fractals", "python" ]
stackoverflow_0000765048_fractals_python.txt
Q: Mule vs ActiveMQ for Python I need to manage several servers, network services, and application servers (Apache, Tomcat), and manage them (start/stop, install software). I would like to use Python, since C++ seems too complex and less productive for this task. I am not sure which middleware to use. ActiveMQ and Mule seem to be a good choice, although written in Java. I understand ActiveMQ better and I know very little about ESB. Any advice? Any option for Python? I saw that there is beanstalk, but it is too simple and inflexible. I need a messaging system for the coordination, plus a way to send tar.gz files to the servers (software packages). I wish there were a messaging solution native to Python. A: An example python "script" that manages various services on multiple remote servers: What follows is a hacked together script that can be used to manage various services on servers that you have SSH access to. You will ideally want to have an ssh-agent running, or you will be typing your passphrase a lot of times. For commands that require elevated privileges on the remote machine, you can see that "sudo" is called. This means that you need to modify your sudoers file on each remote machine, and add entries like this (assuming your username == deploy): Defaults:deploy !requiretty Defaults:deploy !authenticate deploy ALL=\ /sbin/service httpd status,\ /sbin/service httpd configtest,\ /sbin/service httpd graceful The first two lines allow the deploy user to run sudo without having a tty or re-typing the password -- which means it can be run straight over ssh without any further input. Here is an example python command to take advantage of sudo on a remote: CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd graceful')) Anyway, this is not a "straight" answer to your question, but rather an illustration of how easily you can use Python and a couple of other techniques to create a systems management tool that is 100% tailored to your specific needs. And by the way, the following script does "tell you loud and clear" if any of the commands returned an exit status > 0, so you can analyze the output yourself. This was hacked together when a project I was working with started using a load balancer and it was no longer equitable to run all the commands on each server. You could modify or extend this to work with rsync to deploy files, or even deploy updates to the scripts that you host on the remote servers to "do the work". #!/usr/bin/python from optparse import OptionParser import subprocess import sys def die(sMessage): print print sMessage print sys.exit(2) ################################################################################################### # Settings # The user@host: for the SourceURLs (NO TRAILING SLASH) RemoteUsers = [ "[email protected]", "[email protected]", ] ################################################################################################### # Global Variables # optparse.Parser instance Parser = None # optparse.Values instance full of command line options Opt = None # List of command line arguments Arg = None ################################################################################################### Parser = OptionParser(usage="%prog [options] [Command[, Subcommand]]") Parser.add_option("--interactive", dest = "Interactive", action = "store_true", default = False, help = "Ask before doing each operation." 
) # Parse command line Opt, Arg = Parser.parse_args() def HelpAndExit(): print "This command is used to run commands on the application servers." print print "Usage:" print " deploy-control [--interactive] Command" print print "Options:" print " --interactive :: will ask before executing each operation" print print "Servers:" for s in RemoteUsers: print " " + s print print "Web Server Commands:" print " deploy-control httpd status" print " deploy-control httpd configtest" print " deploy-control httpd graceful" print " deploy-control loadbalancer in" print " deploy-control loadbalancer out" print print "App Server Commands:" print " deploy-control 6x6server status" print " deploy-control 6x6server stop" print " deploy-control 6x6server start" print " deploy-control 6x6server restart" print " deploy-control wb4server status" print " deploy-control wb4server stop" print " deploy-control wb4server start" print " deploy-control wb4server restart" print print "System Commands:" print " deploy-control disk usage" print " deploy-control uptime" print sys.exit(2) def YesNo(sPrompt): while True: s = raw_input(sPrompt) if s in ('y', 'yes'): return True elif s in ('n', 'no'): return False else: print "Invalid input!" # Implicitly verified below in if/else Command = tuple(Arg) if Command in (('help',), ()): HelpAndExit() ResultList = [] ################################################################################################### for UH in RemoteUsers: print "-"*80 print "Running %s command on: %s" % (Command, UH) if Opt.Interactive and not YesNo("Do you want to run this command? "): print "Skipping!" print continue #---------------------------------------------------------------------------------------------- if Command == ('httpd', 'configtest'): CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd configtest')) #---------------------------------------------------------------------------------------------- elif Command == ('httpd', 'graceful'): CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd graceful')) #---------------------------------------------------------------------------------------------- elif Command == ('httpd', 'status'): CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd status')) #---------------------------------------------------------------------------------------------- elif Command == ('loadbalancer', 'in'): CommandResult = subprocess.call(('ssh', UH, 'bin-slave/loadbalancer-in')) #---------------------------------------------------------------------------------------------- elif Command == ('loadbalancer', 'out'): CommandResult = subprocess.call(('ssh', UH, 'bin-slave/loadbalancer-out')) #---------------------------------------------------------------------------------------------- elif Command == ('disk', 'usage'): CommandResult = subprocess.call(('ssh', UH, 'df -h')) #---------------------------------------------------------------------------------------------- elif Command == ('uptime',): CommandResult = subprocess.call(('ssh', UH, 'uptime')) #---------------------------------------------------------------------------------------------- elif Command == ('6x6server', 'status'): CommandResult = subprocess.call(('ssh', UH, 'bin-slave/6x6server-status')) if CommandResult > 0: print "Servers not running!!!" 
#---------------------------------------------------------------------------------------------- elif Command == ('6x6server', 'stop'): CommandResult = subprocess.call(('ssh', UH, 'bin-slave/6x6server-stop')) #---------------------------------------------------------------------------------------------- elif Command == ('6x6server', 'start'): CommandResult = subprocess.call(('ssh', UH, 'bin-slave/6x6server-start')) #---------------------------------------------------------------------------------------------- elif Command == ('6x6server', 'restart'): CommandResult = subprocess.call(('ssh', UH, 'bin-slave/6x6server-restart')) #---------------------------------------------------------------------------------------------- elif Command == ('wb4server', 'status'): CommandResult = subprocess.call(('ssh', UH, 'bin-slave/wb4server-status')) if CommandResult > 0: print "Servers not running!!!" #---------------------------------------------------------------------------------------------- elif Command == ('wb4server', 'stop'): CommandResult = subprocess.call(('ssh', UH, 'bin-slave/wb4server-stop')) #---------------------------------------------------------------------------------------------- elif Command == ('wb4server', 'start'): CommandResult = subprocess.call(('ssh', UH, 'bin-slave/wb4server-start')) #---------------------------------------------------------------------------------------------- elif Command == ('wb4server', 'restart'): CommandResult = subprocess.call(('ssh', UH, 'bin-slave/wb4server-restart')) #---------------------------------------------------------------------------------------------- else: print print "#"*80 print print "Error: invalid command" print HelpAndExit() #---------------------------------------------------------------------------------------------- ResultList.append(CommandResult) print ################################################################################################### if any(ResultList): print "#"*80 print "#"*80 print "#"*80 print print "ERRORS FOUND. SEE ABOVE" print sys.exit(1) else: print "-"*80 print print "Looks OK!" print sys.exit(0)
Mule vs ActiveMQ for Python
I need to manage several servers, network services, and application servers (Apache, Tomcat): start and stop them, install software. I would like to use Python, since C++ seems too complex and less productive for this task. I am not sure which middleware to use. ActiveMQ and Mule seem to be a good choice, although written in Java. I understand ActiveMQ better and I know very little about ESB. Any advice? Any options for Python? I saw that there is beanstalk, but it is too simple and inflexible. I need a messaging system for the coordination, plus a way to send tar.gz files to the servers (software packages). I wish there was a messaging solution native in Python.
[ "An example python \"script\" that manages various services on multiple remote servers:\nWhat follows is a hacked together script that can be used to manage various services on servers that you have SSH access to.\nYou will ideally want to have an ssh-agent running, or you will be typing your passphrase a lot of times.\nFor commands that require elevated privileges on the remote machine, you can see that \"sudo\" is called. This means that you need to modify your sudoers file on each remote machine, and add entries like this (assuming your username == deploy):\nDefaults:deploy !requiretty\nDefaults:deploy !authenticate\n\ndeploy ALL=\\\n /sbin/service httpd status,\\\n /sbin/service httpd configtest,\\\n /sbin/service httpd graceful\n\nThe first two lines allow the deploy user to run sudo without having a tty or re-typing the password -- which means it can be run straight over ssh without any further input. Here is an example python command to take advantage of sudo on a remote:\nCommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd graceful'))\n\nAnyway, this is not a \"straight\" answer to your question, but rather an illustration of how easy you can use python and a couple of other techniques to create a systems management tool that is 100% tailored to your specific needs.\nAnd by the way, the following script does \"tell you loud and clear\" if any of the commands returned an exit status > 0, so you can analyze the output yourself. \nThis was hacked together when a project I was working with started using a load balancer and it was no longer equitable to run all the commands on each server. You could modify or extend this to work with rsync to deploy files, or even deploy updates to the scripts that you host on the remote servers to \"do the work\". 
\n#!/usr/bin/python\n\n\nfrom optparse import OptionParser\nimport subprocess\nimport sys\n\ndef die(sMessage):\n print\n print sMessage\n print\n sys.exit(2)\n\n\n###################################################################################################\n# Settings\n\n# The user@host: for the SourceURLs (NO TRAILING SLASH)\nRemoteUsers = [\n \"[email protected]\",\n \"[email protected]\",\n ]\n\n###################################################################################################\n# Global Variables\n\n# optparse.Parser instance\nParser = None\n\n# optparse.Values instance full of command line options\nOpt = None\n\n# List of command line arguments\nArg = None\n\n###################################################################################################\nParser = OptionParser(usage=\"%prog [options] [Command[, Subcommand]]\")\n\n\nParser.add_option(\"--interactive\",\n dest = \"Interactive\",\n action = \"store_true\",\n default = False,\n help = \"Ask before doing each operation.\"\n )\n\n# Parse command line\nOpt, Arg = Parser.parse_args()\n\ndef HelpAndExit():\n print \"This command is used to run commands on the application servers.\"\n print\n print \"Usage:\"\n print \" deploy-control [--interactive] Command\"\n print\n print \"Options:\"\n print \" --interactive :: will ask before executing each operation\"\n print\n print \"Servers:\"\n for s in RemoteUsers: print \" \" + s\n print\n print \"Web Server Commands:\"\n print \" deploy-control httpd status\"\n print \" deploy-control httpd configtest\"\n print \" deploy-control httpd graceful\"\n print \" deploy-control loadbalancer in\"\n print \" deploy-control loadbalancer out\"\n print\n print \"App Server Commands:\"\n print \" deploy-control 6x6server status\"\n print \" deploy-control 6x6server stop\"\n print \" deploy-control 6x6server start\"\n print \" deploy-control 6x6server restart\"\n print \" deploy-control wb4server status\"\n print \" deploy-control wb4server stop\"\n print \" deploy-control wb4server start\"\n print \" deploy-control wb4server restart\"\n print\n print \"System Commands:\"\n print \" deploy-control disk usage\"\n print \" deploy-control uptime\"\n print\n sys.exit(2)\n\ndef YesNo(sPrompt):\n while True:\n s = raw_input(sPrompt)\n if s in ('y', 'yes'):\n return True\n elif s in ('n', 'no'):\n return False\n else:\n print \"Invalid input!\"\n\n\n# Implicitly verified below in if/else\nCommand = tuple(Arg)\n\nif Command in (('help',), ()):\n HelpAndExit()\n\n\nResultList = []\n###################################################################################################\nfor UH in RemoteUsers:\n print \"-\"*80\n print \"Running %s command on: %s\" % (Command, UH)\n\n if Opt.Interactive and not YesNo(\"Do you want to run this command? 
\"):\n print \"Skipping!\"\n print\n continue\n\n #----------------------------------------------------------------------------------------------\n if Command == ('httpd', 'configtest'):\n CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd configtest'))\n\n #----------------------------------------------------------------------------------------------\n elif Command == ('httpd', 'graceful'):\n CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd graceful'))\n\n #----------------------------------------------------------------------------------------------\n elif Command == ('httpd', 'status'):\n CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd status'))\n\n #----------------------------------------------------------------------------------------------\n elif Command == ('loadbalancer', 'in'):\n CommandResult = subprocess.call(('ssh', UH, 'bin-slave/loadbalancer-in'))\n\n #----------------------------------------------------------------------------------------------\n elif Command == ('loadbalancer', 'out'):\n CommandResult = subprocess.call(('ssh', UH, 'bin-slave/loadbalancer-out'))\n\n #----------------------------------------------------------------------------------------------\n elif Command == ('disk', 'usage'):\n CommandResult = subprocess.call(('ssh', UH, 'df -h'))\n\n #----------------------------------------------------------------------------------------------\n elif Command == ('uptime',):\n CommandResult = subprocess.call(('ssh', UH, 'uptime'))\n\n #----------------------------------------------------------------------------------------------\n elif Command == ('6x6server', 'status'):\n CommandResult = subprocess.call(('ssh', UH, 'bin-slave/6x6server-status'))\n if CommandResult > 0:\n print \"Servers not running!!!\"\n\n #----------------------------------------------------------------------------------------------\n elif Command == ('6x6server', 'stop'):\n CommandResult = subprocess.call(('ssh', UH, 'bin-slave/6x6server-stop'))\n\n #----------------------------------------------------------------------------------------------\n elif Command == ('6x6server', 'start'):\n CommandResult = subprocess.call(('ssh', UH, 'bin-slave/6x6server-start'))\n\n #----------------------------------------------------------------------------------------------\n elif Command == ('6x6server', 'restart'):\n CommandResult = subprocess.call(('ssh', UH, 'bin-slave/6x6server-restart'))\n\n #----------------------------------------------------------------------------------------------\n elif Command == ('wb4server', 'status'):\n CommandResult = subprocess.call(('ssh', UH, 'bin-slave/wb4server-status'))\n if CommandResult > 0:\n print \"Servers not running!!!\"\n\n #----------------------------------------------------------------------------------------------\n elif Command == ('wb4server', 'stop'):\n CommandResult = subprocess.call(('ssh', UH, 'bin-slave/wb4server-stop'))\n\n #----------------------------------------------------------------------------------------------\n elif Command == ('wb4server', 'start'):\n CommandResult = subprocess.call(('ssh', UH, 'bin-slave/wb4server-start'))\n\n #----------------------------------------------------------------------------------------------\n elif Command == ('wb4server', 'restart'):\n CommandResult = subprocess.call(('ssh', UH, 'bin-slave/wb4server-restart'))\n\n #----------------------------------------------------------------------------------------------\n else:\n print\n print \"#\"*80\n print\n 
print \"Error: invalid command\"\n print\n HelpAndExit()\n\n #----------------------------------------------------------------------------------------------\n ResultList.append(CommandResult)\n print\n\n\n###################################################################################################\nif any(ResultList):\n print \"#\"*80\n print \"#\"*80\n print \"#\"*80\n print\n print \"ERRORS FOUND. SEE ABOVE\"\n print\n sys.exit(0)\n\nelse:\n print \"-\"*80\n print\n print \"Looks OK!\"\n print\n sys.exit(1)\n\n" ]
[ 1 ]
[]
[]
[ "messaging", "python" ]
stackoverflow_0001058986_messaging_python.txt
Q: Importing methods for a Python class I wonder if it's possible to keep methods for a Python class in a different file from the class definition, something like this: main_module.py: class Instrument(Object): # Some import statement? def __init__(self): self.flag = True def direct_method(self,arg1): self.external_method(arg1, arg2) to_import_from.py: def external_method(self, arg1, arg2): if self.flag: #doing something #...many more methods In my case, to_import_from.py is machine-generated, and contains many methods. I would rather not copy-paste these into main_module.py or import them one by one, but have them all recognized as methods of the Instrument class, just as if they had been defined there: >>> instr = Instrument() >>> instr.direct_method(arg1) >>> instr.external_method(arg1, arg2) Thanks! A: People seem to be overthinking this. Methods are just function valued local variables in class construction scope. So the following works fine: class Instrument(Object): # load external methods from to_import_from import * def __init__(self): self.flag = True def direct_method(self,arg1): self.external_method(arg1, arg2) A: I don't think what you want is directly possible in Python. You could, however, try one of the following. When generating to_import_from.py, add the non-generated stuff there too. This way, all methods are in the same class definition. Have to_import_from.py contain a base class definition which the Instrument class inherits. In other words, in to_import_from.py: class InstrumentBase(object): def external_method(self, arg1, arg2): if self.flag: ... and then in main_module.py: import to_import_from class Instrument(to_import_from.InstrumentBase): def __init__(self): ... A: It's easier than you think: class Instrument(Object): def __init__(self): self.flag = True def direct_method(self,arg1): self.external_method(arg1, arg2) import to_import_from Instrument.external_method = to_import_from.external_method Done! Although having the machine generated code generate a class definition and subclassing from it would be a neater solution. A: I'm sorry that this is kind of a "You shouldn't be putting nails in the wall" answer, but you're missing the point of python class definitions. You should rather put the class with all its methods in its own python file, and in your main_module.py do from instrument import Instrument If you plan on using the methods for several classes, you should consider subclassing. In your case, the machine generated file could contain the base class that Instrument inherits from. Finally, give your class a good docstring that explains the API to its user, so there is no need for a "header file" used as an overview of your class. A: you can do this with the __getattr__ method external.py def external_function(arg): print("external", arg) main.py: import external class Instrument(object): def __getattr__(self, name): if hasattr(external, name): return getattr(external, name) else: raise AttributeError(name) def direct_method(self, arg): print("internal", arg) i = Instrument() i.direct_method("foo") i.external_function("foo") A: What you're doing is extending a base class with some "machine-generated" code. Choice 1. Extend a base class with machine-generated code. machine_generated.py # Start of boilerplate # import main_module class Instrument_Implementation( main_module.Instrument_Abstraction ): def direct_method(self,arg1): # End of boilerplate # ...the real code... 
Your application can then import machine_generated and use machine_generated.Instrument_Implementation. Choice 2. Simply use first-class functions. machine_generated.py def external_method(self, arg1, arg2): ...the real code... main_module.py import machine_generated class Instrument( object ): def direct_method(self,arg1): return machine_generated.external_method( arg1, ... ) Your application can import main_module and use main_module.Instrument. A: Here's my try. I think a nicer approach could be made with metaclasses... to_import_from.py : def external_method(self, arg1, arg2): if self.flag: print "flag is set" else : print "flag is not set" instrument.py : import imp import os import inspect import new import pdb class MethodImporter(object) : def __init__(self, path_to_module) : self.load_module(path_to_module) def load_module(self, path_to_module) : name = os.path.basename(path_to_module) module_file = file(path_to_module,"r") self.module = imp.load_module(name, module_file , path_to_module, ('','r',imp.PY_SOURCE)) print "Module %s imported" % self.module for method_name, method_object in inspect.getmembers(self.module, inspect.isfunction) : print "adding method %s to %s" % (method_name, self) setattr(self, method_name, new.instancemethod(method_object, self, self.__class__)) class Instrument(MethodImporter): def __init__(self): super(Instrument,self).__init__("./to_import_from.py") self.flag = True def direct_method(self,arg1): self.external_method(arg1, arg2) when you run this code arg1, arg2 = 1, 2 instr = Instrument() instr.direct_method(arg1) instr.external_method(arg1, arg2) here's the output : Module <module 'to_import_from.py' from './to_import_from.pyc'> imported adding method external_method to <__main__.Instrument object at 0x2ddeb0> flag is set flag is set A: Technically, yes this is possible, but solving it this way is not really idiomatic python, and there are likely better solutions. Here's an example of how to do so: import to_import_from class Instrument(object): locals().update(dict((k,v) for (k,v) in to_import_from.__dict__.iteritems() if callable(v))) def __init__(self): self.flag = True def direct_method(self,arg1): self.external_method(arg1, arg2) That will import all callable functions defined in to_import_from as methods of the Instrument class, as well as adding some more methods. Note: if you also want to copy global variables as instance variables, you'll need to refine the check. Also note that it adds all callable objects it finds in to_import_from's namespace, including imports from other modules (ie from module import some_func style imports) However, this isn't a terribly nice way to do it. Better would be to instead tweak your code generation to produce a class, and have your class inherit from it. This avoids the unnecessary copying of methods into Instrument's namespace, and instead uses normal inheritance. ie: class Instrument(to_import_from.BaseClass): # Add new methods here.
Importing methods for a Python class
I wonder if it's possible to keep methods for a Python class in a different file from the class definition, something like this: main_module.py: class Instrument(Object): # Some import statement? def __init__(self): self.flag = True def direct_method(self,arg1): self.external_method(arg1, arg2) to_import_from.py: def external_method(self, arg1, arg2): if self.flag: #doing something #...many more methods In my case, to_import_from.py is machine-generated, and contains many methods. I would rather not copy-paste these into main_module.py or import them one by one, but have them all recognized as methods of the Instrument class, just as if they had been defined there: >>> instr = Instrument() >>> instr.direct_method(arg1) >>> instr.external_method(arg1, arg2) Thanks!
[ "People seem to be overthinking this. Methods are just function valued local variables in class construction scope. So the following works fine:\nclass Instrument(Object):\n # load external methods\n from to_import_from import *\n\n def __init__(self):\n self.flag = True\n def direct_method(self,arg1):\n self.external_method(arg1, arg2)\n\n", "I don't think what you want is directly possible in Python. \nYou could, however, try one of the following.\n\nWhen generating to_import_from.py, add the non-generated stuff there too. This way,\nall methods are in the same class definition.\nHave to_import_from.py contain a base class definition which the the Instrument class\ninherits.\n\nIn other words, in to_import_from.py:\nclass InstrumentBase(object):\n def external_method(self, arg1, arg2):\n if self.flag:\n ...\n\nand then in main_module.py:\nimport to_import_from\n\nclass Instrument(to_import_from.InstrumentBase):\n def __init__(self):\n ...\n\n", "It's easier than you think:\nclass Instrument(Object):\n def __init__(self):\n self.flag = True\n def direct_method(self,arg1):\n self.external_method(arg1, arg2)\n\nimport to_import_from\n\nInstrument.external_method = to_import_from.external_method\n\nDone!\nAlthough having the machine generated code generate a class definition and subclassing from it would be a neater solution.\n", "I'm sorry that this is kind of a \"You shouldn't be putting nails in the wall\" answer, but you're missing the point of python class definitions. You should rather put the class with all its methods in its own python file, and in your main_module.py do\nfrom instrument import Instrument\n\nIf you plan on using the methods for several classes, you should consider subclassing. In your case, the machine generated file could contain the base class that Instrument inherits from.\nFinally, give your class a good docstring that explains the API to its user, so there is no need for a \"header file\" used as an overview of your class.\n", "you can do this with the __getattr__ method\nexternal.py\ndef external_function(arg):\n print(\"external\", arg)\n\nmain.py:\nimport external\n\nclass Instrument(object):\n def __getattr__(self, name):\n if hasattr(external, name):\n return getattr(external, name)\n else:\n return Object.__getattr__(self, name)\n\n def direct_method(self, arg):\n print(\"internal\", arg)\n\n\ni = Instrument() \ni.direct_method(\"foo\")\ni.external_function(\"foo\")\n\n", "What you're doing is extending a base class with some \"machine-generated\" code.\nChoice 1. Extend a base class with machine-generated code.\nmachine_generated.py\n# Start of boilerplate #\nimport main_module\nclass Instrument_Implementation( main_module.Instrument_Abstraction ):\n def direct_method(self,arg1): \n # End of boilerplate #\n ...the real code...\n\nYour application can then import machine_generated and use machine_generated.Instrument_Implementation.\nChoice 2. Simply use first-class functions.\nmachine_generated.py\ndef external_method(self, arg1, arg2):\n ...the real code...\n\nmain_module.py\nimport machine_generated\n\nclass Instrument( object ):\n def direct_method(self,arg1): \n return machine_generator.external_method( arg1, ... )\n\nYour application can import main_module and use main_module.Instrument.\n", "Here's my try. 
I think a nicer approach could be made with metaclasses...\nto_import_from.py :\ndef external_method(self, arg1, arg2):\n if self.flag:\n print \"flag is set\"\n else :\n print \"flag is not set\"\n\ninstrument.py :\nimport imp\nimport os\nimport inspect\nimport new\n\nimport pdb\n\nclass MethodImporter(object) :\n def __init__(self, path_to_module) :\n self.load_module(path_to_module)\n\n def load_module(self, path_to_module) :\n name = os.path.basename(path_to_module)\n module_file = file(path_to_module,\"r\")\n self.module = imp.load_module(name, module_file , path_to_module, ('','r',imp.PY_SOURCE))\n print \"Module %s imported\" % self.module\n for method_name, method_object in inspect.getmembers(self.module, inspect.isfunction) :\n print \"adding method %s to %s\" % (method_name, self)\n setattr(self, method_name, new.instancemethod(method_object, self, self.__class__))\n\n\nclass Instrument(MethodImporter):\n def __init__(self):\n super(Instrument,self).__init__(\"./to_import_from.py\")\n self.flag = True\n def direct_method(self,arg1):\n self.external_method(arg1, arg2)\n\nwhen you run this code\narg1, arg2 = 1, 2\ninstr = Instrument()\ninstr.direct_method(arg1)\ninstr.external_method(arg1, arg2)\n\nhere's the output :\nModule <module 'to_import_from.py' from './to_import_from.pyc'> imported\nadding method external_method to <__main__.Instrument object at 0x2ddeb0>\nflag is set\nflag is set\n\n", "Technically, yes this is possible, but solving it this way is not really idiomatic python, and there are likely better solutions. Here's an example of how to do so:\nimport to_import_from\n\nclass Instrument(object):\n locals().update(dict((k,v) for (k,v) in \n to_import_from.__dict__.iteritems() if callable(v)))\n\n def __init__(self):\n self.flag = True\n def direct_method(self,arg1):\n self.external_method(arg1, arg2)\n\nThat will import all callable functions defined in to_import_from as methods of the Instrument class, as well as adding some more methods. Note: if you also want to copy global variables as instance variables, you'll need to refine the check. Also note that it adds all callable objects it finds in to_import_from's namespace, including imports from other modules (ie from module import some_func style imports)\nHowever, this isn't a terribly nice way to do it. Better would be to instead tweak your code generation to produce a class, and have your class inherit from it. This avoids the unnecessary copying of methods into Instrument's namespace, and instead uses normal inheritance. ie:\nclass Instrument(to_import_from.BaseClass):\n # Add new methods here.\n\n" ]
[ 17, 7, 7, 4, 4, 2, 0, 0 ]
[]
[]
[ "import", "methods", "python" ]
stackoverflow_0001057934_import_methods_python.txt
Q: Searching across multiple tables (best practices) I have a property management application consisting of tables: tenants landlords units properties vendors-contacts Basically I want one search field to search them all rather than having to select which category I am searching. Would this be an acceptable solution (technology-wise)? Will searching across 5 tables be OK in the long run and not bog down the server? What's the best way of accomplishing this? Using PostgreSQL A: Why not create a view which is a union of the tables which aggregates the columns you want to search on into one, and then search on that aggregated column? You could do something like this: select 'tenants:' + ltrim(str(t.Id)), <shared fields> from Tenants as t union select 'landlords:' + ltrim(str(l.Id)), <shared fields> from Landlords as l union ... This requires some logic to be embedded from the client querying; it has to know how to fabricate the key that it's looking for in order to search on a single field. That said, it's probably better if you just have a separate column which contains a "type" value (e.g. landlord, tenant) and then filter on both the type and the ID, as it will be computationally less expensive (and can be optimized better). A: You want to use the built-in full text search or a separate product like Lucene. This is optimised for unstructured searches over heterogeneous data. Also, don't forget that normal indices cannot be used for something LIKE '%...%'. Using a full text search engine will also be able to do efficient substring searches. A: I would suggest using a specialized full-text indexing tool like Lucene for this. It will probably be easier to get up and running, and the result is faster and more featureful too. Postgres full text indexes will be useful if you also need structured search capability on top of this or transactionality of your search index is important. If you do want to implement this in the database, something like the following scheme might work, assuming you use surrogate keys: for each searchable table create a view that has the primary key column of that table, the name of the table and a concatenation of all the searchable fields in that table. create a functional GIN or GiST index on the underlying table over the to_tsvector() of the exact same concatenation. create a UNION ALL over all the views to create the searchable view. After that you can do the searches like this: SELECT id, table_name, ts_rank_cd(body, query) AS rank FROM search_view, to_tsquery('search&words') query WHERE query @@ body ORDER BY rank DESC LIMIT 10; A: You should be fine, and there's really no other good (easy) way to do this. Just make sure the fields you are searching on are properly indexed though.
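For concreteness, here is a hedged sketch of running the full-text query above from Python with psycopg2. The DSN, the search_view name, and the use of plainto_tsquery() (which builds a tsquery from plain words) are illustrative assumptions, not part of the answers above:
import psycopg2  # assumed driver; any DB-API module works similarly

conn = psycopg2.connect("dbname=propmgmt")  # hypothetical DSN
cur = conn.cursor()
cur.execute("""
    SELECT id, table_name, ts_rank_cd(body, query) AS rank
    FROM search_view, plainto_tsquery(%s) query
    WHERE query @@ body
    ORDER BY rank DESC
    LIMIT 10;
""", ('search words',))
for row_id, table_name, rank in cur.fetchall():
    # dispatch on table_name to fetch the full tenant/landlord/etc. record
    print row_id, table_name, rank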
Searching across multiple tables (best practices)
I have a property management application consisting of tables: tenants landlords units properties vendors-contacts Basically I want one search field to search them all rather than having to select which category I am searching. Would this be an acceptable solution (technology-wise)? Will searching across 5 tables be OK in the long run and not bog down the server? What's the best way of accomplishing this? Using PostgreSQL
[ "Why not create a view which is a union of the tables which aggregates the columns you want to search on into one, and then search on that aggregated column?\nYou could do something like this:\nselect 'tenants:' + ltrim(str(t.Id)), <shared fields> from Tenants as t union\nselect 'landlords:' + ltrim(str(l.Id)), <shared fields> from Tenants as l union\n...\n\nThis requires some logic to be embedded from the client querying; it has to know how to fabricate the key that it's looking for in order to search on a single field.\nThat said, it's probably better if you just have a separate column which contains a \"type\" value (e.g. landlord, tenant) and then filter on both the type and the ID, as it will be computationally less expensive (and can be optimized better).\n", "You want to use the built-in full text search or a separate product like Lucene. This is optimised for unstructured searches over heterogeneous data.\nAlso, don't forget that normal indices cannot be used for something LIKE '%...%'. Using a full text search engine will also be able to do efficient substring searches.\n", "I would suggest using a specialized full-text indexing tool like Lucene for this. It will probably be easier to get up and running, and the result is faster and more featureful too. Postgres full text indexes will be useful if you also need structured search capability on top of this or transactionality of your search index is important.\nIf you do want to implement this in the database, something like the following scheme might work, assuming you use surrogate keys:\n\nfor each searchable table create a view that has the primary key column of that table, the name of the table and a concatenation of all the searchable fields in that table.\ncreate a functional GIN or GiST index on the underlying over the to_tsvector() of the exact same concatenation.\ncreate a UNION ALL over all the views to create the searchable view.\n\nAfter that you can do the searches like this:\nSELECT id, table_name, ts_rank_cd(body, query) AS rank\n FROM search_view, to_tsquery('search&words') query\n WHERE query @@ body\n ORDER BY rank DESC\n LIMIT 10;\n\n", "You should be fine, and there's really no other good (easy) way to do this. Just make sure the fields you are searching on are properly indexed though.\n" ]
[ 7, 4, 3, 1 ]
[]
[]
[ "mysql", "postgresql", "pylons", "python", "sql" ]
stackoverflow_0001059253_mysql_postgresql_pylons_python_sql.txt
Q: Cherrypy server does not accept incoming http request on MS Windows if output (stdout) is not redirected It is a rather strange 'bug'. I have written a cherrypy-based server. If I run it this way: python simple_server.py > out.txt It works as expected. Without the redirection at the end, however, the server will not accept any connection at all. Does anyone have any idea? I am using python 2.4 on a Win XP professional machine. A: Are you running the script in an XP "command window"? Otherwise (if there's neither redirection nor command window available), standard output might simply be closed, which might inhibit the script (or rather its underlying framework). A: CherryPy runs in a "development" mode by default, which includes logging startup messages to stdout. If stdout is not available, I would assume the server is not able to start successfully. You can change this by setting 'log.screen: False' in config (and replacing it with 'log.error_file: "/path/to/error.log"' if you know what's good for you ;) ). Note that the global config entry 'environment: production' will also turn off log.screen.
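For reference, a minimal sketch of the config change described in the second answer; it assumes the CherryPy 3.x config API, and the log file path is just an example:
import cherrypy

cherrypy.config.update({
    'log.screen': False,                        # stop logging to stdout
    'log.error_file': 'C:\\cherrypy_error.log', # any writable path will do
})
# then start the application as before, e.g. cherrypy.quickstart(root_object)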
Cherrypy server does not accept incoming http request on MS Windows if output (stdout) is not redirected
It is a rather strange 'bug'. I have written a cherrypy-based server. If I run it this way: python simple_server.py > out.txt It works as expected. Without the redirection at the end, however, the server will not accept any connection at all. Does anyone have any idea? I am using python 2.4 on a Win XP professional machine.
[ "Are you running the script in an XP \"command window\"? Otherwise (if there's neither redirection nor command window available), standard output might simply be closed, which might inhibit the script (or rather its underlying framework).\n", "CherryPy runs in a \"development\" mode by default, which includes logging startup messages to stdout. If stdout is not available, I would assume the server is not able to start successfully.\nYou can change this by setting 'log.screen: False' in config (and replacing it with 'log.error_file: \"/path/to/error.log\"' if you know what's good for you ;) ). Note that the global config entry 'environment: production' will also turn off log.screen.\n" ]
[ 1, 0 ]
[]
[]
[ "cherrypy", "python" ]
stackoverflow_0001056642_cherrypy_python.txt
Q: Python Memory Model I have a very large list Suppose I do that (yeah, I know the code is very unpythonic, but for the example's sake..): n = (2**32)**2 for i in xrange(10**7): li[i] = n works fine. however: for i in xrange(10**7): li[i] = i**2 consumes a significantly larger amount of memory. I don't understand why that is - storing the big number takes more bits, and in Java, the second option is indeed more memory-efficient... Does anyone have an explanation for this? A: Java special-cases a few value types (including integers) so that they're stored by value (instead of by object reference, like everything else). Python doesn't special-case such types, so that assigning n to many entries in a list (or other normal Python container) doesn't have to make copies. Edit: note that the references are always to objects, not "to variables" -- there's no such thing as "a reference to a variable" in Python (or Java). For example: >>> n = 23 >>> a = [n,n] >>> print id(n), id(a[0]), id(a[1]) 8402048 8402048 8402048 >>> n = 45 >>> print id(n), id(a[0]), id(a[1]) 8401784 8402048 8402048 We see from the first print that both entries in list a refer to exactly the same object as n refers to -- but when n is reassigned, it now refers to a different object, while both entries in a still refer to the previous one. An array.array (from the Python standard library module array) is very different from a list: it keeps compact copies of a homogeneous type, taking as few bits per item as are needed to store copies of values of that type. All normal containers keep references (internally implemented in the C-coded Python runtime as pointers to PyObject structures: each pointer, on a 32-bit build, takes 4 bytes, each PyObject at least 16 or so [including pointer to type, reference count, actual value, and malloc rounding up]), arrays don't (so they can't be heterogeneous, can't have items except from a few basic types, etc). For example, a 1000-items container, with all items being different small integers (ones whose values can fit in 2 bytes each), would take about 2,000 bytes of data as an array.array('h'), but about 20,000 as a list. But if all items were the same number, the array would still take 2,000 bytes of data, the list would take only 20 or so [[in every one of these cases you have to add about another 16 or 32 bytes for the container-object proper, in addition to the memory for the data]]. However, although the question says "array" (even in a tag), I doubt its arr is actually an array -- if it were, it could not store (2**32)**2 (largest int values in an array are 32 bits) and the memory behavior reported in the question would not actually be observed. So, the question is probably in fact about a list, not an array. Edit: a comment by @ooboo asks lots of reasonable followup questions, and rather than trying to squish the detailed explanation in a comment I'm moving it here. It's weird, though - after all, how is the reference to the integer stored? id(variable) gives an integer, the reference is an integer itself, isn't it cheaper to use the integer? CPython stores references as pointers to PyObject (Jython and IronPython, written in Java and C#, use those languages' implicit references; PyPy, written in Python, has a very flexible back-end and can use lots of different strategies) id(v) gives (on CPython only) the numeric value of the pointer (just as a handy way to uniquely identify the object). 
A list can be heterogeneous (some items may be integers, others objects of different types) so it's just not a sensible option to store some items as pointers to PyObject and others differently (each object also needs a type indication and, in CPython, a reference count, at least) -- array.array is homogeneous and limited so it can (and does) indeed store a copy of the items' values rather than references (this is often cheaper, but not for collections where the same item appears a LOT, such as a sparse array where the vast majority of items are 0). A Python implementation would be fully allowed by the language specs to try subtler tricks for optimization, as long as it preserves semantics untouched, but as far as I know none currently does for this specific issue (you could try hacking a PyPy backend, but don't be surprised if the overhead of checking for int vs non-int overwhelms the hoped-for gains). Also, would it make a difference if I assigned 2**64 to every slot instead of assigning n, when n holds a reference to 2**64? What happens when I just write 1? These are examples of implementation choices that every implementation is fully allowed to make, as it's not hard to preserve the semantics (so hypothetically even, say, 3.1 and 3.2 could behave differently in this regard). When you use an int literal (or any other literal of an immutable type), or other expression producing a result of such a type, it's up to the implementation to decide whether to make a new object of that type unconditionally, or spend some time checking among such objects to see if there's an existing one it can reuse. In practice, CPython (and I believe the other implementations, but I'm less familiar with their internals) uses a single copy of sufficiently small integers (keeps a predefined C array of a few small integer values in PyObject form, ready to use or reuse at need) but doesn't go out of its way in general to look for other existing reusable objects. But for example identical literal constants within the same function are easily and readily compiled as references to a single constant object in the function's table of constants, so that's an optimization that's very easily done, and I believe every current Python implementation does perform it. It can sometimes be hard to remember that Python is a language and it has several implementations that may (legitimately and correctly) differ in a lot of such details -- everybody, including pedants like me, tends to say just "Python" rather than "CPython" when talking about the popular C-coded implementation (except in contexts like this one where drawing the distinction between language and implementation is paramount;-). Nevertheless, the distinction is quite important, and well worth repeating once in a while. A: In your first example you are storing the same integer len(arr) times. So Python need only store the integer once in memory and refer to it len(arr) times. In your second example, you are storing len(arr) different integers. Now Python must allocate storage for len(arr) integers and refer to them in each of the len(arr) slots. A: You have only one variable n, but you create many i**2. What happens is that Python works with references. Each time you do array[i] = n you create a new reference to the value of n. Not to the variable, mind you, to the value. However, in the second case, when you do array[i] = i**2 you create a new value, and reference this new value. 
In fact, Python will keep reusing the same value and just use references to it even if it's recalculated. So for example: l = [] x = 2 for i in xrange(1000000): l.append(x*2) Will generally not use more memory than l = [] x = 2 for i in xrange(1000000): l.append(x) However, in the case of l = [] x = 2 for i in xrange(1000000): l.append(i) each value of i will get a reference and therefore be kept in memory, using up a lot of memory compared to the other examples. (Alex pointed out some confusion in terminology. In python there is a module called array. These types of arrays store integer values, instead of references to objects like Pythons normal list objects, but otherwise behave the same. But since the first example uses a value that can't be stored in such an array, this is unlikely to be the case here. Instead the question is most likely using the word array as it's used in many other languages, which is the same as Pythons list type.) A: In both examples arr[i] takes reference of object whether it is n or resulting object of i * 2. In first example, n is already defined so it only takes reference, but in second example, it has to evaluate i * 2, GC has to allocate space if needed for this new resulting object, and then use its reference.
Python Memory Model
I have a very large list Suppose I do that (yeah, I know the code is very unpythonic, but for the example's sake..): n = (2**32)**2 for i in xrange(10**7): li[i] = n works fine. however: for i in xrange(10**7): li[i] = i**2 consumes a significantly larger amount of memory. I don't understand why that is - storing the big number takes more bits, and in Java, the second option is indeed more memory-efficient... Does anyone have an explanation for this?
[ "Java special-cases a few value types (including integers) so that they're stored by value (instead of, by object reference like everything else). Python doesn't special-case such types, so that assigning n to many entries in a list (or other normal Python container) doesn't have to make copies.\nEdit: note that the references are always to objects, not \"to variables\" -- there's no such thing as \"a reference to a variable\" in Python (or Java). For example:\n>>> n = 23\n>>> a = [n,n]\n>>> print id(n), id(a[0]), id(a[1])\n8402048 8402048 8402048\n>>> n = 45\n>>> print id(n), id(a[0]), id(a[1])\n8401784 8402048 8402048\n\nWe see from the first print that both entries in list a refer to exactly the same object as n refers to -- but when n is reassigned, it now refers to a different object, while both entries in a still refer to the previous one.\nAn array.array (from the Python standard library module array) is very different from a list: it keeps compact copies of a homogeneous type, taking as few bits per item as are needed to store copies of values of that type. All normal containers keep references (internally implemented in the C-coded Python runtime as pointers to PyObject structures: each pointer, on a 32-bit build, takes 4 bytes, each PyObject at least 16 or so [including pointer to type, reference count, actual value, and malloc rounding up]), arrays don't (so they can't be heterogeneous, can't have items except from a few basic types, etc).\nFor example, a 1000-items container, with all items being different small integers (ones whose values can fit in 2 bytes each), would take about 2,000 bytes of data as an array.array('h'), but about 20,000 as a list. But if all items were the same number, the array would still take 2,000 bytes of data, the list would take only 20 or so [[in every one of these cases you have to add about another 16 or 32 bytes for the container-object proper, in addition to the memory for the data]].\nHowever, although the question says \"array\" (even in a tag), I doubt its arr is actually an array -- if it were, it could not store (2**32)*2 (largest int values in an array are 32 bits) and the memory behavior reported in the question would not actually be observed. So, the question is probably in fact about a list, not an array.\nEdit: a comment by @ooboo asks lots of reasonable followup questions, and rather than trying to squish the detailed explanation in a comment I'm moving it here.\n\nIt's weird, though - after all, how is\n the reference to the integer stored?\n id(variable) gives an integer, the\n reference is an integer itself, isn't\n it cheaper to use the integer?\n\nCPython stores references as pointers to PyObject (Jython and IronPython, written in Java and C#, use those language's implicit references; PyPy, written in Python, has a very flexible back-end and can use lots of different strategies)\nid(v) gives (on CPython only) the numeric value of the pointer (just as a handy way to uniquely identify the object). 
A list can be heterogeneous (some items may be integers, others objects of different types) so it's just not a sensible option to store some items as pointers to PyObject and others differently (each object also needs a type indication and, in CPython, a reference count, at least) -- array.array is homogeneous and limited so it can (and does) indeed store a copy of the items' values rather than references (this is often cheaper, but not for collections where the same item appears a LOT, such as a sparse array where the vast majority of items are 0).\nA Python implementation would be fully allowed by the language specs to try subtler tricks for optimization, as long as it preserves semantics untouched, but as far as I know none currently does for this specific issue (you could try hacking a PyPy backend, but don't be surprised if the overhead of checking for int vs non-int overwhelms the hoped-for gains).\n\nAlso, would it make a difference if I\n assigned 2**64 to every slot instead\n of assigning n, when n holds a\n reference to 2**64? What happens when\n I just write 1?\n\nThese are examples of implementation choices that every implementation is fully allowed to make, as it's not hard to preserve the semantics (so hypothetically even, say, 3.1 and 3.2 could behave differently in this regard).\nWhen you use an int literal (or any other literal of an immutable type), or other expression producing a result of such a type, it's up to the implementation to decide whether to make a new object of that type unconditionally, or spend some time checking among such objects to see if there's an existing one it can reuse.\nIn practice, CPython (and I believe the other implementations, but I'm less familiar with their internals) uses a single copy of sufficiently small integers (keeps a predefined C array of a few small integer values in PyObject form, ready to use or reuse at need) but doesn't go out of its way in general to look for other existing reusable objects.\nBut for example identical literal constants within the same function are easily and readily compiled as references to a single constant object in the function's table of constants, so that's an optimization that's very easily done, and I believe every current Python implementation does perform it.\nIt can sometimes be hard to remember that Python is a language and it has several implementations that may (legitimately and correctly) differ in a lot of such details -- everybody, including pedants like me, tends to say just \"Python\" rather than \"CPython\" when talking about the popular C-coded implementation (except in contexts like this one where drawing the distinction between language and implementation is paramount;-). Nevertheless, the distinction is quite important, and well worth repeating once in a while.\n", "In your first example you are storing the same integer len(arr) times. So Python need only store the integer once in memory and refer to it len(arr) times.\nIn your second example, you are storing len(arr) different integers. Now Python must allocate storage for len(arr) integers and refer to them in each of the len(arr) slots.\n", "You have only one variable n, but you create many i**2.\nWhat happens is that Python works with references. Each time you do array[i] = n you create a new reference to the value of n. Not to the variable, mind you, to the value. However, in the second case, when you do array[i] = i**2 you create a new value, and reference this new value. 
This will of course use up much more memory.\nIn fact, Python will keep reusing the same value and just use references to it even if it's recalculated. So for example:\nl = []\nx = 2\nfor i in xrange(1000000):\n l.append(x*2)\n\nWill generally not use more memory than \nl = []\nx = 2\nfor i in xrange(1000000):\n l.append(x)\n\nHowever, in the case of\nl = []\nx = 2\nfor i in xrange(1000000):\n l.append(i)\n\neach value of i will get a reference and therefore be kept in memory, using up a lot of memory compared to the other examples.\n(Alex pointed out some confusion in terminology. In python there is a module called array. These types of arrays store integer values, instead of references to objects like Python's normal list objects, but otherwise behave the same. But since the first example uses a value that can't be stored in such an array, this is unlikely to be the case here.\nInstead the question is most likely using the word array as it's used in many other languages, which is the same as Python's list type.)\n", "In both examples arr[i] takes a reference to an object, whether it is n or the object resulting from i * 2.\nIn the first example, n is already defined so it only takes the reference, but in the second example, it has to evaluate i * 2, the GC has to allocate space if needed for this new resulting object, and then use its reference.\n" ]
[ 18, 6, 3, 0 ]
[]
[]
[ "arrays", "memory", "model", "python" ]
stackoverflow_0001059674_arrays_memory_model_python.txt
Q: Strange behavior with ModelForm and saving This problem is very strange and I'm hoping someone can help me. For the sake of argument, I have an Author model with a ForeignKey relationship to the Book model. When I display an author, I would like to have a ChoiceField that ONLY displays the books associated with that author. As such, I override the AuthorForm.__init__() method and I create a list of choices (tuples) based upon a query that filters books based upon the author ID. The tuple is a composite of the book ID and the book name (i.e., (1, 'Moby Dick')). Those "choices" are then assigned to the ModelForm's choices attribute. When the form renders in the template, the ChoiceField is properly displayed, listing only those books associated with that author. This is where things get weird. When I save the form, I receive a ValueError (Cannot assign "u'1'": "Author.book" must be a "Book" instance). This error makes sense due to the FK relationship. However, if I add a "print" statement to the code, make no other changes, and then save the record, it works. The ValueError magically disappears. I've tried this a number of times, ensuring I haven't inadvertently made another change, and it works each time. Does anyone know what's going on here? A: Not quite sure what you are doing wrong, but it is best to just modify the queryset: class ClientForm(forms.ModelForm): def __init__(self, *args, **kwargs): self.affiliate = kwargs.pop('affiliate') super(ClientForm, self).__init__(*args, **kwargs) self.fields["referral"].queryset = Referral.objects.filter(affiliate = self.affiliate) class Meta: model = Client The above is straight out of one of my projects and it works perfectly to only show the Referral objects related to the passed affiliate: form = ClientForm(affiliate=request.affiliate)
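Adapting that pattern to the question's models might look like the sketch below. It assumes the form exposes a 'book' ModelChoiceField, and the filter direction depends on which model actually holds the ForeignKey; both are assumptions, since the question's model description is ambiguous:
class AuthorForm(forms.ModelForm):
    def __init__(self, *args, **kwargs):
        author = kwargs.pop('author')
        super(AuthorForm, self).__init__(*args, **kwargs)
        # keeping a real queryset lets the field validate and coerce the
        # submitted "1" into a Book instance, avoiding the ValueError
        self.fields['book'].queryset = Book.objects.filter(author=author)

    class Meta:
        model = Author

# usage in a view, mirroring the answer's convention:
# form = AuthorForm(request.POST, author=author)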
Strange behavior with ModelForm and saving
This problem is very strange and I'm hoping someone can help me. For the sake of argument, I have an Author model with a ForeignKey relationship to the Book model. When I display an author, I would like to have a ChoiceField that ONLY displays the books associated with that author. As such, I override the AuthorForm.__init__() method and I create a list of choices (tuples) based upon a query that filters books based upon the author ID. The tuple is a composite of the book ID and the book name (i.e., (1, 'Moby Dick')). Those "choices" are then assigned to the ModelForm's choices attribute. When the form renders in the template, the ChoiceField is properly displayed, listing only those books associated with that author. This is where things get weird. When I save the form, I receive a ValueError (Cannot assign "u'1'": "Author.book" must be a "Book" instance). This error makes sense due to the FK relationship. However, if I add a "print" statement to the code, make no other changes, and then save the record, it works. The ValueError magically disappears. I've tried this a number of times, ensuring I haven't inadvertently made another change, and it works each time. Does anyone know what's going on here?
[ "Not quite sure what you are doing wrong, but it is best to just modify the queryset:\nclass ClientForm(forms.ModelForm):\n\n def __init__(self, *args, **kwargs):\n self.affiliate = kwargs.pop('affiliate')\n super(ClientForm, self).__init__(*args, **kwargs)\n self.fields[\"referral\"].queryset = Referral.objects.filter(affiliate = self.affiliate)\n\n class Meta:\n model = Client\n\nThe above is straight out of one my projects and it works perfectly to only show the Referral objects related to the passed affiliate:\nform = ClientForm(affiliate=request.affiliate)\n\n" ]
[ 2 ]
[]
[]
[ "django", "modelform", "python" ]
stackoverflow_0001059831_django_modelform_python.txt
Q: How to delete all the items of a specific key in a list of dicts? I'm trying to remove some items of a dict based on their key; here is my code: d1 = {'a': 1, 'b': 2} d2 = {'a': 1} l = [d1, d2, d1, d2, d1, d2] for i in range(len(l)): if l[i].has_key('b'): del l[i]['b'] print l The output will be: [{'a': 1}, {'a': 1}, {'a': 1}, {'a': 1}, {'a': 1}, {'a': 1}] Is there a better way to do it? A: d1 = {'a': 1, 'b': 2} d2 = {'a': 1} l = [d1, d2, d1, d2, d1, d2] for d in l: d.pop('b',None) print l A: A slight simplification: for d in l: if d.has_key('b'): del d['b'] Some people might also do for d in l: try: del d['b'] except KeyError: pass Catching exceptions like this is not considered as expensive in Python as in other languages. A: I like your way of doing it (except that you use a loop variable, but others pointed that out already), it's explicit and easy to understand. If you want something that minimizes typing then this works: [x.pop('b', None) for x in l] Note though that only one 'b' will be deleted, because your list l references the dictionaries. So run your code above, and then print out d1, and you'll notice that in fact you deleted the b-key from d1 as well. To avoid this you need to copy the dictionaries: d1 = {'a': 1, 'b': 2} d2 = {'a': 1} l = [d1.copy(), d2.copy(), d1.copy(), d2.copy(), d1.copy(), d2.copy()] [b.pop('b', None) for b in l] d1 will now retain the b key. A: d1 = {'a': 1, 'b': 2} d2 = {'a': 1} l = [d1, d2, d1, d2, d1, d2] for i in range(len(l)): if l[i].has_key('b'): del l[i]['b'] print l Here is a little review of your code: iterating over a list is not done like in C. If you don't need to reference the list index it's better to use for item in l and then replace l[i] by item. for a key existence test you can just write if 'b' in l[i] So your code becomes: for item in l: if 'b' in item: del item['b'] One more thing you need to be careful about is that on the first iteration that calls del, you will in fact delete all you need, as d1 is mutable. You need to think of d1 as a reference and not the value (a bit like a pointer in C). As Lennart Regebro mentioned, to optimize your code you can also use a list comprehension.
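If the original dictionaries must stay untouched (remember the point above that d1 is shared), a hedged alternative is to build new dicts instead of mutating them in place:
# new dicts without the 'b' key; the originals d1/d2 are left alone
l = [dict((k, v) for (k, v) in d.iteritems() if k != 'b') for d in l]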
How to delete all the items of a specific key in a list of dicts?
I'm trying to remove some items of a dict based on their key, here is my code: d1 = {'a': 1, 'b': 2} d2 = {'a': 1} l = [d1, d2, d1, d2, d1, d2] for i in range(len(l)): if l[i].has_key('b'): del l[i]['b'] print l The output will be: [{'a': 1}, {'a': 1}, {'a': 1}, {'a': 1}, {'a': 1}, {'a': 1}] Is there a better way to do it?
[ "d1 = {'a': 1, 'b': 2}\nd2 = {'a': 1}\nl = [d1, d2, d1, d2, d1, d2]\nfor d in l:\n d.pop('b',None)\nprint l\n\n", "A slight simplification:\n for d in l:\n if d.has_key('b'):\n del d['b']\n\nSome people might also do\n for d in l:\n try:\n del d['b']\n except KeyError:\n pass\n\nCatching exceptions like this is not considered as expensive in Python as in other languages.\n", "I like your way of doing it (except that you use a loop variable, but others pointed that out already), it's excplicit and easy to understand. If you want something that minimizes typing then this works:\n[x.pop('b', None) for x in l]\nNote though that only one 'b' will be deleted, because your list l references the dictionaries. So run your code above, and then print out d1, and you'll notice that in fact you deleted the b-key from d1 as well.\nTo avoid this you need to copy the dictionaries:\nd1 = {'a': 1, 'b': 2}\nd2 = {'a': 1}\n\nl = [d1.copy(), d2.copy(), d1.copy(), d2.copy(), d1.copy(), d2.copy()]\n[b.pop('b', None) for b in l]\n\nd1 will now retain the b key.\n", "d1 = {'a': 1, 'b': 2}\nd2 = {'a': 1}\nl = [d1, d2, d1, d2, d1, d2]\nfor i in range(len(l)):\n if l[i].has_key('b'):\n del l[i]['b']\nprint l\nHere is little review of your code:\n\niterating on list is not done like in C. If you don't need reference the list index it's better to use for item in l and then replace l[i] by item.\nfor key existence test you can just write if 'b' in l[i]\n\nSo your code becomes:\nfor item in l:\n if 'b' in item:\n del item['b']\n\nOne more thing you need to be careful is that on the first iteration that calls del, you will in fact delete all you need as d1 is mutable. You need to think that d1 is a reference and not the value (a bit like a pointer in C).\nAs Lennart Regebro mentioned to optimize your code you can also use list comprehension.\n" ]
[ 16, 3, 2, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001059924_python.txt
Q: How do I output a dynamically generated web page to a .html page instead of a .py cgi page? So I've just started learning Python on WAMP. I've got the results of an HTML form using cgi, and successfully performed a database search with MySQLdb. I can return the results to a page that ends with .py by using print statements in the Python CGI code, but I want to create a webpage that's .html and have that returned to the user, and/or keep them on the same web address when the database search results return. Thanks, Paul. Edit: to clarify, on my local machine I see /localhost/search.html in the address bar. I submit the HTML form, and receive a results page at /localhost/cgi-bin/searchresults.py. I want to see the results on /localhost/results.html or /localhost/search.html. If this was on a public server I'm ASSUMING it would return .../cgi-bin/searchresults.py; the last time I saw /cgi-bin/ directories in a URL was in the 90s. I've glanced at AddHandler, as David suggested, but I'm not sure if that's what I want. Edit: thanks all of you for your input. Yep, without using frameworks, mod_rewrite seems the way to go, but having looked at that, I decided to save myself the trouble and go with Django with mod_wsgi, mainly because of the size of its userbase and amount of docs. I might switch to a lighter/more customisable framework once I've got the basics. A: First, I'd suggest that you remember that URLs are URLs and that file extensions don't matter, and that you should just leave it. If that isn't enough, then remember that URLs are URLs and that file extensions don't matter, and configure Apache to use a different rule to determine what is a CGI program rather than a static file to be served up as is. You can use AddHandler to add a handler for files on the hard disk with a .html extension. Alternatively, you could use mod_rewrite to tell Apache that …/foo.html means …/foo.py Finally, I'd suggest that if you do muck around with what URLs look like, that you remove any sign of something that looks like a file extension (so that …/foo is requested rather than …/foo.anything). As for keeping the user on the same address for results as for the request … that is just a matter of having the program output the basic page without results if it doesn't get the query string parameters that indicate a search term had been passed.
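A rough sketch of that last point: one CGI script serving both the blank search page and the results at the same address, branching on whether the query string carries a search term (the field name and markup are placeholders):

#!/usr/bin/env python
import cgi

form = cgi.FieldStorage()
term = form.getfirst('q')   # None on the first, form-only page load

print "Content-Type: text/html"
print
print "<html><body>"
print '<form method="get"><input name="q"><input type="submit"></form>'
if term is not None:
    # ... run the database search here and print a row per result ...
    print "<p>Results for %s</p>" % cgi.escape(term)
print "</body></html>"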
How do I output a dynamically generated web page to a .html page instead of a .py cgi page?
So I've just started learning Python on WAMP. I've got the results of an HTML form using cgi, and successfully performed a database search with MySQLdb. I can return the results to a page that ends with .py by using print statements in the Python CGI code, but I want to create a webpage that's .html and have that returned to the user, and/or keep them on the same web address when the database search results return. Thanks, Paul. Edit: to clarify, on my local machine I see /localhost/search.html in the address bar. I submit the HTML form, and receive a results page at /localhost/cgi-bin/searchresults.py. I want to see the results on /localhost/results.html or /localhost/search.html. If this was on a public server I'm ASSUMING it would return .../cgi-bin/searchresults.py; the last time I saw /cgi-bin/ directories in a URL was in the 90s. I've glanced at AddHandler, as David suggested, but I'm not sure if that's what I want. Edit: thanks all of you for your input. Yep, without using frameworks, mod_rewrite seems the way to go, but having looked at that, I decided to save myself the trouble and go with Django with mod_wsgi, mainly because of the size of its userbase and amount of docs. I might switch to a lighter/more customisable framework once I've got the basics.
[ "First, I'd suggest that you remember that URLs are URLs and that file extensions don't matter, and that you should just leave it.\nIf that isn't enough, then remember that URLs are URLs and that file extensions don't matter — and configure Apache to use a different rule to determine that is a CGI program rather than a static file to be served up as is. You can use AddHandler to add a handler for files on the hard disk with a .html extension.\nAlternatively, you could use mod_rewrite to tell Apache that …/foo.html means …/foo.py\nFinally, I'd suggest that if you do muck around with what URLs look like, that you remove any sign of something that looks like a file extension (so that …/foo is requested rather then …/foo.anything).\nAs for keeping the user on the same address for results as for the request … that is just a matter of having the program output the basic page without results if it doesn't get the query string parameters that indicate a search term had been passed.\n" ]
[ 3 ]
[]
[]
[ "html", "python", "webpage" ]
stackoverflow_0001060289_html_python_webpage.txt
Q: HTML Agility Pack or HTML Screen Scraping libraries for Java, Ruby, Python? I found the HTML Agility Pack useful and easy to use for screen scraping web sites. What's the equivalent library for HTML screen scraping in Java, Ruby, Python? A: Found what I was looking for: Options for HTML scraping? A: BeautifulSoup is the standard Python screen scraping tool. Recently, however, I used the (incomplete at the moment) pyQuery, which is more or less a rewrite of jQuery into python, and found it to be very useful.
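For reference, a small BeautifulSoup sketch in the 3.x API of that era; the URL is a placeholder:

import urllib2
from BeautifulSoup import BeautifulSoup   # BeautifulSoup 3.x

html = urllib2.urlopen('http://example.com/').read()
soup = BeautifulSoup(html)
print soup.title.string                   # the page's <title> text
for a in soup.findAll('a', href=True):    # every link that has an href
    print a['href']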
HTML Agility Pack or HTML Screen Scraping libraries for Java, Ruby, Python?
I found the HTML Agility Pack useful and easy to use for screen scraping web sites. What's the equivalent library for HTML screen scraping in Java, Ruby, Python?
[ "Found what I was looking for:\nOptions for HTML scraping?\n", "BeautifulSoup is the standard Python screen scraping tool.\nRecently, however, I used the (incomplete at the moment) pyQuery, which is more or less a rewrite of jQuery into python, and found it to be very useful.\n" ]
[ 5, 3 ]
[]
[]
[ "html", "java", "python", "ruby", "screen_scraping" ]
stackoverflow_0001060484_html_java_python_ruby_screen_scraping.txt
Q: Modeling a complex relationship in Django I'm working on a Web service in Django, and I need to model a very specific, complex relationship which I just can't be able to solve. Imagine three general models, let's call them Site, Category and Item. Each Site contains one or several Categories, but it can relate to them in one of two possible ways: one are "common" categories, which are in a many-to-many relationship: they are predefined, and each Site can relate to zero or more of the Categories, and vice versa. The other type of categories are individually defined for each site, and one such category "belongs" only to that site and none other; i.e. they are in a many-to-one relationship, as each Site may have a number of those Categories. Internally, those two type of Categories are completely identical, they only differ in the way they are related to the Sites. It could, however, separate them in two different models (with a common parent model probably), but that solves only half of my problem: the Item model is in a many-to-one relationship with the Categories, i.e. each Item belongs to only one Category, and ideally it shouldn't care how it is related to a Site. Another solution would be to allow the two separate types of Site-Category relations to coexist (i.e. to have both a ForeignKey and a ManyToMany field on the same Category model), but this solution feels like opening a whole other can of worms. Does anyone have an idea if there is a third, better solution to this dead end? A: Why not just have both types of category in one model, so you just have 3 models? Site Category Sites = models.ManyToManyField(Site) IsCommon = models.BooleanField() Item Category = models.ForeignKey(Category) You say "Internally, those two type of Categories are completely identical". So in sounds like this is possible. Note it is perfectly valid for a ManyToManyField to have only one value, so you don't need "ForeignKey and a ManyToMany field on the same Category model" which just sounds like a hassle. Just put only one value in the ManyToMany field A: As as alternative implementation you could use django content types (generic relations) to accomplish the connection of the items. A bonus for using this implementation is that it allows you to utilize the category models in different ways depending on your data needs down the road. You can make using the site categories easier by writing model methods for pulling and sorting categories. Django's contrib admin also supports the generic relation inlines. Your models would be as follow: Site(models.Model): label = models.CharField(max_length=255) Category(models.Model): site = models.ManyToManyField(Site) label = models.CharField(max_length=255) SiteCategory(models.Model): site = models.ForeignKey(Site) label = models.CharField(max_length=255) Item(models.Model): label = models.CharField(max_length=255) content_type = models.ForeignKey(ContentType) object_id = models.PositiveIntegerField() content_object = generic.GenericForeignKey('content_type', 'object_id') For a more in depth review of content types and how to query the generic relations you can read here: http://docs.djangoproject.com/en/dev/ref/contrib/contenttypes/ A: Caveat: I know Object-Relation mapping, Rails, and Python, but not Django specifically. I see two additinal options: Thinking from a database point of view, I could make the table needed for the many-many relation hold an additional field which indicates a "common" vs. 
"site" relationship and add constraints to limit the type of "site" relationships. This can be done in Django, I think, in the section "Extra Fields on Many-To-Many Relationships." If you are at an earlier version of Django, you can still do this by making the many-many-table an explict model. Thinking from an object point of view, I could see splitting the Categories into three classes: BaseCategory CommonCategory(BaseCategory) SiteCategory(BaseCategory) and then use one of Django's inheritance models.
Modeling a complex relationship in Django
I'm working on a Web service in Django, and I need to model a very specific, complex relationship which I just haven't been able to solve. Imagine three general models, let's call them Site, Category and Item. Each Site contains one or several Categories, but it can relate to them in one of two possible ways: the first is "common" categories, which are in a many-to-many relationship: they are predefined, and each Site can relate to zero or more of the Categories, and vice versa. The other type of categories are individually defined for each site, and one such category "belongs" only to that site and none other; i.e. they are in a many-to-one relationship, as each Site may have a number of those Categories. Internally, those two types of Categories are completely identical; they only differ in the way they are related to the Sites. I could, however, separate them in two different models (with a common parent model probably), but that solves only half of my problem: the Item model is in a many-to-one relationship with the Categories, i.e. each Item belongs to only one Category, and ideally it shouldn't care how it is related to a Site. Another solution would be to allow the two separate types of Site-Category relations to coexist (i.e. to have both a ForeignKey and a ManyToMany field on the same Category model), but this solution feels like opening a whole other can of worms. Does anyone have an idea if there is a third, better solution to this dead end?
[ "Why not just have both types of category in one model, so you just have 3 models?\nSite\n\nCategory\n Sites = models.ManyToManyField(Site)\n IsCommon = models.BooleanField()\n\nItem\n Category = models.ForeignKey(Category)\n\nYou say \"Internally, those two type of Categories are completely identical\". So in sounds like this is possible. Note it is perfectly valid for a ManyToManyField to have only one value, so you don't need \"ForeignKey and a ManyToMany field on the same Category model\" which just sounds like a hassle. Just put only one value in the ManyToMany field\n", "As as alternative implementation you could use django content types (generic relations) to accomplish the connection of the items. A bonus for using this implementation is that it allows you to utilize the category models in different ways depending on your data needs down the road. \nYou can make using the site categories easier by writing model methods for pulling and sorting categories. Django's contrib admin also supports the generic relation inlines.\nYour models would be as follow:\nSite(models.Model):\n label = models.CharField(max_length=255)\n\nCategory(models.Model):\n site = models.ManyToManyField(Site)\n label = models.CharField(max_length=255)\n\nSiteCategory(models.Model):\n site = models.ForeignKey(Site)\n label = models.CharField(max_length=255)\n\nItem(models.Model):\n label = models.CharField(max_length=255)\n content_type = models.ForeignKey(ContentType)\n object_id = models.PositiveIntegerField()\n content_object = generic.GenericForeignKey('content_type', 'object_id')\n\nFor a more in depth review of content types and how to query the generic relations you can read here: http://docs.djangoproject.com/en/dev/ref/contrib/contenttypes/\n", "Caveat: I know Object-Relation mapping, Rails, and Python, but not Django specifically.\nI see two additinal options:\n\nThinking from a database point of view, I could make the table needed for the many-many relation hold an additional field which indicates a \"common\" vs. \"site\" relationship and add constraints to limit the type of \"site\" relationships. This can be done in Django, I think, in the section \"Extra Fields on Many-To-Many Relationships.\"\n\nIf you are at an earlier version of Django, you can still do this by making the many-many-table an explict model.\n\nThinking from an object point of view, I could see splitting the Categories into three classes:\nBaseCategory\nCommonCategory(BaseCategory)\nSiteCategory(BaseCategory)\n\nand then use one of Django's inheritance models.\n" ]
[ 4, 1, 0 ]
[]
[]
[ "django", "django_models", "entity_relationship", "python" ]
stackoverflow_0001053344_django_django_models_entity_relationship_python.txt
Q: How to call a data member of the base class if it is being overwritten as a property in the derived class? This question is similar to this other one, with the difference that the data member in the base class is not wrapped by the descriptor protocol. In other words, how can I access a member of the base class if I am overriding its name with a property in the derived class? class Base(object): def __init__(self): self.foo = 5 class Derived(Base): def __init__(self): Base.__init__(self) @property def foo(self): return 1 + self.foo # doesn't work of course! @foo.setter def foo(self, f): self._foo = f bar = Base() print bar.foo foobar = Derived() print foobar.foo Please note that I also need to define a setter because otherwise the assignment of self.foo in the base class doesn't work. All in all the descriptor protocol doesn't seem to interact well with inheritance... A: Life is simpler if you use delegation instead of inheritance. This is Python. You aren't obligated to inherit from Base. class LooksLikeDerived( object ): def __init__( self ): self.base= Base() @property def foo(self): return 1 + self.base.foo # always works @foo.setter def foo(self, f): self.base.foo = f But what about other methods of Base? You duplicate the names in LooksLikeDerived and simply. def someMethodOfBase( self, *args, **kw ): return self.base.someMethodOfBase( *args **kw ) Yes, it doesn't feel "DRY". However, it prevents a lot of problems when "wrapping" some class in new functionality like you're trying to do. A: Defining def __init__(self): self.foo = 5 in Base makes foo a member (attribute) of the instance, not of the class. The class Base has no knowledge of foo, so there is no way to access it by something like a super() call. This is not necessary, however. When you instanciate foobar = Derived() and the __init__() method of the base class calls self.foo = 5 this will not result in the creation / overwriting of the attribute, but instead in Derived's setter being called, meaning self.foo.fset(5) and thus self._foo = 5. So if you put return 1 + self._foo in your getter, you pretty much get what you want. If you need the value that self.foo is set to in Base's constructor, just look at _foo, which was set correctly by the @foo.setter. A: class Foo(object): def __new__(cls, *args, **kw): return object.__new__(cls, *args, **kw) def __init__(self): self.foo = 5 class Bar(Foo): def __new__(cls, *args, **kw): self = object.__new__(cls, *args, **kw) self.__foo = Foo.__new__(Foo) return self def __init__(self): Foo.__init__(self) @property def foo(self): return 1 + self.__foo.foo @foo.setter def foo(self, foo): self.__foo.foo = foo bar = Bar() bar.foo = 10 print bar.foo A: once you have property with same name 'foo' it overrides the behaviour of access of name 'foo' only way out seems that you explicitly set 'foo' in dict btw: I use python 2.5 hence had to change code a bit class Base(object): def __init__(self): self.foo = 5 class Derived(Base): def __init__(self): Base.__init__(self) def g_foo(self): return 1 + self.__dict__['foo'] # works now! def s_foo(self, f): self.__dict__['foo'] = f self._foo = f foo = property(g_foo, s_foo) bar = Base() print bar.foo foobar = Derived() print foobar.foo A: Honestly, the thing to look at here is that you're trying to twist your code around a design that is simply poor. The property descriptors handle the request for a 'foo' attribute, and you want to bypass these completely, which is just wrong. 
You're already causing Base.__init__ to assign foobar._foo = 5, so that's exactly where the getter needs to look, too. class Base(object): def __init__(self): self.foo = 5 class Derived(Base): def __init__(self): Base.__init__(self) @property def foo(self): return 1 + self._foo # DOES work of course! @foo.setter def foo(self, f): self._foo = f bar = Base() print bar.foo foobar = Derived() print foobar.foo
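A stripped-down check of the point made in the second answer: the assignment inside Base.__init__ dispatches to the derived class's setter, so the getter only ever needs to read _foo (Python 2.6+ property syntax, as in the question):

class Base(object):
    def __init__(self):
        self.foo = 5          # runs Derived's setter when self is a Derived

class Derived(Base):
    @property
    def foo(self):
        return 1 + self._foo  # _foo was filled in by the setter below

    @foo.setter
    def foo(self, f):
        self._foo = f

print Derived().foo           # prints 6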
How to call a data member of the base class if it is being overwritten as a property in the derived class?
This question is similar to this other one, with the difference that the data member in the base class is not wrapped by the descriptor protocol. In other words, how can I access a member of the base class if I am overriding its name with a property in the derived class? class Base(object): def __init__(self): self.foo = 5 class Derived(Base): def __init__(self): Base.__init__(self) @property def foo(self): return 1 + self.foo # doesn't work of course! @foo.setter def foo(self, f): self._foo = f bar = Base() print bar.foo foobar = Derived() print foobar.foo Please note that I also need to define a setter because otherwise the assignment of self.foo in the base class doesn't work. All in all the descriptor protocol doesn't seem to interact well with inheritance...
[ "Life is simpler if you use delegation instead of inheritance. This is Python. You aren't obligated to inherit from Base.\nclass LooksLikeDerived( object ):\n def __init__( self ):\n self.base= Base()\n\n @property\n def foo(self):\n return 1 + self.base.foo # always works\n\n @foo.setter\n def foo(self, f):\n self.base.foo = f\n\nBut what about other methods of Base? You duplicate the names in LooksLikeDerived and simply.\ndef someMethodOfBase( self, *args, **kw ):\n return self.base.someMethodOfBase( *args **kw )\n\nYes, it doesn't feel \"DRY\". However, it prevents a lot of problems when \"wrapping\" some class in new functionality like you're trying to do.\n", "Defining\ndef __init__(self):\n self.foo = 5\n\nin Base makes foo a member (attribute) of the instance, not of the class. The class Base has no knowledge of foo, so there is no way to access it by something like a super() call.\nThis is not necessary, however. When you instanciate\nfoobar = Derived()\n\nand the __init__() method of the base class calls\nself.foo = 5\n\nthis will not result in the creation / overwriting of the attribute, but instead in Derived's setter being called, meaning\nself.foo.fset(5)\n\nand thus self._foo = 5. So if you put\nreturn 1 + self._foo\n\nin your getter, you pretty much get what you want. If you need the value that self.foo is set to in Base's constructor, just look at _foo, which was set correctly by the @foo.setter.\n", "class Foo(object):\n def __new__(cls, *args, **kw):\n return object.__new__(cls, *args, **kw)\n\n def __init__(self):\n self.foo = 5\n\nclass Bar(Foo):\n def __new__(cls, *args, **kw):\n self = object.__new__(cls, *args, **kw)\n self.__foo = Foo.__new__(Foo)\n return self\n\n def __init__(self):\n Foo.__init__(self)\n\n @property\n def foo(self):\n return 1 + self.__foo.foo\n\n @foo.setter\n def foo(self, foo):\n self.__foo.foo = foo\n\nbar = Bar()\nbar.foo = 10\nprint bar.foo\n\n", "once you have property with same name 'foo' it overrides the behaviour of access of name 'foo'\nonly way out seems that you explicitly set 'foo' in dict\nbtw: I use python 2.5 hence had to change code a bit\nclass Base(object):\n def __init__(self):\n self.foo = 5\n\nclass Derived(Base):\n def __init__(self):\n Base.__init__(self)\n\n def g_foo(self):\n return 1 + self.__dict__['foo'] # works now!\n\n def s_foo(self, f):\n self.__dict__['foo'] = f\n self._foo = f\n\n foo = property(g_foo, s_foo)\n\nbar = Base()\nprint bar.foo\n\nfoobar = Derived()\nprint foobar.foo\n\n", "Honestly, the thing to look at here is that you're trying to twist your code around a design that is simply poor. The property descriptors handle the request for a 'foo' attribute, and you want to bypass these completely, which is just wrong. You're already causing Base.init to assign foobar._foo = 5, so thats exactly where the getter needs to look, too.\nclass Base(object):\n def init(self):\n self.foo = 5\nclass Derived(Base):\n def __init__(self):\n Base.__init__(self)\n\n @property\n def foo(self):\n return 1 + self._foo # DOES work of course!\n\n @foo.setter\n def foo(self, f):\n self._foo = f\n\nbar = Base()\nprint bar.foo\n\nfoobar = Derived()\nprint foobar.foo\n\n" ]
[ 9, 3, 1, 0, 0 ]
[]
[]
[ "descriptor", "inheritance", "overloading", "python" ]
stackoverflow_0001057518_descriptor_inheritance_overloading_python.txt
Q: print statement in for loop only executes once I am teaching myself Python. I was thinking of small programs, and came up with an idea to do a keno number generator. For any who don't know, you can pick 4-12 numbers, in the range 1-80, to match. So the first part asks how many numbers, the second generates them. I came up with x = raw_input('How many numbers do you want to play?') for i in x: random.randrange(1,81) print i Which doesn't work, it prints x. So I am wondering the best way to do this. Make a random.randrange function? And how do I call it x times based on user input. As always, thank you in advance for the help A: This should do what you want: import random x = raw_input('How many numbers do you want to play?') for i in xrange(int(x)): print random.randrange(1,81) In Python indentation matters. It is the way it knows when you're in a specific block of code. So basically we use the xrange function to create a range to loop through (we call int on x because it expects an integer while raw_input returns a string). We then print the randrange return value inside the for block.
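Since keno numbers are drawn without repeats, random.sample is arguably an even closer fit than repeated randrange calls; a sketch, not part of the original answer:

import random

x = int(raw_input('How many numbers do you want to play? '))
# sample() picks x distinct numbers from 1..80 in a single call
for n in random.sample(xrange(1, 81), x):
    print n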
print statement in for loop only executes once
I am teaching myself Python. I was thinking of small programs, and came up with an idea to do a keno number generator. For any who don't know, you can pick 4-12 numbers, in the range 1-80, to match. So the first part asks how many numbers, the second generates them. I came up with x = raw_input('How many numbers do you want to play?') for i in x: random.randrange(1,81) print i Which doesn't work, it prints x. So I am wondering the best way to do this. Make a random.randrange function? And how do I call it x times based on user input. As always, thank you in advance for the help
[ "This should do what you want:\nx = raw_input('How many numbers do you want to play?')\nfor i in xrange(int(x)):\n print random.randrange(1,81)\n\nIn Python indentation matters. It is the way it knows when you're in a specific block of code. So basically we use the xrange function to create a range to loop through (we call int on x because it expects an integer while raw_input returns a string). We then print the randrange return value inside the for block.\n" ]
[ 5 ]
[]
[]
[ "python" ]
stackoverflow_0001061534_python.txt
Q: How can I make sure all my Python code "compiles"? My background is C and C++. I like Python a lot, but there's one aspect of it (and other interpreted languages I guess) that is really hard to work with when you're used to compiled languages. When I've written something in Python and come to the point where I can run it, there's still no guarantee that no language-specific errors remain. For me that means that I can't rely solely on my runtime defense (rigorous testing of input, asserts etc.) to avoid crashes, because in 6 months when some otherwise nice code finally gets run, it might crack due to some stupid typo. Clearly a system should be tested enough to make sure all code has been run, but most of the time I use Python for in-house scripts and small tools, which ofcourse never gets the QA attention they need. Also, some code is so simple that (if your background is C/C++) you know it will work fine as long as it compiles (e.g. getter-methods inside classes, usually a simple return of a member variable). So, my question is the obvious - is there any way (with a special tool or something) I can make sure all the code in my Python script will "compile" and run? A: Look at PyChecker and PyLint. Here's example output from pylint, resulting from the trivial program: print a As you can see, it detects the undefined variable, which py_compile won't (deliberately). in foo.py: ************* Module foo C: 1: Black listed name "foo" C: 1: Missing docstring E: 1: Undefined variable 'a' ... |error |1 |1 |= | Trivial example of why tests aren't good enough, even if they cover "every line": bar = "Foo" foo = "Bar" def baz(X): return bar if X else fo0 print baz(input("True or False: ")) EDIT: PyChecker handles the ternary for me: Processing ternary... True or False: True Foo Warnings... ternary.py:6: No global (fo0) found ternary.py:8: Using input() is a security problem, consider using raw_input() A: Others have mentioned tools like PyLint which are pretty good, but the long and the short of it is that it's simply not possible to do 100%. In fact, you might not even want to do it. Part of the benefit to Python's dynamicity is that you can do crazy things like insert names into the local scope through a dictionary access. What it comes down to is that if you want a way to catch type errors at compile time, you shouldn't use Python. A language choice always involves a set of trade-offs. If you choose Python over C, just be aware that you're trading a strong type system for faster development, better string manipulation, etc. A: I think what you are looking for is code test line coverage. You want to add tests to your script that will make sure all of your lines of code, or as many as you have time to, get tested. Testing is a great deal of work, but if you want the kind of assurance you are asking for, there is no free lunch, sorry :( . A: If you are using Eclipse with Pydev as an IDE, it can flag many typos for you with red squigglies immediately, and has Pylint integration too. For example: foo = 5 print food will be flagged as "Undefined variable: food". Of course this is not always accurate (perhaps food was defined earlier using setattr or other exotic techniques), but it works well most of the time. In general, you can only statically analyze your code to the extent that your code is actually static; the more dynamic your code is, the more you really do need automated testing. 
A: Your code actually gets compiled when you run it, the Python runtime will complain if there is a syntax error in the code. Compared to statically compiled languages like C/C++ or Java, it does not check whether variable names and types are correct – for that you need to actually run the code (e.g. with automated tests).
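For the narrower "does it at least parse" check, the standard library can byte-compile a whole tree without running it; note this catches syntax errors only, not the undefined-name typos that PyChecker and PyLint flag (the directory name is a placeholder):

import compileall

# Byte-compiles every .py file under the directory, reporting any file
# that fails to parse; the return value is false if any compile failed.
ok = compileall.compile_dir('myproject', quiet=True)
print 'all files parse' if ok else 'syntax errors found'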
How can I make sure all my Python code "compiles"?
My background is C and C++. I like Python a lot, but there's one aspect of it (and other interpreted languages I guess) that is really hard to work with when you're used to compiled languages. When I've written something in Python and come to the point where I can run it, there's still no guarantee that no language-specific errors remain. For me that means that I can't rely solely on my runtime defense (rigorous testing of input, asserts etc.) to avoid crashes, because in 6 months when some otherwise nice code finally gets run, it might crack due to some stupid typo. Clearly a system should be tested enough to make sure all code has been run, but most of the time I use Python for in-house scripts and small tools, which of course never get the QA attention they need. Also, some code is so simple that (if your background is C/C++) you know it will work fine as long as it compiles (e.g. getter-methods inside classes, usually a simple return of a member variable). So, my question is the obvious - is there any way (with a special tool or something) I can make sure all the code in my Python script will "compile" and run?
[ "Look at PyChecker and PyLint.\nHere's example output from pylint, resulting from the trivial program:\nprint a\n\nAs you can see, it detects the undefined variable, which py_compile won't (deliberately).\nin foo.py:\n\n************* Module foo\nC: 1: Black listed name \"foo\"\nC: 1: Missing docstring\nE: 1: Undefined variable 'a'\n\n\n...\n\n|error |1 |1 |= |\n\nTrivial example of why tests aren't good enough, even if they cover \"every line\":\nbar = \"Foo\"\nfoo = \"Bar\"\ndef baz(X):\n return bar if X else fo0\n\nprint baz(input(\"True or False: \"))\n\nEDIT: PyChecker handles the ternary for me:\nProcessing ternary...\nTrue or False: True\nFoo\n\nWarnings...\n\nternary.py:6: No global (fo0) found\nternary.py:8: Using input() is a security problem, consider using raw_input()\n\n", "Others have mentioned tools like PyLint which are pretty good, but the long and the short of it is that it's simply not possible to do 100%. In fact, you might not even want to do it. Part of the benefit to Python's dynamicity is that you can do crazy things like insert names into the local scope through a dictionary access.\nWhat it comes down to is that if you want a way to catch type errors at compile time, you shouldn't use Python. A language choice always involves a set of trade-offs. If you choose Python over C, just be aware that you're trading a strong type system for faster development, better string manipulation, etc.\n", "I think what you are looking for is code test line coverage. You want to add tests to your script that will make sure all of your lines of code, or as many as you have time to, get tested. Testing is a great deal of work, but if you want the kind of assurance you are asking for, there is no free lunch, sorry :( .\n", "If you are using Eclipse with Pydev as an IDE, it can flag many typos for you with red squigglies immediately, and has Pylint integration too. For example:\nfoo = 5\nprint food\n\nwill be flagged as \"Undefined variable: food\". Of course this is not always accurate (perhaps food was defined earlier using setattr or other exotic techniques), but it works well most of the time.\nIn general, you can only statically analyze your code to the extent that your code is actually static; the more dynamic your code is, the more you really do need automated testing.\n", "Your code actually gets compiled when you run it, the Python runtime will complain if there is a syntax error in the code. Compared to statically compiled languages like C/C++ or Java, it does not check whether variable names and types are correct – for that you need to actually run the code (e.g. with automated tests).\n" ]
[ 21, 2, 1, 1, 0 ]
[]
[]
[ "code_analysis", "parsing", "python" ]
stackoverflow_0001026966_code_analysis_parsing_python.txt
Q: python decimal comparison python decimal comparison >>> from decimal import Decimal >>> Decimal('1.0') > 2.0 True I was expecting it to convert 2.0 correctly, but after reading thru PEP 327 I understand there were some reason for not implictly converting float to Decimal, but shouldn't in that case it should raise TypeError as it does in this case >>> Decimal('1.0') + 2.0 Traceback (most recent call last): File "<string>", line 1, in <string> TypeError: unsupported operand type(s) for +: 'Decimal' and 'float' so does all other operator / - % // etc so my questions are is this right behavior? (not to raise exception in cmp) What if I derive my own class and right a float converter basically Decimal(repr(float_value)), are there any caveats? my use case involves only comparison of prices System details: Python 2.5.2 on Ubuntu 8.04.1 A: Re 1, it's indeed the behavior we designed -- right or wrong as it may be (sorry if that trips your use case up, but we were trying to be general!). Specifically, it's long been the case that every Python object could be subject to inequality comparison with every other -- objects of types that aren't really comparable get arbitrarily compared (consistently in a given run, not necessarily across runs); main use case was sorting a heterogeneous list to group elements in it by type. An exception was introduced for complex numbers only, making them non-comparable to anything -- but that was still many years ago, when we were occasionally cavalier about breaking perfectly good user code. Nowadays we're much stricter about backwards compatibility within a major release (e.g. along the 2.* line, and separately along the 3.* one, though incompatibilities are allowed between 2 and 3 -- indeed that's the whole point of having a 3.* series, letting us fix past design decisions even in incompatible ways). The arbitrary comparisons turned out to be more trouble than they're worth, causing user confusion; and the grouping by type can now be obtained easily e.g. with a key=lambda x: str(type(x)) argument to sort; so in Python 3 comparisons between objects of different types, unless the objects themselves specifically allow it in the comparison methods, does raise an exception: >>> decimal.Decimal('2.0') > 1.2 Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unorderable types: Decimal() > float() In other words, in Python 3 this behaves exactly as you think it should; but in Python 2 it doesn't (and never will in any Python 2.*). Re 2, you'll be fine -- though, look to gmpy for what I hope is an interesting way to convert doubles to infinite-precision fractions through Farey trees. If the prices you're dealing with are precise to no more than cents, use '%.2f' % x rather than repr(x)!-) Rather than a subclass of Decimal, I'd use a factory function such as def to_decimal(float_price): return decimal.Decimal('%.2f' % float_price) since, once produced, the resulting Decimal is a perfectly ordinary one. A: The greater-than comparison works because, by default, it works for all objects. >>> 'abc' > 123 True Decimal is right merely because it correctly follows the spec. Whether the spec was the correct approach is a separate question. :) Only the normal caveats when dealing with floats, which briefly summarized are: beware of edge cases such as negative zero, +/-infinity, and NaN, don't test for equality (related to the next point), and count on math being slightly inaccurate. 
>>> print (1.1 + 2.2 == 3.3) False A: If it's "right" is a matter of opinion, but the rationale of why there is no automatic conversion exists in the PEP, and that was the decision taken. The caveat basically is that you can't always exactly convert between float and decimal. Therefore the conversion should not be implicit. If you in your application know that you never have enough significant numbers for this to affect you, making classes that allow this implicit behaviour shouldn't be a problem. Also, one main argument is that real world use cases doesn't exist. It's likely to be simpler if you just use Decimal everywhere.
python decimal comparison
python decimal comparison >>> from decimal import Decimal >>> Decimal('1.0') > 2.0 True I was expecting it to convert 2.0 correctly, but after reading through PEP 327 I understand there were some reasons for not implicitly converting float to Decimal, but shouldn't it in that case raise TypeError, as it does in this case: >>> Decimal('1.0') + 2.0 Traceback (most recent call last): File "<string>", line 1, in <string> TypeError: unsupported operand type(s) for +: 'Decimal' and 'float' as do all the other operators / - % // etc. So my questions are: Is this the right behavior? (not to raise an exception in cmp) What if I derive my own class and write a float converter, basically Decimal(repr(float_value)); are there any caveats? My use case involves only comparison of prices. System details: Python 2.5.2 on Ubuntu 8.04.1
[ "Re 1, it's indeed the behavior we designed -- right or wrong as it may be (sorry if that trips your use case up, but we were trying to be general!).\nSpecifically, it's long been the case that every Python object could be subject to inequality comparison with every other -- objects of types that aren't really comparable get arbitrarily compared (consistently in a given run, not necessarily across runs); main use case was sorting a heterogeneous list to group elements in it by type.\nAn exception was introduced for complex numbers only, making them non-comparable to anything -- but that was still many years ago, when we were occasionally cavalier about breaking perfectly good user code. Nowadays we're much stricter about backwards compatibility within a major release (e.g. along the 2.* line, and separately along the 3.* one, though incompatibilities are allowed between 2 and 3 -- indeed that's the whole point of having a 3.* series, letting us fix past design decisions even in incompatible ways).\nThe arbitrary comparisons turned out to be more trouble than they're worth, causing user confusion; and the grouping by type can now be obtained easily e.g. with a key=lambda x: str(type(x)) argument to sort; so in Python 3 comparisons between objects of different types, unless the objects themselves specifically allow it in the comparison methods, does raise an exception:\n>>> decimal.Decimal('2.0') > 1.2\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: unorderable types: Decimal() > float()\n\nIn other words, in Python 3 this behaves exactly as you think it should; but in Python 2 it doesn't (and never will in any Python 2.*).\nRe 2, you'll be fine -- though, look to gmpy for what I hope is an interesting way to convert doubles to infinite-precision fractions through Farey trees. If the prices you're dealing with are precise to no more than cents, use '%.2f' % x rather than repr(x)!-)\nRather than a subclass of Decimal, I'd use a factory function such as\ndef to_decimal(float_price):\n return decimal.Decimal('%.2f' % float_price)\n\nsince, once produced, the resulting Decimal is a perfectly ordinary one.\n", "The greater-than comparison works because, by default, it works for all objects.\n>>> 'abc' > 123\nTrue\n\nDecimal is right merely because it correctly follows the spec. Whether the spec was the correct approach is a separate question. :)\nOnly the normal caveats when dealing with floats, which briefly summarized are: beware of edge cases such as negative zero, +/-infinity, and NaN, don't test for equality (related to the next point), and count on math being slightly inaccurate.\n>>> print (1.1 + 2.2 == 3.3)\nFalse\n\n", "If it's \"right\" is a matter of opinion, but the rationale of why there is no automatic conversion exists in the PEP, and that was the decision taken. The caveat basically is that you can't always exactly convert between float and decimal. Therefore the conversion should not be implicit. If you in your application know that you never have enough significant numbers for this to affect you, making classes that allow this implicit behaviour shouldn't be a problem.\nAlso, one main argument is that real world use cases doesn't exist. It's likely to be simpler if you just use Decimal everywhere.\n" ]
[ 26, 3, 1 ]
[]
[]
[ "comparison", "decimal", "python" ]
stackoverflow_0001062008_comparison_decimal_python.txt
Q: Modifying list contents in Python I have a list like: list = [[1,2,3],[4,5,6],[7,8,9]] I want to append a number at the start of every value in the list programmatically, say the number is 9. I want the new list to be like: list = [[9,1,2,3],[9,4,5,6],[9,7,8,9]] How do I go about doing this in Python? I know it is a very trivial question but I couldn't find a way to get this done. A: for sublist in thelist: sublist.insert(0, 9) don't use built-in names such as list for your own stuff, that's just a stupid accident in the making -- call YOUR stuff mylist or thelist or the like, not list. Edit: as the OP aks how to insert > 1 item at the start of each sublist, let me point out that the most efficient way is by assignment of the multiple items to a slice of each sublist (most list mutators can be seen as readable alternatives to slice assignments;-), i.e.: for sublist in thelist: sublist[0:0] = 8, 9 sublist[0:0] is the empty slice at the start of sublist, and by assigning items to it you're inserting the items at that very spot. A: >>> someList = [[1,2,3],[4,5,6],[7,8,9]] >>> someList = [[9] + i for i in someList] >>> someList [[9, 1, 2, 3], [9, 4, 5, 6], [9, 7, 8, 9]] (someList because list is already used by python) A: Use the insert method, which modifies the list in place: >>> numberlists = [[1,2,3],[4,5,6]] >>> for numberlist in numberlists: ... numberlist.insert(0,9) ... >>> numberlists [[9, 1, 2, 3], [9, 4, 5, 6]] or, more succintly [numberlist.insert(0,9) for numberlist in numberlists] or, differently, using list concatenation, which creates a new list newnumberlists = [[9] + numberlist for numberlist in numberlists] A: If you're going to be doing a lot of prepending, perhaps consider using deques* instead of lists: >>> mylist = [[1,2,3],[4,5,6],[7,8,9]] >>> from collections import deque >>> mydeque = deque() >>> for li in mylist: ... mydeque.append(deque(li)) ... >>> mydeque deque([deque([1, 2, 3]), deque([4, 5, 6]), deque([7, 8, 9])]) >>> for di in mydeque: ... di.appendleft(9) ... >>> mydeque deque([deque([9, 1, 2, 3]), deque([9, 4, 5, 6]), deque([9, 7, 8, 9])]) *Deques are a generalization of stacks and queues (the name is pronounced "deck" and is short for "double-ended queue"). Deques support thread-safe, memory-efficient appends and pops from either side of the deque with approximately the same O(1) performance in either direction. And, as others have mercifully mentioned: For the love of all things dull and ugly, please do not name variables after your favorite data-structures. A: #!/usr/bin/env python def addNine(val): val.insert(0,9) return val if __name__ == '__main__': s = [[1,2,3],[4,5,6],[7,8,9]] print map(addNine,s) Output: [[9, 1, 2, 3], [9, 4, 5, 6], [9, 7, 8, 9]]
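One detail worth seeing side by side, since the example list repeats the same sublists: insert() mutates the shared objects, while the [9] + i form builds new ones (a demonstration, not from the answers):

a = [1, 2, 3]
mylist = [a, a]

new = [[9] + sub for sub in mylist]   # fresh lists; a is unchanged
print a        # [1, 2, 3]

for sub in mylist:
    sub.insert(0, 9)                  # mutates a twice, once per reference
print a        # [9, 9, 1, 2, 3]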
Modifying list contents in Python
I have a list like: list = [[1,2,3],[4,5,6],[7,8,9]] I want to append a number at the start of every value in the list programmatically, say the number is 9. I want the new list to be like: list = [[9,1,2,3],[9,4,5,6],[9,7,8,9]] How do I go about doing this in Python? I know it is a very trivial question but I couldn't find a way to get this done.
[ "for sublist in thelist:\n sublist.insert(0, 9)\n\ndon't use built-in names such as list for your own stuff, that's just a stupid accident in the making -- call YOUR stuff mylist or thelist or the like, not list.\nEdit: as the OP aks how to insert > 1 item at the start of each sublist, let me point out that the most efficient way is by assignment of the multiple items to a slice of each sublist (most list mutators can be seen as readable alternatives to slice assignments;-), i.e.:\nfor sublist in thelist:\n sublist[0:0] = 8, 9\n\nsublist[0:0] is the empty slice at the start of sublist, and by assigning items to it you're inserting the items at that very spot.\n", ">>> someList = [[1,2,3],[4,5,6],[7,8,9]]\n>>> someList = [[9] + i for i in someList]\n>>> someList\n[[9, 1, 2, 3], [9, 4, 5, 6], [9, 7, 8, 9]]\n\n(someList because list is already used by python)\n", "Use the insert method, which modifies the list in place:\n>>> numberlists = [[1,2,3],[4,5,6]]\n>>> for numberlist in numberlists:\n... numberlist.insert(0,9)\n...\n>>> numberlists\n[[9, 1, 2, 3], [9, 4, 5, 6]]\n\nor, more succintly\n[numberlist.insert(0,9) for numberlist in numberlists]\n\nor, differently, using list concatenation, which creates a new list\nnewnumberlists = [[9] + numberlist for numberlist in numberlists]\n\n", "If you're going to be doing a lot of prepending, \nperhaps consider using deques* instead of lists:\n>>> mylist = [[1,2,3],[4,5,6],[7,8,9]]\n\n>>> from collections import deque\n>>> mydeque = deque()\n>>> for li in mylist:\n... mydeque.append(deque(li))\n...\n>>> mydeque\ndeque([deque([1, 2, 3]), deque([4, 5, 6]), deque([7, 8, 9])])\n>>> for di in mydeque:\n... di.appendleft(9)\n...\n>>> mydeque\ndeque([deque([9, 1, 2, 3]), deque([9, 4, 5, 6]), deque([9, 7, 8, 9])])\n\n*Deques are a generalization of stacks and queues (the name is pronounced \"deck\" and is short for \"double-ended queue\"). Deques support thread-safe, memory-efficient appends and pops from either side of the deque with approximately the same O(1) performance in either direction.\nAnd, as others have mercifully mentioned: \nFor the love of all things dull and ugly, \nplease do not name variables after your favorite data-structures.\n", "#!/usr/bin/env python\n\ndef addNine(val):\n val.insert(0,9)\n return val\n\nif __name__ == '__main__':\n s = [[1,2,3],[4,5,6],[7,8,9]]\n print map(addNine,s)\n\nOutput:\n[[9, 1, 2, 3], [9, 4, 5, 6], [9, 7, 8, 9]]\n\n" ]
[ 16, 12, 2, 2, 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0001061937_list_python.txt
Q: Python NotImplemented constant Looking through decimal.py, it uses NotImplemented in many special methods. e.g. class A(object): def __lt__(self, a): return NotImplemented def __add__(self, a): return NotImplemented The Python docs say: NotImplemented Special value which can be returned by the “rich comparison” special methods (__eq__(), __lt__(), and friends), to indicate that the comparison is not implemented with respect to the other type. It doesn't talk about other special methods and neither does it describe the behavior. It seems to be a magic object which if returned from other special methods raises TypeError, and in “rich comparison” special methods does nothing. e.g. print A() < A() prints True, but print A() + 1 raises TypeError, so I am curious as to what's going on and what is the usage/behavior of NotImplemented. A: NotImplemented allows you to indicate that a comparison between the two given operands has not been implemented (rather than indicating that the comparison is valid, but yields False, for the two operands). From the Python Language Reference: For objects x and y, first x.__op__(y) is tried. If this is not implemented or returns NotImplemented, y.__rop__(x) is tried. If this is also not implemented or returns NotImplemented, a TypeError exception is raised. But see the following exception: Exception to the previous item: if the left operand is an instance of a built-in type or a new-style class, and the right operand is an instance of a proper subclass of that type or class and overrides the base's __rop__() method, the right operand's __rop__() method is tried before the left operand's __op__() method. This is done so that a subclass can completely override binary operators. Otherwise, the left operand's __op__() method would always accept the right operand: when an instance of a given class is expected, an instance of a subclass of that class is always acceptable. A: It actually has the same meaning when returned from __add__ as from __lt__, the difference is Python 2.x is trying other ways of comparing the objects before giving up. Python 3.x does raise a TypeError. In fact, Python can try other things for __add__ as well, look at __radd__ and (though I'm fuzzy on it) __coerce__. # 2.6 >>> class A(object): ... def __lt__(self, other): ... return NotImplemented >>> A() < A() True # 3.1 >>> class A(object): ... def __lt__(self, other): ... return NotImplemented >>> A() < A() Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unorderable types: A() < A() See Ordering Comparisions (3.0 docs) for more info. A: If you return it from __add__ it will behave like the object has no __add__ method, and raise a TypeError. If you return NotImplemented from a rich comparison function, Python will behave like the method wasn't implemented, that is, it will defer to using __cmp__.
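A small demonstration of the fallback chain the first answer describes: returning NotImplemented from __add__ hands control to the right operand's __radd__ (Python 2.x):

class A(object):
    def __add__(self, other):
        return NotImplemented     # "I don't know how to add that type"

class B(object):
    def __radd__(self, other):
        return 'B handled it'

print A() + B()   # B.__radd__ is tried next, so this prints 'B handled it'
# A() + 1 raises TypeError, because int's __radd__ also gives up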
Python NotImplemented constant
Looking through decimal.py, it uses NotImplemented in many special methods. e.g. class A(object): def __lt__(self, a): return NotImplemented def __add__(self, a): return NotImplemented The Python docs say: NotImplemented Special value which can be returned by the “rich comparison” special methods (__eq__(), __lt__(), and friends), to indicate that the comparison is not implemented with respect to the other type. It doesn't talk about other special methods and neither does it describe the behavior. It seems to be a magic object which if returned from other special methods raises TypeError, and in “rich comparison” special methods does nothing. e.g. print A() < A() prints True, but print A() + 1 raises TypeError, so I am curious as to what's going on and what is the usage/behavior of NotImplemented.
[ "NotImplemented allows you to indicate that a comparison between the two given operands has not been implemented (rather than indicating that the comparison is valid, but yields False, for the two operands).\nFrom the Python Language Reference:\n\nFor objects x and y, first x.__op__(y)\nis tried. If this is not implemented\nor returns NotImplemented,\ny.__rop__(x) is tried. If this is also\nnot implemented or returns\nNotImplemented, a TypeError exception\nis raised. But see the following\nexception:\n\n\nException to the previous\nitem: if the left operand is an\ninstance of a built-in type or a\nnew-style class, and the right operand\nis an instance of a proper subclass of\nthat type or class and overrides the\nbase's __rop__() method, the right\noperand's __rop__() method is tried\nbefore the left operand's __op__()\nmethod. This is done so that a\nsubclass can completely override\nbinary operators. Otherwise, the left\noperand's __op__() method would always\naccept the right operand: when an\ninstance of a given class is expected,\nan instance of a subclass of that\nclass is always acceptable.\n\n", "It actually has the same meaning when returned from __add__ as from __lt__, the difference is Python 2.x is trying other ways of comparing the objects before giving up. Python 3.x does raise a TypeError. In fact, Python can try other things for __add__ as well, look at __radd__ and (though I'm fuzzy on it) __coerce__.\n# 2.6\n>>> class A(object):\n... def __lt__(self, other):\n... return NotImplemented\n>>> A() < A()\nTrue\n\n# 3.1\n>>> class A(object):\n... def __lt__(self, other):\n... return NotImplemented\n>>> A() < A()\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: unorderable types: A() < A()\n\nSee Ordering Comparisions (3.0 docs) for more info.\n", "If you return it from __add__ it will behave like the object has no __add__ method, and raise a TypeError.\nIf you return NotImplemented from a rich comparison function, Python will behave like the method wasn't implemented, that is, it will defer to using __cmp__. \n" ]
[ 35, 7, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001062096_python.txt
Q: obtain collection_name from parent's key in GAE Is it possible to ask a parent for its referred collection_name based on one of its keys? Let's say I have a parent db model and its key; can I know the children who refer to this parent through the collection name or otherwise? class Parent(db.Model): user = db.UserProperty() class Childs(db.Model): refer = db.ReferenceProperty(Parent, collection_name='children') A: I think you're asking "can I get the set of all the children that refer to a given parent". In which case, yes you can, it's a property of the Parent class. Assuming you have a Parent object p, then the children that reference it will be in p.children. If you hadn't specified the collection_name on the ReferenceProperty they would be in p.childs_set. Check out the documentation. A: Yes, you can. ReferenceProperty has another handy feature: back-references. When a model has a ReferenceProperty to another model, each referenced entity gets a property whose value is a Query that returns all of the entities of the first model that refer to it. # To fetch and iterate over every Childs entity that refers to the # Parent instance p: for child in p.children: # ...
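Since the back-reference is an ordinary Query, it can also be counted or iterated directly; a sketch against the models in the question (how p is looked up here is an assumption):

from google.appengine.api import users

p = Parent.all().filter('user =', users.get_current_user()).get()
if p is not None:
    print p.children.count()    # number of Childs entities referring to p
    for child in p.children:    # p.children is a regular db.Query
        print child.key()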
obtain collection_name from parent's key in GAE
Is it possible to ask a parent for its referred collection_name based on one of its keys? Let's say I have a parent db model and its key; can I know the children who refer to this parent through the collection name or otherwise? class Parent(db.Model): user = db.UserProperty() class Childs(db.Model): refer = db.ReferenceProperty(Parent, collection_name='children')
[ "I think you're asking \"can I get the set of all the children that refer to a given parent\".\nIn which case, yes you can, it's a property of the Parent class.\nAssuming you have a Parent object p then the children that reference it will be in p.children\nIf you hadn't specified the collection_name on the ReferenceProperty they would be in p.childs_set\nCheck out the documentation.\n", "Yes, you can.\n\nReferenceProperty has another handy feature: back-references. When a model has a ReferenceProperty to another model, each referenced entity gets a property whose value is a Query that returns all of the entities of the first model that refer to it.\n\n# To fetch and iterate over every Childs entity that refers to the\n# Parent instance p:\nfor child in p.children:\n# ...\n\n" ]
[ 1, 0 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0001062108_google_app_engine_python.txt
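A minimal sketch of the back-reference in use, keeping the model names from the question. It assumes the old google.appengine.ext.db API and only runs inside an App Engine environment:

from google.appengine.ext import db

class Parent(db.Model):
    user = db.UserProperty()

class Childs(db.Model):
    refer = db.ReferenceProperty(Parent, collection_name='children')

p = Parent()
p.put()
Childs(refer=p).put()
Childs(refer=p).put()

# p.children is a db.Query over all Childs entities whose 'refer'
# property points at p
print p.children.count()  # 2
for child in p.children:
    print child.key()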
Q: Python 3.0 smtplib I have a very simple piece of code that I used in previous versions of Python without issues (version 2.5 and prior). Now with 3.0, the following code give the error on the login line "argument 1 must be string or buffer, not str". import smtplib smtpserver = 'mail.somedomain.com' AUTHREQUIRED = 1 # if you need to use SMTP AUTH set to 1 smtpuser = '[email protected]' # for SMTP AUTH, set SMTP username here smtppass = 'somepassword' # for SMTP AUTH, set SMTP password here msg = "Some message to send" RECIPIENTS = ['[email protected]'] SENDER = '[email protected]' session = smtplib.SMTP(smtpserver) if AUTHREQUIRED: session.login(smtpuser, smtppass) smtpresult = session.sendmail(SENDER, RECIPIENTS, msg) Google shows there are some issues with that error not being clear, but I still can't figure out what I need to try to make it work. Suggestions included defining the username as b"username", but that doesn't seem to work either. A: UPDATE: just noticed from a look at the bug tracker there's a suggested fix also: Edit smtplib.py and replace the existing encode_plain() definition with this: def encode_plain(user, password): s = "\0%s\0%s" % (user, password) return encode_base64(s.encode('ascii'), eol='') Tested here on my installation and it works properly. A: Traceback (most recent call last): File "smtptest.py", line 18, in <module> session.login(smtpuser, smtppass) File "c:\Python30\lib\smtplib.py", line 580, in login AUTH_PLAIN + " " + encode_plain(user, password)) File "c:\Python30\lib\smtplib.py", line 545, in encode_plain return encode_base64("\0%s\0%s" % (user, password)) File "c:\Python30\lib\email\base64mime.py", line 96, in body_encode enc = b2a_base64(s[i:i + max_unencoded]).decode("ascii") TypeError: b2a_base64() argument 1 must be bytes or buffer, not str Your code is correct. This is a bug in smtplib or in the base64mime.py. You can track the issue here: http://bugs.python.org/issue5259 Hopefully the devs will post a patch soon. A: As a variation on Jay's answer, rather than edit smtplib.py you could "monkey patch" it at run time. Put this somewhere in your code: def encode_plain(user, password): s = "\0%s\0%s" % (user, password) return encode_base64(s.encode('ascii'), eol='') import smtplib encode_plain.func_globals = vars(smtplib) smtplib.encode_plain = encode_plain This is kind of ugly but useful if you want to deploy your code onto other systems without making changes to their python libraries. A: This issue has been addressed in Python3.1. Get the update at http://www.python.org/download/releases/3.1/
Python 3.0 smtplib
I have a very simple piece of code that I used in previous versions of Python without issues (version 2.5 and prior). Now with 3.0, the following code gives the error on the login line "argument 1 must be string or buffer, not str". import smtplib smtpserver = 'mail.somedomain.com' AUTHREQUIRED = 1 # if you need to use SMTP AUTH set to 1 smtpuser = '[email protected]' # for SMTP AUTH, set SMTP username here smtppass = 'somepassword' # for SMTP AUTH, set SMTP password here msg = "Some message to send" RECIPIENTS = ['[email protected]'] SENDER = '[email protected]' session = smtplib.SMTP(smtpserver) if AUTHREQUIRED: session.login(smtpuser, smtppass) smtpresult = session.sendmail(SENDER, RECIPIENTS, msg) Google shows there are some issues with that error not being clear, but I still can't figure out what I need to try to make it work. Suggestions included defining the username as b"username", but that doesn't seem to work either.
[ "UPDATE: just noticed from a look at the bug tracker there's a suggested fix also: \nEdit smtplib.py and replace the existing encode_plain() definition with this: \ndef encode_plain(user, password):\n s = \"\\0%s\\0%s\" % (user, password)\n return encode_base64(s.encode('ascii'), eol='')\n\nTested here on my installation and it works properly. \n", "Traceback (most recent call last):\n File \"smtptest.py\", line 18, in <module>\n session.login(smtpuser, smtppass)\n File \"c:\\Python30\\lib\\smtplib.py\", line 580, in login\n AUTH_PLAIN + \" \" + encode_plain(user, password))\n File \"c:\\Python30\\lib\\smtplib.py\", line 545, in encode_plain\n return encode_base64(\"\\0%s\\0%s\" % (user, password))\n File \"c:\\Python30\\lib\\email\\base64mime.py\", line 96, in body_encode\n enc = b2a_base64(s[i:i + max_unencoded]).decode(\"ascii\")\nTypeError: b2a_base64() argument 1 must be bytes or buffer, not str\n\nYour code is correct. This is a bug in smtplib or in the base64mime.py. \nYou can track the issue here:\nhttp://bugs.python.org/issue5259\nHopefully the devs will post a patch soon.\n", "As a variation on Jay's answer, rather than edit smtplib.py you could \"monkey patch\" it at run time.\nPut this somewhere in your code:\n\ndef encode_plain(user, password):\n s = \"\\0%s\\0%s\" % (user, password)\n return encode_base64(s.encode('ascii'), eol='')\n\nimport smtplib\nencode_plain.func_globals = vars(smtplib)\nsmtplib.encode_plain = encode_plain\n\nThis is kind of ugly but useful if you want to deploy your code onto other systems without making changes to their python libraries.\n", "This issue has been addressed in Python3.1. Get the update at http://www.python.org/download/releases/3.1/\n" ]
[ 4, 3, 2, 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0000549391_python_python_3.x.txt
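As a sketch of an alternative monkey patch for Python 3.0 that avoids touching smtplib's globals entirely, you can import body_encode directly from email.base64mime (the helper the traceback above ends in). Server name, credentials and addresses here are placeholders:

from email.base64mime import body_encode

def encode_plain(user, password):
    s = "\0%s\0%s" % (user, password)
    # body_encode needs bytes in 3.0 -- encoding first is the whole fix
    return body_encode(s.encode('ascii'), eol='')

import smtplib
smtplib.encode_plain = encode_plain  # login() looks the helper up by name

session = smtplib.SMTP('mail.somedomain.com')  # placeholder server
session.login('someuser', 'somepassword')
session.sendmail('sender@somedomain.com', ['recipient@somedomain.com'],
                 'Some message to send')
session.quit()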
Q: Python - Reading multiple lines into list OK guys/gals stuck again on something simple I have a text file which has multiple lines per entry, the data is in the following format firstword word word word wordx word word word interesting1 word word word word wordy word word word wordz word word word interesting2 word word word lastword this sequence repeats a hundred or so times, all other words are the same apart from interesting1 and interesting2, no blank lines. The interesting2 is pertinent to interesting1 but not to anything else and I want to link the two interesting items together, discarding the rest such as interesting1 = interesting2 interesting1 = interesting2 interesting1 = interesting2 etc, 1 lne per sequence Each line begins with a different word my attempt was to read the file and do an "if wordx in line" statement to identify the first interesting line, slice out the value, find the second line, ("if wordz in line) slice out the value and concatenate the second with the first. It's clumsy though, I had to use global variables, temp variables etc, and I'm sure there must be a way of identifying the range between firstword and lastword and placing that into a single list, then slicing both values out together. Any suggestions gratefully acknowledged, thanks for your time A: from itertools import izip, tee, islice i1, i2 = tee(open("foo.txt")) for line2, line4 in izip(islice(i1,1, None, 4), islice(i2, 3, None, 4)) : print line2.split(" ")[4], "=", line4.split(" ")[4] A: In that case, make a regexp that matches the repeating text, and has groups for the interesting bits. Then you should be able to use findall to find all cases of interesting1 and interesting2. Like so: import re text = open("foo.txt").read() RE = re.compile('firstword.*?wordx word word word (.*?) word.*?wordz word word word (.*?) word', re.DOTALL) print RE.findall(text) Although as mentioned in the comments, the islice is definitely a neater solution. A: I've thrown in a bagful of assertions to check the regularity of your data layout. C:\SO>type words.py # sample pseudo-file contents guff = """\ firstword word word word wordx word word word interesting1-1 word word word word wordy word word word wordz word word word interesting2-1 word word word lastword miscellaneous rubbish firstword word word word wordx word word word interesting1-2 word word word word wordy word word word wordz word word word interesting2-2 word word word lastword firstword word word word wordx word word word interesting1-3 word word word word wordy word word word wordz word word word interesting2-3 word word word lastword """ # change the RHS of each of these to reflect reality FIRSTWORD = 'firstword' WORDX = 'wordx' WORDY = 'wordy' WORDZ = 'wordz' LASTWORD = 'lastword' from StringIO import StringIO f = StringIO(guff) while True: a = f.readline() if not a: break # end of file a = a.split() if not a: continue # empty line if a[0] != FIRSTWORD: continue # skip extraneous matter assert len(a) == 4 b = f.readline().split(); assert len(b) == 9 c = f.readline().split(); assert len(c) == 4 d = f.readline().split(); assert len(d) == 9 assert a[0] == FIRSTWORD assert b[0] == WORDX assert c[0] == WORDY assert d[0] == WORDZ assert d[-1] == LASTWORD print b[4], d[4] C:\SO>\python26\python words.py interesting1-1 interesting2-1 interesting1-2 interesting2-2 interesting1-3 interesting2-3 C:\SO>
Python - Reading multiple lines into list
OK guys/gals stuck again on something simple I have a text file which has multiple lines per entry, the data is in the following format firstword word word word wordx word word word interesting1 word word word word wordy word word word wordz word word word interesting2 word word word lastword this sequence repeats a hundred or so times, all other words are the same apart from interesting1 and interesting2, no blank lines. The interesting2 is pertinent to interesting1 but not to anything else and I want to link the two interesting items together, discarding the rest such as interesting1 = interesting2 interesting1 = interesting2 interesting1 = interesting2 etc, 1 line per sequence. Each line begins with a different word. My attempt was to read the file and do an "if wordx in line" statement to identify the first interesting line, slice out the value, find the second line, ("if wordz in line") slice out the value and concatenate the second with the first. It's clumsy though, I had to use global variables, temp variables etc, and I'm sure there must be a way of identifying the range between firstword and lastword and placing that into a single list, then slicing both values out together. Any suggestions gratefully acknowledged, thanks for your time
[ "from itertools import izip, tee, islice\n\ni1, i2 = tee(open(\"foo.txt\"))\n\nfor line2, line4 in izip(islice(i1,1, None, 4), islice(i2, 3, None, 4)) :\n print line2.split(\" \")[4], \"=\", line4.split(\" \")[4]\n\n", "In that case, make a regexp that matches the repeating text, and has groups for the interesting bits. Then you should be able to use findall to find all cases of interesting1 and interesting2.\nLike so:\n import re\ntext = open(\"foo.txt\").read()\nRE = re.compile('firstword.*?wordx word word word (.*?) word.*?wordz word word word (.*?) word', re.DOTALL)\nprint RE.findall(text)\n\nAlthough as mentioned in the comments, the islice is definitely a neater solution.\n", "I've thrown in a bagful of assertions to check the regularity of your data layout.\nC:\\SO>type words.py\n\n# sample pseudo-file contents\nguff = \"\"\"\\\nfirstword word word word\nwordx word word word interesting1-1 word word word word\nwordy word word word\nwordz word word word interesting2-1 word word word lastword\n\nmiscellaneous rubbish\n\nfirstword word word word\nwordx word word word interesting1-2 word word word word\nwordy word word word\nwordz word word word interesting2-2 word word word lastword\nfirstword word word word\nwordx word word word interesting1-3 word word word word\nwordy word word word\nwordz word word word interesting2-3 word word word lastword\n\n\"\"\"\n\n# change the RHS of each of these to reflect reality\nFIRSTWORD = 'firstword'\nWORDX = 'wordx'\nWORDY = 'wordy'\nWORDZ = 'wordz'\nLASTWORD = 'lastword'\n\nfrom StringIO import StringIO\nf = StringIO(guff)\n\nwhile True:\n a = f.readline()\n if not a: break # end of file\n a = a.split()\n if not a: continue # empty line\n if a[0] != FIRSTWORD: continue # skip extraneous matter\n assert len(a) == 4\n b = f.readline().split(); assert len(b) == 9\n c = f.readline().split(); assert len(c) == 4\n d = f.readline().split(); assert len(d) == 9\n assert a[0] == FIRSTWORD\n assert b[0] == WORDX\n assert c[0] == WORDY\n assert d[0] == WORDZ\n assert d[-1] == LASTWORD\n print b[4], d[4]\n\nC:\\SO>\\python26\\python words.py\ninteresting1-1 interesting2-1\ninteresting1-2 interesting2-2\ninteresting1-3 interesting2-3\n\nC:\\SO>\n\n" ]
[ 6, 0, 0 ]
[]
[]
[ "line", "parsing", "python", "text" ]
stackoverflow_0001062171_line_parsing_python_text.txt
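Another way to look at it, as a sketch: read the file in fixed four-line records, assuming (as the answers above do) that the records are complete with no blank lines and that the interesting values are the fifth whitespace-separated token of the second and fourth lines:

f = open('foo.txt')
while True:
    block = [f.readline() for _ in range(4)]
    if not block[0]:
        break                       # end of file
    first = block[1].split()[4]     # interesting1 from the wordx line
    second = block[3].split()[4]    # interesting2 from the wordz line
    print '%s = %s' % (first, second)
f.close()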
Q: Python Script Executed with Makefile I am writing python scripts and execute them in a Makefile. The python script is used to process data in a pipeline. I would like Makefile to execute the script every time I make a change to my python scripts. Does anyone have an idea of how to do this? A: That's not a lot of information, so this answer is a bit vague. The basic principle of Makefiles is to list dependencies for each target; in this case, your target (let's call it foo) depends on your python script (let's call it do-foo.py): foo: do-foo.py python do-foo.py > foo Now foo will be rerun whenever do-foo.py changes (provided, of course, you call make). A: And in case when the scripts that need to be run don't produce any useful output file that can be used as a target, you can just use a dummy target: scripts=a.py b.py c.py checkfile=.pipeline_up_to_date $(checkfile): $(scripts) touch $(checkfile) echo "Launching some commands now." default: $(checkfile) A: If you want that Makefile to be automatically "maked" immediately after saving, pyinotify, which is a wrapper for inotify, might be the only possibility under Linux. It registers at the kernel to detect FS changes and calls back your function. See my previous post on that topic.
Python Script Executed with Makefile
I am writing python scripts and executing them in a Makefile. The python script is used to process data in a pipeline. I would like the Makefile to execute the script every time I make a change to my python scripts. Does anyone have an idea of how to do this?
[ "That's not a lot of information, so this answer is a bit vague. The basic principle of Makefiles is to list dependencies for each target; in this case, your target (let's call it foo) depends on your python script (let's call it do-foo.py):\nfoo: do-foo.py\n python do-foo.py > foo\n\nNow foo will be rerun whenever do-foo.py changes (provided, of course, you call make).\n", "And in case when the scripts that need to be run don't produce any useful output file that can be used as a target, you can just use a dummy target:\nscripts=a.py b.py c.py\ncheckfile=.pipeline_up_to_date\n\n$(checkfile): $(scripts)\n touch $(checkfile)\n echo \"Launching some commands now.\"\n\ndefault: $(checkfile)\n\n", "If you want that Makefile to be automatically \"maked\" immediately after saving, pyinotify, which is a wrapper for inotify, might be the only possibility under Linux. It registers at the kernel to detect FS changes and calls back your function.\nSee my previous post on that topic.\n" ]
[ 21, 4, 0 ]
[]
[]
[ "makefile", "python" ]
stackoverflow_0001062436_makefile_python.txt
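Putting the two answers together, a hypothetical pipeline Makefile might look like this (all file names are placeholders; note that recipe lines must be indented with a tab):

output.dat: process.py input.dat
	python process.py input.dat > output.dat

.PHONY: all
all: output.dat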
Q: Python win32 com : how to handle 'out' parameter? I need to access a third-party COM server with following interface definition (idl): interface IDisplay : IDispatch { HRESULT getFramebuffer ( [in] ULONG aScreenId, [out] IFramebuffer * * aFramebuffer, [out] LONG * aXOrigin, [out] LONG * aYOrigin ); }; As you can see, it returns 3 values via [out] parameter modificators. How to handle this via python win32 COM api? For example, i create an object and get IDisplay from it: object = win32com.client.Dispatch( "VirtualBox.VirtualBox" ) display = object.display How to call display.getFrameBuffer() so it will work? I have tried different ways, but it's always 'type mismatch' on second argument ([out] for IFrameBuffer) A: Since those are out parameters, can't you simply do the following? Framebuffer, XOrigin, YOrigin = display.getFrameBuffer(ScreenId) There is some good references in Python Programming on Win32 Chapter 12 Advanced Python and COM And they indicate that the syntax should be like above. They also mention using MakePy for COM objects: There are a number of good reasons to use MakePy: (copied from the book) The Python interface to automation objects is faster for objects supported by a MakePy module. Any constants defined by the type library are made available to the Python program. We discuss COM constants in more detail later in the chapter. There is much better support for advanced parameter types, specifically, parameters declared by COM as BYREF can be used only with MakePy-supported objects. We discuss passing parameters later in the chapter. A: Use the makepy module, invoking it as follows: >>> import win32com.client.makepy as makepy >>> makepy.main() A window will open with a list of type libraries. Scroll to "Virtual Box Type Library" and select it, then click "OK". A Python module will be created in a location which is printed out (typically %TEMP%\gen_py\2.x\). The generated class will automatically be used by win32com.client.Dispatch, but if you need it explicitly you can access it via functions in the win32com.client.gencache module.
Python win32 com : how to handle 'out' parameter?
I need to access a third-party COM server with the following interface definition (idl): interface IDisplay : IDispatch { HRESULT getFramebuffer ( [in] ULONG aScreenId, [out] IFramebuffer * * aFramebuffer, [out] LONG * aXOrigin, [out] LONG * aYOrigin ); }; As you can see, it returns 3 values via [out] parameter modifiers. How do I handle this via the python win32 COM api? For example, I create an object and get IDisplay from it: object = win32com.client.Dispatch( "VirtualBox.VirtualBox" ) display = object.display How do I call display.getFrameBuffer() so it will work? I have tried different ways, but it's always 'type mismatch' on the second argument ([out] for IFrameBuffer)
[ "Since those are out parameters, can't you simply do the following?\nFramebuffer, XOrigin, YOrigin = display.getFrameBuffer(ScreenId)\n\nThere is some good references in Python Programming on Win32 Chapter 12 Advanced Python and COM\nAnd they indicate that the syntax should be like above. They also mention using \nMakePy for COM objects:\nThere are a number of good reasons to use MakePy: (copied from the book)\n\nThe Python interface to automation objects is faster for objects supported by a MakePy module.\nAny constants defined by the type library are made available to the Python program. We discuss COM constants in more detail later in the chapter.\nThere is much better support for advanced parameter types, specifically, parameters declared by COM as BYREF can be used only with MakePy-supported objects. We discuss passing parameters later in the chapter.\n\n", "Use the makepy module, invoking it as follows:\n>>> import win32com.client.makepy as makepy\n>>> makepy.main()\n\nA window will open with a list of type libraries. Scroll to \"Virtual Box Type Library\" and select it, then click \"OK\". A Python module will be created in a location which is printed out (typically %TEMP%\\gen_py\\2.x\\).\nThe generated class will automatically be used by win32com.client.Dispatch, but if you need it explicitly you can access it via functions in the win32com.client.gencache module.\n" ]
[ 8, 3 ]
[]
[]
[ "com", "python" ]
stackoverflow_0001062129_com_python.txt
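A shortcut worth knowing: win32com.client.gencache.EnsureDispatch runs the makepy generation step automatically, without the GUI. A sketch (the GetFramebuffer line is left commented out because obtaining an IDisplay instance needs more VirtualBox-specific setup than the question shows):

from win32com.client import gencache

# Builds (and caches) the makepy wrapper for the type library on first use
vbox = gencache.EnsureDispatch("VirtualBox.VirtualBox")

# With a makepy-wrapped IDisplay object, the three [out] parameters
# come back as extra return values:
# framebuffer, x_origin, y_origin = display.GetFramebuffer(0)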
Q: Is registered atexit handler inherited by spawned child processes? I am writing a daemon program using python 2.5. In the main process an exit handler is registered with atexit module, it seems that the handler gets called when each child process ends, which is not I expected. I noticed this behavior isn't mentioned in python atexit doc, anybody knows the issue? If this is how it should behave, how can I unregister the exit handler in children processes? There is a atexit.unregister in version 3.0, but I am using 2.5. A: When you fork to make a child process, that child is an exact copy of the parent -- including of course registered exit functions as well as all other code and data structures. I believe that's the issue you're observing -- of course it's not mentioned in each and every module, because it necessarily applies to every single one. A: There isn't an API to do it in Python 2.5, but you can just: import atexit atexit._exithandlers = [] in your child processes - if you know you only have one exit handler installed, and that no other handlers are installed. However, be aware that some parts of the stdlib (e.g. logging) register atexit handlers. To avoid trampling on them, you could try: my_handler_entries = [e for e in atexit._exithandlers if e[0] == my_handler_func] for e in my_handler_entries: atexit._exithandlers.remove(e) where my_handler_func is the atexit handler you registered, and this should remove your entry without removing the others. A: atexit.register() basically registers your function in atexit._exithandlers, which is a module private list of functions called by sys.exitfunc(). You can set exitfunc() to your custom made exit handler function, which then checks for child status or simply unregisters it. What about just copying the 3.0 atexit.py to your local source tree and using that instead? EDIT: I copied the atexit.py from my 2.6 version and extended it by def unregister(func, *targs, **kargs): _exithandlers.remove((func, targs, kargs)) If you take that instead of your original version it should work. I have not tested it with subprocesses, though.
Is registered atexit handler inherited by spawned child processes?
I am writing a daemon program using python 2.5. In the main process an exit handler is registered with the atexit module. It seems that the handler gets called when each child process ends, which is not what I expected. I noticed this behavior isn't mentioned in the python atexit doc; does anybody know the issue? If this is how it should behave, how can I unregister the exit handler in child processes? There is an atexit.unregister in version 3.0, but I am using 2.5.
[ "When you fork to make a child process, that child is an exact copy of the parent -- including of course registered exit functions as well as all other code and data structures. I believe that's the issue you're observing -- of course it's not mentioned in each and every module, because it necessarily applies to every single one.\n", "There isn't an API to do it in Python 2.5, but you can just:\nimport atexit\natexit._exithandlers = []\n\nin your child processes - if you know you only have one exit handler installed, and that no other handlers are installed. However, be aware that some parts of the stdlib (e.g. logging) register atexit handlers. To avoid trampling on them, you could try:\nmy_handler_entries = [e for e in atexit._exithandlers if e[0] == my_handler_func]\nfor e in my_handler_entries:\n atexit._exithandlers.remove(e)\n\nwhere my_handler_func is the atexit handler you registered, and this should remove your entry without removing the others.\n", "atexit.register() basically registers your function in atexit._exithandlers, which is a module private list of functions called by sys.exitfunc(). You can set exitfunc() to your custom made exit handler function, which then checks for child status or simply unregisters it. What about just copying the 3.0 atexit.py to your local source tree and using that instead?\nEDIT: I copied the atexit.py from my 2.6 version and extended it by\ndef unregister(func, *targs, **kargs):\n _exithandlers.remove((func, targs, kargs))\n\nIf you take that instead of your original version it should work. I have not tested it with subprocesses, though.\n" ]
[ 4, 3, 1 ]
[]
[]
[ "atexit", "multiprocessing", "python" ]
stackoverflow_0001052716_atexit_multiprocessing_python.txt
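A small demonstration of both the inheritance and the workaround, relying on the private _exithandlers list as suggested above (Unix only, since it uses os.fork):

import atexit, os, sys

def on_exit():
    print 'exit handler ran in pid %d' % os.getpid()

atexit.register(on_exit)

pid = os.fork()
if pid == 0:
    # The child inherited the registered handler; drop it so that
    # only the parent reports on exit.
    atexit._exithandlers[:] = []
    sys.exit(0)

os.waitpid(pid, 0)
# Only the parent's exit prints the message now.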
Q: Calling a non-returning python function from a python script I want to call a wrapped C++ function from a python script which is not returning immediately (in detail: it is a function which starts a QApplication window and the last line in that function is QApplication->exec()). So after that function call I want to move on to my next line in the python script but on executing this script and the previous line it hangs forever. In contrast when I manually type my script line for line in the python command line I can go on to my next line after pressing enter a second time on the non-returning function call line. So how to solve the issue when executing the script? Thanks!! Edit: My python interpreter is embedded in an application. I want to write an extension for this application as a separate Qt4 window. All the python stuff is only for make my graphical plugin accessible per script (per boost.python wrapping). My python script: import imp import os Plugin = imp.load_dynamic('Plugin', os.getcwd() + 'Plugin.dll') qt = Plugin.StartQt4() # it hangs here when executing as script pl = PluginCPP.PluginCPP() # Creates a QMainWindow pl.ShowWindow() # shows the window The C++ code for the Qt start function looks like this: class StartQt4 { public: StartQt4() { int i = 0; QApplication* qapp = new QApplication(i, NULL); qapp->exec(); } }; A: Use a thread (longer example here): from threading import Thread class WindowThread(Thread): def run(self): callCppFunctionHere() WindowThread().start() A: QApplication::exec() starts the main loop of the application and will only return after the application quits. If you want to run code after the application has been started, you should resort to Qt's event handling mechanism. From http://doc.trolltech.com/4.5/qapplication.html#exec : To make your application perform idle processing, i.e. executing a special function whenever there are no pending events, use a QTimer with 0 timeout. More advanced idle processing schemes can be achieved using processEvents(). A: I assume you're already using PyQT?
Calling a non-returning python function from a python script
I want to call a wrapped C++ function from a python script which does not return immediately (in detail: it is a function which starts a QApplication window and the last line in that function is QApplication->exec()). So after that function call I want to move on to my next line in the python script, but when executing this script it hangs forever on that line. In contrast, when I manually type my script line by line in the python command line I can go on to my next line after pressing enter a second time on the non-returning function call line. So how do I solve the issue when executing the script? Thanks!! Edit: My python interpreter is embedded in an application. I want to write an extension for this application as a separate Qt4 window. All the python stuff is only there to make my graphical plugin accessible from a script (via boost.python wrapping). My python script: import imp import os Plugin = imp.load_dynamic('Plugin', os.getcwd() + 'Plugin.dll') qt = Plugin.StartQt4() # it hangs here when executing as script pl = PluginCPP.PluginCPP() # Creates a QMainWindow pl.ShowWindow() # shows the window The C++ code for the Qt start function looks like this: class StartQt4 { public: StartQt4() { int i = 0; QApplication* qapp = new QApplication(i, NULL); qapp->exec(); } };
[ "Use a thread (longer example here):\nfrom threading import Thread\n\nclass WindowThread(Thread):\n def run(self):\n callCppFunctionHere()\n\nWindowThread().start()\n\n", "QApplication::exec() starts the main loop of the application and will only return after the application quits. If you want to run code after the application has been started, you should resort to Qt's event handling mechanism.\nFrom http://doc.trolltech.com/4.5/qapplication.html#exec :\n\nTo make your application perform idle\n processing, i.e. executing a special\n function whenever there are no pending\n events, use a QTimer with 0 timeout.\n More advanced idle processing schemes\n can be achieved using processEvents().\n\n", "I assume you're already using PyQT?\n" ]
[ 2, 1, 0 ]
[]
[]
[ "c++", "function", "python", "scripting" ]
stackoverflow_0001062562_c++_function_python_scripting.txt
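Structurally, the usual fix is to keep QApplication::exec() out of the constructor and call it as the very last line of the script, once everything is created. A sketch with hypothetical wrapper names (StartQt4NoExec and Exec do not exist in the question's plugin; they stand for a split of the existing StartQt4, and PluginCPP comes from the host application as in the question):

import imp, os
Plugin = imp.load_dynamic('Plugin', os.getcwd() + 'Plugin.dll')

qt = Plugin.StartQt4NoExec()  # hypothetical: creates QApplication only
pl = PluginCPP.PluginCPP()    # creates the QMainWindow
pl.ShowWindow()
Plugin.Exec()                 # hypothetical: runs qapp->exec() last;
                              # blocks until the window is closed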
Q: Python Encryption: Encrypting password using PGP public key I have the key pair generated by the GPG. Now I want to use the public key for encrypting the password. I need to make a function in Python. Can somebody guide me on how to do this? I studied the Crypto package but was unable to find out how to encrypt the password using the public key. I also read about the chilkat Python encryption library, but it is not giving the desired output. Maybe I don't how to use this library at the SSH secure shell client. Please guide me. Thanks A: Have a look at PyGPGME A: See also the answers provided to the following questions found in this search Python Encryption questions at Stack Overflow : Python and PGP/encryption Encrypt a string using a public key How to do PGP in Python (generate keys, encrypt/decrypt)
Python Encryption: Encrypting password using PGP public key
I have the key pair generated by GPG. Now I want to use the public key for encrypting the password. I need to make a function in Python. Can somebody guide me on how to do this? I studied the Crypto package but was unable to find out how to encrypt the password using the public key. I also read about the chilkat Python encryption library, but it is not giving the desired output. Maybe I don't know how to use this library from the SSH secure shell client. Please guide me. Thanks
[ "Have a look at PyGPGME\n", "See also the answers provided to the following questions found in this search Python Encryption questions at Stack Overflow :\n\nPython and PGP/encryption\nEncrypt a string using a public key\nHow to do PGP in Python (generate keys, encrypt/decrypt)\n\n" ]
[ 2, 0 ]
[]
[]
[ "cryptography", "encryption", "gnupg", "python" ]
stackoverflow_0001063014_cryptography_encryption_gnupg_python.txt
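If shelling out is acceptable, here is a minimal sketch that drives the gpg binary through subprocess. The recipient is a placeholder and their public key must already be in the local keyring; untrusted keys may additionally need --trust-model always:

import subprocess

def encrypt_with_public_key(recipient, plaintext):
    # gpg reads the plaintext from stdin and writes ASCII-armored
    # ciphertext to stdout
    p = subprocess.Popen(
        ['gpg', '--batch', '--encrypt', '--armor', '--recipient', recipient],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    out, _ = p.communicate(plaintext)
    return out

print encrypt_with_public_key('someone@example.com', 'the password')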
Q: How to code a Download/Upload Speed Monitor in PHP,Python, or Java? I have to code a up/download speed monitor. It will obtain the current download and upload transfer speed of the computer which it has been installed and post it to another server periodically. c But I don't have an idea about how to catch instant transfer rates of a computer. As you know some of network monitoring programs can trace it but I could not find anything written in PHP, Python or Java? A: You don't say which operating system you're interested in. A quick google turned up this: http://excess.org/speedometer/ "Measure and display the rate of data across a network connection or data being stored in a file" Opensource, written in Python A: JPCAP (a java packet capture library-sniffer) is suitable for this job and I've done it.
How to code a Download/Upload Speed Monitor in PHP,Python, or Java?
I have to code an up/download speed monitor. It will obtain the current download and upload transfer speed of the computer on which it has been installed and post it to another server periodically. But I don't have any idea how to capture the instantaneous transfer rates of a computer. As you know, some network monitoring programs can trace them, but I could not find anything written in PHP, Python or Java.
[ "You don't say which operating system you're interested in.\nA quick google turned up this: http://excess.org/speedometer/\n\"Measure and display the rate of data across a network connection or data being stored in a file\"\nOpensource, written in Python\n", "JPCAP (a java packet capture library-sniffer) is suitable for this job and I've done it.\n" ]
[ 3, 1 ]
[]
[]
[ "java", "networking", "php", "python" ]
stackoverflow_0001057449_java_networking_php_python.txt
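For the Python case on Linux, one low-tech approach is sampling the byte counters in /proc/net/dev and differencing them over time; a sketch ('eth0' is a placeholder interface name):

import time

def byte_counters(interface):
    # /proc/net/dev: after 'iface:', field 0 is RX bytes, field 8 is TX bytes
    for line in open('/proc/net/dev'):
        if line.strip().startswith(interface + ':'):
            fields = line.split(':', 1)[1].split()
            return int(fields[0]), int(fields[8])
    raise ValueError('interface %s not found' % interface)

rx1, tx1 = byte_counters('eth0')
time.sleep(1.0)
rx2, tx2 = byte_counters('eth0')
print 'download: %d B/s, upload: %d B/s' % (rx2 - rx1, tx2 - tx1)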
Q: How can I make a list of files, modification dates and paths? I have directory with subdirectories and I have to make a list like: file_name1 modification_date1 path1 file_name2 modification_date2 path2 and write the list into text file how can i do it in python? A: For traversing the subdirectories, use os.walk(). For getting modification date, use os.stat() The modification time will be a timestamp counting seconds from epoch, there are various methods in the time module that help you convert those to something easier to use. A: import os import time for root, dirs, files in os.walk('your_root_directory'): for f in files: modification_time_seconds = os.stat(os.path.join(root, f)).st_mtime local_mod_time = time.localtime(modification_time_seconds) print '%s %s.%s.%s %s' % (f, local_mod_time.tm_mon, local_mod_time.tm_mday, local_mod_time.tm_year, root)
How can I make a list of files, modification dates and paths?
I have a directory with subdirectories and I have to make a list like: file_name1 modification_date1 path1 file_name2 modification_date2 path2 and write the list into a text file. How can I do it in Python?
[ "For traversing the subdirectories, use os.walk().\nFor getting modification date, use os.stat()\nThe modification time will be a timestamp counting seconds from epoch, there are various methods in the time module that help you convert those to something easier to use.\n", "import os\nimport time\n\nfor root, dirs, files in os.walk('your_root_directory'):\n for f in files:\n modification_time_seconds = os.stat(os.path.join(root, f)).st_mtime\n local_mod_time = time.localtime(modification_time_seconds)\n\n print '%s %s.%s.%s %s' % (f, local_mod_time.tm_mon, local_mod_time.tm_mday, local_mod_time.tm_year, root) \n\n" ]
[ 3, 3 ]
[]
[]
[ "python" ]
stackoverflow_0001063037_python.txt
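Combining the two answers into the requested text file, as a sketch (the root directory and output name are placeholders):

import os, time

out = open('filelist.txt', 'w')
for root, dirs, files in os.walk('your_root_directory'):
    for name in files:
        path = os.path.join(root, name)
        mtime = time.strftime('%Y-%m-%d %H:%M:%S',
                              time.localtime(os.stat(path).st_mtime))
        out.write('%s %s %s\n' % (name, mtime, root))
out.close()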
Q: Custom Python exception with different include paths Update: This is, as I was told, no principle Python related problem, but seems to be more specific. See below for more explanations to my problem. I have a custom exception (let's call it CustomException), that lives in a file named exceptions.py. Now imagine, that I can import this file via two paths: import application.exceptions or import some.application.exceptions with the same result. Furthermore I have no control over which way the module is imported in other modules. Now to show my problem: Assume that the function do_something comes from another module that imports exceptions.py in a way I don't know. If I do this: import application.exceptions try: do_something () except application.exceptions.CustomException: catch_me () it might work or not, depending on how the sub-module imported exceptions.py (which I do not know). Question: Is there a way to circumvent this problem, i.e., a name for the exception that will always be understood regardless of inclusion path? If not, what would be best practices to avoid these name clashes? Cheers, Update It is a Django app. some would be the name of the Django 'project', application the name of one Django app. My code with the try..except clause sits in another app, frontend, and lives there as a view in a file some/frontend/views.py. The PYTHONPATH is clean, that is, from my project only /path/to/project is in the path. In the frontend/views.py I import the exceptions.py via import application.exceptions, which seems to work. (Now, in retrospective, I don't know exactly, why it works...) The exception is raised in the exceptions.py file itself. Update 2 It might be interesting for some readers, that I finally found the place, where imports went wrong. The sys.path didn't show any suspect irregularities. My Django project lay in /var/www/django/project. I had installed the apps app1 and app2, but noted them in the settings.py as INSTALLED_APPS = [ 'project.app1', 'project.app2', ] The additional project. was the culprit for messing up sys.modules. Rewriting the settings to INSTALLED_APPS = [ 'app1', 'app2', ] solved the problem. A: Why that would be a problem? exception would me matched based on class type and it would be same however it is imported e.g. import exceptions l=[] try: l[1] except exceptions.IndexError,e: print e try: l[1] except IndexError,e: print e both catch the same exception you can even assign it to a new name, though not recommended usually import os os.myerror = exceptions.IndexError try: l[1] except os.myerror,e: print e A: "If not, what would be best practices to avoid these name clashes?" That depends entirely on why they happen. In a normal installation, you can not import from both application.exceptions and somepath.application.exceptions, unless the first case is a relative path from within the module somepath. And in that case Python will understand that the modules are the same, and you won't have a problem. You are unclear on if you really have a problem or if it's theory. If you do have a problem, I'd guess that there is something fishy with your PYTHONPATH. Maybe both a directory and it's subdirectory is in the PATH? A: Even if the same module is imported several times and in different ways, the CustomException class is still the same object, so it doesn't matter how you refer to it. A: I don't know if there is a way to handle this inclusion path issue. 
My suggestion would be to use the 'as' keyword in your import Something like: import some.application.exceptions as my_exceptions or import application.exceptions as my_exceptions
Custom Python exception with different include paths
Update: This is, as I was told, not a Python problem in principle, but seems to be more specific. See below for more explanation of my problem. I have a custom exception (let's call it CustomException) that lives in a file named exceptions.py. Now imagine that I can import this file via two paths: import application.exceptions or import some.application.exceptions with the same result. Furthermore I have no control over which way the module is imported in other modules. Now to show my problem: Assume that the function do_something comes from another module that imports exceptions.py in a way I don't know. If I do this: import application.exceptions try: do_something () except application.exceptions.CustomException: catch_me () it might work or not, depending on how the sub-module imported exceptions.py (which I do not know). Question: Is there a way to circumvent this problem, i.e., a name for the exception that will always be understood regardless of inclusion path? If not, what would be best practices to avoid these name clashes? Cheers, Update It is a Django app. some would be the name of the Django 'project', application the name of one Django app. My code with the try..except clause sits in another app, frontend, and lives there as a view in a file some/frontend/views.py. The PYTHONPATH is clean, that is, from my project only /path/to/project is in the path. In the frontend/views.py I import the exceptions.py via import application.exceptions, which seems to work. (Now, in retrospect, I don't know exactly why it works...) The exception is raised in the exceptions.py file itself. Update 2 It might be interesting for some readers that I finally found the place where the imports went wrong. The sys.path didn't show any suspect irregularities. My Django project lay in /var/www/django/project. I had installed the apps app1 and app2, but noted them in the settings.py as INSTALLED_APPS = [ 'project.app1', 'project.app2', ] The additional project. was the culprit for messing up sys.modules. Rewriting the settings to INSTALLED_APPS = [ 'app1', 'app2', ] solved the problem.
[ "Why that would be a problem? exception would me matched based on class type and it would be same however it is imported e.g.\nimport exceptions\nl=[]\ntry:\n l[1]\nexcept exceptions.IndexError,e:\n print e\n\ntry:\n l[1]\nexcept IndexError,e:\n print e\n\nboth catch the same exception\nyou can even assign it to a new name, though not recommended usually\nimport os\nos.myerror = exceptions.IndexError\ntry:\n l[1]\nexcept os.myerror,e:\n print e\n\n", "\"If not, what would be best practices to avoid these name clashes?\"\nThat depends entirely on why they happen. In a normal installation, you can not import from both application.exceptions and somepath.application.exceptions, unless the first case is a relative path from within the module somepath. And in that case Python will understand that the modules are the same, and you won't have a problem.\nYou are unclear on if you really have a problem or if it's theory. If you do have a problem, I'd guess that there is something fishy with your PYTHONPATH. Maybe both a directory and it's subdirectory is in the PATH?\n", "Even if the same module is imported several times and in different ways, the CustomException class is still the same object, so it doesn't matter how you refer to it.\n", "I don't know if there is a way to handle this inclusion path issue.\nMy suggestion would be to use the 'as' keyword in your import\nSomething like:\nimport some.application.exceptions as my_exceptions\nor \nimport application.exceptions as my_exceptions\n" ]
[ 1, 1, 0, 0 ]
[]
[]
[ "django", "exception", "python" ]
stackoverflow_0001063228_django_exception_python.txt
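The INSTALLED_APPS problem from Update 2 can be reproduced outside Django. The following self-contained sketch builds a throwaway package that is importable under two names and shows that the two imports yield two distinct module objects, and therefore two distinct exception classes:

import os, sys, tempfile

base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, 'proj', 'app'))
for d in ('proj', os.path.join('proj', 'app')):
    open(os.path.join(base, d, '__init__.py'), 'w').close()
f = open(os.path.join(base, 'proj', 'app', 'exc.py'), 'w')
f.write('class CustomException(Exception): pass\n')
f.close()

sys.path.insert(0, base)                        # makes 'proj.app.exc' importable
sys.path.insert(0, os.path.join(base, 'proj'))  # makes 'app.exc' importable too

import proj.app.exc as e1
import app.exc as e2
print e1.CustomException is e2.CustomException  # False: two module objects

try:
    raise e1.CustomException()
except e2.CustomException:
    print 'caught via app.exc'                  # never happens
except e1.CustomException:
    print 'caught only via proj.app.exc'        # this branch runs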
Q: KenKen puzzle addends: REDUX A (corrected) non-recursive algorithm This question relates to those parts of the KenKen Latin Square puzzles which ask you to find all possible combinations of ncells numbers with values x such that 1 <= x <= maxval and x(1) + ... + x(ncells) = targetsum. Having tested several of the more promising answers, I'm going to award the answer-prize to Lennart Regebro, because: his routine is as fast as mine (+-5%), and he pointed out that my original routine had a bug somewhere, which led me to see what it was really trying to do. Thanks, Lennart. chrispy contributed an algorithm that seems equivalent to Lennart's, but 5 hrs later, sooo, first to the wire gets it. A remark: Alex Martelli's bare-bones recursive algorithm is an example of making every possible combination and throwing them all at a sieve and seeing which go through the holes. This approach takes 20+ times longer than Lennart's or mine. (Jack up the input to max_val = 100, n_cells = 5, target_sum = 250 and on my box it's 18 secs vs. 8+ mins.) Moral: Not generating every possible combination is good. Another remark: Lennart's and my routines generate the same answers in the same order. Are they in fact the same algorithm seen from different angles? I don't know. Something occurs to me. If you sort the answers, starting, say, with (8,8,2,1,1) and ending with (4,4,4,4,4) (what you get with max_val=8, n_cells=5, target_sum=20), the series forms kind of a "slowest descent", with the first ones being "hot" and the last one being "cold" and the greatest possible number of stages in between. Is this related to "informational entropy"? What's the proper metric for looking at it? Is there an algorithm that producs the combinations in descending (or ascending) order of heat? (This one doesn't, as far as I can see, although it's close over short stretches, looking at normalized std. dev.) Here's the Python routine: #!/usr/bin/env python #filename: makeAddCombos.07.py -- stripped for StackOverflow def initialize_combo( max_val, n_cells, target_sum): """returns combo Starting from left, fills combo to max_val or an intermediate value from 1 up. E.g.: Given max_val = 5, n_cells=4, target_sum = 11, creates [5,4,1,1]. """ combo = [] #Put 1 in each cell. combo += [1] * n_cells need = target_sum - sum(combo) #Fill as many cells as possible to max_val. n_full_cells = need //(max_val - 1) top_up = max_val - 1 for i in range( n_full_cells): combo[i] += top_up need = target_sum - sum(combo) # Then add the rest to next item. if need > 0: combo[n_full_cells] += need return combo #def initialize_combo() def scrunch_left( combo): """returns (new_combo,done) done Boolean; if True, ignore new_combo, all done; if Falso, new_combo is valid. Starts a new combo list. Scanning from right to left, looks for first element at least 2 greater than right-end element. If one is found, decrements it, then scrunches all available counts on its right up against its right-hand side. Returns the modified combo. If none found, (that is, either no step or single step of 1), process done. 
""" new_combo = [] right_end = combo[-1] length = len(combo) c_range = range(length-1, -1, -1) found_step_gt_1 = False for index in c_range: value = combo[index] if (value - right_end) > 1: found_step_gt_1 = True break if not found_step_gt_1: return ( new_combo,True) if index > 0: new_combo += combo[:index] ceil = combo[index] - 1 new_combo += [ceil] new_combo += [1] * ((length - 1) - index) need = sum(combo[index:]) - sum(new_combo[index:]) fill_height = ceil - 1 ndivf = need // fill_height nmodf = need % fill_height if ndivf > 0: for j in range(index + 1, index + ndivf + 1): new_combo[j] += fill_height if nmodf > 0: new_combo[index + ndivf + 1] += nmodf return (new_combo, False) #def scrunch_left() def make_combos_n_cells_ge_two( combos, max_val, n_cells, target_sum): """ Build combos, list of tuples of 2 or more addends. """ combo = initialize_combo( max_val, n_cells, target_sum) combos.append( tuple( combo)) while True: (combo, done) = scrunch_left( combo) if done: break else: combos.append( tuple( combo)) return combos #def make_combos_n_cells_ge_two() if __name__ == '__main__': combos = [] max_val = 8 n_cells = 5 target_sum = 20 if n_cells == 1: combos.append( (target_sum,)) else: combos = make_combos_n_cells_ge_two( combos, max_val, n_cells, target_sum) import pprint pprint.pprint( combos) A: Your algorithm seems pretty good at first blush, and I don't think OO or another language would improve the code. I can't say if recursion would have helped but I admire the non-recursive approach. I bet it was harder to get working and it's harder to read but it likely is more efficient and it's definitely quite clever. To be honest I didn't analyze the algorithm in detail but it certainly looks like something that took a long while to get working correctly. I bet there were lots of off-by-1 errors and weird edge cases you had to think through, eh? Given all that, basically all I tried to do was pretty up your code as best I could by replacing the numerous C-isms with more idiomatic Python-isms. Often times what requires a loop in C can be done in one line in Python. Also I tried to rename things to follow Python naming conventions better and cleaned up the comments a bit. Hope I don't offend you with any of my changes. You can take what you want and leave the rest. :-) Here are the notes I took as I worked: Changed the code that initializes tmp to a bunch of 1's to the more idiomatic tmp = [1] * n_cells. Changed for loop that sums up tmp_sum to idiomatic sum(tmp). Then replaced all the loops with a tmp = <list> + <list> one-liner. Moved raise doneException to init_tmp_new_ceiling and got rid of the succeeded flag. The check in init_tmp_new_ceiling actually seems unnecessary. Removing it, the only raises left were in make_combos_n_cells, so I just changed those to regular returns and dropped doneException entirely. Normalized mix of 4 spaces and 8 spaces for indentation. Removed unnecessary parentheses around your if conditions. tmp[p2] - tmp[p1] == 0 is the same thing as tmp[p2] == tmp[p1]. Changed while True: if new_ceiling_flag: break to while not new_ceiling_flag. You don't need to initialize variables to 0 at the top of your functions. Removed combos list and changed function to yield its tuples as they are generated. Renamed tmp to combo. Renamed new_ceiling_flag to ceiling_changed. And here's the code for your perusal: def initial_combo(ceiling=5, target_sum=13, num_cells=4): """ Returns a list of possible addends, probably to be modified further. 
Starts a new combo list, then, starting from left, fills items to ceiling or intermediate between 1 and ceiling or just 1. E.g.: Given ceiling = 5, target_sum = 13, num_cells = 4: creates [5,5,2,1]. """ num_full_cells = (target_sum - num_cells) // (ceiling - 1) combo = [ceiling] * num_full_cells \ + [1] * (num_cells - num_full_cells) if num_cells > num_full_cells: combo[num_full_cells] += target_sum - sum(combo) return combo def all_combos(ceiling, target_sum, num_cells): # p0 points at the rightmost item and moves left under some conditions # p1 starts out at rightmost items and steps left # p2 starts out immediately to the left of p1 and steps left as p1 does # So, combo[p2] and combo[p1] always point at a pair of adjacent items. # d combo[p2] - combo[p1]; immediate difference # cd combo[p2] - combo[p0]; cumulative difference # The ceiling decreases by 1 each iteration. while True: combo = initial_combo(ceiling, target_sum, num_cells) yield tuple(combo) ceiling_changed = False # Generate all of the remaining combos with this ceiling. while not ceiling_changed: p2, p1, p0 = -2, -1, -1 while combo[p2] == combo[p1] and abs(p2) <= num_cells: # 3,3,3,3 if abs(p2) == num_cells: return p2 -= 1 p1 -= 1 p0 -= 1 cd = 0 # slide_ptrs_left loop while abs(p2) <= num_cells: d = combo[p2] - combo[p1] cd += d # 5,5,3,3 or 5,5,4,3 if cd > 1: if abs(p2) < num_cells: # 5,5,3,3 --> 5,4,4,3 if d > 1: combo[p2] -= 1 combo[p1] += 1 # d == 1; 5,5,4,3 --> 5,4,4,4 else: combo[p2] -= 1 combo[p0] += 1 yield tuple(combo) # abs(p2) == num_cells; 5,4,4,3 else: ceiling -= 1 ceiling_changed = True # Resume at make_combo_same_ceiling while # and follow branch. break # 4,3,3,3 or 4,4,3,3 elif cd == 1: if abs(p2) == num_cells: return p1 -= 1 p2 -= 1 if __name__ == '__main__': print list(all_combos(ceiling=6, target_sum=12, num_cells=4)) A: Here's the simplest recursive solution that I can think of to "find all possible combinations of n numbers with values x such that 1 <= x <= max_val and x(1) + ... + x(n) = target". I'm developing it from scratch. Here's a version without any optimization at all, just for simplicity: def apcnx(n, max_val, target, xsofar=(), sumsofar=0): if n==0: if sumsofar==target: yield xsofar return if xsofar: minx = xsofar[-1] - 1 else: minx = 0 for x in xrange(minx, max_val): for xposs in apcnx(n-1, max_val, target, xsofar + (x+1,), sumsofar+x+1): yield xposs for xs in apcnx(4, 6, 12): print xs The base case n==0 (where we can't yield any more numbers) either yield the tuple so far if it satisfies the condition, or nothing, then finishes (returns). If we're supposed to yield longer tuples than we've built so far, the if/else makes sure we only yield non-decreasing tuples, to avoid repetition (you did say "combination" rather than "permutation"). The for tries all possibilities for "this" item and loops over whatever the next-lower-down level of recursion is still able to yield. The output I see is: (1, 1, 4, 6) (1, 1, 5, 5) (1, 2, 3, 6) (1, 2, 4, 5) (1, 3, 3, 5) (1, 3, 4, 4) (2, 2, 2, 6) (2, 2, 3, 5) (2, 2, 4, 4) (2, 3, 3, 4) (3, 3, 3, 3) which seems correct. There are a bazillion possible optimizations, but, remember: First make it work, then make it fast I corresponded with Kent Beck to properly attribute this quote in "Python in a Nutshell", and he tells me he got it from his dad, whose job was actually unrelated to programming;-). 
In this case, it seems to me that the key issue is understanding what's going on, and any optimization might interfere, so I'm going all out for "simple and understandable"; we can, if need be!, optimize the socks off it once the OP confirms they can understand what's going on in this sheer, unoptimized version! A: First of all, I'd use variable names that mean something, so that the code gets comprehensible. Then, after I understood the problem, it's clearly a recursive problem, as once you have chosen one number, the question of finding the possible values for the rest of the squares are exactly the same problem, but with different values in. So I would do it like this: from __future__ import division from math import ceil def make_combos(max_val,target_sum,n_cells): combos = [] # The highest possible value of the next cell is whatever is # largest of the max_val, or the target_sum minus the number # of remaining cells (as you can't enter 0). highest = min(max_val, target_sum - n_cells + 1) # The lowest is the lowest number you can have that will add upp to # target_sum if you multiply it with n_cells. lowest = int(ceil(target_sum/n_cells)) for x in range(highest, lowest-1, -1): if n_cells == 1: # This is the last cell, no more recursion. combos.append((x,)) break # Recurse to get the next cell: # Set the max to x (or we'll get duplicates like # (6,3,2,1) and (6,2,3,1), which is pointless. # Reduce the target_sum with x to keep the sum correct. # Reduce the number of cells with 1. for combo in make_combos(x, target_sum-x, n_cells-1): combos.append((x,)+combo) return combos if __name__ == '__main__': import pprint # And by using pprint the output gets easier to read pprint.pprint(make_combos( 6,12,4)) I also notice that your solution still seems buggy. For the values max_val=8, target_sum=20 and n_cells=5 your code doesn't find the solution (8,6,4,1,1,), as an example. I'm not sure if that means I've missed a rule in this or not, but as I understand the rules that should be a valid option. Here's a version using generators, It saves a couple of lines, and memory if the values are really big, but as recursion, generators can be tricky to "get". from __future__ import division from math import ceil def make_combos(max_val,target_sum,n_cells): highest = min(max_val, target_sum - n_cells + 1) lowest = int(ceil(target_sum/n_cells)) for x in xrange(highest, lowest-1, -1): if n_cells == 1: yield (x,) break for combo in make_combos(x, target_sum-x, n_cells-1): yield (x,)+combo if __name__ == '__main__': import pprint pprint.pprint(list(make_combos( 6,12,4))) A: Sorry to say, your code is kind of long and not particularly readable. If you can try to summarize it somehow, maybe someone can help you write it more clearly. As for the problem itself, my first thought would be to use recursion. (For all I know, you're already doing that. Sorry again for my inability to read your code.) Think of a way that you can reduce the problem to a smaller easier version of the same problem, repeatedly, until you have a trivial case with a very simple answer. To be a bit more concrete, you have these three parameters, max_val, target_sum, and n_cells. Can you set one of those numbers to some particular value, in order to give you an extremely simple problem requiring no thought at all? Once you have that, can you reduce the slightly harder version of the problem to the already solved one? EDIT: Here is my code. I don't like the way it does de-duplication. I'm sure there's a more Pythonic way. 
Also, it disallows using the same number twice in one combination. To undo this behavior, just take out the line if n not in numlist:. I'm not sure if this is completely correct, but it seems to work and is (IMHO) more readable. You could easily add memoization and that would probably speed it up quite a bit. def get_combos(max_val, target, n_cells): if target <= 0: return [] if n_cells is 1: if target > max_val: return [] else: return [[target]] else: combos = [] for n in range(1, max_val+1, 1): for numlist in get_combos(max_val, target-n, n_cells-1): if n not in numlist: combos.append(numlist + [n]) return combos def deduplicate(combos): for numlist in combos: numlist.sort() answer = [tuple(numlist) for numlist in combos] return set(answer) def kenken(max_val, target, n_cells): return deduplicate(get_combos(max_val, target, n_cells)) A: First of all, I am learning Python myself so this solution won't be great but this is just an attempt at solving this. I have tried to solve it recursively and I think a recursive solution would be ideal for this kind of problem although THAT recursive solution might not be this one: def GetFactors(maxVal, noOfCells, targetSum): l = [] while(maxVal != 0): remCells = noOfCells - 1 if(remCells > 2): retList = GetFactors(maxVal, remCells, targetSum - maxVal) #Append the returned List to the original List #But first, add the maxVal to the start of every elem of returned list. for i in retList: i.insert(0, maxVal) l.extend(retList) else: remTotal = targetSum - maxVal for i in range(1, remTotal/2 + 1): itemToInsert = remTotal - i; if (i > maxVal or itemToInsert > maxVal): continue l.append([maxVal, i, remTotal - i]) maxVal -= 1 return l if __name__ == "__main__": l = GetFactors(5, 5, 15) print l A: Here a simple solution in C/C++: const int max = 6; int sol[N_CELLS]; void enum_solutions(int target, int n, int min) { if (target == 0 && n == 0) report_solution(); /* sol[0]..sol[N_CELLS-1] is a solution */ if (target <= 0 || n == 0) return; /* nothing further to explore */ sol[n - 1] = min; /* remember */ for (int i = min; i <= max; i++) enum_solutions(target - i, n - 1, i); } enum_solutions(12, 4, 1); A: Here is a naive, but succinct, solution using generators: def descending(v): """Decide if a square contains values in descending order""" return list(reversed(v)) == sorted(v) def latinSquares(max_val, target_sum, n_cells): """Return all descending n_cells-dimensional squares, no cell larger than max_val, sum equal to target_sum.""" possibilities = itertools.product(range(1,max_val+1),repeat=n_cells) for square in possibilities: if descending(square) and sum(square) == target_sum: yield square I could have optimized this code by directly enumerating the list of descending grids, but I find itertools.product much clearer for a first-pass solution. 
Finally, calling the function:
for m in latinSquares(6, 12, 4):
    print m

A: And here is another recursive, generator-based solution, but this time using some simple math to calculate ranges at each step, avoiding needless recursion:
def latinSquares(max_val, target_sum, n_cells):
    if n_cells == 1:
        assert(max_val >= target_sum >= 1)
        return ((target_sum,),)
    else:
        lower_bound = max(-(-target_sum / n_cells), 1)
        upper_bound = min(max_val, target_sum - n_cells + 1)
        assert(lower_bound <= upper_bound)
        return ((v,) + w for v in xrange(upper_bound, lower_bound - 1, -1)
                for w in latinSquares(v, target_sum - v, n_cells - 1))

This code will fail with an AssertionError if you supply parameters that are impossible to satisfy; this is a side-effect of my "correctness criterion" that we never do an unnecessary recursion. If you don't want that side-effect, remove the assertions.
Note the use of -(-x/y) to round up after division. There may be a more Pythonic way to write that. Note also that I'm using generator expressions instead of yield.
for m in latinSquares(6,12,4):
    print m

A: A little off-topic, but it still might help with programming KenKen.
I got good results using the DLX algorithm for solving Killer Sudoku (very similar to KenKen: it has cages, but only sums). It took less than a second for most problems, and it was implemented in MATLAB.
See this forum thread:
http://www.setbb.com/phpbb/viewtopic.php?t=1274&highlight=&mforum=sudoku
(For background on "killer sudoku", see Wikipedia; the spam filter here would not let me post the hyperlink.)
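
A quick way to sanity-check any of the routines above is to compare them against a brute-force enumeration. This is only a test-harness sketch; the helper name brute_force and the use of itertools.product (available from Python 2.6) are my own choices, not from any answer above:
import itertools

def brute_force(max_val, target_sum, n_cells):
    # Enumerate every non-increasing tuple and keep those with the right sum.
    # This is O(max_val ** n_cells): fine for testing, hopeless for real sizes.
    results = []
    for combo in itertools.product(range(max_val, 0, -1), repeat=n_cells):
        if sum(combo) == target_sum and list(combo) == sorted(combo, reverse=True):
            results.append(combo)
    return results

# Cross-check, e.g. against Lennart's make_combos:
# assert sorted(brute_force(6, 12, 4)) == sorted(make_combos(6, 12, 4))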
KenKen puzzle addends: REDUX A (corrected) non-recursive algorithm
This question relates to those parts of the KenKen Latin Square puzzles which ask you to find all possible combinations of ncells numbers with values x such that 1 <= x <= maxval and x(1) + ... + x(ncells) = targetsum. Having tested several of the more promising answers, I'm going to award the answer-prize to Lennart Regebro, because: his routine is as fast as mine (+-5%), and he pointed out that my original routine had a bug somewhere, which led me to see what it was really trying to do. Thanks, Lennart.
chrispy contributed an algorithm that seems equivalent to Lennart's, but 5 hrs later, sooo, first to the wire gets it.
A remark: Alex Martelli's bare-bones recursive algorithm is an example of making every possible combination and throwing them all at a sieve and seeing which go through the holes. This approach takes 20+ times longer than Lennart's or mine. (Jack up the input to max_val = 100, n_cells = 5, target_sum = 250 and on my box it's 18 secs vs. 8+ mins.) Moral: Not generating every possible combination is good.
Another remark: Lennart's and my routines generate the same answers in the same order. Are they in fact the same algorithm seen from different angles? I don't know.
Something occurs to me. If you sort the answers, starting, say, with (8,8,2,1,1) and ending with (4,4,4,4,4) (what you get with max_val=8, n_cells=5, target_sum=20), the series forms kind of a "slowest descent", with the first ones being "hot" and the last one being "cold" and the greatest possible number of stages in between. Is this related to "informational entropy"? What's the proper metric for looking at it? Is there an algorithm that produces the combinations in descending (or ascending) order of heat? (This one doesn't, as far as I can see, although it's close over short stretches, looking at normalized std. dev.)
Here's the Python routine:
#!/usr/bin/env python
#filename: makeAddCombos.07.py -- stripped for StackOverflow

def initialize_combo( max_val, n_cells, target_sum):
    """returns combo
    Starting from left, fills combo to max_val or an intermediate value
    from 1 up. E.g.: Given max_val = 5, n_cells=4, target_sum = 11,
    creates [5,4,1,1].
    """
    combo = []
    #Put 1 in each cell.
    combo += [1] * n_cells
    need = target_sum - sum(combo)
    #Fill as many cells as possible to max_val.
    n_full_cells = need //(max_val - 1)
    top_up = max_val - 1
    for i in range( n_full_cells):
        combo[i] += top_up
    need = target_sum - sum(combo)
    # Then add the rest to next item.
    if need > 0:
        combo[n_full_cells] += need
    return combo
#def initialize_combo()

def scrunch_left( combo):
    """returns (new_combo,done)
    done   Boolean; if True, ignore new_combo, all done;
           if False, new_combo is valid.
    Starts a new combo list. Scanning from right to left, looks for first
    element at least 2 greater than right-end element. If one is found,
    decrements it, then scrunches all available counts on its right up
    against its right-hand side. Returns the modified combo.
    If none is found (that is, either no step or a single step of 1), the
    process is done.
""" new_combo = [] right_end = combo[-1] length = len(combo) c_range = range(length-1, -1, -1) found_step_gt_1 = False for index in c_range: value = combo[index] if (value - right_end) > 1: found_step_gt_1 = True break if not found_step_gt_1: return ( new_combo,True) if index > 0: new_combo += combo[:index] ceil = combo[index] - 1 new_combo += [ceil] new_combo += [1] * ((length - 1) - index) need = sum(combo[index:]) - sum(new_combo[index:]) fill_height = ceil - 1 ndivf = need // fill_height nmodf = need % fill_height if ndivf > 0: for j in range(index + 1, index + ndivf + 1): new_combo[j] += fill_height if nmodf > 0: new_combo[index + ndivf + 1] += nmodf return (new_combo, False) #def scrunch_left() def make_combos_n_cells_ge_two( combos, max_val, n_cells, target_sum): """ Build combos, list of tuples of 2 or more addends. """ combo = initialize_combo( max_val, n_cells, target_sum) combos.append( tuple( combo)) while True: (combo, done) = scrunch_left( combo) if done: break else: combos.append( tuple( combo)) return combos #def make_combos_n_cells_ge_two() if __name__ == '__main__': combos = [] max_val = 8 n_cells = 5 target_sum = 20 if n_cells == 1: combos.append( (target_sum,)) else: combos = make_combos_n_cells_ge_two( combos, max_val, n_cells, target_sum) import pprint pprint.pprint( combos)
[ "Your algorithm seems pretty good at first blush, and I don't think OO or another language would improve the code. I can't say if recursion would have helped but I admire the non-recursive approach. I bet it was harder to get working and it's harder to read but it likely is more efficient and it's definitely quite clever. To be honest I didn't analyze the algorithm in detail but it certainly looks like something that took a long while to get working correctly. I bet there were lots of off-by-1 errors and weird edge cases you had to think through, eh?\nGiven all that, basically all I tried to do was pretty up your code as best I could by replacing the numerous C-isms with more idiomatic Python-isms. Often times what requires a loop in C can be done in one line in Python. Also I tried to rename things to follow Python naming conventions better and cleaned up the comments a bit. Hope I don't offend you with any of my changes. You can take what you want and leave the rest. :-)\nHere are the notes I took as I worked:\n\nChanged the code that initializes tmp to a bunch of 1's to the more idiomatic tmp = [1] * n_cells.\nChanged for loop that sums up tmp_sum to idiomatic sum(tmp).\nThen replaced all the loops with a tmp = <list> + <list> one-liner.\nMoved raise doneException to init_tmp_new_ceiling and got rid of the succeeded flag.\nThe check in init_tmp_new_ceiling actually seems unnecessary. Removing it, the only raises left were in make_combos_n_cells, so I just changed those to regular returns and dropped doneException entirely.\nNormalized mix of 4 spaces and 8 spaces for indentation.\nRemoved unnecessary parentheses around your if conditions.\ntmp[p2] - tmp[p1] == 0 is the same thing as tmp[p2] == tmp[p1].\nChanged while True: if new_ceiling_flag: break to while not new_ceiling_flag.\nYou don't need to initialize variables to 0 at the top of your functions.\nRemoved combos list and changed function to yield its tuples as they are generated.\nRenamed tmp to combo.\nRenamed new_ceiling_flag to ceiling_changed.\n\nAnd here's the code for your perusal:\ndef initial_combo(ceiling=5, target_sum=13, num_cells=4):\n \"\"\"\n Returns a list of possible addends, probably to be modified further.\n Starts a new combo list, then, starting from left, fills items to ceiling\n or intermediate between 1 and ceiling or just 1. 
E.g.:\n Given ceiling = 5, target_sum = 13, num_cells = 4: creates [5,5,2,1].\n \"\"\"\n num_full_cells = (target_sum - num_cells) // (ceiling - 1)\n\n combo = [ceiling] * num_full_cells \\\n + [1] * (num_cells - num_full_cells)\n\n if num_cells > num_full_cells:\n combo[num_full_cells] += target_sum - sum(combo)\n\n return combo\n\ndef all_combos(ceiling, target_sum, num_cells):\n # p0 points at the rightmost item and moves left under some conditions\n # p1 starts out at rightmost items and steps left\n # p2 starts out immediately to the left of p1 and steps left as p1 does\n # So, combo[p2] and combo[p1] always point at a pair of adjacent items.\n # d combo[p2] - combo[p1]; immediate difference\n # cd combo[p2] - combo[p0]; cumulative difference\n\n # The ceiling decreases by 1 each iteration.\n while True:\n combo = initial_combo(ceiling, target_sum, num_cells)\n yield tuple(combo)\n\n ceiling_changed = False\n\n # Generate all of the remaining combos with this ceiling.\n while not ceiling_changed:\n p2, p1, p0 = -2, -1, -1\n\n while combo[p2] == combo[p1] and abs(p2) <= num_cells:\n # 3,3,3,3\n if abs(p2) == num_cells:\n return\n\n p2 -= 1\n p1 -= 1\n p0 -= 1\n\n cd = 0\n\n # slide_ptrs_left loop\n while abs(p2) <= num_cells:\n d = combo[p2] - combo[p1]\n cd += d\n\n # 5,5,3,3 or 5,5,4,3\n if cd > 1:\n if abs(p2) < num_cells:\n # 5,5,3,3 --> 5,4,4,3\n if d > 1:\n combo[p2] -= 1\n combo[p1] += 1\n # d == 1; 5,5,4,3 --> 5,4,4,4\n else:\n combo[p2] -= 1\n combo[p0] += 1\n\n yield tuple(combo)\n\n # abs(p2) == num_cells; 5,4,4,3\n else:\n ceiling -= 1\n ceiling_changed = True\n\n # Resume at make_combo_same_ceiling while\n # and follow branch.\n break\n\n # 4,3,3,3 or 4,4,3,3\n elif cd == 1:\n if abs(p2) == num_cells:\n return\n\n p1 -= 1\n p2 -= 1\n\nif __name__ == '__main__':\n print list(all_combos(ceiling=6, target_sum=12, num_cells=4))\n\n", "Here's the simplest recursive solution that I can think of to \"find all possible combinations of n numbers with values x such that 1 <= x <= max_val and x(1) + ... + x(n) = target\". I'm developing it from scratch. 
Here's a version without any optimization at all, just for simplicity:\ndef apcnx(n, max_val, target, xsofar=(), sumsofar=0):\n if n==0:\n if sumsofar==target:\n yield xsofar\n return\n\n if xsofar:\n minx = xsofar[-1] - 1\n else:\n minx = 0\n\n for x in xrange(minx, max_val):\n for xposs in apcnx(n-1, max_val, target, xsofar + (x+1,), sumsofar+x+1):\n yield xposs\n\nfor xs in apcnx(4, 6, 12):\n print xs\n\nThe base case n==0 (where we can't yield any more numbers) either yield the tuple so far if it satisfies the condition, or nothing, then finishes (returns).\nIf we're supposed to yield longer tuples than we've built so far, the if/else makes sure we only yield non-decreasing tuples, to avoid repetition (you did say \"combination\" rather than \"permutation\").\nThe for tries all possibilities for \"this\" item and loops over whatever the next-lower-down level of recursion is still able to yield.\nThe output I see is:\n(1, 1, 4, 6)\n(1, 1, 5, 5)\n(1, 2, 3, 6)\n(1, 2, 4, 5)\n(1, 3, 3, 5)\n(1, 3, 4, 4)\n(2, 2, 2, 6)\n(2, 2, 3, 5)\n(2, 2, 4, 4)\n(2, 3, 3, 4)\n(3, 3, 3, 3)\n\nwhich seems correct.\nThere are a bazillion possible optimizations, but, remember:\n\nFirst make it work, then make it fast\n\nI corresponded with Kent Beck to properly attribute this quote in \"Python in a Nutshell\", and he tells me he got it from his dad, whose job was actually unrelated to programming;-).\nIn this case, it seems to me that the key issue is understanding what's going on, and any optimization might interfere, so I'm going all out for \"simple and understandable\"; we can, if need be!, optimize the socks off it once the OP confirms they can understand what's going on in this sheer, unoptimized version!\n", "First of all, I'd use variable names that mean something, so that the code gets comprehensible. Then, after I understood the problem, it's clearly a recursive problem, as once you have chosen one number, the question of finding the possible values for the rest of the squares are exactly the same problem, but with different values in.\nSo I would do it like this:\nfrom __future__ import division\nfrom math import ceil\n\ndef make_combos(max_val,target_sum,n_cells):\n combos = []\n # The highest possible value of the next cell is whatever is \n # largest of the max_val, or the target_sum minus the number \n # of remaining cells (as you can't enter 0).\n highest = min(max_val, target_sum - n_cells + 1)\n # The lowest is the lowest number you can have that will add upp to \n # target_sum if you multiply it with n_cells.\n lowest = int(ceil(target_sum/n_cells))\n for x in range(highest, lowest-1, -1):\n if n_cells == 1: # This is the last cell, no more recursion.\n combos.append((x,))\n break\n # Recurse to get the next cell:\n # Set the max to x (or we'll get duplicates like\n # (6,3,2,1) and (6,2,3,1), which is pointless.\n # Reduce the target_sum with x to keep the sum correct.\n # Reduce the number of cells with 1.\n for combo in make_combos(x, target_sum-x, n_cells-1):\n combos.append((x,)+combo)\n return combos\n\nif __name__ == '__main__':\n import pprint\n # And by using pprint the output gets easier to read\n pprint.pprint(make_combos( 6,12,4))\n\nI also notice that your solution still seems buggy. For the values max_val=8, target_sum=20 and n_cells=5 your code doesn't find the solution (8,6,4,1,1,), as an example. 
I'm not sure if that means I've missed a rule in this or not, but as I understand the rules that should be a valid option.\nHere's a version using generators, It saves a couple of lines, and memory if the values are really big, but as recursion, generators can be tricky to \"get\". \nfrom __future__ import division\nfrom math import ceil\n\ndef make_combos(max_val,target_sum,n_cells):\n highest = min(max_val, target_sum - n_cells + 1)\n lowest = int(ceil(target_sum/n_cells))\n for x in xrange(highest, lowest-1, -1):\n if n_cells == 1:\n yield (x,)\n break\n for combo in make_combos(x, target_sum-x, n_cells-1):\n yield (x,)+combo\n\nif __name__ == '__main__':\n import pprint\n pprint.pprint(list(make_combos( 6,12,4)))\n\n", "Sorry to say, your code is kind of long and not particularly readable. If you can try to summarize it somehow, maybe someone can help you write it more clearly.\nAs for the problem itself, my first thought would be to use recursion. (For all I know, you're already doing that. Sorry again for my inability to read your code.) Think of a way that you can reduce the problem to a smaller easier version of the same problem, repeatedly, until you have a trivial case with a very simple answer.\nTo be a bit more concrete, you have these three parameters, max_val, target_sum, and n_cells. Can you set one of those numbers to some particular value, in order to give you an extremely simple problem requiring no thought at all? Once you have that, can you reduce the slightly harder version of the problem to the already solved one?\nEDIT: Here is my code. I don't like the way it does de-duplication. I'm sure there's a more Pythonic way. Also, it disallows using the same number twice in one combination. To undo this behavior, just take out the line if n not in numlist:. I'm not sure if this is completely correct, but it seems to work and is (IMHO) more readable. You could easily add memoization and that would probably speed it up quite a bit.\ndef get_combos(max_val, target, n_cells):\n if target <= 0:\n return []\n if n_cells is 1:\n if target > max_val:\n return []\n else:\n return [[target]]\n else:\n combos = []\n for n in range(1, max_val+1, 1):\n for numlist in get_combos(max_val, target-n, n_cells-1):\n if n not in numlist:\n combos.append(numlist + [n])\n return combos\n\ndef deduplicate(combos):\n for numlist in combos:\n numlist.sort()\n answer = [tuple(numlist) for numlist in combos]\n return set(answer)\n\ndef kenken(max_val, target, n_cells):\n return deduplicate(get_combos(max_val, target, n_cells))\n\n", "First of all, I am learning Python myself so this solution won't be great but this is just an attempt at solving this. 
I have tried to solve it recursively and I think a recursive solution would be ideal for this kind of problem although THAT recursive solution might not be this one:\ndef GetFactors(maxVal, noOfCells, targetSum):\n l = []\n while(maxVal != 0):\n remCells = noOfCells - 1\n if(remCells > 2):\n retList = GetFactors(maxVal, remCells, targetSum - maxVal)\n #Append the returned List to the original List\n #But first, add the maxVal to the start of every elem of returned list.\n for i in retList:\n i.insert(0, maxVal)\n l.extend(retList)\n\n else:\n remTotal = targetSum - maxVal\n for i in range(1, remTotal/2 + 1):\n itemToInsert = remTotal - i;\n if (i > maxVal or itemToInsert > maxVal):\n continue\n l.append([maxVal, i, remTotal - i])\n maxVal -= 1\n return l\n\n\n\nif __name__ == \"__main__\":\n l = GetFactors(5, 5, 15)\n print l\n\n", "Here a simple solution in C/C++:\nconst int max = 6;\nint sol[N_CELLS];\n\nvoid enum_solutions(int target, int n, int min) {\n if (target == 0 && n == 0)\n report_solution(); /* sol[0]..sol[N_CELLS-1] is a solution */\n if (target <= 0 || n == 0) return; /* nothing further to explore */\n sol[n - 1] = min; /* remember */\n for (int i = min; i <= max; i++)\n enum_solutions(target - i, n - 1, i);\n}\n\nenum_solutions(12, 4, 1);\n\n", "Here is a naive, but succinct, solution using generators:\ndef descending(v):\n \"\"\"Decide if a square contains values in descending order\"\"\"\n return list(reversed(v)) == sorted(v)\n\ndef latinSquares(max_val, target_sum, n_cells):\n \"\"\"Return all descending n_cells-dimensional squares,\n no cell larger than max_val, sum equal to target_sum.\"\"\"\n possibilities = itertools.product(range(1,max_val+1),repeat=n_cells)\n for square in possibilities:\n if descending(square) and sum(square) == target_sum:\n yield square\n\nI could have optimized this code by directly enumerating the list of descending grids, but I find itertools.product much clearer for a first-pass solution. Finally, calling the function:\nfor m in latinSquares(6, 12, 4):\n print m\n\n", "And here is another recursive, generator-based solution, but this time using some simple math to calculate ranges at each step, avoiding needless recursion:\ndef latinSquares(max_val, target_sum, n_cells):\n if n_cells == 1:\n assert(max_val >= target_sum >= 1)\n return ((target_sum,),)\n else:\n lower_bound = max(-(-target_sum / n_cells), 1)\n upper_bound = min(max_val, target_sum - n_cells + 1)\n assert(lower_bound <= upper_bound)\n return ((v,) + w for v in xrange(upper_bound, lower_bound - 1, -1)\n for w in latinSquares(v, target_sum - v, n_cells - 1))\n\nThis code will fail with an AssertionError if you supply parameters that are impossible to satisfy; this is a side-effect of my \"correctness criterion\" that we never do an unnecessary recursion. If you don't want that side-effect, remove the assertions.\nNote the use of -(-x/y) to round up after division. There may be a more pythonic way to write that. Note also I'm using generator expressions instead of yield.\nfor m in latinSquares(6,12,4):\n print m\n\n", "Little bit offtopic, but still might help at programming kenken.\nI got good results using DLX algorhitm for solving Killer Sudoku (very simmilar as KenKen it has cages, but only sums). It took less than second for most of problems and it was implemented in MATLAB language. \nreference this forum\nhttp://www.setbb.com/phpbb/viewtopic.php?t=1274&highlight=&mforum=sudoku\nkiller sudoku\n\"look at wikipedia, cant post hyper link\" damt spammers\n" ]
[ 3, 2, 2, 1, 1, 1, 1, 1, 1 ]
[]
[]
[ "algorithm", "combinations", "puzzle", "python", "statistics" ]
stackoverflow_0001061590_algorithm_combinations_puzzle_python_statistics.txt
Q: how to use french letters in a django template?
I have some French letters (é, è, à...) in a Django template, but when the template is loaded by Django, a UnicodeDecodeError exception is raised. If I don't load the template but directly use a Python string instead, it works OK. Is there something I need to do to use Unicode with Django templates?
A: You are probably storing the template in a non-Unicode encoding, such as latin-1. I believe Django assumes that templates are in UTF-8 by default (though there is a setting to override this).
Your editor should be capable of saving the template file in the UTF-8 encoding (probably via a dropdown in the Save As dialog, though this may depend on your editor). Re-save the file as UTF-8, and the error should go away.
A: This is from the Django Unicode documentation, related to your problem:
"But the common case is to read templates from the filesystem, and this creates a slight complication: not all filesystems store their data encoded as UTF-8. If your template files are not stored with a UTF-8 encoding, set the FILE_CHARSET setting to the encoding of the files on disk. When Django reads in a template file, it will convert the data from this encoding to Unicode. (FILE_CHARSET is set to 'utf-8' by default.)
The DEFAULT_CHARSET setting controls the encoding of rendered templates. This is set to UTF-8 by default."
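
If you are not sure which encoding a template file was actually saved in, a quick check from the Python shell can settle it. This is only a rough sketch; the file path is a placeholder, and the fallback assumes the file was saved as latin-1:
raw = open('templates/mytemplate.html', 'rb').read()
try:
    raw.decode('utf-8')
    print 'template is valid UTF-8'
except UnicodeDecodeError:
    # Re-save as UTF-8, assuming the original encoding was latin-1.
    open('templates/mytemplate.html', 'wb').write(
        raw.decode('latin-1').encode('utf-8'))
    print 'converted from latin-1 to UTF-8'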
how to use french letters in a django template?
I have some French letters (é, è, à...) in a Django template, but when the template is loaded by Django, a UnicodeDecodeError exception is raised. If I don't load the template but directly use a Python string instead, it works OK. Is there something I need to do to use Unicode with Django templates?
[ "You are probably storing the template in a non-unicode encoding, such as latin-1. I believe Django assumes that templates are in UTF-8 by default (though there is a setting to override this).\nYour editor should be capable of saving the template file in the UTF-8 encoding (probably via a dropdown on the save as page, though this may depend on your editor). Re-save the file as UTF-8, and the error should go away.\n", "This is from the Django unicode documentation related to your problem:\n\" But the common case is to read templates from the filesystem, and this creates a slight complication: not all filesystems store their data encoded as UTF-8. If your template files are not stored with a UTF-8 encoding, set the FILE_CHARSET setting to the encoding of the files on disk. When Django reads in a template file, it will convert the data from this encoding to Unicode. (FILE_CHARSET is set to 'utf-8' by default.)\nThe DEFAULT_CHARSET setting controls the encoding of rendered templates. This is set to UTF-8 by default. \"\n" ]
[ 7, 3 ]
[]
[]
[ "django", "python", "unicode" ]
stackoverflow_0001063626_django_python_unicode.txt
Q: payment processing - pylons/python I'm building an application that eventually needs to process cc #s. I'd like to handle it completely in my app, and then hand off the information securely to my payment gateway. Ideally the user would have no interaction with the payment gateway directly. Any thoughts? Is there an easier way? A: Most payment gateways offer a few mechanisms for submitting CC payments: 1) A simple HTTPS POST where your application collects the customer's payment details (card number, expiry date, amount, optional CVV) and then submits this to the gateway. The payment parameters are sent through in the POST variables, and the gateway returns a HTTP response. 2) Via an API (often XML over HTTPS). In this case your application collects the customer's payment details, constructs an XML document encapsulating the payment details, and then posts this information to the gateway. The gateway response will be an XML document which your application then has to parse and interpret. 3) Some form of redirect to web pages hosted by the payment gateway. The payment gateway collects the customer's CC number and other details, processes the payment, and then redirects the customer back to a web page hosted by you. Option 3 is usually the easiest solution but would require the customer to interact with pages hosted by the gateway (although this can usually be made to be almost transparent). 1 and 2 above would satisfy your requirements with 1 being the simplest of the two to implement. Because your preference is to have your application collect the payment details, you may need to consider whether you need to acquire PCI DSS compliance, but there are many factors that affect this. There is a lot of information about PCI DSS here and on Wikipedia. A: That's something usual to do. Please follow the instructions your payment gateway gives you on how to send info to them, and write the code. If you have some issue, feel free to ask a more specific question. A: You will probably find that it's easier to just let the payment gateway handle it. It's best to leave PCI compliance to the experts.
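
To make option 1 above concrete, the gateway interaction often boils down to a single HTTPS POST. A rough sketch with urllib/urllib2 follows; the URL, field names and response format are invented placeholders, since every gateway documents its own:
import urllib
import urllib2

params = urllib.urlencode({
    'merchant_id': 'YOUR_MERCHANT_ID',   # placeholder credentials
    'card_number': '4111111111111111',   # a common test card number
    'expiry':      '0110',
    'cvv':         '123',
    'amount':      '19.95',
})
# Real gateways define their own endpoint and parameter names;
# this only shows the general shape of the exchange.
response = urllib2.urlopen('https://gateway.example.com/process', params)
print response.read()   # typically a status code plus a transaction reference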
payment processing - pylons/python
I'm building an application that eventually needs to process cc #s. I'd like to handle it completely in my app, and then hand off the information securely to my payment gateway. Ideally the user would have no interaction with the payment gateway directly. Any thoughts? Is there an easier way?
[ "Most payment gateways offer a few mechanisms for submitting CC payments:\n1) A simple HTTPS POST where your application collects the customer's payment details (card number, expiry date, amount, optional CVV) and then submits this to the gateway. The payment parameters are sent through in the POST variables, and the gateway returns a HTTP response.\n2) Via an API (often XML over HTTPS). In this case your application collects the customer's payment details, constructs an XML document encapsulating the payment details, and then posts this information to the gateway. The gateway response will be an XML document which your application then has to parse and interpret.\n3) Some form of redirect to web pages hosted by the payment gateway. The payment gateway collects the customer's CC number and other details, processes the payment, and then redirects the customer back to a web page hosted by you.\nOption 3 is usually the easiest solution but would require the customer to interact with pages hosted by the gateway (although this can usually be made to be almost transparent).\n1 and 2 above would satisfy your requirements with 1 being the simplest of the two to implement. \nBecause your preference is to have your application collect the payment details, you may need to consider whether you need to acquire PCI DSS compliance, but there are many factors that affect this. There is a lot of information about PCI DSS here and on Wikipedia.\n", "That's something usual to do. Please follow the instructions your payment gateway gives you on how to send info to them, and write the code. If you have some issue, feel free to ask a more specific question.\n", "You will probably find that it's easier to just let the payment gateway handle it. It's best to leave PCI compliance to the experts.\n" ]
[ 3, 1, 1 ]
[]
[]
[ "payment", "payment_gateway", "pylons", "python" ]
stackoverflow_0001060334_payment_payment_gateway_pylons_python.txt
Q: Can I use Win32 COM to replace text inside a word document? I have to perform a large number of replacements in some documents, and the thing is, I would like to be able to automate that task. Some of the documents contain common strings, and this would be pretty useful if it could be automated. From what I read so far, COM could be one way of doing this, but I don't know if text replacement is supported. I'd like to be able to perform this task in python? Is it possible? Could you post a code snippet showing how to access the document's text? Thanks! A: I like the answers so far; here's a tested example (slightly modified from here) that replaces all occurrences of a string in a Word document: import win32com.client def search_replace_all(word_file, find_str, replace_str): ''' replace all occurrences of `find_str` w/ `replace_str` in `word_file` ''' wdFindContinue = 1 wdReplaceAll = 2 # Dispatch() attempts to do a GetObject() before creating a new one. # DispatchEx() just creates a new one. app = win32com.client.DispatchEx("Word.Application") app.Visible = 0 app.DisplayAlerts = 0 app.Documents.Open(word_file) # expression.Execute(FindText, MatchCase, MatchWholeWord, # MatchWildcards, MatchSoundsLike, MatchAllWordForms, Forward, # Wrap, Format, ReplaceWith, Replace) app.Selection.Find.Execute(find_str, False, False, False, False, False, \ True, wdFindContinue, False, replace_str, wdReplaceAll) app.ActiveDocument.Close(SaveChanges=True) app.Quit() f = 'c:/path/to/my/word.doc' search_replace_all(f, 'string_to_be_replaced', 'replacement_str') A: See if this gives you a start on word automation using python. Once you open a document, you could do the following. After the following code, you can Close the document & open another. Selection.Find.ClearFormatting Selection.Find.Replacement.ClearFormatting With Selection.Find .Text = "test" .Replacement.Text = "test2" .Forward = True .Wrap = wdFindContinue .Format = False .MatchCase = False .MatchWholeWord = False .MatchKashida = False .MatchDiacritics = False .MatchAlefHamza = False .MatchControl = False .MatchWildcards = False .MatchSoundsLike = False .MatchAllWordForms = False End With Selection.Find.Execute Replace:=wdReplaceAll The above code replaces the text "test" with "test2" and does a "replace all". You can turn other options true/false depending on what you need. The simple way to learn this is to create a macro with actions you want to take, see the generated code & use it in your own example (with/without modified parameters). EDIT: After looking at some code by Matthew, you could do the following MSWord.Documents.Open(filename) Selection = MSWord.Selection And then translate the above VB code to Python. Note: The following VB code is shorthand way of assigning property without using the long syntax. (VB) With Selection.Find .Text = "test" .Replacement.Text = "test2" End With Python find = Selection.Find find.Text = "test" find.Replacement.Text = "test2" Pardon my python knowledge. But, I hope you get the idea to move forward. Remember to do a Save & Close on Document, after you are done with the find/replace operation. In the end, you could call MSWord.Quit (to release Word object from memory). A: If this mailing list post is right, accessing the document's text is a simple as: MSWord = win32com.client.Dispatch("Word.Application") MSWord.Visible = 0 MSWord.Documents.Open(filename) docText = MSWord.Documents[0].Content Also see How to: Search for and Replace Text in Documents. 
The examples use VB and C#, but the basics should apply to Python too. A: Checkout this link: http://python.net/crew/pirx/spam7/ The links on the left side point to the documentation. You can generalize this using the object model, which is found here: http://msdn.microsoft.com/en-us/library/kw65a0we(VS.80).aspx A: You can also achieve this using VBScript. Just type the code into a file named script.vbs, then open a command prompt (Start -> Run -> Cmd), then switch to the folder where the script is and type: cscript script.vbs strFolder = "C:\Files" Const wdFormatDocument = 0 'Select all files in strFolder strComputer = "." Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2") Set colFiles = objWMIService.ExecQuery _ ("ASSOCIATORS OF {Win32_Directory.Name='" & strFolder & "'} Where " _ & "ResultClass = CIM_DataFile") 'Start MS Word Set objWord = CreateObject("Word.Application") Const wdReplaceAll = 2 Const wdOrientLandscape = 1 For Each objFile in colFiles If objFile.Extension = "doc" Then strFile = strFolder & "\" & objFile.FileName & "." & objFile.Extension strNewFile = strFolder & "\" & objFile.FileName & ".doc" Wscript.Echo "Processing " & objFile.Name & "..." Set objDoc = objWord.Documents.Open(strFile) objDoc.PageSetup.Orientation = wdOrientLandscape 'Replace text - ^p in a string stands for new paragraph; ^m stands for page break Set objSelection = objWord.Selection objSelection.Find.Text = "String to replace" objSelection.Find.Forward = TRUE objSelection.Find.Replacement.Text = "New string" objSelection.Find.Execute ,,,,,,,,,,wdReplaceAll objDoc.SaveAs strNewFile, wdFormatDocument objDoc.Close Wscript.Echo "Ready" End If Next objWord.Quit
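
If you need to run the replacement over a whole folder of documents from Python rather than VBScript, the batch loop is short. Here is a sketch reusing the search_replace_all function from the first answer; the folder path and strings are placeholders:
import glob

for doc in glob.glob(r'C:\Files\*.doc'):
    # Word is started and shut down once per file by search_replace_all;
    # for many files you may want to refactor it to reuse one instance.
    search_replace_all(doc, 'String to replace', 'New string')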
Can I use Win32 COM to replace text inside a word document?
I have to perform a large number of replacements in some documents, and the thing is, I would like to be able to automate that task. Some of the documents contain common strings, and this would be pretty useful if it could be automated. From what I read so far, COM could be one way of doing this, but I don't know if text replacement is supported. I'd like to be able to perform this task in python? Is it possible? Could you post a code snippet showing how to access the document's text? Thanks!
[ "I like the answers so far; \nhere's a tested example (slightly modified from here) \nthat replaces all occurrences of a string in a Word document:\nimport win32com.client\n\ndef search_replace_all(word_file, find_str, replace_str):\n ''' replace all occurrences of `find_str` w/ `replace_str` in `word_file` '''\n wdFindContinue = 1\n wdReplaceAll = 2\n\n # Dispatch() attempts to do a GetObject() before creating a new one.\n # DispatchEx() just creates a new one. \n app = win32com.client.DispatchEx(\"Word.Application\")\n app.Visible = 0\n app.DisplayAlerts = 0\n app.Documents.Open(word_file)\n\n # expression.Execute(FindText, MatchCase, MatchWholeWord,\n # MatchWildcards, MatchSoundsLike, MatchAllWordForms, Forward, \n # Wrap, Format, ReplaceWith, Replace)\n app.Selection.Find.Execute(find_str, False, False, False, False, False, \\\n True, wdFindContinue, False, replace_str, wdReplaceAll)\n app.ActiveDocument.Close(SaveChanges=True)\n app.Quit()\n\nf = 'c:/path/to/my/word.doc'\nsearch_replace_all(f, 'string_to_be_replaced', 'replacement_str')\n\n", "See if this gives you a start on word automation using python.\nOnce you open a document, you could do the following.\nAfter the following code, you can Close the document & open another.\nSelection.Find.ClearFormatting\nSelection.Find.Replacement.ClearFormatting\nWith Selection.Find\n .Text = \"test\"\n .Replacement.Text = \"test2\"\n .Forward = True\n .Wrap = wdFindContinue\n .Format = False\n .MatchCase = False\n .MatchWholeWord = False\n .MatchKashida = False\n .MatchDiacritics = False\n .MatchAlefHamza = False\n .MatchControl = False\n .MatchWildcards = False\n .MatchSoundsLike = False\n .MatchAllWordForms = False\nEnd With\nSelection.Find.Execute Replace:=wdReplaceAll\n\nThe above code replaces the text \"test\" with \"test2\" and does a \"replace all\".\nYou can turn other options true/false depending on what you need.\nThe simple way to learn this is to create a macro with actions you want to take, see the generated code & use it in your own example (with/without modified parameters).\nEDIT: After looking at some code by Matthew, you could do the following\nMSWord.Documents.Open(filename)\nSelection = MSWord.Selection\n\nAnd then translate the above VB code to Python.\nNote: The following VB code is shorthand way of assigning property without using the long syntax.\n(VB)\nWith Selection.Find\n .Text = \"test\"\n .Replacement.Text = \"test2\"\nEnd With\n\nPython\nfind = Selection.Find\nfind.Text = \"test\"\nfind.Replacement.Text = \"test2\"\n\nPardon my python knowledge. But, I hope you get the idea to move forward.\nRemember to do a Save & Close on Document, after you are done with the find/replace operation.\nIn the end, you could call MSWord.Quit (to release Word object from memory).\n", "If this mailing list post is right, accessing the document's text is a simple as:\nMSWord = win32com.client.Dispatch(\"Word.Application\")\nMSWord.Visible = 0 \nMSWord.Documents.Open(filename)\ndocText = MSWord.Documents[0].Content\n\nAlso see How to: Search for and Replace Text in Documents. The examples use VB and C#, but the basics should apply to Python too.\n", "Checkout this link: http://python.net/crew/pirx/spam7/\nThe links on the left side point to the documentation. \nYou can generalize this using the object model, which is found here: \nhttp://msdn.microsoft.com/en-us/library/kw65a0we(VS.80).aspx\n", "You can also achieve this using VBScript. 
Just type the code into a file named script.vbs, then open a command prompt (Start -> Run -> Cmd), then switch to the folder where the script is and type: cscript script.vbs \n\nstrFolder = \"C:\\Files\"\n\nConst wdFormatDocument = 0\n\n'Select all files in strFolder\nstrComputer = \".\"\nSet objWMIService = GetObject(\"winmgmts:\\\\\" & strComputer & \"\\root\\cimv2\")\nSet colFiles = objWMIService.ExecQuery _\n (\"ASSOCIATORS OF {Win32_Directory.Name='\" & strFolder & \"'} Where \" _\n & \"ResultClass = CIM_DataFile\")\n\n'Start MS Word\nSet objWord = CreateObject(\"Word.Application\")\n\nConst wdReplaceAll = 2\nConst wdOrientLandscape = 1\n\n\nFor Each objFile in colFiles\n If objFile.Extension = \"doc\" Then\n strFile = strFolder & \"\\\" & objFile.FileName & \".\" & objFile.Extension\n strNewFile = strFolder & \"\\\" & objFile.FileName & \".doc\"\n Wscript.Echo \"Processing \" & objFile.Name & \"...\"\n\n Set objDoc = objWord.Documents.Open(strFile)\n\n objDoc.PageSetup.Orientation = wdOrientLandscape\n\n 'Replace text - ^p in a string stands for new paragraph; ^m stands for page break\n Set objSelection = objWord.Selection\n objSelection.Find.Text = \"String to replace\"\n objSelection.Find.Forward = TRUE\n objSelection.Find.Replacement.Text = \"New string\"\n\n objSelection.Find.Execute ,,,,,,,,,,wdReplaceAll\n\n objDoc.SaveAs strNewFile, wdFormatDocument\n objDoc.Close\n Wscript.Echo \"Ready\"\n End If\nNext\n\nobjWord.Quit\n\n\n" ]
[ 13, 9, 3, 2, 2 ]
[]
[]
[ "com", "ms_word", "python", "replace", "winapi" ]
stackoverflow_0001045628_com_ms_word_python_replace_winapi.txt
Q: Why isn't this a valid schema for Rx? I'm using YAML as a configuration file format for a Python project. Recently I found Rx to be the only schema validator available for Python and YAML. :-/ Kwalify works with YAML, but it's only for Ruby and Java. :( I've been reading their lacking documentation all day and just can't seem to write a valid schema to represent my file structure. Help? I have the following YAML config file: cmd: exec: mycmd aliases: [my, cmd] filter: sms: 'regex .*' load: exec: load filter: sms: 'load: .*$' echo: exec: echo % I'm failing at representing a nested structure. What I want is for the outer-most item (cmd, load and echo, in this case) to be an arbitrary string that in turn contains other items. 'exec' is a fixed string and required item; 'aliases' and 'filter' are also fixed, but should be optional. Filter in turn has another set of required and optional items. How should I represent this with Rx? So far I have the following schema (in YAML), which Rx fails to compile: type: //rec required: type: //rec required: exec: //str optional: aliases: type: //arr contents: //str length: {min: 1, max: 10} filter: type: //rec optional: sms: //str email: //str all: //str Testing this in IPython gives me this: /Rx.py in make_schema(self, schema) 68 raise Error('invalid schema argument to make_schema') 69 ---> 70 uri = self.expand_uri(schema["type"]) 71 72 if not self.type_registry.get(uri): raise "unknown type %s" % uri KeyError: 'type' Which leads me to believe I'm not specifying "type" somewhere. :-S Any ideas? I'm pretty tired fighting with this thing... Is there some other way I can write a schema and use it to validate my configuration files? Thanks in advance, Ivan A: Try this: type: //map values: type: //rec required: exec: //str optional: aliases: type: //arr contents: //str length: {min: 1, max: 10} filter: type: //rec optional: sms: //str email: //str all: //str A map can contain any string as a key, whereas a rec can only contain the keys specified in 'required' and 'optional'.
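
For reference, once the schema compiles, validating a config file takes only a few lines. A rough sketch, assuming the Python port of Rx exposes Factory/make_schema/check as in its upstream examples, and that the two file names are yours:
import yaml
import Rx

rx = Rx.Factory({'register_core_types': True})
schema = rx.make_schema(yaml.load(open('schema.yaml')))

config = yaml.load(open('config.yaml'))
if schema.check(config):
    print 'config is valid'
else:
    print 'config does not match the schema'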
Why isn't this a valid schema for Rx?
I'm using YAML as a configuration file format for a Python project. Recently I found Rx to be the only schema validator available for Python and YAML. :-/ Kwalify works with YAML, but it's only for Ruby and Java. :( I've been reading their lacking documentation all day and just can't seem to write a valid schema to represent my file structure. Help? I have the following YAML config file: cmd: exec: mycmd aliases: [my, cmd] filter: sms: 'regex .*' load: exec: load filter: sms: 'load: .*$' echo: exec: echo % I'm failing at representing a nested structure. What I want is for the outer-most item (cmd, load and echo, in this case) to be an arbitrary string that in turn contains other items. 'exec' is a fixed string and required item; 'aliases' and 'filter' are also fixed, but should be optional. Filter in turn has another set of required and optional items. How should I represent this with Rx? So far I have the following schema (in YAML), which Rx fails to compile: type: //rec required: type: //rec required: exec: //str optional: aliases: type: //arr contents: //str length: {min: 1, max: 10} filter: type: //rec optional: sms: //str email: //str all: //str Testing this in IPython gives me this: /Rx.py in make_schema(self, schema) 68 raise Error('invalid schema argument to make_schema') 69 ---> 70 uri = self.expand_uri(schema["type"]) 71 72 if not self.type_registry.get(uri): raise "unknown type %s" % uri KeyError: 'type' Which leads me to believe I'm not specifying "type" somewhere. :-S Any ideas? I'm pretty tired fighting with this thing... Is there some other way I can write a schema and use it to validate my configuration files? Thanks in advance, Ivan
[ "Try this:\ntype: //map\nvalues:\n type: //rec\n required:\n exec: //str\n optional:\n aliases:\n type: //arr\n contents: //str\n length: {min: 1, max: 10}\n filter:\n type: //rec\n optional:\n sms: //str\n email: //str\n all: //str\n\nA map can contain any string as a key, whereas a rec can only contain the keys specified in 'required' and 'optional'.\n" ]
[ 4 ]
[]
[]
[ "python", "schema", "yaml" ]
stackoverflow_0001061482_python_schema_yaml.txt
Q: Hex data from socket, process and response
Let's put it in parts. I got a socket receiving data OK, and I get it in the \x31\x31\x31 format. I know that I can get the same number, stripping the \x, with something like
for i in data:
    print hex(ord(i))
so I get 31 in each case. But if I want to add 1 to the data (so it becomes "32 32 32") to send it back as the response, how can I get it in the \x32\x32\x32 form again?
A: Use the struct module: unpack to get the 3 values into a, b and c
(a, b, c) = struct.unpack(">BBB", your_string)
then
a, b, c = a+1, b+1, c+1
and pack them into the response
response = struct.pack(">BBB", a, b, c)
See the struct module in the Python documentation for more details.
A: The "\x31" is not a format but the text representation of the binary data. As you mention, ord() will convert one byte of binary data into an int, so you can do maths on it.
To convert it back to binary data in a string, you can use chr() if it's just one integer. If it's many, you can use the %c formatting character of a string:
>>> "Return value: %c%c%c" % (5,6,7)
'Return value: \x05\x06\x07'
However, a better way is probably to use struct.
>>> import struct
>>> foo, bar, kaka = struct.unpack("BBB", '\x06\x06\x06')
>>> struct.pack("BBB", foo, bar+1, kaka+5)
'\x06\x07\x0b'
You may even want to take a look at ctypes.
A: The opposite of ord() would be chr(). So you could do this to add one to it:
newchar = chr(ord(i) + 1)
To use that in your example:
newdata = ''
for i in data:
    newdata += chr(ord(i) + 1)
print repr(newdata)
But if you really want to work with hex strings, you can use encode() and decode():
>>> 'test string'.encode('hex')
'7465737420737472696e67'
>>> '7465737420737472696e67'.decode('hex')
'test string'
A: OMG, what fast answers! :D
I think I was stuck on the ">B" parameter to struct, as I had used "h" and the sample parameters (newbie on struct.pack talking!).
I tried the encode/decode thing, but the socket on the other side received the values as numbers, not the "\x" representation it wanted.
I really enjoy the simplicity of the %c format, and will use that as a temporary solution (I don't process that much data, so I have no reason to be ultra-efficient at the moment :D), or the struct thing after playing with it a bit.
And in the example, to just play with one char at a time, I found struct working too, using ">B" only, hehe.
Thanks to all.
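
Putting the struct answer together with the socket, the whole receive/increment/send cycle fits in a few lines. A rough sketch, assuming each message is exactly three bytes as in the question (note that struct.pack raises struct.error if a byte would overflow past 255):
import struct

def bump_bytes(data):
    # '3B' unpacks three unsigned bytes; add one to each and repack.
    a, b, c = struct.unpack('3B', data)
    return struct.pack('3B', a + 1, b + 1, c + 1)

# data = conn.recv(3)           # e.g. '\x31\x31\x31'
# conn.send(bump_bytes(data))   # sends '\x32\x32\x32'
print repr(bump_bytes('\x31\x31\x31'))   # -> '\x32\x32\x32'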
Hex data from socket, process and response
Lets put it in parts. I got a socket receiving data OK and I got it in the \x31\x31\x31 format. I know that I can get the same number, ripping the \x with something like for i in data: print hex(ord(i)) so I got 31 in each case. But if I want to add 1 to the data (so it shall be "32 32 32")to send it as response, how can I get it in \x32\x32\x32 again?
[ "use the struct module\nunpack and get the 3 values in abc\n(a, b, c) = struct.unpack(\">BBB\", your_string)\nthen \na, b, c = a+1, b+1, c+1\nand pack into the response\nresponse = struct.pack(\">BBB\", a, b, c)\nsee the struct module in python documentation for more details\n", "The \"\\x31\" is not a format but the text representation of the binary data. As you mention ord() will convert one byte of binary data into an int, so you can do maths on it.\nTo convert it back to binary data in a string, you can use chr() if it's on just one integer. If it's many, you can use the %c formatting character of a string:\n>>> \"Return value: %c%c%c\" % (5,6,7)\n'Return value: \\x05\\x06\\x07'\n\nHowever, a better way is probably to use struct.\n>>> import struct\n>>> foo, bar, kaka = struct.unpack(\"BBB\", '\\x06\\x06\\x06')\n>>> struct.pack(\"BBB\", foo, bar+1, kaka+5)\n'\\x06\\x07\\x0b'\n\nYou may even want to take a look at ctypes.\n", "The opposite of ord() would be chr().\nSo you could do this to add one to it:\nnewchar = chr(ord(i) + 1)\n\nTo use that in your example:\nnewdata = ''\n\nfor i in data:\n newdata += chr(ord(i) + 1)\n\nprint repr(newdata)\n\nBut if you really wanted to work in hex strings, you can use encode() and decode():\n>>> 'test string'.encode('hex')\n'7465737420737472696e67'\n>>> '7465737420737472696e67'.decode('hex')\n'test string'\n\n", "OMG, what a fast answering! :D\nI think that i was stuck on the \">B\" parameter to struct, as i used \"h\" and that sample parameters (newbie on struct.pack talking!)\nTried the encode/decode thing but socket on the other side receive them as numbers, not the \"\\x\" representation it wanted.\nI really enjoy the simplicity of %c format, and will use that as temporal thing (i dont process so many data so have not reason to be ultraefficient atm :D) or the struct thing after playing with it a bit.\nAnd in the example, to just play with one char at a time, find struct working too, usint \">B\" only hehe.\nThanks to all.\n" ]
[ 4, 4, 0, 0 ]
[]
[]
[ "hex", "python", "sockets" ]
stackoverflow_0001063775_hex_python_sockets.txt
Q: PyQt: Consolidating signals to a single slot
I am attempting to reduce the number of signals I have to use in my context menus. The menu consists of actions that switch the operation mode of the program, so the operation carried out by the slots is very simple. Quoting the documentation on QMenu::triggered,

Normally, you connect each menu action's triggered() signal to its own custom slot, but sometimes you will want to connect several actions to a single slot, for example, when you have a group of closely related actions, such as "left justify", "center", "right justify".

However, I can't figure out how to accomplish this, and the documentation does not go into any further detail. Suppose I have actions actionOpMode1 and actionOpMode2 in menu actionMenu, and a slot setOpMode. I want setOpMode to be called with a parameter which somehow relates to which of the actions was triggered. I tried various permutations on this theme:
QObject.connect(self.actionMenu, SIGNAL('triggered(QAction)'), self.setOpMode)
But I never even got it to call setOpMode, which suggests that actionMenu never "feels triggered", so to speak. In this SO question, it's suggested that it can be done with lambdas, but this:
QObject.connect(self.actionOpMode1, SIGNAL('triggered()'), lambda t: self.setOpMode(t))
gives "<lambda> () takes exactly 1 argument (0 given)". I can't say I really understand how this is supposed to work, so I may have done something wrong when moving from clicked() to triggered(). How is it done?
A: Using QObject.sender() is one of the solutions, although not the cleanest one.
Use QSignalMapper to cleanly associate a value with the object that emitted the signal.
A: I use this approach:
from functools import partial

def bind(self, action, *params):
    self.connect(action, QtCore.SIGNAL('triggered()'),
                 partial(self.onMenuAction, *params))

def onMenuAction(self, *args):
    pass

self.bind(self.actionOpMode1, 'action1')
self.bind(self.actionOpMode2, 'action2')
A: You can use QObject::sender() to figure out which QAction emitted the signal. So your slot might look like this:
def triggered(self):
    sender = self.sender()

    if sender == self.actionOpMode1:
        # do something
    elif sender == self.actionOpMode2:
        # do something else
Regarding what's going on in the other SO question you mentioned with the lambda: what it does is create a lambda with one parameter which has a default value, so to apply that to your example you'd need to do something like this:
self.connect(self.actionOpMode1, QtCore.SIGNAL('triggered()'), lambda who="mode1": self.changeMode(who))
self.connect(self.actionOpMode2, QtCore.SIGNAL('triggered()'), lambda who="mode2": self.changeMode(who))
And then have a member function like this:
def changeMode(self, who):
    if who == "mode1":
        # ...
    elif who == "mode2":
        # ...
Personally, the first approach looks cleaner and more readable to me.
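
To make the QSignalMapper suggestion concrete, here is a rough sketch of the wiring; the mode strings and the slot name are illustrative, not from the question:
from PyQt4 import QtCore

# Inside the widget's __init__:
self.mapper = QtCore.QSignalMapper(self)
for action, mode in ((self.actionOpMode1, 'mode1'),
                     (self.actionOpMode2, 'mode2')):
    self.connect(action, QtCore.SIGNAL('triggered()'),
                 self.mapper, QtCore.SLOT('map()'))
    self.mapper.setMapping(action, mode)
self.connect(self.mapper, QtCore.SIGNAL('mapped(const QString &)'),
             self.setOpMode)

# The single slot receives the string registered with setMapping:
def setOpMode(self, mode):
    print 'switching to', unicode(mode)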
PyQt: Consolidating signals to a single slot
I am attempting to reduce the number of signals I have to use in my context menus. The menu consists of actions that switch the operation mode of the program, so the operation carried out by the slots is very simple. Quoting the documentation on QMenu::triggered,

Normally, you connect each menu action's triggered() signal to its own custom slot, but sometimes you will want to connect several actions to a single slot, for example, when you have a group of closely related actions, such as "left justify", "center", "right justify".

However, I can't figure out how to accomplish this, and the documentation does not go into any further detail. Suppose I have actions actionOpMode1 and actionOpMode2 in menu actionMenu, and a slot setOpMode. I want setOpMode to be called with a parameter which somehow relates to which of the actions was triggered. I tried various permutations on this theme:
QObject.connect(self.actionMenu, SIGNAL('triggered(QAction)'), self.setOpMode)
But I never even got it to call setOpMode, which suggests that actionMenu never "feels triggered", so to speak. In this SO question, it's suggested that it can be done with lambdas, but this:
QObject.connect(self.actionOpMode1, SIGNAL('triggered()'), lambda t: self.setOpMode(t))
gives "<lambda> () takes exactly 1 argument (0 given)". I can't say I really understand how this is supposed to work, so I may have done something wrong when moving from clicked() to triggered(). How is it done?
[ "Using QObject.Sender is one of the solution, although not the cleanest one.\nUse QSignalMapper to associate cleanly a value with the object that emitted the signal.\n", "I use this approach:\nfrom functools import partial\n\ndef bind(self, action, *params):\n self.connect(action, QtCore.SIGNAL('triggered()'), \n partial(action, *params, self.onMenuAction))\n\ndef onMenuAction(self, *args):\n pass\n\n\nbind(self.actionOpMode1, 'action1')\nbind(self.actionOpMode2, 'action2')\n\n", "You can use QObject::sender() to figure out which QAction emitted the signal.\nSo your slot might look like this:\ndef triggered(self):\n sender = QtCore.QObject.sender()\n\n if sender == self.actionOpMode1:\n # do something\n elif sender == self.actionOpMode2:\n # do something else\n\nRegarding what's going on in the other SO question you mentioned with the lambda, what it does is create a lambda with one parameter which has a default value so to apply that to your example you'd need to do something like this:\nself.connect(self.actionOpMode1, QtCore.SIGNAL('triggered()'), lambda who=\"mode1\": self.changeMode(who))\nself.connect(self.actionOpMode2, QtCore.SIGNAL('triggered()'), lambda who=\"mode2\": self.changeMode(who))\n\nAnd then have a member function like this:\ndef changeMode(self, who):\n if who == \"mode1\":\n # ...\n elif who == \"mode2\":\n # ...\n\nPersonally the first approach looks cleaner and more readable to me.\n" ]
[ 4, 2, 1 ]
[]
[]
[ "python", "qt" ]
stackoverflow_0001063734_python_qt.txt
Q: In Python, Using pyodbc, How Do You Perform Transactions?
I have a username which I must change in numerous (up to ~25) tables. (Yeah, I know.) An atomic transaction seems to be the way to go for this sort of thing. However, I do not know how to do this with pyodbc. I've seen various tutorials on atomic transactions before, but have never used them.
The setup: Windows platform, Python 2.6, pyodbc, Microsoft SQL 2005. I've used pyodbc for single SQL statements, but no compound statements or transactions.
Best practices for SQL seem to suggest that creating a stored procedure is excellent for this. My fears about doing a stored procedure are as follows, in order of increasing importance:
1) I have never written a stored procedure.
2) I heard that pyodbc does not return results from stored procedures as of yet.
3) This is most definitely Not My Database. It's vendor-supplied, vendor-updated, and so forth.
So, what's the best way to go about this?
A: By its documentation, pyodbc does support transactions, but only if the ODBC driver supports them. Furthermore, as pyodbc is compliant with PEP 249, data is stored only when a manual commit is done.
This means that you have to explicitly commit() the transaction, or rollback() the entire transaction.
Note that pyodbc also supports an autocommit feature, and in that case you cannot have any transactions.
By default, autocommit is off, but your codebase might have turned it on. You should check how the connection is being made:
cnxn = pyodbc.connect(cstring, autocommit=True)
Alternatively, you can also explicitly turn off the autocommit mode with
cnxn.autocommit = False
but this might have quite a big impact on your system.
Note: you can get more information on the autocommit mode of pyodbc on its wiki
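
Applied to the original username problem, the pattern looks roughly like this; the table and column names are placeholders (and note that table names cannot be passed as ? parameters, only values can):
import pyodbc

cnxn = pyodbc.connect(cstring)   # autocommit defaults to False
cursor = cnxn.cursor()
try:
    # All ~25 updates ride inside one transaction.
    for table in ('users', 'orders', 'comments'):   # placeholder table names
        cursor.execute(
            "UPDATE %s SET username = ? WHERE username = ?" % table,
            new_name, old_name)
    cnxn.commit()    # nothing becomes permanent until this point
except Exception:
    cnxn.rollback()  # undo every update if any single one failed
    raise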
In Python, Using pyodbc, How Do You Perform Transactions?
I have a username which I must change in numerous (up to ~25) tables. (Yeah, I know.) An atomic transaction seems to be the way to go for this sort of thing. However, I do not know how to do this with pyodbc. I've seen various tutorials on atomic transactions before, but have never used them. The setup: Windows platform, Python 2.6, pyodbc, Microsoft SQL 2005. I've used pyodbc for single SQL statements, but no compound statements or transactions. Best practices for SQL seem to suggest that creating a stored procedure is excellent for this. My fears about doing a stored procedure are as follows, in order of increasing importance: 1) I have never written a stored procedure. 2) I heard that pyodbc does not return results from stored procedures as of yet. 3) This is most definitely Not My Database. It's vendor-supplied, vendor-updated, and so forth. So, what's the best way to go about this?
[ "By its documentation, pyodbc does support transactions, but only if the odbc driver support it. Furthermore, as pyodbc is compliant with PEP 249, data is stored only when a manual commit is done.\nThis means that you have to explicitely commit() the transaction, or rollback() the entire transaction.\nNote that pyodbc also support autocommit feature, and in that case you cannot have any transaction.\nBy default, autocommit is off, but your codebase might have tuerned it on.\nYou should check the connection, when it is performed\ncnxn = pyodbc.connect(cstring, autocommit=True)\n\nAlternatively, you can also explicitely turn off the autocommit mode with\ncnxn.autocommit = False\n\nbut this might have quite a big impact on your system. \nNote: you can get more information on the autocommit mode of pyodbc on its wiki\n" ]
[ 25 ]
[ "I don't think pyodbc has any specific support for transactions. You need to send the SQL command to start/commit/rollback transactions.\n" ]
[ -10 ]
[ "pyodbc", "python", "transactions" ]
stackoverflow_0001063770_pyodbc_python_transactions.txt
Q: How to debug the MySQL error message: Caught an exception while rendering I am building Django + MySQL on Dreamhost, but encountered this error message: Caught an exception while rendering: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ') ORDER BY tag.used_count DESC, tag.name ASC' at line 1") I traced it and found the error spot is in the function below. Can someone help me check what's wrong with this code? def get_tags_by_questions(self, questions): question_ids = [] for question in questions: question_ids.append(question.id) question_ids_str = ','.join([force_unicode(id) for id in question_ids]) related_tags = self.extra( tables=['tag', 'question_tags'], where=["tag.id = question_tags.tag_id AND question_tags.question_id IN (" + question_ids_str + ")"] ).distinct() return related_tags A: Is it possible that there are no questions, in which case the SQL will contain something like "WHERE question_id IN ()" which wouldn't be valid SQL.
How to debug the MySQL error message: Caught an exception while rendering
I am building Django + MySQL on Dreamhost, but encountered this error message: Caught an exception while rendering: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ') ORDER BY tag.used_count DESC, tag.name ASC' at line 1") I traced it and found the error spot is in the function below. Can someone help me check what's wrong with this code? def get_tags_by_questions(self, questions): question_ids = [] for question in questions: question_ids.append(question.id) question_ids_str = ','.join([force_unicode(id) for id in question_ids]) related_tags = self.extra( tables=['tag', 'question_tags'], where=["tag.id = question_tags.tag_id AND question_tags.question_id IN (" + question_ids_str + ")"] ).distinct() return related_tags
[ "Is it possible that there are no questions, in which case the SQL will contain something like \"WHERE question_id IN ()\" which wouldn't be valid SQL.\n" ]
[ 2 ]
[]
[]
[ "django", "dreamhost", "mysql", "python" ]
stackoverflow_0001064152_django_dreamhost_mysql_python.txt
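As a follow-up sketch to the answer above (hypothetical, not the poster's actual fix), the empty-list case can be guarded before the IN clause is built. It assumes self is a queryset or a manager that provides none():

def get_tags_by_questions(self, questions):
    question_ids = [force_unicode(q.id) for q in questions]
    if not question_ids:
        return self.none()  # avoid emitting "IN ()", which MySQL rejects
    return self.extra(
        tables=['tag', 'question_tags'],
        where=["tag.id = question_tags.tag_id"
               " AND question_tags.question_id IN (" + ','.join(question_ids) + ")"]
    ).distinct()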
Q: error while uploading project to Google App Engine(python) 2009-06-30 23:36:28,483 ERROR appcfg.py:1272 An unexpected error occurred. Aborting. Traceback (most recent call last): File "C:\Program Files\Google\google_appengine\google\appengine\tools\appcfg.py", line 1250, in DoUpload missing_files = self.Begin() File "C:\Program Files\Google\google_appengine\google\appengine\tools\appcfg.py", line 1045, in Begin version=self.version, payload=self.config.ToYAML()) File "C:\Program Files\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 344, in Send f = self.opener.open(req) File "C:\Python25\lib\urllib2.py", line 387, in open response = meth(req, response) File "C:\Python25\lib\urllib2.py", line 498, in http_response 'http', request, response, code, msg, hdrs) File "C:\Python25\lib\urllib2.py", line 425, in error return self._call_chain(*args) File "C:\Python25\lib\urllib2.py", line 360, in _call_chain result = func(*args) File "C:\Python25\lib\urllib2.py", line 506, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) HTTPError: HTTP Error 403: Forbidden Error 403: --- begin server output --- <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <HTML><HEAD><META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=gb2312"> <TITLE>The requested URL could not be retrieved</TITLE> <STYLE type="text/css"><!--BODY{background-color:#ffffff;font-family:verdana,sans-serif}PRE{font-family:sans-serif}--></STYLE> </HEAD><BODY> <H3>The requested URL could not be retrieved</H3> Please double check or try again later. <HR noshade size="1px"> </BODY> --- end server output --- What's wrong? A: You're trying to upload to a URL to which you lack access -- are you sure you're spelling your app name right, own its name on appspot, etc, etc? A: I see a HTTP 403 Forbidden in there, which says to me that your authentications is probably not sorted out correctly. A: HTTP Error 403: Forbidden... bad username/password for the given application (did you change app.yaml at all since the last successful update)? The requested URL could not be retrieved... Or maybe they had a service interruption. Try again after a bit.
error while uploading project to Google App Engine(python)
2009-06-30 23:36:28,483 ERROR appcfg.py:1272 An unexpected error occurred. Aborting. Traceback (most recent call last): File "C:\Program Files\Google\google_appengine\google\appengine\tools\appcfg.py", line 1250, in DoUpload missing_files = self.Begin() File "C:\Program Files\Google\google_appengine\google\appengine\tools\appcfg.py", line 1045, in Begin version=self.version, payload=self.config.ToYAML()) File "C:\Program Files\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 344, in Send f = self.opener.open(req) File "C:\Python25\lib\urllib2.py", line 387, in open response = meth(req, response) File "C:\Python25\lib\urllib2.py", line 498, in http_response 'http', request, response, code, msg, hdrs) File "C:\Python25\lib\urllib2.py", line 425, in error return self._call_chain(*args) File "C:\Python25\lib\urllib2.py", line 360, in _call_chain result = func(*args) File "C:\Python25\lib\urllib2.py", line 506, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) HTTPError: HTTP Error 403: Forbidden Error 403: --- begin server output --- <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <HTML><HEAD><META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=gb2312"> <TITLE>The requested URL could not be retrieved</TITLE> <STYLE type="text/css"><!--BODY{background-color:#ffffff;font-family:verdana,sans-serif}PRE{font-family:sans-serif}--></STYLE> </HEAD><BODY> <H3>The requested URL could not be retrieved</H3> Please double check or try again later. <HR noshade size="1px"> </BODY> --- end server output --- What's wrong?
[ "You're trying to upload to a URL to which you lack access -- are you sure you're spelling your app name right, own its name on appspot, etc, etc?\n", "I see a HTTP 403 Forbidden in there, which says to me that your authentications is probably not sorted out correctly.\n", "HTTP Error 403: Forbidden... bad username/password for the given application (did you change app.yaml at all since the last successful update)?\nThe requested URL could not be retrieved... Or maybe they had a service interruption. Try again after a bit.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "google_app_engine", "python", "uploading" ]
stackoverflow_0001064422_google_app_engine_python_uploading.txt
Q: Are Python commands suitable in Vim's visual mode? I have found the following command in AWK useful in Vim :'<,'>!awk '{ print $2 }' Python may also be useful in Vim. However, I have not found an useful command in Python for Vim's visual mode. Which Python commands do you use in Vim? A: It's hard to make useful one-liner filters in Python. You need to import sys to get stdin, and already you're starting to push it. This isn't to say anything bad about Python. My feeling is that Python is optimized for multi-line scripts, while the languages that do well at one-liners (awk, sed, bash, I could name others but would probably be flamed...) tend work less well (IMHO) when writing scripts of any significant complexity. I do really like Python for writing multi-line scripts that I can invoke from Vim. For example, I've got one Python script that will, when given a signature for a Java constructor, like this: Foo(String name, int size) { will emit a lot of the boilerplate that goes into creating a value class: private final String name; private final int size; public String getName() { return name; } public int getSize() { return size; } @Override public boolean equals(Object that) { return this == that || (that instanceof Foo && equals((Foo) that)); } public boolean equals(Foo that) { return Objects.equal(getName(), that.getName()) && this.getSize() == that.getSize(); } @Override public int hashCode() { return Objects.hashCode( getName(), getSize()); } Foo(String name, int size) { this.name = Preconditions.checkNotNull(name); this.size = size; I use this from Vim by highlighting the signature and then typing !jhelper.py. I also used to use Python scripts I'd written to reverse characters in lines and to reverse the lines of a file before I found out about rev and tac. A: Python is most useful with vim when used to code vim "macros" (you need a vim compiled with +python, but many pre-built ones come that way). Here is a nice presentation about some of the things you can do with (plenty of examples and snippets!), and here are vim's own reference docs about this feature.
Are Python commands suitable in Vim's visual mode?
I have found the following command in AWK useful in Vim :'<,'>!awk '{ print $2 }' Python may also be useful in Vim. However, I have not found a useful command in Python for Vim's visual mode. Which Python commands do you use in Vim?
[ "It's hard to make useful one-liner filters in Python. You need to import sys to get stdin, and already you're starting to push it. This isn't to say anything bad about Python. My feeling is that Python is optimized for multi-line scripts, while the languages that do well at one-liners (awk, sed, bash, I could name others but would probably be flamed...) tend work less well (IMHO) when writing scripts of any significant complexity.\nI do really like Python for writing multi-line scripts that I can invoke from Vim. For example, I've got one Python script that will, when given a signature for a Java constructor, like this:\nFoo(String name, int size) {\n\nwill emit a lot of the boilerplate that goes into creating a value class:\nprivate final String name;\nprivate final int size;\n\npublic String getName() {\n return name;\n}\n\npublic int getSize() {\n return size;\n}\n\n@Override\npublic boolean equals(Object that) {\n return this == that\n || (that instanceof Foo && equals((Foo) that));\n}\n\npublic boolean equals(Foo that) {\n return Objects.equal(getName(), that.getName())\n && this.getSize() == that.getSize();\n}\n\n@Override\npublic int hashCode() {\n return Objects.hashCode(\n getName(),\n getSize());\n}\n\nFoo(String name, int size) {\n this.name = Preconditions.checkNotNull(name);\n this.size = size;\n\nI use this from Vim by highlighting the signature and then typing !jhelper.py.\nI also used to use Python scripts I'd written to reverse characters in lines and to reverse the lines of a file before I found out about rev and tac.\n", "Python is most useful with vim when used to code vim \"macros\" (you need a vim compiled with +python, but many pre-built ones come that way). Here is a nice presentation about some of the things you can do with (plenty of examples and snippets!), and here are vim's own reference docs about this feature.\n" ]
[ 4, 4 ]
[]
[]
[ "python", "vim" ]
stackoverflow_0001064644_python_vim.txt
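For comparison with the awk filter in the question, here is a minimal Python script that does the same job, printing the second whitespace-separated field of each selected line. Save it under any name (field2.py is just an example) and filter the selection with :'<,'>!python field2.py.

import sys

for line in sys.stdin:
    fields = line.split()
    if len(fields) >= 2:   # skip lines that have no second field
        print fields[1]    # awk's $2 is fields[1] here (0-based)

As the first answer notes, this is more ceremony than the awk one-liner, which is why multi-line Python scripts invoked from Vim tend to pay off more.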
Q: Formatting a variable in Django and autofields I have this problem I've been trying to tackle for a while. I have a variable that is 17 characters long, and when displaying the variable on my form, I want it to display the last seven characters of this variable in bold...how do I go about this...I'd really appreciate anybody's insight on this. A: {{ thevar|slice:":-7" }}<b>{{ thevar|slice:"-7:" }}</b> The slice built-in filter in Django templates acts like slicing does in Python, so that for example s[:-7] is the string excluding its last 7 characters and s[-7:] is the substring formed by just the last 7 characters.
Formatting a variable in Django and autofields
I have this problem I've been trying to tackle for a while. I have a variable that is 17 characters long, and when displaying the variable on my form, I want it to display the last seven characters of this variable in bold...how do I go about this...I'd really appreciate anybody's insight on this.
[ "{{ thevar|slice:\":-7\" }}<b>{{ thevar|slice:\"-7:\" }}</b>\n\nThe slice built-in filter in Django templates acts like slicing does in Python, so that for example s[:-7] is the string excluding its last 7 characters and s[-7:] is the substring formed by just the last 7 characters.\n" ]
[ 2 ]
[]
[]
[ "django", "python", "string_formatting" ]
stackoverflow_0001064953_django_python_string_formatting.txt
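To see in plain Python what the slice filter above does (a throwaway illustration with a made-up 17-character value):

s = "ABCDEFGHIJ1234567"  # hypothetical 17-character variable
print s[:-7]             # "ABCDEFGHIJ", everything except the last 7 characters
print s[-7:]             # "1234567", the part the template wraps in <b>...</b>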
Q: How do I include a PHP script in Python? I have a PHP script (news-generator.php) which, when I include it, grabs a bunch of news items and prints them. Right now, I'm using Python for my website (CGI). When I was using PHP, I used something like this on the "News" page: <?php print("<h1>News and Updates</h1>"); include("news-generator.php"); print("</body>"); ?> (I cut down the example for simplicity.) Is there a way I could make Python execute the script (news-generator.php) and return the output which would work cross-platform? That way, I could do this: page_html = "<h1>News and Updates</h1>" news_script_output = php("news-generator.php") //should return a string print page_html + news_script_output A: import subprocess def php(script_path): p = subprocess.Popen(['php', script_path], stdout=subprocess.PIPE) result = p.communicate()[0] return result # YOUR CODE BELOW: page_html = "<h1>News and Updates</h1>" news_script_output = php("news-generator.php") print page_html + news_script_output A: PHP is a program. You can run any program with subprocess. The hard part is simulating the whole CGI environment that PHP expects. A: maybe off topic, but if you want to do this in a way where you can access the vars and such created by the php script (eg. array of news items), your best best will be to do the exec of the php script, but return a json encoded array of items from php as a string, then json decode them on the python side, and do your html generation and iteration there. A: I think the best answer would be to have apache render both pages separately and then use javascript to load that page into a div. You have the slight slowdown of the ajax load but then you dont have to worry about it. There is an open-source widget thing that will run multiple languages in 1 page but I cant remember what its called. A: You could use urllib to get the page from the server (localhost) and execute it in the right environment for php. Not pretty, but it'll work. It may cause performance problems if you do it a lot.
How do I include a PHP script in Python?
I have a PHP script (news-generator.php) which, when I include it, grabs a bunch of news items and prints them. Right now, I'm using Python for my website (CGI). When I was using PHP, I used something like this on the "News" page: <?php print("<h1>News and Updates</h1>"); include("news-generator.php"); print("</body>"); ?> (I cut down the example for simplicity.) Is there a way I could make Python execute the script (news-generator.php) and return the output which would work cross-platform? That way, I could do this: page_html = "<h1>News and Updates</h1>" news_script_output = php("news-generator.php") //should return a string print page_html + news_script_output
[ "import subprocess\n\ndef php(script_path):\n p = subprocess.Popen(['php', script_path], stdout=subprocess.PIPE)\n result = p.communicate()[0]\n return result\n\n# YOUR CODE BELOW:\npage_html = \"<h1>News and Updates</h1>\"\nnews_script_output = php(\"news-generator.php\") \nprint page_html + news_script_output\n\n", "PHP is a program. You can run any program with subprocess.\nThe hard part is simulating the whole CGI environment that PHP expects. \n", "maybe off topic, but if you want to do this in a way where you can access the vars and such created by the php script (eg. array of news items), your best best will be to do the exec of the php script, but return a json encoded array of items from php as a string, then json decode them on the python side, and do your html generation and iteration there.\n", "I think the best answer would be to have apache render both pages separately and then use javascript to load that page into a div. You have the slight slowdown of the ajax load but then you dont have to worry about it. \nThere is an open-source widget thing that will run multiple languages in 1 page but I cant remember what its called. \n", "You could use urllib to get the page from the server (localhost) and execute it in the right environment for php. Not pretty, but it'll work. It may cause performance problems if you do it a lot.\n" ]
[ 11, 7, 1, 0, 0 ]
[]
[]
[ "execution", "integration", "php", "python", "scripting" ]
stackoverflow_0001060436_execution_integration_php_python_scripting.txt
Q: for statement in python When an exe file is run it prints out some stuff. I'm trying to run this on some numbers below and print out line 54 ( = blah ). It says process isn't defined and I'm really unsure how to fix this and get what I want printed to the screen. If anyone could post some code or ways to fix this thank you so very much! for j in ('90','52.62263','26.5651','10.8123'): if j == '90': k = ('0',) elif j == '52.62263': k = ('0', '72', '144', '216', '288') elif j == '26.5651': k = (' 324', ' 36', ' 108', ' 180', ' 252') else: k = (' 288', ' 0', ' 72', ' 144', ' 216') for b in k: outputstring = process.communicate()[0] outputlist = outputstring.splitlines() blah = outputlist[53] cmd = ' -j ' + str(j) + ' -b ' + str(b) + ' blah ' process = Popen(cmd, shell=True, stderr=STDOUT, stdout=PIPE) print cmd I am trying to print out for example: -j 90 -az 0 (then what blah contains) blah is line 54. Line 54 prints out a lot of information. Words mostly. I want to print out what line 54 says to the screen right after -j 90 -az 0 @ Robbie: line 39 blah = outputlist[53] Indexerror: list index out of range @ Robbie again. Thanks for your help and sorry for the trouble guys... I even tried putting in outputlist[2] and it gives same error :/ A: I can't help but clean that up a little. # aesthetically (so YMMV), I think the code would be better if it were ... # (and I've asked some questions throughout) j_map = { 90: [0], # prefer lists [] to tuples (), I say... 52.62263: [0, 72, 144, 216, 288], 26.5651: [324, 36, 108, 180, 252], 10.8123: [288, 0, 72, 144, 216] } # have a look at dict() in http://docs.python.org/tutorial/datastructures.html # to know what's going on here -- e.g. j_map['90'] is ['0',] # then the following is cleaner for j, k in j_map.iteritems(): # first iteration j = '90', k=[0] # second iteration j = '52.62263'', k= [0,...,288] for b in k: # fixed the ordering of these statements so this may actually work cmd = "program_name -j %f -b %d" % (j, b) # where program_name is the program you're calling # be wary of the printf-style %f formatting and # how program_name takes its input print cmd process = Popen(cmd, shell=True, stderr=STDOUT, stdout=PIPE) outputstring = process.communicate()[0] outputlist = outputstring.splitlines() blah = outputlist[53] You need to define cmd -- right now it's trying to execute something like " -j 90 -b 288". I presume you want something like cmd = "program_name -j 90 -b 288". Don't know if that answers your question at all, but I hope it gives food for thought. A: Are you sure this is right cmd = ' -j ' + str(el) + ' -jk ' + str(az) + ' blah ' Where's your executable? A: The following line outputstring = process.communicate()[0] calls the communicate() method of the process variable, but process has not been defined yet. You define it later in the code. You need to move that definition higher up. Also, your variable names (j,k, and jk) are confusing. A: process isn't defined because your statements are out of order. outputstring = process.communicate()[0] outputlist = outputstring.splitlines() blah = outputlist[53] cmd = ' -j ' + str(j) + ' -b ' + str(b) + ' blah ' process = Popen(cmd, shell=True, stderr=STDOUT, stdout=PIPE) cannot possibly work. process on the first line, is undefined.
for statement in python
When an exe file is run it prints out some stuff. I'm trying to run this on some numbers below and print out line 54 ( = blah ). It says process isn't defined and I'm really unsure how to fix this and get what I want printed to the screen. If anyone could post some code or ways to fix this thank you so very much! for j in ('90','52.62263','26.5651','10.8123'): if j == '90': k = ('0',) elif j == '52.62263': k = ('0', '72', '144', '216', '288') elif j == '26.5651': k = (' 324', ' 36', ' 108', ' 180', ' 252') else: k = (' 288', ' 0', ' 72', ' 144', ' 216') for b in k: outputstring = process.communicate()[0] outputlist = outputstring.splitlines() blah = outputlist[53] cmd = ' -j ' + str(j) + ' -b ' + str(b) + ' blah ' process = Popen(cmd, shell=True, stderr=STDOUT, stdout=PIPE) print cmd I am trying to print out for example: -j 90 -az 0 (then what blah contains) blah is line 54. Line 54 prints out a lot of information. Words mostly. I want to print out what line 54 says to the screen right after -j 90 -az 0 @ Robbie: line 39 blah = outputlist[53] Indexerror: list index out of range @ Robbie again. Thanks for your help and sorry for the trouble guys... I even tried putting in outputlist[2] and it gives same error :/
[ "I can't help but clean that up a little.\n# aesthetically (so YMMV), I think the code would be better if it were ...\n# (and I've asked some questions throughout)\n\nj_map = {\n 90: [0], # prefer lists [] to tuples (), I say...\n 52.62263: [0, 72, 144, 216, 288],\n 26.5651: [324, 36, 108, 180, 252],\n 10.8123: [288, 0, 72, 144, 216]\n }\n# have a look at dict() in http://docs.python.org/tutorial/datastructures.html\n# to know what's going on here -- e.g. j_map['90'] is ['0',]\n\n# then the following is cleaner\nfor j, k in j_map.iteritems():\n # first iteration j = '90', k=[0]\n # second iteration j = '52.62263'', k= [0,...,288]\n for b in k:\n # fixed the ordering of these statements so this may actually work\n cmd = \"program_name -j %f -b %d\" % (j, b)\n # where program_name is the program you're calling\n # be wary of the printf-style %f formatting and\n # how program_name takes its input\n print cmd\n process = Popen(cmd, shell=True, stderr=STDOUT, stdout=PIPE)\n outputstring = process.communicate()[0]\n outputlist = outputstring.splitlines()\n blah = outputlist[53]\n\nYou need to define cmd -- right now it's trying to execute something like \" -j 90 -b 288\". I presume you want something like cmd = \"program_name -j 90 -b 288\".\nDon't know if that answers your question at all, but I hope it gives food for thought.\n", "Are you sure this is right\ncmd = ' -j ' + str(el) + ' -jk ' + str(az) + ' blah '\n\nWhere's your executable?\n", "The following line\noutputstring = process.communicate()[0]\n\ncalls the communicate() method of the process variable, but process has not been defined yet. You define it later in the code. You need to move that definition higher up.\nAlso, your variable names (j,k, and jk) are confusing.\n", "process isn't defined because your statements are out of order.\n outputstring = process.communicate()[0]\n outputlist = outputstring.splitlines()\n blah = outputlist[53]\n\n cmd = ' -j ' + str(j) + ' -b ' + str(b) + ' blah '\n\n process = Popen(cmd, shell=True, stderr=STDOUT, stdout=PIPE)\n\ncannot possibly work. process on the first line, is undefined. \n" ]
[ 6, 2, 2, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001065133_python.txt
Q: Unable to understand a statement about customizing Python's Macro Syntax Cody has been building a Pythonic Macro Syntax. He says These macros allow you to define completely custom syntax, from new constructs to new operators. There's no facility for doing this in Python as it stands. I am not sure what he means by new constructs to new operators: Does he refer to binary operators such as +, - and multiplication in Math? his main goal: Where do you benefit in customizing Python's macro syntax? A: No doubt Cody refers to completely new operators that are not currently in Python, such as (I dunno) ^^ or ++ or +* and so on, whatever they might mean. And he's explicitly saying that the macro system lets you define a completely new syntax for Python (his question was about the syntax of the macro definitions themselves). Some people care burningly about syntax and for example would much prefer to see Python uses braces rather than group by indentation; but Python itself will never follow those people's preferences...: >>> from __future__ import braces File "<stdin>", line 1 SyntaxError: not a chance So these people might obtain what they crave by defining a completely new syntax for Python through this macro system. Others might use it to define specific custom languages that mostly follow Python's general outlines but add special new keywords, let you call functions without using parentheses, and so on, and so forth. Whether that's a good thing, in fact, is an ancient, moot issue—but some languages such as Lisp have always had macros of such power, and many people who came to Python from Lisp, such as Peter Norvig, would probably be quite happy to get back that syntax-making power they used to have in Lisp but lack in Python.
Unable to understand a statement about customizing Python's Macro Syntax
Cody has been building a Pythonic Macro Syntax. He says These macros allow you to define completely custom syntax, from new constructs to new operators. There's no facility for doing this in Python as it stands. I am not sure what he means by new constructs to new operators: Does he refer to binary operators such as +, - and multiplication in Math? his main goal: Where do you benefit in customizing Python's macro syntax?
[ "No doubt Cody refers to completely new operators that are not currently in Python, such as (I dunno) ^^ or ++ or +* and so on, whatever they might mean. And he's explicitly saying that the macro system lets you define a completely new syntax for Python (his question was about the syntax of the macro definitions themselves).\nSome people care burningly about syntax and for example would much prefer to see Python uses braces rather than group by indentation; but Python itself will never follow those people's preferences...:\n>>> from __future__ import braces\n File \"<stdin>\", line 1\nSyntaxError: not a chance\n\nSo these people might obtain what they crave by defining a completely new syntax for Python through this macro system.\nOthers might use it to define specific custom languages that mostly follow Python's general outlines but add special new keywords, let you call functions without using parentheses, and so on, and so forth.\nWhether that's a good thing, in fact, is an ancient, moot issue—but some languages such as Lisp have always had macros of such power, and many people who came to Python from Lisp, such as Peter Norvig, would probably be quite happy to get back that syntax-making power they used to have in Lisp but lack in Python.\n" ]
[ 6 ]
[]
[]
[ "macros", "python", "syntax" ]
stackoverflow_0001065966_macros_python_syntax.txt
Q: Object Attribute in Random List Not Accessible in Python I'm working on my first object oriented bit of python and I have the following: #!/usr/bin/python import random class triangle: # Angle A To Angle C Connects Side F # Angle C to Angle B Connects Side D # Angle B to Angle A Connects Side E def __init__(self, a, b, c, d, e, f): self.a = a self.b = b self.c = c self.d = d self.e = e self.f = f #def solver: #pass #initialize Triangle myTri = triangle(0,0,0,0,0,0) #Pick Three Random Angles or Sides to Generate Values For sample = random.sample([myTri.a, myTri.b, myTri.c, myTri.d, myTri.e, myTri.f], 3) #Sets the three randomly picked variables to a Random Number sample[0] = random.randint(1, 100) sample[1] = random.randint(1, 100) sample[2] = random.randint(1, 100) How do I pass myTri.a, for example to random.randint. It is passing the value of '0' which it initialized. I want to be able to assign a random value to three of the .a-.f attributes of myTri. What am I missing? A: When you say [myTri.a, myTri.b, ...] you are not getting a list of the variables themselves, or references to them. Instead you are getting just their values. Since you know they were initialized to 0, it is as if you had written [0, 0, 0, 0, 0, 0]. There's no difference. Then later when you try to assign to sample[0], you are actually just overwriting the 0 that is stored in that array with a random value. Python knows nothing at all about myTri at that point; the connection is lost. Here's what you can do to get the effect you're aiming for. First, pass a list of variable names we want to assign to later to random.sample: sample = random.sample(["a", "b", "c", "d", "e", "f"], 3) That'll give us back 3 random variable names. Now we want to assign to the variables with those same names. We can do that by using the special setattr function, which takes an object and a variable name and sets its value. For instance, setattr(myTri, "b", 72) does the same thing as myTri.b = 72. So rewritten we have: setattr(myTri, sample[0], random.randint(1, 100)) setattr(myTri, sample[1], random.randint(1, 100)) setattr(myTri, sample[2], random.randint(1, 100)) The major concept here is that you're doing a bit of reflection, also known as introspection. You've got dynamic variable names--you don't know exactly who you're messing with--so you've got to consult with some more exotic, out of the way language constructs. Normally I'd actually caution against such tomfoolery, but this is a rare instance where introspection is a reasonable solution. A: To assign to a, b, and c: myTri.a = random.randint(1, 100) myTri.b = random.randint(1, 100) myTri.c = random.randint(1, 100) To assign to one random attribute from a-f: attrs = ['a', 'b', 'c', 'd', 'e', 'f'] setattr(myTri, random.choice(attrs), random.randint(1, 100)) To assign to three random attributes from a-f: attrs = ['a', 'b', 'c', 'd', 'e', 'f'] for attr in random.sample(attrs, 3): setattr(myTri, attr, random.randint(1, 100)) A: Alternative to using setattr: do it when you create a Triangle instance. args = [random.randint(1, 100) for i in xrange(3)] + [0, 0, 0] random.shuffle(args) my_tri = Triangle(*args)
Object Attribute in Random List Not Accessible in Python
I'm working on my first object oriented bit of python and I have the following: #!/usr/bin/python import random class triangle: # Angle A To Angle C Connects Side F # Angle C to Angle B Connects Side D # Angle B to Angle A Connects Side E def __init__(self, a, b, c, d, e, f): self.a = a self.b = b self.c = c self.d = d self.e = e self.f = f #def solver: #pass #initialize Triangle myTri = triangle(0,0,0,0,0,0) #Pick Three Random Angles or Sides to Generate Values For sample = random.sample([myTri.a, myTri.b, myTri.c, myTri.d, myTri.e, myTri.f], 3) #Sets the three randomly picked variables to a Random Number sample[0] = random.randint(1, 100) sample[1] = random.randint(1, 100) sample[2] = random.randint(1, 100) How do I pass myTri.a, for example to random.randint. It is passing the value of '0' which it initialized. I want to be able to assign a random value to three of the .a-.f attributes of myTri. What am I missing?
[ "When you say [myTri.a, myTri.b, ...] you are not getting a list of the variables themselves, or references to them. Instead you are getting just their values. Since you know they were initialized to 0, it is as if you had written [0, 0, 0, 0, 0, 0]. There's no difference.\nThen later when you try to assign to sample[0], you are actually just overwriting the 0 that is stored in that array with a random value. Python knows nothing at all about myTri at that point; the connection is lost.\nHere's what you can do to get the effect you're aiming for. First, pass a list of variable names we want to assign to later to random.sample:\nsample = random.sample([\"a\", \"b\", \"c\", \"d\", \"e\", \"f\"], 3)\n\nThat'll give us back 3 random variable names. Now we want to assign to the variables with those same names. We can do that by using the special setattr function, which takes an object and a variable name and sets its value. For instance, setattr(myTri, \"b\", 72) does the same thing as myTri.b = 72. So rewritten we have:\nsetattr(myTri, sample[0], random.randint(1, 100))\nsetattr(myTri, sample[1], random.randint(1, 100))\nsetattr(myTri, sample[2], random.randint(1, 100))\n\nThe major concept here is that you're doing a bit of reflection, also known as introspection. You've got dynamic variable names--you don't know exactly who you're messing with--so you've got to consult with some more exotic, out of the way language constructs. Normally I'd actually caution against such tomfoolery, but this is a rare instance where introspection is a reasonable solution.\n", "To assign to a, b, and c:\nmyTri.a = random.randint(1, 100)\nmyTri.b = random.randint(1, 100)\nmyTri.c = random.randint(1, 100)\n\nTo assign to one random attribute from a-f:\nattrs = ['a', 'b', 'c', 'd', 'e', 'f']\nsetattr(myTri, random.choice(attrs), random.randint(1, 100))\n\nTo assign to three random attributes from a-f:\nattrs = ['a', 'b', 'c', 'd', 'e', 'f']\nfor attr in random.sample(attrs, 3):\n setattr(myTri, attr, random.randint(1, 100))\n\n", "Alternative to using setattr: do it when you create a Triangle instance.\nargs = [random.randint(1, 100) for i in xrange(3)] + [0, 0, 0]\nrandom.shuffle(args)\nmy_tri = Triangle(*args)\n\n" ]
[ 5, 2, 0 ]
[]
[]
[ "object", "oop", "python", "random" ]
stackoverflow_0001066827_object_oop_python_random.txt
Q: Export set of data in different formats I want to be able to display set of data differently according to url parameters. My URL looks like /page/{limit}/{offset}/{format}/. For example: /page/20/0/xml/ - subset [0:20) in xml /page/100/20/json/ - subset [20:100) in json Also I want to be able to do the same for csv, text, excel, pdf, html, etc... I have to be able to set different mimetypes and content-types for different formats. For XML should be application/xhtml+xml, for csv - text/plain, etc... In HTML mode I want to be able to pass this data into some template (I'm using Django). I'm planing to make set look like: dataset = { "meta" : {"offset" : 15, "limit" : 10, "total" : 1000}, "columns" : {"name" : "Name", "status" : "Status", "creation_date" : "Creation Date"} "items" : [ {"name" : "John Smith", "status" : 1, "creation_date" : "2009-06-30 10:10:09"}, {"name" : "Joe The Plummer", "status" : 2, "creation_date" : "2009-06-30 10:10:09"} ] }; and have output like this: CSV output: Name, Status, Creation Date John Smith, 1, 2009-06-30 10:10:09 Joe The Plummer, 2, 2009-06-30 10:10:09 XML output: <items> <item id="1"> <name>John Smith</name> <status>1</status> <creation_date>2009-06-30 10:10:09</creation_date> </item> <item id="2"> <name>Joe The Plummer</name> <status>2</status> <creation_date>2009-06-30 10:10:09</creation_date> </item> </items> So I think to have implemented my own renderers for each type - like XMLRenderer, RSSRenderer, JSONRenderer, etc... if format == "xml": context = XMLRenderer().render(data = dataset) return HttpResponse(content, mimetype="application/xhtml+xml") elif format == "json": context = JSONRenderer().render(data = dataset) return HttpResponse(content, mimetype="text/plain") elif format == "rss": context = RSSRenderer(title="Some long title here", link="/page/10/10/rss/").render(data = dataset) return HttpResponse(content, mimetype="application/xhtml+xml") # few more formats... else: return render_to_response(SOME_TEMPLATE, dataset) Is it correct approach? A: I suggest having the renderer also know about the mimetype rather than hardcoding the latter in the code that calls the renderer -- better to concentrate format-specific knowledge in one place, so the calling code would be content, mimetype = renderer().render(data=dataset) return HttpResponse(content, mimetype=mimetype) also, this is a great opportunity for the Registry design pattern (most long trees of if/elif are, but one where you're essentially deciding just which object or class to use is perfect!-). So you either hardcode a dict: format2renderer = dict( xml=XMLRenderer, rss=RSSRenderer, # ...etc... ) or maybe even better make renderers registers themselves in the dict at startup, but that may be too advanced/hard to arrange. In either case, what you have before the calling snippet I just quoted would be just: renderer = format2renderer.get(format) if renderer is not None: ... and when None you may apply your default code. I find dict lookups and polymorphism SO much easier to maintain and enhance than if/elif trees!-) A: Yes, that is a correct approach.
Export set of data in different formats
I want to be able to display a set of data differently according to url parameters. My URL looks like /page/{limit}/{offset}/{format}/. For example: /page/20/0/xml/ - subset [0:20) in xml /page/100/20/json/ - subset [20:100) in json Also I want to be able to do the same for csv, text, excel, pdf, html, etc... I have to be able to set different mimetypes and content-types for different formats. For XML it should be application/xhtml+xml, for csv - text/plain, etc... In HTML mode I want to be able to pass this data into some template (I'm using Django). I'm planning to make the set look like: dataset = { "meta" : {"offset" : 15, "limit" : 10, "total" : 1000}, "columns" : {"name" : "Name", "status" : "Status", "creation_date" : "Creation Date"}, "items" : [ {"name" : "John Smith", "status" : 1, "creation_date" : "2009-06-30 10:10:09"}, {"name" : "Joe The Plummer", "status" : 2, "creation_date" : "2009-06-30 10:10:09"} ] }; and have output like this: CSV output: Name, Status, Creation Date John Smith, 1, 2009-06-30 10:10:09 Joe The Plummer, 2, 2009-06-30 10:10:09 XML output: <items> <item id="1"> <name>John Smith</name> <status>1</status> <creation_date>2009-06-30 10:10:09</creation_date> </item> <item id="2"> <name>Joe The Plummer</name> <status>2</status> <creation_date>2009-06-30 10:10:09</creation_date> </item> </items> So I'm thinking of implementing my own renderers for each type - like XMLRenderer, RSSRenderer, JSONRenderer, etc... if format == "xml": context = XMLRenderer().render(data = dataset) return HttpResponse(content, mimetype="application/xhtml+xml") elif format == "json": context = JSONRenderer().render(data = dataset) return HttpResponse(content, mimetype="text/plain") elif format == "rss": context = RSSRenderer(title="Some long title here", link="/page/10/10/rss/").render(data = dataset) return HttpResponse(content, mimetype="application/xhtml+xml") # few more formats... else: return render_to_response(SOME_TEMPLATE, dataset) Is this the correct approach?
[ "I suggest having the renderer also know about the mimetype rather than hardcoding the latter in the code that calls the renderer -- better to concentrate format-specific knowledge in one place, so the calling code would be\ncontent, mimetype = renderer().render(data=dataset)\nreturn HttpResponse(content, mimetype=mimetype)\n\nalso, this is a great opportunity for the Registry design pattern (most long trees of if/elif are, but one where you're essentially deciding just which object or class to use is perfect!-). So you either hardcode a dict:\nformat2renderer = dict(\n xml=XMLRenderer,\n rss=RSSRenderer,\n # ...etc...\n)\n\nor maybe even better make renderers registers themselves in the dict at startup, but that may be too advanced/hard to arrange. In either case, what you have before the calling snippet I just quoted would be just:\nrenderer = format2renderer.get(format)\nif renderer is not None: ...\n\nand when None you may apply your default code. I find dict lookups and polymorphism SO much easier to maintain and enhance than if/elif trees!-)\n", "Yes, that is a correct approach.\n" ]
[ 1, 0 ]
[]
[]
[ "django", "python", "rendering" ]
stackoverflow_0001066516_django_python_rendering.txt
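To flesh out the registry suggestion in the answer above with one concrete entry, here is a sketch; the class name and CSV layout are illustrative, not taken from the question. Each renderer returns a (content, mimetype) pair, and the dict maps the URL's format segment to a renderer class.

import csv, StringIO

class CSVRenderer(object):
    mimetype = "text/csv"

    def render(self, data):
        buf = StringIO.StringIO()
        writer = csv.writer(buf)
        columns = data["columns"]
        writer.writerow(columns.values())  # header row, e.g. Name, Status, ...
        for item in data["items"]:
            # iterate the same dict so row cells line up with the header
            writer.writerow([item[key] for key in columns])
        return buf.getvalue(), self.mimetype

format2renderer = {"csv": CSVRenderer}

The view then reduces to the two lines from the answer: look up the class, instantiate it, and hand the resulting (content, mimetype) to HttpResponse.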
Q: Get rid of toplevel tk panewindow while using tkMessageBox When I do: tkMessageBox.askquestion(title="Symbol Display", message="Is the symbol visible on the console") a bare tk root window also appears along with the Symbol Display window. If I press "Yes" the child window returns yes, whereas the tk window remains there. Whenever I try to close the tk window, an "End Program - tk" prompt appears, and on pushing the "End Now" button a "pythonw.exe" dialog asks whether to send an error report. Why is this, and how can I prevent the tk window from popping up without affecting my script execution? A: The trick is to invoke withdraw on the Tk root top-level: >>> import tkMessageBox, Tkinter >>> Tkinter.Tk().withdraw() >>> tkMessageBox.askquestion( ... title="Symbol Display", ... message="Is the symbol visible on the console")
Get rid of toplevel tk panewindow while using tkMessageBox
When I do: tkMessageBox.askquestion(title="Symbol Display", message="Is the symbol visible on the console") a bare tk root window also appears along with the Symbol Display window. If I press "Yes" the child window returns yes, whereas the tk window remains there. Whenever I try to close the tk window, an "End Program - tk" prompt appears, and on pushing the "End Now" button a "pythonw.exe" dialog asks whether to send an error report. Why is this, and how can I prevent the tk window from popping up without affecting my script execution?
[ "The trick is to invoke withdraw on the Tk root top-level:\n>>> import tkMessageBox, Tkinter\n>>> Tkinter.Tk().withdraw()\n>>> tkMessageBox.askquestion(\n... title=\"Symbol Display\",\n... message=\"Is the symbol visible on the console\")\n\n" ]
[ 5 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0001067900_python_tkinter.txt
Q: Programmatically detect system-proxy settings on Windows XP with Python I develop a critical application used by a multi-national company. Users in offices all around the globe need to be able to install this application. The application is actually a plugin to Excel and we have an automatic installer based on Setuptools' easy_install that ensures that all a project's dependancies are automatically installed or updated any time a user switches on their Excel. It all works very elegantly as users are seldom aware of all the installation which occurs entirely in the background. Unfortunately we are expanding and opening new offices which all have different proxy settings. These settings seem to change from day to day so we cannot keep up with the outsourced security guys who change stuff without telling us. It sucks but we just have to work around it. I want to programatically detect the system-wide proxy settings on the Windows workstations our users run: Everybody in the organisazation runs Windows XP and Internet Explorer. I've verified that everybody can download our stuff from IE without problems regardless of where they are int the world. So all I need to do is detect what proxy settings IE is using and make Setuptools use those settings. Theoretically all of this information should be in the Registry.. but is there a better way to find it that is guaranteed not to change with people upgrade IE? For example is there a Windows API call I can use to discover the proxy settings? In summary: We use Python 2.4.4 on Windows XP We need to detect the Internet Explorer proxy settings (e.g. host, port and Proxy type) I'm going to use this information to dynamically re-configure easy_install so that it can download the egg files via the proxy. UPDATE0: I forgot one important detail: Each site has an auto-config "pac" file. There's a key in Windows\CurrentVersion\InternetSettings\AutoConfigURL which points to a HTTP document on a local server which contains what looks like a javascript file. The pac script is basically a series of nested if-statements which compare URLs against a regexp and then eventually return the hostname of the chosen proxy-server. The script is a single javascript function called FindProxyForURL(url, host) The challenge is therefore to find out for any given server which proxy to use. The only 100% guaranteed way to do this is to look up the pac file and call the Javascript function from Python. Any suggestions? Is there a more elegant way to do this? 
A: Here's a sample that should create a bullet green (proxy enable) or red (proxy disable) in your systray It shows how to read and write in windows registry it uses gtk #!/usr/bin/env python import gobject import gtk from _winreg import * class ProxyNotifier: def __init__(self): self.trayIcon = gtk.StatusIcon() self.updateIcon() #set callback on right click to on_right_click self.trayIcon.connect('popup-menu', self.on_right_click) gobject.timeout_add(1000, self.checkStatus) def isProxyEnabled(self): aReg = ConnectRegistry(None,HKEY_CURRENT_USER) aKey = OpenKey(aReg, r"Software\Microsoft\Windows\CurrentVersion\Internet Settings") subCount, valueCount, lastModified = QueryInfoKey(aKey) for i in range(valueCount): try: n,v,t = EnumValue(aKey,i) if n == 'ProxyEnable': return v and True or False except EnvironmentError: break CloseKey(aKey) def invertProxyEnableState(self): aReg = ConnectRegistry(None,HKEY_CURRENT_USER) aKey = OpenKey(aReg, r"Software\Microsoft\Windows\CurrentVersion\Internet Settings", 0, KEY_WRITE) if self.isProxyEnabled() : val = 0 else: val = 1 try: SetValueEx(aKey,"ProxyEnable",0, REG_DWORD, val) except EnvironmentError: print "Encountered problems writing into the Registry..." CloseKey(aKey) def updateIcon(self): if self.isProxyEnabled(): icon=gtk.STOCK_YES else: icon=gtk.STOCK_NO self.trayIcon.set_from_stock(icon) def checkStatus(self): self.updateIcon() return True def on_right_click(self, data, event_button, event_time): self.invertProxyEnableState() self.updateIcon() if __name__ == '__main__': proxyNotifier = ProxyNotifier() gtk.main() A: As far as I know, In a Windows environment, if no proxy environment variables are set, proxy settings are obtained from the registry's Internet Settings section. . Isn't it enough? Or u can get something useful info from registry: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ProxyServer Edit: sorry for don't know how to format comment's source code, I repost it here. >>> import win32com.client >>> js = win32com.client.Dispatch('MSScriptControl.ScriptControl') >>> js.Language = 'JavaScript' >>> js.AddCode('function add(a, b) {return a+b;}') >>> js.Run('add', 1, 2) 3
Programmatically detect system-proxy settings on Windows XP with Python
I develop a critical application used by a multi-national company. Users in offices all around the globe need to be able to install this application. The application is actually a plugin to Excel and we have an automatic installer based on Setuptools' easy_install that ensures that all a project's dependencies are automatically installed or updated any time a user switches on their Excel. It all works very elegantly as users are seldom aware of all the installation which occurs entirely in the background. Unfortunately we are expanding and opening new offices which all have different proxy settings. These settings seem to change from day to day so we cannot keep up with the outsourced security guys who change stuff without telling us. It sucks but we just have to work around it. I want to programmatically detect the system-wide proxy settings on the Windows workstations our users run: Everybody in the organization runs Windows XP and Internet Explorer. I've verified that everybody can download our stuff from IE without problems regardless of where they are in the world. So all I need to do is detect what proxy settings IE is using and make Setuptools use those settings. Theoretically all of this information should be in the Registry.. but is there a better way to find it that is guaranteed not to change when people upgrade IE? For example is there a Windows API call I can use to discover the proxy settings? In summary: We use Python 2.4.4 on Windows XP We need to detect the Internet Explorer proxy settings (e.g. host, port and Proxy type) I'm going to use this information to dynamically re-configure easy_install so that it can download the egg files via the proxy. UPDATE0: I forgot one important detail: Each site has an auto-config "pac" file. There's a key in Windows\CurrentVersion\InternetSettings\AutoConfigURL which points to an HTTP document on a local server which contains what looks like a javascript file. The pac script is basically a series of nested if-statements which compare URLs against a regexp and then eventually return the hostname of the chosen proxy-server. The script is a single javascript function called FindProxyForURL(url, host) The challenge is therefore to find out for any given server which proxy to use. The only 100% guaranteed way to do this is to look up the pac file and call the Javascript function from Python. Any suggestions? Is there a more elegant way to do this?
[ "Here's a sample that should create a bullet green (proxy enable) or red (proxy disable) in your systray\nIt shows how to read and write in windows registry\nit uses gtk\n#!/usr/bin/env python\nimport gobject\nimport gtk\nfrom _winreg import *\n\nclass ProxyNotifier:\n def __init__(self): \n self.trayIcon = gtk.StatusIcon()\n self.updateIcon()\n\n #set callback on right click to on_right_click\n self.trayIcon.connect('popup-menu', self.on_right_click)\n gobject.timeout_add(1000, self.checkStatus)\n\n def isProxyEnabled(self):\n\n aReg = ConnectRegistry(None,HKEY_CURRENT_USER)\n\n aKey = OpenKey(aReg, r\"Software\\Microsoft\\Windows\\CurrentVersion\\Internet Settings\") \n subCount, valueCount, lastModified = QueryInfoKey(aKey)\n\n for i in range(valueCount): \n try:\n n,v,t = EnumValue(aKey,i)\n if n == 'ProxyEnable':\n return v and True or False\n except EnvironmentError: \n break\n CloseKey(aKey) \n\n def invertProxyEnableState(self):\n aReg = ConnectRegistry(None,HKEY_CURRENT_USER)\n aKey = OpenKey(aReg, r\"Software\\Microsoft\\Windows\\CurrentVersion\\Internet Settings\", 0, KEY_WRITE)\n if self.isProxyEnabled() : \n val = 0 \n else:\n val = 1\n try: \n SetValueEx(aKey,\"ProxyEnable\",0, REG_DWORD, val) \n except EnvironmentError: \n print \"Encountered problems writing into the Registry...\"\n CloseKey(aKey)\n\n def updateIcon(self):\n if self.isProxyEnabled():\n icon=gtk.STOCK_YES\n else:\n icon=gtk.STOCK_NO\n self.trayIcon.set_from_stock(icon)\n\n def checkStatus(self):\n self.updateIcon()\n return True\n\n\n def on_right_click(self, data, event_button, event_time):\n self.invertProxyEnableState()\n self.updateIcon()\n\n\nif __name__ == '__main__':\n proxyNotifier = ProxyNotifier()\n gtk.main()\n\n", "As far as I know, In a Windows environment, if no proxy environment variables are set, proxy settings are obtained from the registry's Internet Settings section. .\nIsn't it enough?\nOr u can get something useful info from registry:\nHKEY_CURRENT_USER\\Software\\Microsoft\\Windows\\CurrentVersion\\Internet Settings\\ProxyServer\nEdit:\nsorry for don't know how to format comment's source code, I repost it here.\n>>> import win32com.client\n>>> js = win32com.client.Dispatch('MSScriptControl.ScriptControl')\n>>> js.Language = 'JavaScript'\n>>> js.AddCode('function add(a, b) {return a+b;}')\n>>> js.Run('add', 1, 2)\n3\n\n" ]
[ 4, 3 ]
[]
[]
[ "networking", "proxy", "python", "setuptools", "windows" ]
stackoverflow_0001068212_networking_proxy_python_setuptools_windows.txt
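Combining the question's UPDATE with the ScriptControl snippet from the second answer, here is a rough, untested sketch that reads AutoConfigURL from the registry, downloads the pac script, and evaluates its FindProxyForURL. Treat it as guesswork rather than a verified recipe: real pac files often call helper functions (shExpMatch, isInNet, ...) that a bare Script Control does not define, so you may have to supply those yourself.

import urllib2
import win32com.client
from _winreg import ConnectRegistry, OpenKey, QueryValueEx, HKEY_CURRENT_USER

def proxy_for(url, host):
    reg = ConnectRegistry(None, HKEY_CURRENT_USER)
    key = OpenKey(reg,
        r"Software\Microsoft\Windows\CurrentVersion\Internet Settings")
    pac_url = QueryValueEx(key, "AutoConfigURL")[0]
    pac_source = urllib2.urlopen(pac_url).read()   # the JavaScript pac file

    js = win32com.client.Dispatch('MSScriptControl.ScriptControl')
    js.Language = 'JavaScript'
    js.AddCode(pac_source)                         # defines FindProxyForURL
    # returns e.g. "PROXY proxyhost:8080" or "DIRECT"
    return js.Run('FindProxyForURL', url, host)

The returned string still has to be parsed and fed to easy_install, e.g. by setting the http_proxy environment variable before it runs.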
Q: Has anyone tried NetBeans 6.5 Python IDE? Has anyone tried the NetBeans 6.5 Python IDE? What are your opinions? Is it better/worse than PyDev? Do you like it? How does it integrate with source control tools (especially Mercurial)? A: I will share some of the feelings from using it for quite a while now. Things that are roughly the same quality as in Eclipse+Pydev+mercurial: editor, code-completion debugger features Things that are better: autoimport color schemes (Norway today rocks) Mercurial support (though it is getting better and better in Eclipse) Things that are worse: zipped egg packages are not recognized for either code completion or the autoimport libdyn packages (e.g. datetime) are not recognized debugger is having trouble with multiprocessing package you cannot choose file from outside of the project (/usr/bin/paster) to be the main file (this is what I use to debug Pylons applications) Does anyone have something to add to the list? A: BraveSirFoobar, it would be nice to know more about what problems you found -- the very, very slow part, as well as the crash. The first time you run the IDE it will index information about your Python platform and project and libraries - such that it can do quick code completion, go to declaration etc. later - but once that's done it's not supposed to be slow - but there might be bugs. Mercurial should definitely be supported well, since the NetBeans project itself (and Solaris and Java) are all hosted in Mercurial repositories. We plan to have really deep support for Python, much in the style of our Ruby support. One of the things which really helped in our Ruby work was the feedback from our early adopters, so if you try Python and have issues with it, please let us know so we can fix it. (Feedback links here: http://wiki.netbeans.org/Python ) -- Tor A: Compared to pydev, I found it very, very slow, and it crashed (once) when I created a project from existing sources. It's still beta, though. Integration with SCMs will be as good as netbeans is already (I only tried subversion, which worked fine). Feature-wise it was about the same : refactor, debugging, code assist... I'll stick with pydev for the moment, which is IMHO a great tool. A: Sun use Mercurial internally now, so expect that their IDE support for it will be top notch. A: Having worked with PyDev and PyDev extension for Eclipse for the past few months, the move to NetBeans has been a very pleasurable one. Without having to hunt all the different plug-ins for PyDev and Eclipse, NetBeans had everything I needed out of the box: auto completion, super fast index search, style control import control, you name it. And it seemed LESS bug prone than Eclipse (which is pretty stable). Also, the built-in Vim like auto code snippets it uses are just fantastic. IMO, it beats Eclipse hands down. I'm hooked. A: I started using it a little while back and I like it. I usually develop in a simple editor (SciTE), NetBeans is nice to organize larger projects. wrote about it briefly here A: After looking at this, I decided to go ahead with PyDev than NetBeans. However best wishes to NetBeans team for a faster and better Python support. Cant wait for that :) A: How does it compare with PyDev Extensions? I've recently installed it and, to be honest, couldn't imagine myself going back to PyDev. NetBeans seems interesting though, if only I wasn't already hooked onto a couple of other Eclipse plug-ins as well.
Has anyone tried NetBeans 6.5 Python IDE?
Has anyone tried the NetBeans 6.5 Python IDE? What are your opinions? Is it better/worse than PyDev? Do you like it? How does it integrate with source control tools (especially Mercurial)?
[ "I will share some of the feelings from using it for quite a while now. Things that are roughly the same quality as in Eclipse+Pydev+mercurial:\n\neditor, code-completion\ndebugger features\n\nThings that are better:\n\nautoimport\ncolor schemes (Norway today rocks)\nMercurial support (though it is getting better and better in Eclipse)\n\nThings that are worse:\n\nzipped egg packages are not recognized for either code completion or the autoimport\nlibdyn packages (e.g. datetime) are not recognized\ndebugger is having trouble with multiprocessing package\nyou cannot choose file from outside of the project (/usr/bin/paster) to be the main file (this is what I use to debug Pylons applications)\n\nDoes anyone have something to add to the list?\n", "BraveSirFoobar, it would be nice to know more about what problems you found -- the very, very slow part, as well as the crash. The first time you run the IDE it will index information about your Python platform and project and libraries - such that it can do quick code completion, go to declaration etc. later - but once that's done it's not supposed to be slow - but there might be bugs.\nMercurial should definitely be supported well, since the NetBeans project itself (and Solaris and Java) are all hosted in Mercurial repositories.\nWe plan to have really deep support for Python, much in the style of our Ruby support. One of the things which really helped in our Ruby work was the feedback from our early adopters, so if you try Python and have issues with it, please let us know so we can fix it. (Feedback links here: http://wiki.netbeans.org/Python )\n-- Tor\n", "Compared to pydev, I found it very, very slow, and it crashed (once) when I created a project from existing sources. It's still beta, though.\nIntegration with SCMs will be as good as netbeans is already (I only tried subversion, which worked fine).\nFeature-wise it was about the same : refactor, debugging, code assist... I'll stick with pydev for the moment, which is IMHO a great tool.\n", "Sun use Mercurial internally now, so expect that their IDE support for it will be top notch.\n", "Having worked with PyDev and PyDev extension for Eclipse for the past few months, the move to NetBeans has been a very pleasurable one.\nWithout having to hunt all the different plug-ins for PyDev and Eclipse, NetBeans had everything I needed out of the box:\nauto completion, super fast index search, style control import control, you name it.\nAnd it seemed LESS bug prone than Eclipse (which is pretty stable).\nAlso, the built-in Vim like auto code snippets it uses are just fantastic.\nIMO, it beats Eclipse hands down.\nI'm hooked.\n", "I started using it a little while back and I like it. I usually develop in a simple editor (SciTE), NetBeans is nice to organize larger projects.\nwrote about it briefly here\n", "After looking at this, I decided to go ahead with PyDev than NetBeans.\nHowever best wishes to NetBeans team for a faster and better Python support. Cant wait for that :)\n", "How does it compare with PyDev Extensions? I've recently installed it and, to be honest, couldn't imagine myself going back to PyDev.\nNetBeans seems interesting though, if only I wasn't already hooked onto a couple of other Eclipse plug-ins as well.\n" ]
[ 5, 4, 2, 2, 2, 1, 0, 0 ]
[]
[]
[ "ide", "netbeans", "python" ]
stackoverflow_0000371037_ide_netbeans_python.txt
Q: Multiprocessing in python with more than 2 levels I want to write a program that spawns processes like this: process -> n processes -> n processes. Can the second level spawn processes with multiprocessing? I am using the multiprocessing module of Python 2.6. Thanks. A: @vilalian's answer is correct, but terse. Of course, it's hard to supply more information when your original question was vague. To expand a little, you'd have your original program spawn its n processes, but they'd be slightly different than the original in that you'd want them (each, if I understand your question) to spawn n more processes. You could accomplish this either by having them run code similar to your original process, but that spawned new sets of programs that performed the task at hand, without further processing, or you could use the same code/entry point, just providing different arguments - something like def main(level): if level == 0: do_work else: for i in range(n): spawn_process_that_runs_main(level-1) and start it off with level == 2 A: You can structure your app as a series of process pools communicating via Queues at any nested depth, though it can get hairy pretty quick (probably due to the required context switching). It's not Erlang, though, that's for sure. The docs on multiprocessing are extremely useful. Here (a little too much to drop in a comment) is some code I use to increase throughput in a program that updates my feeds. I have one process polling for feeds that need to be fetched; it stuffs its results in a queue, a process pool of 4 workers picks up those results and fetches the feeds, and their results (if any) are then put in a queue for a process pool to parse and put into a queue to shove back in the database. Done sequentially, this process would be really slow, because some sites take their own sweet time to respond, so most of the time the process was waiting on data from the internet and would only use one core. Under this process-based model, I'm actually waiting on the database the most, it seems, and my NIC is saturated most of the time, as well as all 4 cores actually doing something. Your mileage may vary. A: Yes - but you might run into an issue which would require the fix I committed to python trunk yesterday. See bug http://bugs.python.org/issue5313 A: Sure you can. Especially if you are using fork to spawn child processes, they work as perfectly normal processes (like the parent). Thread management is quite different, but you can also use "second level" sub-threading. Pay attention not to over-complicate your program; for example, programs with two levels of threads are normally unnecessary.
Multiprocessing in python with more than 2 levels
I want to write a program that spawns processes like this: process -> n processes -> n processes. Can the second level spawn processes with multiprocessing? I am using the multiprocessing module of Python 2.6. Thanks.
[ "@vilalian's answer is correct, but terse. Of course, it's hard to supply more information when your original question was vague.\nTo expand a little, you'd have your original program spawn its n processes, but they'd be slightly different than the original in that you'd want them (each, if I understand your question) to spawn n more processes. You could accomplish this by either by having them run code similar to your original process, but that spawned new sets of programs that performed the task at hand, without further processing, or you could use the same code/entry point, just providing different arguments - something like\ndef main(level):\n if level == 0:\n do_work\n else:\n for i in range(n):\n spawn_process_that_runs_main(level-1)\n\nand start it off with level == 2\n", "You can structure your app as a series of process pools communicating via Queues at any nested depth. Though it can get hairy pretty quick (probably due to the required context switching).\nIt's not erlang though that's for sure.\nThe docs on multiprocessing are extremely useful.\nHere(little too much to drop in a comment) is some code I use to increase throughput in a program that updates my feeds. I have one process polling for feeds that need to fetched, that stuffs it's results in a queue that a Process Pool of 4 workers picks up those results and fetches the feeds, it's results(if any) are then put in a queue for a Process Pool to parse and put into a queue to shove back in the database. Done sequentially, this process would be really slow due to some sites taking their own sweet time to respond so most of the time the process was waiting on data from the internet and would only use one core. Under this process based model, I'm actually waiting on the database the most it seems and my NIC is saturated most of the time as well as all 4 cores are actually doing something. Your mileage may vary.\n", "Yes - but, you might run into an issue which would require the fix I committed to python trunk yesterday. See bug http://bugs.python.org/issue5313\n", "Sure you can. Expecially if you are using fork to spawn child processes, they works as perfectly normal processes (like the father). Thread management is quite different, but you can also use \"second level\" sub-treading. \nPay attention to not over-complicate your program, as example program with two level threads are normally unused.\n" ]
[ 3, 1, 1, 0 ]
[]
[]
[ "multiprocessing", "python" ]
stackoverflow_0001066710_multiprocessing_python.txt
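A minimal runnable sketch of the nested-spawn idea from the thread above; the worker/middle names and n = 2 are illustrative assumptions, not from the original posts:

import multiprocessing

def worker(ident):
    # second-level (leaf) work happens here
    print "leaf process", ident

def middle(ident, n):
    # each first-level process spawns n second-level processes
    children = [multiprocessing.Process(target=worker, args=((ident, i),))
                for i in range(n)]
    for c in children:
        c.start()
    for c in children:
        c.join()

if __name__ == "__main__":
    n = 2
    procs = [multiprocessing.Process(target=middle, args=(i, n)) for i in range(n)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

On Python 2.6 this prints four "leaf process" lines, confirming that second-level processes spawned with multiprocessing behave like ordinary processes.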
Q: Making a wxPython application multilingual I have an application written in wxPython which I want to make multilingual. Our options are: using gettext http://docs.python.org/library/gettext.html or separating out all UI text into a messages.py file and using it to translate text. I am very much inclined towards the 2nd and see no benefit in going the gettext way. Using the 2nd way I can have all my messages in one place, not in code, so if I need to change a message, the code need not be changed. In the case of gettext I may have confusing msg-constants, as I will just be wrapping the original msg instead of converting it to a constant in messages.py. Basically, instead of wx.MessageBox(_("Hi stackoverflow!")) I think wx.MessageBox(messages.GREET_SO) is better. So is there any advantage to the gettext way and a disadvantage to the 2nd way? And is there a 3rd way? edit: also, gettext language files seem to be too tied to code, and what happens if I want two messages that are the same in English but different in French? E.g. suppose French has a more subtle translation for different scenarios where the English one is OK. experience: I have already gone the 2nd way, and I must say every application should try to extract UI text from code; it gives a chance to refactor, to see where UI is creeping into the model and where UI text can be improved. gettext in comparison is mechanical, gives no input to the coder, and I think would be more difficult to maintain. And creating a name for a text, e.g. PRINT_PROGRESS_MSG, gives a chance to see that in many places the same msg is being used slightly differently and can be merged into a single name, which later on will help when I need to change the msg only once. Conclusion: I am still not sure of any advantage to using gettext and am using my own messages file, but I have selected the answer which at least explained a few points why gettext can be beneficial. The final solution IMO is one which takes the best from both ways, i.e. my own message identifier wrapped by gettext, e.g. wx.MessageBox(_("GREET_SO")) A: There are some advantages of gettext: One of the biggest advantages is: when using poedit to do the translations you can benefit from the translation database. Basically poedit can scan your hard disk, find already-translated files and make suggestions when you translate your file. When you give the code to other people to translate, they might already know the gettext way of translating, while you would have to explain your own way of translating to them. You have the text in the context of the code, so it should be easier to translate when you see the code around the translation. Consider text like: print _('%d files of %d files selected') % (num, numTotal) and even more complicated situations. Here it really helps having the code around ... A: Gettext is the way to go; in your example you can also use gettext to "avoid storing the translations in the code": wx.MessageBox(messages.GREET_SO) might be the same with gettext as: wx.MessageBox(_("GREET_SO")) or wx.MessageBox(_("messages.GREET_SO")) Gettext is pretty much the standard for multilingual applications, and I'm pretty sure you'll benefit from using it in the future. For example, you can use Poedit (or another similar app) to assign translations to your collaborators or contributors and later on flag one or several messages as not properly translated. Also, if there are missing / extra entries poedit will warn you. Don't fool yourself, gettext is the only proven reliable way to maintain translations.
Making a wxPython application multilingual
I have an application written in wxPython which I want to make multilingual. Our options are: using gettext http://docs.python.org/library/gettext.html or separating out all UI text into a messages.py file and using it to translate text. I am very much inclined towards the 2nd and see no benefit in going the gettext way. Using the 2nd way I can have all my messages in one place, not in code, so if I need to change a message, the code need not be changed. In the case of gettext I may have confusing msg-constants, as I will just be wrapping the original msg instead of converting it to a constant in messages.py. Basically, instead of wx.MessageBox(_("Hi stackoverflow!")) I think wx.MessageBox(messages.GREET_SO) is better. So is there any advantage to the gettext way and a disadvantage to the 2nd way? And is there a 3rd way? edit: also, gettext language files seem to be too tied to code, and what happens if I want two messages that are the same in English but different in French? E.g. suppose French has a more subtle translation for different scenarios where the English one is OK. experience: I have already gone the 2nd way, and I must say every application should try to extract UI text from code; it gives a chance to refactor, to see where UI is creeping into the model and where UI text can be improved. gettext in comparison is mechanical, gives no input to the coder, and I think would be more difficult to maintain. And creating a name for a text, e.g. PRINT_PROGRESS_MSG, gives a chance to see that in many places the same msg is being used slightly differently and can be merged into a single name, which later on will help when I need to change the msg only once. Conclusion: I am still not sure of any advantage to using gettext and am using my own messages file, but I have selected the answer which at least explained a few points why gettext can be beneficial. The final solution IMO is one which takes the best from both ways, i.e. my own message identifier wrapped by gettext, e.g. wx.MessageBox(_("GREET_SO"))
[ "There are some advantages of gettext:\n\nOne of the biggest advantages is: when using poedit to do the translations you can benefit from the translation database. Basically poedit ca scan your harddisk and find already translated files and will make suggestions when you translate your file.\nWhen you give the code to other people to translate they might already know the gettext way of translating, while you have to explain them your way of translating.\nYou have the text in the context of the code, so it should be easier to translate, when you see the code around the translation\nConsider text like: print _('%d files of %d files selected') % (num, numTotal) and even more complicated situations. Here it really helps having the code around ...\n\n", "Gettext is the way to go, in your example you can also use gettext to \"avoid storing the translations in the code\":\nwx.MessageBox(messages.GREET_SO)\n\nmight be the same with gettext as:\nwx.MessageBox(_(\"GREET_SO\")) or wx.MessageBox(_(\"messages.GREET_SO\"))\n\nGettext is pretty much the standard for multilingual applications, and I'm pretty sure you'll benefit from using it in the future. Example, you can use Poedit (or other similar app) to assign translations to your collaborators or contributors and later on flag one or several messages as not properly translated. Also if there are missing / extra entries poedit will warn you. Don't fool yourself, gettext is the only proven reliable way to maintain translations.\n" ]
[ 1, 1 ]
[ "For the web (this is PHP but the idea's the same), I always create multiple language files in a specific directory. en.php, fr.php, et cetera. Those files contain definitions of all output text, in the given language. The user preference for language determines which of those files get included, thus, which language the output appears in. For example...\nin en.php:\n TEXT_I_AM = \"I am\"\nin fr.php:\n TEXT_I_AM = \"Je suis\"\n" ]
[ -1 ]
[ "gettext", "internationalization", "multilingual", "python" ]
stackoverflow_0001043708_gettext_internationalization_multilingual_python.txt
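A minimal sketch of the hybrid approach from the question's conclusion - message identifiers wrapped by gettext. The "myapp" domain and "locale" directory are assumptions; point them at your own compiled .mo files:

import gettext

lang = gettext.translation("myapp", localedir="locale",
                           languages=["fr"], fallback=True)
_ = lang.ugettext  # Python 2; use lang.gettext on Python 3

# With fallback=True an untranslated identifier comes back unchanged,
# so the application still shows something sensible.
print _("GREET_SO")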
Q: Open-Source Forum with API Does anyone have suggestions for a PHP, Python, or J2EE-based web forum that has a good API for programmatically creating users and forum topics? A: phpBB would be the first that comes to mind as open-source, simply because it's free. In reality almost all forum platforms have some sort of 'API' in that you can do whatever you need programmatically; it just may not be as simple as 'add_user(bob)'. A few lines of code and a SQL query or two and you can usually achieve everything you need. Out of personal preference I would recommend vBulletin, however it does have a fee. The benefit of this is that it has a very strong modding community that has probably already accomplished everything you need at http://vbulletin.org.
Open-Source Forum with API
Does anyone have suggestions for a PHP, Python, or J2EE-based web forum that has a good API for programmatically creating users and forum topics?
[ "phpBB would be the first that comes to mind as open-source, simply because it's free. \nIn reality almost all forum platforms have some sort of 'api' in that you can do whatever you need programatically, it just may not be as simple as 'add_user(bob)'. A few lines of code and a SQL query or two and you can usually achieve everything you need.\nOut of personal preference I would recommend vBulletin, however it does have a fee. The benefit of this is that it has a very strong modding community that have probably already accomplished everything you need at http://vbulletin.org.\n" ]
[ 4 ]
[]
[]
[ "forum", "php", "python", "web" ]
stackoverflow_0001069246_forum_php_python_web.txt
Q: How should I set up the Wing IDE for use with IronPython Here is a screen where I should point the Wing IDE to my python files. I am using IronPython. Am I assuming correctly that textbox one gets filled with ipy.exe? (proper path provided) What should be in the rest of the boxes? A: I do not know about your question in particular; however, a few weeks ago Michael Foord published a guide for using Wing IDE with IronPython. You can find it here: http://www.voidspace.org.uk/ironpython/wing-how-to.shtml A: Wing IDE at the moment doesn't allow debug mode with IronPython. You need to link the IDE to the CPython install (Michael Foord's words in the article http://www.voidspace.org.uk/ironpython/wing-how-to.shtml). Wing IDE wouldn't run the shell while pointing at Python 3.1. I am unsure whether that is something I have done wrong or the incompatibilities aren't sorted out yet. I have installed the CPython implementation from the Python download site. I have set the Python executable path to the python.exe that gets installed in your Python install directory. I have modified the environment variables and added the Python directory to the PATH variable.
How should I set up the Wing IDE for use with IronPython
Here is a screen where I should point the Wing IDE to my python files. I am using IronPython. Am I assuming correctly that textbox one gets filled with ipy.exe? (proper path provided) What should be in the rest of the boxes?
[ "I do not know about your question in particular; however few weeks ago, Michael Foord published a guide for using WingIde with IronPython.\nYou can find it here: http://www.voidspace.org.uk/ironpython/wing-how-to.shtml\n", "\nWing IDE at the moment doesn't allow the debug mode with IronPython. You need to link the IDE to the CPython install. (Michael Foord words in the article http://www.voidspace.org.uk/ironpython/wing-how-to.shtml]1).\nWing IDE wouldn't run the shell while pointing at Python 3.1. I am unsure whether that is something I have done wrong or that the incompabilities aren't sorted out yet.\n\nI have installed CPython implementation from Python download site.Python download\nI have set the Python executable path to the python.exe that gets installed to your Python Install directory. \nI have modified the enviroment variables and added the Python directory to the PATH variable.\n" ]
[ 2, 0 ]
[]
[]
[ "ironpython", "python", "wing_ide" ]
stackoverflow_0001038695_ironpython_python_wing_ide.txt
Q: how to integrate ZSH and (i)python? I have been in love with zsh for a long time, and more recently I have been discovering the advantages of the ipython interactive interpreter over python itself. Being able to cd, to ls, to run or to ! is indeed very handy. But now it feels weird to have such a clumsy shell when in ipython, and I wonder how I could integrate my zsh and my ipython better. Of course, I could rewrite my .zshrc and all my scripts in python, and emulate most of my shell world from ipython, but it doesn't feel right. And I am obviously not ready to use ipython as a main shell anyway. So, here comes my question: how do you work efficiently between your shell and your python command-loop ? Am I missing some obvious integration strategy ? Should I do all that in emacs ? A: I asked this question on the zsh list and this answer worked for me. YMMV. In genutils.py after the line if not debug: Remove the line: stat = os.system(cmd) Replace it with: stat = subprocess.call(cmd,shell=True,executable='/bin/zsh') you see, the problem is that that "!" call uses os.system to run it, which defaults to manky old /bin/sh . Like I said, it worked for me, although I'm not sure what got borked behind the scenes. A: You can run shell commands by starting them with an exclamation mark and capture the output in a python variable. Example: listing directories in your /tmp directory: ipy> import os ipy> tmplist = !find /tmp ipy> [dir for dir in tmplist if os.path.isdir(dir)] The list object is a special ipython object with several useful methods. Example: listing files ending with .pdf ipy> tmplist.grep(lambda a: a.endswith('.pdf')) # using a lambda ipy> tmplist.grep('\.pdf$') # using a regexp There is a lot of things you can do by reading the list of magic commands: ipy> %magic See the shell section of the Ipython documentation.
how to integrate ZSH and (i)python?
I have been in love with zsh for a long time, and more recently I have been discovering the advantages of the ipython interactive interpreter over python itself. Being able to cd, to ls, to run or to ! is indeed very handy. But now it feels weird to have such a clumsy shell when in ipython, and I wonder how I could integrate my zsh and my ipython better. Of course, I could rewrite my .zshrc and all my scripts in python, and emulate most of my shell world from ipython, but it doesn't feel right. And I am obviously not ready to use ipython as a main shell anyway. So, here comes my question: how do you work efficiently between your shell and your python command-loop ? Am I missing some obvious integration strategy ? Should I do all that in emacs ?
[ "I asked this question on the zsh list and this answer worked for me. YMMV.\nIn genutils.py after the line \n\nif not debug:\n\nRemove the line:\n\nstat = os.system(cmd)\n\nReplace it with:\n\nstat =\n subprocess.call(cmd,shell=True,executable='/bin/zsh')\n\nyou see, the problem is that that \"!\" call uses os.system to run it, which defaults to manky old /bin/sh .\nLike I said, it worked for me, although I'm not sure what got borked behind the scenes.\n", "You can run shell commands by starting them with an exclamation mark and capture the output in a python variable. Example: listing directories in your /tmp directory:\nipy> import os\nipy> tmplist = !find /tmp\nipy> [dir for dir in tmplist if os.path.isdir(dir)]\n\nThe list object is a special ipython object with several useful methods. Example: listing files ending with .pdf\nipy> tmplist.grep(lambda a: a.endswith('.pdf')) # using a lambda\nipy> tmplist.grep('\\.pdf$') # using a regexp\n\nThere is a lot of things you can do by reading the list of magic commands:\nipy> %magic\n\nSee the shell section of the Ipython documentation.\n" ]
[ 12, 7 ]
[]
[]
[ "ipython", "python", "shell", "zsh" ]
stackoverflow_0000973520_ipython_python_shell_zsh.txt
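A self-contained sketch of the patched call from the first answer - routing a command through zsh instead of the default /bin/sh. The command itself is only an example (print -l is a zsh builtin):

import subprocess

# shell=True normally means /bin/sh; executable overrides that
stat = subprocess.call("print -l *.py", shell=True, executable="/bin/zsh")
print "exit status:", stat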
Q: Python-Hotshot error trying to profile a simple program I was trying to learn how to profile a simple python program using hotshot, but am facing a weird error, import sys import hotshot def main(argv): for i in range(1,1000): print i if __name__ == "__main__": prof = hotshot.Profile("hotshot_edi_stats") b,c = prof.runcall(main(sys.argv)) prof.close() and the output, . . 995 996 997 998 999 Traceback (most recent call last): File "t.py", line 9, in <module> b, c = prof.runcall(main(sys.argv)) File "/usr/lib/python2.5/hotshot/__init__.py", line 76, in runcall return self._prof.runcall(func, args, kw) TypeError: 'NoneType' object is not callable Would anyone know why this happens? It looks to me like a problem with the hotshot profiler itself. Alternatively, do people have suggestions on other methods to profile python programs? Thanks! A: And I think I've figured out something I missed for over 2 hours.. Turns out, runcall() should be called as, runcall(main, self.argv) and this makes things work! A: In general, if you have a way to randomly pause or interrupt the program and see the call stack, this method always works.
Python-Hotshot error trying to profile a simple program
I was trying to learn how to profile a simple python program using hotshot, but am facing a weird error, import sys import hotshot def main(argv): for i in range(1,1000): print i if __name__ == "__main__": prof = hotshot.Profile("hotshot_edi_stats") b,c = prof.runcall(main(sys.argv)) prof.close() and the output, . . 995 996 997 998 999 Traceback (most recent call last): File "t.py", line 9, in <module> b, c = prof.runcall(main(sys.argv)) File "/usr/lib/python2.5/hotshot/__init__.py", line 76, in runcall return self._prof.runcall(func, args, kw) TypeError: 'NoneType' object is not callable Would anyone know why this happens? It looks to me like a problem with the hotshot profiler itself. Alternatively, do people have suggestions on other methods to profile python programs? Thanks!
[ "And I think I've figured out something I missed for over 2 hours.. \nTurns out, runcall() should be called as,\nruncall(main, self.argv)\n\nand this makes things work!\n", "In general, if you have a way to randomly pause or interrupt the program and see the call stack, this method always works.\n" ]
[ 3, 1 ]
[]
[]
[ "profiler", "profiling", "python" ]
stackoverflow_0001061361_profiler_profiling_python.txt
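Putting the accepted answer's fix back into the question's script - runcall takes the function and its arguments separately, and main returns nothing, so there is no pair to unpack:

import sys
import hotshot

def main(argv):
    for i in range(1, 1000):
        print i

if __name__ == "__main__":
    prof = hotshot.Profile("hotshot_edi_stats")
    prof.runcall(main, sys.argv)   # not prof.runcall(main(sys.argv))
    prof.close()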
Q: How To: View MFC Doc File in Python I want to use Python to access MFC document files generically. Can CArchive be used to query a file and view the structure, or does Python, in opening the document, need to know more about the document structure in order to view the contents? A: I think that the Python code needs to know the document structure. Maybe you should make a python wrapper of your c++ code. In this case, I would recommend using pycpp (http://sourceforge.net/projects/pycpp/), which is in my opinion a great library for making python extensions in c++.
How To: View MFC Doc File in Python
I want to use Python to access MFC document files generically. Can CArchive be used to query a file and view the structure, or does Python, in opening the document, need to know more about the document structure in order to view the contents?
[ "I think that the Python code needs to know the document structure. \nMaybe you should make a python wrapper of your c++ code. \nIn this case, I would recommend to use http://sourceforge.net/projects/pycpp/>pycpp which is my opinion a great library for making python extensions in c++.\n" ]
[ 0 ]
[]
[]
[ "file", "mfc", "python", "windows" ]
stackoverflow_0001070932_file_mfc_python_windows.txt
Q: Equivalent for inject() in Python? In Ruby, I'm used to using Enumerable#inject for going through a list or other structure and coming back with some conclusion about it. For example, [1,3,5,7].inject(true) {|allOdd, n| allOdd && n % 2 == 1} to determine if every element in the array is odd. What would be the appropriate way to accomplish the same thing in Python? A: To determine if every element is odd, I'd use all() def is_odd(x): return x%2==1 result = all(is_odd(x) for x in [1,3,5,7]) In general, however, Ruby's inject is most like Python's reduce(): result = reduce(lambda x,y: x and y%2==1, [1,3,5,7], True) all() is preferred in this case because it will be able to escape the loop once it finds a False-like value, whereas the reduce solution would have to process the entire list to return an answer. A: Sounds like reduce in Python or fold(r|l)'?' from Haskell. reduce(lambda x, y: x and y % 2 == 1, [1, 3, 5], True) A: I think you probably want to use all, which is less general than inject. reduce is the Python equivalent of inject, though. all(n % 2 == 1 for n in [1, 3, 5, 7])
Equivalent for inject() in Python?
In Ruby, I'm used to using Enumerable#inject for going through a list or other structure and coming back with some conclusion about it. For example, [1,3,5,7].inject(true) {|allOdd, n| allOdd && n % 2 == 1} to determine if every element in the array is odd. What would be the appropriate way to accomplish the same thing in Python?
[ "To determine if every element is odd, I'd use all()\ndef is_odd(x): \n return x%2==1\n\nresult = all(is_odd(x) for x in [1,3,5,7])\n\nIn general, however, Ruby's inject is most like Python's reduce():\nresult = reduce(lambda x,y: x and y%2==1, [1,3,5,7], True)\n\nall() is preferred in this case because it will be able to escape the loop once it finds a False-like value, whereas the reduce solution would have to process the entire list to return an answer.\n", "Sounds like reduce in Python or fold(r|l)'?' from Haskell.\nreduce(lambda x, y: x and y % == 1, [1, 3, 5])\n\n", "I think you probably want to use all, which is less general than inject. reduce is the Python equivalent of inject, though.\nall(n % 2 == 1 for n in [1, 3, 5, 7])\n\n" ]
[ 25, 8, 4 ]
[]
[]
[ "functional_programming", "python" ]
stackoverflow_0001070926_functional_programming_python.txt
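The two spellings from the answers above, side by side; the semantics agree, but all() can stop at the first even number while reduce() always walks the whole list:

nums = [1, 3, 5, 7]
print all(n % 2 == 1 for n in nums)                          # True
print reduce(lambda acc, n: acc and n % 2 == 1, nums, True)  # True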
Q: Why does list comprehension using a zip object result in an empty list? f = lambda x : 2*x g = lambda x : x ** 2 h = lambda x : x ** x funcTriple = ( f, g, h ) myZip = ( zip ( funcTriple, (1, 3, 5) ) ) k = lambda pair : pair[0](pair[1]) # Why do Output # 1 (2, 9, 3125) and Output # 2 ( [ ] ) differ? print ("\n\nOutput # 1: for pair in myZip: k(pair) ...") for pair in myZip : print ( k(pair) ) print ("\n\nOutput # 2: [ k(pair) for pair in myZip ] ...") print ( [ k(pair) for pair in myZip ] ) # script output is ... # Output # 1: for pair in myZip: k(pair) ... # 2 # 9 # 3125 # # Output # 2: [ k(pair) for pair in myZip ] ... # [] A: Works perfectly in Python 2.6 but fails in Python 3.0 because zip returns a generator-style object and the first loop exhausts it. Make a list instead: myZip = list( zip ( funcTriple, (1, 3, 5) ) ) and it works in Python 3.0
Why does list comprehension using a zip object result in an empty list?
f = lambda x : 2*x g = lambda x : x ** 2 h = lambda x : x ** x funcTriple = ( f, g, h ) myZip = ( zip ( funcTriple, (1, 3, 5) ) ) k = lambda pair : pair[0](pair[1]) # Why do Output # 1 (2, 9, 3125) and Output # 2 ( [ ] ) differ? print ("\n\nOutput # 1: for pair in myZip: k(pair) ...") for pair in myZip : print ( k(pair) ) print ("\n\nOutput # 2: [ k(pair) for pair in myZip ] ...") print ( [ k(pair) for pair in myZip ] ) # script output is ... # Output # 1: for pair in myZip: k(pair) ... # 2 # 9 # 3125 # # Output # 2: [ k(pair) for pair in myZip ] ... # []
[ "Works perfectly in Python 2.6 but fails in Python 3.0 because zip returns a generator-style object and the first loop exhausts it. Make a list instead:\nmyZip = list( zip ( funcTriple, (1, 3, 5) ) )\n\nand it works in Python 3.0\n" ]
[ 18 ]
[]
[]
[ "list_comprehension", "python", "zip" ]
stackoverflow_0001071201_list_comprehension_python_zip.txt
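A small demonstration of the exhaustion the answer describes, for Python 3.0: a zip object is an iterator, so it is empty after one full pass, while the list() copy can be traversed repeatedly:

pairs = zip([1, 2], ["a", "b"])
print ( list(pairs) )   # [(1, 'a'), (2, 'b')]
print ( list(pairs) )   # [] -- the iterator is already exhausted

pairs = list(zip([1, 2], ["a", "b"]))
print ( pairs )         # usable as many times as needed
print ( pairs )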
Q: Detect URLs in a string and wrap with "<a href..." tag I am looking to write something that seems like it should be easy enough, but for whatever reason I'm having a tough time getting my head around it. I am looking to write a python function that, when passed a string, will pass that string back with HTML encoding around URLs. unencoded_string = "This is a link - http://google.com" def encode_string_with_links(unencoded_string): # some sort of regex magic occurs return encoded_string print encoded_string 'This is a link - <a href="http://google.com">http://google.com</a>' Thank you! A: Googled solutions: #---------- find_urls.py----------# # Functions to identify and extract URLs and email addresses import re def fix_urls(text): pat_url = re.compile( r''' (?x)( # verbose identify URLs within text (http|ftp|gopher) # make sure we find a resource type :// # ...needs to be followed by colon-slash-slash (\w+[:.]?){2,} # at least two domain groups, e.g. (gnosis.)(cx) (/?| # could be just the domain name (maybe w/ slash) [^ \n\r"]+ # or stuff then space, newline, tab, quote [\w/]) # resource name ends in alphanumeric or slash (?=[\s\.,>)'"\]]) # assert: followed by white or clause ending ) # end of match group ''') pat_email = re.compile(r''' (?xm) # verbose identify URLs in text (and multiline) (?=^.{11} # Mail header matcher (?<!Message-ID:| # rule out Message-ID's as best possible In-Reply-To)) # ...and also In-Reply-To (.*?)( # must grab to email to allow prior lookbehind ([A-Za-z0-9-]+\.)? # maybe an initial part: [email protected] [A-Za-z0-9-]+ # definitely some local user: [email protected] @ # ...needs an at sign in the middle (\w+\.?){2,} # at least two domain groups, e.g. (gnosis.)(cx) (?=[\s\.,>)'"\]]) # assert: followed by white or clause ending ) # end of match group ''') for url in re.findall(pat_url, text): text = text.replace(url[0], '<a href="%(url)s">%(url)s</a>' % {"url" : url[0]}) for email in re.findall(pat_email, text): text = text.replace(email[1], '<a href="mailto:%(email)s">%(email)s</a>' % {"email" : email[1]}) return text if __name__ == '__main__': print fix_urls("test http://google.com asdasdasd some more text") EDIT: Adjusted to your needs A: The "regex magic" you need is just sub (which does a substitution): def encode_string_with_links(unencoded_string): return URL_REGEX.sub(r'<a href="\1">\1</a>', unencoded_string) URL_REGEX could be something like: URL_REGEX = re.compile(r'''((?:mailto:|ftp://|http://)[^ <>'"{}|\\^`[\]]*)''') This is a pretty loose regex for URLs: it allows mailto, http and ftp schemes, and after that pretty much just keeps going until it runs into an "unsafe" character (except percent, which you want to allow for escapes). You could make it more strict if you need to. For example, you could require that percents are followed by a valid hex escape, or only allow one pound sign (for the fragment) or enforce the order between query parameters and fragments. This should be enough to get you started, though.
Detect URLs in a string and wrap with "<a href..." tag
I am looking to write something that seems like it should be easy enough, but for whatever reason I'm having a tough time getting my head around it. I am looking to write a python function that, when passed a string, will pass that string back with HTML encoding around URLs. unencoded_string = "This is a link - http://google.com" def encode_string_with_links(unencoded_string): # some sort of regex magic occurs return encoded_string print encoded_string 'This is a link - <a href="http://google.com">http://google.com</a>' Thank you!
[ "Googled solutions:\n#---------- find_urls.py----------#\n# Functions to identify and extract URLs and email addresses\n\nimport re\n\ndef fix_urls(text):\n pat_url = re.compile( r'''\n (?x)( # verbose identify URLs within text\n (http|ftp|gopher) # make sure we find a resource type\n :// # ...needs to be followed by colon-slash-slash\n (\\w+[:.]?){2,} # at least two domain groups, e.g. (gnosis.)(cx)\n (/?| # could be just the domain name (maybe w/ slash)\n [^ \\n\\r\"]+ # or stuff then space, newline, tab, quote\n [\\w/]) # resource name ends in alphanumeric or slash\n (?=[\\s\\.,>)'\"\\]]) # assert: followed by white or clause ending\n ) # end of match group\n ''')\n pat_email = re.compile(r'''\n (?xm) # verbose identify URLs in text (and multiline)\n (?=^.{11} # Mail header matcher\n (?<!Message-ID:| # rule out Message-ID's as best possible\n In-Reply-To)) # ...and also In-Reply-To\n (.*?)( # must grab to email to allow prior lookbehind\n ([A-Za-z0-9-]+\\.)? # maybe an initial part: [email protected]\n [A-Za-z0-9-]+ # definitely some local user: [email protected]\n @ # ...needs an at sign in the middle\n (\\w+\\.?){2,} # at least two domain groups, e.g. (gnosis.)(cx)\n (?=[\\s\\.,>)'\"\\]]) # assert: followed by white or clause ending\n ) # end of match group\n ''')\n\n for url in re.findall(pat_url, text):\n text = text.replace(url[0], '<a href=\"%(url)s\">%(url)s</a>' % {\"url\" : url[0]})\n\n for email in re.findall(pat_email, text):\n text = text.replace(email[1], '<a href=\"mailto:%(email)s\">%(email)s</a>' % {\"email\" : email[1]})\n\n return text\n\nif __name__ == '__main__':\n print fix_urls(\"test http://google.com asdasdasd some more text\")\n\nEDIT: Adjusted to your needs\n", "The \"regex magic\" you need is just sub (which does a substitution):\ndef encode_string_with_links(unencoded_string):\n return URL_REGEX.sub(r'<a href=\"\\1\">\\1</a>', unencoded_string)\n\nURL_REGEX could be something like:\nURL_REGEX = re.compile(r'''((?:mailto:|ftp://|http://)[^ <>'\"{}|\\\\^`[\\]]*)''')\n\nThis is a pretty loose regex for URLs: it allows mailto, http and ftp schemes, and after that pretty much just keeps going until it runs into an \"unsafe\" character (except percent, which you want to allow for escapes). You could make it more strict if you need to. For example, you could require that percents are followed by a valid hex escape, or only allow one pound sign (for the fragment) or enforce the order between query parameters and fragments. This should be enough to get you started, though.\n" ]
[ 11, 11 ]
[]
[]
[ "html", "python", "regex" ]
stackoverflow_0001071191_html_python_regex.txt
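A quick check of the sub()-based answer above, using its URL_REGEX unchanged; the example string is the one from the question:

import re

URL_REGEX = re.compile(r'''((?:mailto:|ftp://|http://)[^ <>'"{}|\\^`[\]]*)''')

def encode_string_with_links(unencoded_string):
    return URL_REGEX.sub(r'<a href="\1">\1</a>', unencoded_string)

print encode_string_with_links("This is a link - http://google.com")
# This is a link - <a href="http://google.com">http://google.com</a>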
Q: Where can I get free GSM libraries/components for Delphi or Python? Where can I get good free GSM libraries for Delphi or Python? Libraries I can use to send and receive SMS messages in my application? Gath A: For free and open source, there is AsyncPro. Not free, but with active development, there is nrComm Lib. Another solution is to use an SMS gateway, such as ClickAtell; with that solution you can send SMS using a simple POST command to the gateway URL or web services. A: Get it here - completely free (previously commercial components): A: Another SMS gateway with a Python interface is TextMagic. Read my response to a similar question here A: A PyPi search turns up several promising Python SMS libraries. Some of them talk to a GSM modem, others work through web SMS gateways, and there's even one to interface with Apple's Sudden Motion Sensor. A: I've always enjoyed this application with my Sony-Ericsson devices:
Where can I get free GSM libraries/components for Delphi or Python?
Where can I get good free GSM libraries for Delphi or Python? Libraries I can use to send and receive SMS messages in my application? Gath
[ "For free and open source AsyncPro>\nNot free but the components has active development nrComm Lib\nAnother solution to use SMS gateway, such as ClickAtell, with solution you can send sms using a simple post command to the gateway url or webservices.\n", "Get it here - completely free (previously commercial components):\n", "Another SMS gateway with a Python interface is TextMagic.\nRead my response to a similar question here\n", "A PyPi search turns up several promising Python SMS libraries. Some of them talk to a GSM modem, others work through web SMS gateways, and there's even one to interface with Apple's Sudden Motion Sensor.\n", "I've always enjoyed this application with my Sony-Ericsson devices:\n" ]
[ 2, 1, 1, 0, 0 ]
[]
[]
[ "delphi", "gsm", "python" ]
stackoverflow_0000657100_delphi_gsm_python.txt
Q: Some help understanding async USB operations with libusb-1.0 and ctypes Alright. I figured it out. transfer.flags needed to be a byte instead of an int. Silly me. Now I'm getting an error code from ioctl, errno 16, which I think means the device is busy. What a workaholic. I've asked on the libusb mailing list. Below is what I have so far. This isn't really that much code. Most of it is ctypes structures for libusb. Scroll down to the bottom to see the actual code where the error occurs. from ctypes import * VENDOR_ID = 0x04d8 PRODUCT_ID = 0xc002 _USBLCD_MAX_DATA_LEN = 24 LIBUSB_ENDPOINT_IN = 0x80 LIBUSB_ENDPOINT_OUT = 0x00 class EnumerationType(type(c_uint)): def __new__(metacls, name, bases, dict): if not "_members_" in dict: _members_ = {} for key,value in dict.items(): if not key.startswith("_"): _members_[key] = value dict["_members_"] = _members_ cls = type(c_uint).__new__(metacls, name, bases, dict) for key,value in cls._members_.items(): globals()[key] = value return cls def __contains__(self, value): return value in self._members_.values() def __repr__(self): return "<Enumeration %s>" % self.__name__ class Enumeration(c_uint): __metaclass__ = EnumerationType _members_ = {} def __init__(self, value): for k,v in self._members_.items(): if v == value: self.name = k break else: raise ValueError("No enumeration member with value %r" % value) c_uint.__init__(self, value) @classmethod def from_param(cls, param): if isinstance(param, Enumeration): if param.__class__ != cls: raise ValueError("Cannot mix enumeration members") else: return param else: return cls(param) def __repr__(self): return "<member %s=%d of %r>" % (self.name, self.value, self.__class__) class LIBUSB_TRANSFER_STATUS(Enumeration): _members_ = {'LIBUSB_TRANSFER_COMPLETED':0, 'LIBUSB_TRANSFER_ERROR':1, 'LIBUSB_TRANSFER_TIMED_OUT':2, 'LIBUSB_TRANSFER_CANCELLED':3, 'LIBUSB_TRANSFER_STALL':4, 'LIBUSB_TRANSFER_NO_DEVICE':5, 'LIBUSB_TRANSFER_OVERFLOW':6} class LIBUSB_TRANSFER_FLAGS(Enumeration): _members_ = {'LIBUSB_TRANSFER_SHORT_NOT_OK':1<<0, 'LIBUSB_TRANSFER_FREE_BUFFER':1<<1, 'LIBUSB_TRANSFER_FREE_TRANSFER':1<<2} class LIBUSB_TRANSFER_TYPE(Enumeration): _members_ = {'LIBUSB_TRANSFER_TYPE_CONTROL':0, 'LIBUSB_TRANSFER_TYPE_ISOCHRONOUS':1, 'LIBUSB_TRANSFER_TYPE_BULK':2, 'LIBUSB_TRANSFER_TYPE_INTERRUPT':3} class LIBUSB_CONTEXT(Structure): pass class LIBUSB_DEVICE(Structure): pass class LIBUSB_DEVICE_HANDLE(Structure): pass class LIBUSB_CONTROL_SETUP(Structure): _fields_ = [("bmRequestType", c_int), ("bRequest", c_int), ("wValue", c_int), ("wIndex", c_int), ("wLength", c_int)] class LIBUSB_ISO_PACKET_DESCRIPTOR(Structure): _fields_ = [("length", c_int), ("actual_length", c_int), ("status", LIBUSB_TRANSFER_STATUS)] class LIBUSB_TRANSFER(Structure): pass LIBUSB_TRANSFER_CB_FN = CFUNCTYPE(c_void_p, POINTER(LIBUSB_TRANSFER)) LIBUSB_TRANSFER._fields_ = [("dev_handle", POINTER(LIBUSB_DEVICE_HANDLE)), ("flags", c_ubyte), ("endpoint", c_ubyte), ("type", c_ubyte), ("timeout", c_uint), ("status", LIBUSB_TRANSFER_STATUS), ("length", c_int), ("actual_length", c_int), ("callback", LIBUSB_TRANSFER_CB_FN), ("user_data", c_void_p), ("buffer", POINTER(c_ubyte)), ("num_iso_packets", c_int), ("iso_packet_desc", POINTER(LIBUSB_ISO_PACKET_DESCRIPTOR))] class TIMEVAL(Structure): _fields_ = [('tv_sec', c_long), ('tv_usec', c_long)] lib = cdll.LoadLibrary("libusb-1.0.so") lib.libusb_open_device_with_vid_pid.restype = POINTER(LIBUSB_DEVICE_HANDLE) lib.libusb_alloc_transfer.restype = POINTER(LIBUSB_TRANSFER) def 
libusb_fill_interrupt_transfer(transfer, dev_handle, endpoint, buffer, length, callback, user_data, timeout): transfer[0].dev_handle = dev_handle transfer[0].endpoint = chr(endpoint) transfer[0].type = chr(LIBUSB_TRANSFER_TYPE_INTERRUPT) transfer[0].timeout = timeout transfer[0].buffer = buffer transfer[0].length = length transfer[0].user_data = user_data transfer[0].callback = LIBUSB_TRANSFER_CB_FN(callback) def cb_transfer(transfer): print "Transfer status %d" % transfer.status if __name__ == "__main__": context = POINTER(LIBUSB_CONTEXT)() lib.libusb_init(None) transfer = lib.libusb_alloc_transfer(0) handle = lib.libusb_open_device_with_vid_pid(None, VENDOR_ID, PRODUCT_ID) size = _USBLCD_MAX_DATA_LEN buffer = c_char_p(size) libusb_fill_interrupt_transfer(transfer, handle, LIBUSB_ENDPOINT_IN + 1, buffer, size, cb_transfer, None, 0) r = lib.libusb_submit_transfer(transfer) # This is returning -2, should be => 0. if r < 0: print "libusb_submit_transfer failed", r while r >= 0: print "Poll before" tv = TIMEVAL(1, 0) r = lib.libusb_handle_events_timeout(None, byref(tv)) print "Poll after", r A: Have you checked to make sure the return values of libusb_alloc_transfer and libusb_open_device_with_vid_pid are valid? Have you tried annotating the library functions with the appropriate argtypes? You may run in to trouble with transfer[0].callback = LIBUSB_TRANSFER_CB_FN(callback)—you're not keeping any references to the CFunctionType object returned from LIBUSB_TRANSFER_CB_FN(), and so that object might be getting released and overwritten. The next step, I suppose, would be to install a version of libusb with debugging symbols, boot up GDB, set a breakpoint at libusb_submit_transfer(), make sure the passed-in libusb_transfer is sane, and see what's triggering the error to be returned. A: Running it as root once fixed the busy flag.
Some help understanding async USB operations with libusb-1.0 and ctypes
Alright. I figured it out. transfer.flags needed to be a byte instead of an int. Silly me. Now I'm getting an error code from ioctl, errno 16, which I think means the device is busy. What a workaholic. I've asked on the libusb mailing list. Below is what I have so far. This isn't really that much code. Most of it is ctypes structures for libusb. Scroll down to the bottom to see the actual code where the error occurs. from ctypes import * VENDOR_ID = 0x04d8 PRODUCT_ID = 0xc002 _USBLCD_MAX_DATA_LEN = 24 LIBUSB_ENDPOINT_IN = 0x80 LIBUSB_ENDPOINT_OUT = 0x00 class EnumerationType(type(c_uint)): def __new__(metacls, name, bases, dict): if not "_members_" in dict: _members_ = {} for key,value in dict.items(): if not key.startswith("_"): _members_[key] = value dict["_members_"] = _members_ cls = type(c_uint).__new__(metacls, name, bases, dict) for key,value in cls._members_.items(): globals()[key] = value return cls def __contains__(self, value): return value in self._members_.values() def __repr__(self): return "<Enumeration %s>" % self.__name__ class Enumeration(c_uint): __metaclass__ = EnumerationType _members_ = {} def __init__(self, value): for k,v in self._members_.items(): if v == value: self.name = k break else: raise ValueError("No enumeration member with value %r" % value) c_uint.__init__(self, value) @classmethod def from_param(cls, param): if isinstance(param, Enumeration): if param.__class__ != cls: raise ValueError("Cannot mix enumeration members") else: return param else: return cls(param) def __repr__(self): return "<member %s=%d of %r>" % (self.name, self.value, self.__class__) class LIBUSB_TRANSFER_STATUS(Enumeration): _members_ = {'LIBUSB_TRANSFER_COMPLETED':0, 'LIBUSB_TRANSFER_ERROR':1, 'LIBUSB_TRANSFER_TIMED_OUT':2, 'LIBUSB_TRANSFER_CANCELLED':3, 'LIBUSB_TRANSFER_STALL':4, 'LIBUSB_TRANSFER_NO_DEVICE':5, 'LIBUSB_TRANSFER_OVERFLOW':6} class LIBUSB_TRANSFER_FLAGS(Enumeration): _members_ = {'LIBUSB_TRANSFER_SHORT_NOT_OK':1<<0, 'LIBUSB_TRANSFER_FREE_BUFFER':1<<1, 'LIBUSB_TRANSFER_FREE_TRANSFER':1<<2} class LIBUSB_TRANSFER_TYPE(Enumeration): _members_ = {'LIBUSB_TRANSFER_TYPE_CONTROL':0, 'LIBUSB_TRANSFER_TYPE_ISOCHRONOUS':1, 'LIBUSB_TRANSFER_TYPE_BULK':2, 'LIBUSB_TRANSFER_TYPE_INTERRUPT':3} class LIBUSB_CONTEXT(Structure): pass class LIBUSB_DEVICE(Structure): pass class LIBUSB_DEVICE_HANDLE(Structure): pass class LIBUSB_CONTROL_SETUP(Structure): _fields_ = [("bmRequestType", c_int), ("bRequest", c_int), ("wValue", c_int), ("wIndex", c_int), ("wLength", c_int)] class LIBUSB_ISO_PACKET_DESCRIPTOR(Structure): _fields_ = [("length", c_int), ("actual_length", c_int), ("status", LIBUSB_TRANSFER_STATUS)] class LIBUSB_TRANSFER(Structure): pass LIBUSB_TRANSFER_CB_FN = CFUNCTYPE(c_void_p, POINTER(LIBUSB_TRANSFER)) LIBUSB_TRANSFER._fields_ = [("dev_handle", POINTER(LIBUSB_DEVICE_HANDLE)), ("flags", c_ubyte), ("endpoint", c_ubyte), ("type", c_ubyte), ("timeout", c_uint), ("status", LIBUSB_TRANSFER_STATUS), ("length", c_int), ("actual_length", c_int), ("callback", LIBUSB_TRANSFER_CB_FN), ("user_data", c_void_p), ("buffer", POINTER(c_ubyte)), ("num_iso_packets", c_int), ("iso_packet_desc", POINTER(LIBUSB_ISO_PACKET_DESCRIPTOR))] class TIMEVAL(Structure): _fields_ = [('tv_sec', c_long), ('tv_usec', c_long)] lib = cdll.LoadLibrary("libusb-1.0.so") lib.libusb_open_device_with_vid_pid.restype = POINTER(LIBUSB_DEVICE_HANDLE) lib.libusb_alloc_transfer.restype = POINTER(LIBUSB_TRANSFER) def libusb_fill_interrupt_transfer(transfer, dev_handle, endpoint, buffer, length, callback, user_data, timeout): 
transfer[0].dev_handle = dev_handle transfer[0].endpoint = chr(endpoint) transfer[0].type = chr(LIBUSB_TRANSFER_TYPE_INTERRUPT) transfer[0].timeout = timeout transfer[0].buffer = buffer transfer[0].length = length transfer[0].user_data = user_data transfer[0].callback = LIBUSB_TRANSFER_CB_FN(callback) def cb_transfer(transfer): print "Transfer status %d" % transfer.status if __name__ == "__main__": context = POINTER(LIBUSB_CONTEXT)() lib.libusb_init(None) transfer = lib.libusb_alloc_transfer(0) handle = lib.libusb_open_device_with_vid_pid(None, VENDOR_ID, PRODUCT_ID) size = _USBLCD_MAX_DATA_LEN buffer = c_char_p(size) libusb_fill_interrupt_transfer(transfer, handle, LIBUSB_ENDPOINT_IN + 1, buffer, size, cb_transfer, None, 0) r = lib.libusb_submit_transfer(transfer) # This is returning -2, should be => 0. if r < 0: print "libusb_submit_transfer failed", r while r >= 0: print "Poll before" tv = TIMEVAL(1, 0) r = lib.libusb_handle_events_timeout(None, byref(tv)) print "Poll after", r
[ "\nHave you checked to make sure the return values of libusb_alloc_transfer and libusb_open_device_with_vid_pid are valid?\nHave you tried annotating the library functions with the appropriate argtypes?\nYou may run in to trouble with transfer[0].callback = LIBUSB_TRANSFER_CB_FN(callback)—you're not keeping any references to the CFunctionType object returned from LIBUSB_TRANSFER_CB_FN(), and so that object might be getting released and overwritten.\n\nThe next step, I suppose, would be to install a version of libusb with debugging symbols, boot up GDB, set a breakpoint at libusb_submit_transfer(), make sure the passed-in libusb_transfer is sane, and see what's triggering the error to be returned.\n", "Running it as root once fixed the busy flag.\n" ]
[ 2, 0 ]
[ "where is the initial declaration of transfer? I am not familiar with python, but is this ok to assign values to fields in your struct without defining what data type it should be?\n" ]
[ -1 ]
[ "ctypes", "libusb", "python", "usb" ]
stackoverflow_0001052135_ctypes_libusb_python_usb.txt
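A hedged sketch of the callback pitfall flagged in the first answer above: the CFUNCTYPE wrapper must be kept alive by a Python reference, or it can be garbage-collected while libusb still holds the C function pointer. The _callback_refs list is an illustrative name, not part of the original code; LIBUSB_TRANSFER_CB_FN is the type defined in the question:

_callback_refs = []  # module-level keep-alive store

def fill_interrupt_transfer_keepalive(transfer, callback):
    cb = LIBUSB_TRANSFER_CB_FN(callback)
    _callback_refs.append(cb)   # prevent garbage collection of the wrapper
    transfer[0].callback = cb
    # ... fill dev_handle, endpoint, type, buffer, length as before ...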
Q: python class variable not visible in __init__? This code produces an error message, which I found surprising: class Foo(object): custom = 1 def __init__(self, custom=Foo.custom): self._custom = custom x = Foo() Can anyone provide enlightenment? A: It's Foo that isn't visible, because you're in the middle of building it. But since you're in the same scope as custom, you can just say custom rather than Foo.custom: class Foo(object): custom = 1 def __init__(self, mycustom=custom): self._custom = mycustom But note that changing Foo.custom later on won't affect the value of custom that subsequently-created Foos see: class Foo(object): custom = 1 def __init__(self, mycustom=custom): self._custom = mycustom one = Foo() Foo.custom = 2 two = Foo() print (two._custom) # Prints 1 By using a sentinel default value instead, you can get what you want: class Foo(object): custom = 1 def __init__(self, mycustom=None): if mycustom is None: self._custom = Foo.custom else: self._custom = mycustom one = Foo() Foo.custom = 2 two = Foo() print (two._custom) # Prints 2 A: What we do instead is the following class Foo( object ): custom = 1 def __init__( self, arg=None ) self._custom = self.custom if arg is None else arg This bypasses the confusing issue of whether or not the name Foo has been defined yet. A: The class body is executed before the class its self is defined, so default argument values can't reference the class. Just making custom the default (without class qualification) should work. A: I get the following error: Traceback (most recent call last): Line 1, in <module> class Foo(object): Line 3, in Foo def __init__(self, custom=Foo.custom): NameError: name 'Foo' is not defined This is because the name Foo is in the process of being defined as the __init__ function is defined, and is not fully available at that time. The solution is to avoid using the name Foo in the function definition (I also renamed the custom paramter to acustom to distinguish it from Foo.custom): class Foo(object): custom = 1 def __init__(self, acustom=custom): self._custom = acustom x = Foo() print x._custom
python class variable not visible in __init__?
This code produces an error message, which I found surprising: class Foo(object): custom = 1 def __init__(self, custom=Foo.custom): self._custom = custom x = Foo() Can anyone provide enlightenment?
[ "It's Foo that isn't visible, because you're in the middle of building it. But since you're in the same scope as custom, you can just say custom rather than Foo.custom:\nclass Foo(object):\n custom = 1\n def __init__(self, mycustom=custom):\n self._custom = mycustom\n\nBut note that changing Foo.custom later on won't affect the value of custom that subsequently-created Foos see:\nclass Foo(object):\n custom = 1\n def __init__(self, mycustom=custom):\n self._custom = mycustom\n\none = Foo()\nFoo.custom = 2\ntwo = Foo()\nprint (two._custom) # Prints 1\n\nBy using a sentinel default value instead, you can get what you want:\nclass Foo(object):\n custom = 1\n def __init__(self, mycustom=None):\n if mycustom is None:\n self._custom = Foo.custom\n else:\n self._custom = mycustom\n\none = Foo()\nFoo.custom = 2\ntwo = Foo()\nprint (two._custom) # Prints 2\n\n", "What we do instead is the following\nclass Foo( object ):\n custom = 1\n def __init__( self, arg=None )\n self._custom = self.custom if arg is None else arg\n\nThis bypasses the confusing issue of whether or not the name Foo has been defined yet.\n", "The class body is executed before the class its self is defined, so default argument values can't reference the class. Just making custom the default (without class qualification) should work.\n", "I get the following error:\nTraceback (most recent call last):\n Line 1, in <module>\n class Foo(object):\n Line 3, in Foo\n def __init__(self, custom=Foo.custom):\nNameError: name 'Foo' is not defined\n\nThis is because the name Foo is in the process of being defined as the __init__ function is defined, and is not fully available at that time.\nThe solution is to avoid using the name Foo in the function definition (I also renamed the custom paramter to acustom to distinguish it from Foo.custom):\nclass Foo(object):\n custom = 1\n def __init__(self, acustom=custom):\n self._custom = acustom\nx = Foo()\nprint x._custom\n\n" ]
[ 16, 7, 3, 2 ]
[]
[]
[ "python" ]
stackoverflow_0001071469_python.txt
Q: how to convert a python dict object to a java equivalent object? I need to convert a python code into an equivalent java code. Python makes life very easy for the developers by providing lots of shortcut functionalities. But now I need to migrate the same to Java. I was wondering what will the equivalent of dict objects in java? I have tried using HashMap but life is hell. For starters consider this, # Nodes is a dictionary -> Key : (Name, Strength) for node, (name, strength) in nodes.items(): nodes[node] = (name, new_strength) So how to go about converting this into Java? For starters I used HashMap object so, Map<Integer, List> nodesMap = new HashMap<Integer,List>(); /* For iterating over the map */ Iterator updateNodeStrengthIterator = nodesMap.entrySet().iterator(); while(updateNodeStrengthIterator.hasNext()){ } My problem is in getting the List part which contains Name & Strength & then updating the Strength part. Is there any feasible way to do this? Should I consider some different data structure? Please help. A: It's probably easiest to just create a class for the (Name, Strength) tuple: class NameStrength { public String name; public String strength; } Add getters, setters and a constructor if appropriate. Then you can use the new class in your map: Map<Integer, NameStrength> nodesMap = new HashMap<Integer, NameStrength>(); In Java 5 and up, you can iterate like this: for (NameStrength nameStrength : nodesMap.values()) {} or like this: for (Entry<Integer, NameStrength> entry : nodesMap.entrySet()) {} A: well there's always jython. here's a little bit from this article that offers a good side by side view of python/java The Jython analogues to Java's collection classes are much more tightly integrated into the core language, allowing for more concise descriptions and useful functionality. For example, notice the difference between the Java code: map = new HashMap(); map.put("one",new Integer(1)); map.put("two",new Integer(2)); map.put("three",new Integer(3)); System.out.println(map.get("one")); list = new LinkedList(); list.add(new Integer(1)); list.add(new Integer(2)); list.add(new Integer(3)); and the Jython code: map = {"one":1,"two":2,"three":3} print map ["one"] list = [1, 2, 3] edit: what's wrong with just using put() to replace the values? map.put(key,new_value); here's a small example program: static public void main(String[] args){ HashMap<String,Integer> map = new HashMap<String,Integer>(); //name, age map.put("billy", 21); map.put("bobby", 19); year(map); for(String i: map.keySet()){ System.out.println(i+ " " + map.get(i).toString()); } } // a year has passed static void year(HashMap<String,Integer> m){ for(String k: m.keySet()){ m.put(k, m.get(k)+1); } } A: Java doesn't have the equivalent of a tuple built-in. You would have to create a class that encapsulated the two together to mimic it.
how to convert a python dict object to a java equivalent object?
I need to convert a python code into an equivalent java code. Python makes life very easy for the developers by providing lots of shortcut functionalities. But now I need to migrate the same to Java. I was wondering what will the equivalent of dict objects in java? I have tried using HashMap but life is hell. For starters consider this, # Nodes is a dictionary -> Key : (Name, Strength) for node, (name, strength) in nodes.items(): nodes[node] = (name, new_strength) So how to go about converting this into Java? For starters I used HashMap object so, Map<Integer, List> nodesMap = new HashMap<Integer,List>(); /* For iterating over the map */ Iterator updateNodeStrengthIterator = nodesMap.entrySet().iterator(); while(updateNodeStrengthIterator.hasNext()){ } My problem is in getting the List part which contains Name & Strength & then updating the Strength part. Is there any feasible way to do this? Should I consider some different data structure? Please help.
[ "It's probably easiest to just create a class for the (Name, Strength) tuple:\nclass NameStrength {\n public String name;\n public String strength;\n}\n\nAdd getters, setters and a constructor if appropriate.\nThen you can use the new class in your map:\nMap<Integer, NameStrength> nodesMap = new HashMap<Integer, NameStrength>();\n\nIn Java 5 and up, you can iterate like this:\nfor (NameStrength nameStrength : nodesMap.values()) {}\n\nor like this:\nfor (Entry<Integer, NameStrength> entry : nodesMap.entrySet()) {}\n\n", "well there's always jython.\nhere's a little bit from this article that offers a good side by side view of python/java\n\nThe Jython analogues to Java's\n collection classes are much more\n tightly integrated into the core\n language, allowing for more concise\n descriptions and useful functionality.\n For example, notice the difference\n between the Java code:\nmap = new HashMap();\nmap.put(\"one\",new Integer(1));\nmap.put(\"two\",new Integer(2));\nmap.put(\"three\",new Integer(3));\n\nSystem.out.println(map.get(\"one\"));\n\nlist = new LinkedList();\nlist.add(new Integer(1));\nlist.add(new Integer(2));\nlist.add(new Integer(3));\n\nand the Jython code:\nmap = {\"one\":1,\"two\":2,\"three\":3}\nprint map [\"one\"]\nlist = [1, 2, 3]\n\n\n\nedit: what's wrong with just using put() to replace the values?\nmap.put(key,new_value);\n\nhere's a small example program:\nstatic public void main(String[] args){\n HashMap<String,Integer> map = new HashMap<String,Integer>();\n //name, age\n map.put(\"billy\", 21);\n map.put(\"bobby\", 19);\n year(map);\n for(String i: map.keySet()){\n System.out.println(i+ \" \" + map.get(i).toString());\n }\n}\n// a year has passed\nstatic void year(HashMap<String,Integer> m){\n for(String k: m.keySet()){\n m.put(k, m.get(k)+1);\n }\n}\n\n", "Java doesn't have the equivalent of a tuple built-in. You would have to create a class that encapsulated the two together to mimic it.\n" ]
[ 4, 3, 1 ]
[]
[]
[ "dictionary", "hashmap", "java", "python" ]
stackoverflow_0001071793_dictionary_hashmap_java_python.txt
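A note on the record above: the (Name, Strength) pair that the first answer models as a small NameStrength class is what the Python side would usually express with collections.namedtuple. A minimal sketch, with node IDs and strength values invented purely for illustration, to make the field-for-field correspondence to the suggested Java class visible:

    import collections

    # Mirrors the proposed Java NameStrength class field for field.
    NameStrength = collections.namedtuple("NameStrength", ["name", "strength"])

    nodes = {1: NameStrength("alpha", 10), 2: NameStrength("beta", 7)}

    new_strength = 42
    for node, entry in nodes.items():
        # Tuples are immutable, so replace the whole stored value,
        # the exact analogue of map.put(key, new_value) in the Java answers.
        nodes[node] = NameStrength(entry.name, new_strength)

In both languages the update is the same operation: build a fresh (name, strength) value and store it back under the existing key; only the container API differs.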
Q: Problem using os.system() with sed command I'm writing a small method to replace some text in a file. The only argument I need is the new text, as it is always the same file and text to be replaced. I'm having a problem using the os.system() call, when I try to use the argument of the method. If I use a string like below, everything runs ok: stringId = "GRRRRRRRRR" cmd="sed '1,$s/MANAGER_ID=[0-9]*/MANAGER_ID=" + stringId + "/g' path/file.old > path/file.new" os.system(cmd) Now, if I try to give a string as a parameter like below, the command is not executed. I do a print to see if the command is correct, and it is. I can even execute it with success if I copy / paste to my shell import os def updateExportConfigId(id): stringId = "%s" % id cmd= "sed '1,$s/MANAGER_ID=[0-9]*/MANAGER_ID=" + stringId + "/g' path/file.old > path/file.new" print "command is " + cmd os.system(cmd) Does anyone know what is wrong? Thanks A: Obligatory: don't use os.system - use the subprocess module: import subprocess def updateExportConfigId(m_id, source='path/file.old', destination='path/file.new'): if isinstance(m_id, unicode): m_id = m_id.encode('utf-8') cmd= [ "sed", "1,$s/MANAGER_ID=[0-9]*/MANAGER_ID=%s/g" % m_id, source, ] subprocess.call(cmd, stdout=open(destination, 'w')) with this code you can pass the manager id, it can have spaces, quote chars, etc. The file names can also be passed to the function, and can also contain spaces and some other special chars. That's because your shell is not unnecessarily invoked, so one less process is started on your OS, and you don't have to worry about escaping special shell characters. Another option: Don't launch sed. Use python's re module. import re def updateExportConfigID(m_id, source, destination): if isinstance(m_id, unicode): m_id = m_id.encode('utf-8') for line in source: new_line = re.sub(r'MANAGER_ID=\d*', r'MANAGER_ID=' + re.escape(m_id), line) destination.write(new_line) and call it like this: updateExportConfigID('GRRRR', open('path/file.old'), open('path/file.new', 'w')) No new processes needed. A: To help you debug it, try adding: print repr(cmd) It might be that some special characters slipped into the command that normal print is hiding when you copy and paste it. A: Maybe some indentation problem? The following works correctly: import os def updateExportConfigId(id): stringId = "%s" % id cmd= "sed '1,$s/MANAGER_ID=[0-9]*/MANAGER_ID=" + stringId + "/g' test.dat > test.new" print "command is " + cmd os.system(cmd) updateExportConfigId("adsf") Also do not use reserved words (id) as variables. A: What is wrong is that there is some difference. Yeah, I know that's not helpful, but you need to figure out the difference. Try running this: import os def updateExportConfigId(id): stringId = "%s" % id cmd1 = "sed '1,$s/MANAGER_ID=[0-9]*/MANAGER_ID=" + stringId + "/g' path/file.old > path/file.new" stringId = "GRRRRRRRRR" cmd2 = "sed '1,$s/MANAGER_ID=[0-9]*/MANAGER_ID=" + stringId + "/g' path/file.old > path/file.new" print "cmd1:" , cmd1 print "cmd2:" , cmd2 print cmd1 == cmd2 updateExportConfigId("GRRRRRRRRR") The code should print: sed '1,$s/MANAGER_ID=[0-9]*/MANAGER_ID=GRRRRRRRRR/g' path/file.old > path/file.new sed '1,$s/MANAGER_ID=[0-9]*/MANAGER_ID=GRRRRRRRRR/g' path/file.old > path/file.new True Thereby showing that they are exactly the same. If the last line is "False" then they are not the same, and you should be able to see the difference. A: So from previous answers we now know that id is a Unicode string, which makes cmd1 a Unicode string, which os.system() is converting to a byte string for execution in the default encoding. a) I suggest using subprocess rather than os.system() b) I suggest not using the name of a built-in function as a variable (id). c) I suggest explicitly encoding the string to a byte string before executing: if isinstance(cmd,unicode): cmd = cmd.encode("UTF-8") d) For Lennart Regebro's suggestion add: assert type(cmd1) == type(cmd2) after print cmd1 == cmd2 A: Maybe it helps to use only raw strings. A: Finally, I found a way to run the os.system(cmd)! Simple trick, to "clean" the cmd string: os.system(str(cmd)) Now, I'm able to build the cmd with all arguments I need and at the end I just "clean" it with str() call before running it with os.system() call. Thanks a lot for your answers! swon
Problem using os.system() with sed command
I'm writing a small method to replace some text in a file. The only argument I need is the new text, as it is always the same file and text to be replaced. I'm having a problem using the os.system() call, when I try to use the argument of the method. If I use a string like below, everything runs ok: stringId = "GRRRRRRRRR" cmd="sed '1,$s/MANAGER_ID=[0-9]*/MANAGER_ID=" + stringId + "/g' path/file.old > path/file.new" os.system(cmd) Now, if I try to give a string as a parameter like below, the command is not executed. I do a print to see if the command is correct, and it is. I can even execute it with success if I copy / paste to my shell import os def updateExportConfigId(id): stringId = "%s" % id cmd= "sed '1,$s/MANAGER_ID=[0-9]*/MANAGER_ID=" + stringId + "/g' path/file.old > path/file.new" print "command is " + cmd os.system(cmd) Does anyone know what is wrong? Thanks
[ "Obligatory: don't use os.system - use the subprocess module:\nimport subprocess\n\ndef updateExportConfigId(m_id, source='path/file.old', \n destination='path/file.new'):\n if isinstance(m_id, unicode):\n m_id = m_id.encode('utf-8')\n cmd= [\n \"sed\",\n \",$s/MANAGER_ID=[0-9]*/MANAGER_ID=%s/g\" % m_id, \n source,\n ]\n subprocess.call(cmd, stdout=open(destination, 'w'))\n\nwith this code you can pass the manager id, it can have spaces, quote chars, etc. The file names can also be passed to the function, and can also contain spaces and some other special chars. That's because your shell is not unnecessarly invoked, so one less process is started on your OS, and you don't have to worry on escaping special shell characters.\nAnother option: Don't launch sed. Use python's re module. \nimport re\ndef updateExportConfigID(m_id, source, destination):\n if isinstance(m_id, unicode):\n m_id = m_id.encode('utf-8')\n for line in source:\n new_line = re.sub(r'MANAGER_ID=\\d*', \n r'MANAGER_ID=' + re.escape(m_id), \n line)\n destination.write(new_line)\n\nand call it like this:\nupdateExportConfigID('GRRRR', open('path/file.old'), open('path/file.new', 'w'))\n\nNo new processes needed.\n", "To help you debug it, try adding:\nprint repr(cmd)\n\nIt might be that some special characters slipped into the command that normal print is hiding when you copy and paste it.\n", "Maybe some indentation problem?\nThe following works correctly:\nimport os\n\ndef updateExportConfigId(id):\n stringId = \"%s\" % id\n cmd= \"sed '1,$s/MANAGER_ID=[0-9]*/MANAGER_ID=\" + stringId + \"/g' test.dat > test.new\"\n print \"command is \" + cmd\n os.system(cmd)\n\n\nupdateExportConfigId(\"adsf\")\n\nAlso do not use reserved words (id) as variables.\n", "What is wrong is that there is some difference. Yeah, I know that's not helpful, but you need to figure out the difference.\nTry running this:\nimport os\ndef updateExportConfigId(id):\n stringId = \"%s\" % id\n cmd1 = \"sed '1,$s/MANAGER_ID=[0-9]*/MANAGER_ID=\" + stringId + \"/g' path/file.old > path/file.new\"\n stringId = \"GRRRRRRRRR\"\n cmd2 = \"sed '1,$s/MANAGER_ID=[0-9]*/MANAGER_ID=\" + stringId + \"/g' path/file.old > path/file.new\"\n\n print \"cmd1:\" , cmd1\n print \"cmd2:\" , cmd2\n print cmd1 == cmd2\n\nupdateExportConfigId(\"GRRRRRRRRR\")\n\nThe code should print:\nsed '1,$s/MANAGER_ID=[0-9]*/MANAGER_ID=GRRRRRRRRR/g' path/file.old > path/file.new\nsed '1,$s/MANAGER_ID=[0-9]*/MANAGER_ID=GRRRRRRRRR/g' path/file.old > path/file.new\nTrue\n\nThereby showing that they are exactly the same. 
If the last line is \"False\" then they are not the same, and you should be able to see the difference.\n", "So from previous answers we now know that id is a Unicode string, which makes cmd1 a Unicode string, which os.system() is converting to a byte string for execution in the default encoding.\na) I suggest using subprocess rather than os.system()\nb) I suggest not using the name of a built-in function as a variable (id).\nc) I suggest explicitly encoding the string to a byte string before executing:\nif isinstance(cmd,unicode):\n cmd = cmd.encode(\"UTF-8\")\n\nd) For Lennart Regebro's suggestion add:\nassert type(cmd1) == type(cmd2)\n\nafter\nprint cmd1 == cmd2\n\n", "Maybe it helps to use only raw strings.\n", "Finally, I found a way to run the os.system(cmd)!\nSimple trick, to \"clean\" the cmd string:\nos.system(str(cmd))\n\nNow, I'm able to build the cmd with all arguments I need and at the end I just \"clean\" it with str() call before run it with os.system() call.\nThanks a lot for your answers!\nswon\n" ]
[ 5, 2, 2, 1, 1, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001068812_python.txt
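The accepted workaround above, os.system(str(cmd)), works for exactly the reason the Unicode answer spells out: str() coerces the Unicode command back to a byte string before the shell sees it. A minimal sketch of the diagnosis plus the explicit fix, assuming Python 2 and the question's file paths; manager_id is just a renamed stand-in for the original id argument:

    import subprocess

    def update_export_config_id(manager_id):
        # repr() is the quick diagnostic: u'GRRRRRRRRR' (note the u prefix)
        # versus 'GRRRRRRRRR' tells Unicode and byte strings apart.
        print repr(manager_id)
        if isinstance(manager_id, unicode):
            manager_id = manager_id.encode("utf-8")  # explicit, unlike str()
        script = "1,$s/MANAGER_ID=[0-9]*/MANAGER_ID=%s/g" % manager_id
        out = open("path/file.new", "w")
        try:
            # The list form never invokes the shell, so no quoting worries.
            subprocess.call(["sed", script, "path/file.old"], stdout=out)
        finally:
            out.close()

The str() trick only holds while the id is pure ASCII; the moment a non-ASCII character appears, str() raises UnicodeEncodeError, which is why the explicit encode is the safer habit.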
Q: Is this a correct way to return JSON in Python/GAE for parsing in JavaScript? I am making an API for some AJAX related things in my web app on GAE in Python. After setting the content-type to 'application/json' and accessing my url directly- http://mysite.com/api?method=theMethod&param=firstParam -I am being prompted with a 'save file' dialog box instead of seeing the JSON object displayed. The file contains the JSON object when opened in notepad. Is this a correct way to return JSON from Python using GAE to be parsed in JavaScript? from django.utils import simplejson self.response.headers['Content-Type'] = 'application/json' jsonData = {"foo" : "bar"} self.response.out.write(simplejson.dumps(jsonData)) I have noticed that when using another API from somewhere else such as Flickr, my browser displays the JSON object rather than asking for me to save the file. This behavior is what encouraged me to investigate my implementation. My only thought is that this is related to a JSONP implementation. Judging from rfc4627, I should be using 'application/json'. A: This is the right way; the mime type for json is application/json, not text/json, and NEVER text/html. https://www.rfc-editor.org/rfc/rfc4627 starts with "The application/json Media Type for JavaScript Object Notation (JSON)" read this for more details/options A: I think the Flickr API returns the json as type 'text/plain' which then will be displayed as text. You might try 'text/json' as a halfway point. Being easily viewed might outweigh being correct in your case. Also consider that should any client require the content type to be 'application/json' and refuse to work with 'text/plain' that client should specifically request the type it wants without '*/*'. This then could be a case you look for when preparing the content type of your response, and you could document your service accordingly. See Request: http://www.flickr.com/services/rest/?method=flickr.test.echo&format=json&api_key=cecc9218c59188ebc6150eff9cd908dc Request Headers Accept:application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 Referer:http://www.flickr.com/services/api/response.json.html User-Agent:Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_7; en-us) AppleWebKit/530.18 (KHTML, like Gecko) Version/4.0.1 Safari/530.18 Response Headers Connection:close Content-Encoding:gzip Content-Length:134 Content-Type:text/plain; charset=utf-8 Date:Thu, 02 Jul 2009 03:19:34 GMT P3p:policyref="http://p3p.yahoo.com/w3c/p3p.xml", CP="CAO DSP COR CUR ADM DEV TAI PSA PSD IVAi IVDi CONi TELo OTPi OUR DELi SAMi OTRi UNRi PUBi IND PHY ONL UNI PUR FIN COM NAV INT DEM CNT STA POL HEA PRE GOV" Vary:Accept-Encoding Content jsonFlickrApi({"method":{"_content":"flickr.test.echo"}, "format":{"_content":"json"}, "api_key":{"_content":"cecc9218c59188ebc6150eff9cd908dc"}, "stat":"ok"})
Is this a correct way to return JSON in Python/GAE for parsing in JavaScript?
I am making an API for some AJAX related things in my web app on GAE in Python. After setting the content-type to 'application/json' and accessing my url directly- http://mysite.com/api?method=theMethod&param=firstParam -I am being prompted with a 'save file' dialog box instead of seeing the JSON object displayed. The file contains the JSON object when opened in notepad. Is this a correct way to return JSON from Python using GAE to be parsed in JavaScript? from django.utils import simplejson self.response.headers['Content-Type'] = 'application/json' jsonData = {"foo" : "bar"} self.response.out.write(simplejson.dumps(jsonData)) I have noticed that when using another API from somewhere else such as Flickr, my browser displays the JSON object rather than asking for me to save the file. This behavior is what encouraged me to investigate my implementation. My only thought is that this is related to a JSONP implementation. Judging from rfc4627, I should be using 'application/json'.
[ "This is the right way, mime type for json is application/json not text/json and NEVER text/html.\nhttps://www.rfc-editor.org/rfc/rfc4627 starts with \"The application/json Media Type for JavaScript Object Notation (JSON)\"\nread this for more details/options\n", "I think the Flickr API returns the json as type 'text/plain' which then will be displayed as text. You might try 'text/json' as a halfway point. Being easily viewed might outweigh being correct in your case.\nAlso consider that should any client require the content type to be 'application/json' and refuse to work with 'text/plain' that client should specifically request the type it wants without '/'. This then could be a case you look for when preparing the content type of your response, and you could document your service accordingly.\nSee Request:\nhttp://www.flickr.com/services/rest/?method=flickr.test.echo&format=json&api_key=cecc9218c59188ebc6150eff9cd908dc\n\nRequest Headers\nAccept:application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5\nReferer:http://www.flickr.com/services/api/response.json.html\nUser-Agent:Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_7; en-us) AppleWebKit/530.18 (KHTML, like Gecko) Version/4.0.1 Safari/530.18\n\nResponse Headers\nConnection:close\nContent-Encoding:gzip\nContent-Length:134\nContent-Type:text/plain; charset=utf-8\nDate:Thu, 02 Jul 2009 03:19:34 GMT\nP3p:policyref=\"http://p3p.yahoo.com/w3c/p3p.xml\", CP=\"CAO DSP COR CUR ADM DEV TAI PSA PSD IVAi IVDi CONi TELo OTPi OUR DELi SAMi OTRi UNRi PUBi IND PHY ONL UNI PUR FIN COM NAV INT DEM CNT STA POL HEA PRE GOV\"\nVary:Accept-Encoding\n\nContent\njsonFlickrApi({\"method\":{\"_content\":\"flickr.test.echo\"}, \"format\":{\"_content\":\"json\"}, \"api_key\":{\"_content\":\"cecc9218c59188ebc6150eff9cd908dc\"}, \"stat\":\"ok\"})\n\n" ]
[ 4, 1 ]
[]
[]
[ "ajax", "api", "google_app_engine", "json", "python" ]
stackoverflow_0001072281_ajax_api_google_app_engine_json_python.txt
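A sketch that pulls the two answers above together for the GAE case: serve application/json by default, but let the caller flip to text/plain for in-browser inspection, which is effectively what Flickr's text/plain response gives you for free. This is illustrative only; the pretty query parameter is invented, and webapp here is the classic google.appengine.ext.webapp framework that the question's snippet implies:

    from django.utils import simplejson
    from google.appengine.ext import webapp

    class ApiHandler(webapp.RequestHandler):
        def get(self):
            json_data = {"foo": "bar"}
            if self.request.get("pretty"):
                # text/plain renders inline in the browser instead of
                # triggering the 'save file' dialog the question describes.
                self.response.headers["Content-Type"] = "text/plain"
            else:
                self.response.headers["Content-Type"] = "application/json"
            self.response.out.write(simplejson.dumps(json_data))

Either way the body is identical; only the declared media type changes, and that is what decides whether the browser displays the payload or downloads it.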