Dataset fields: content (string, 85 to 101k chars), title (string, 0 to 150 chars), question (string, 15 to 48k chars), answers (list), answers_scores (list), non_answers (list), non_answers_scores (list), tags (list), name (string, 35 to 137 chars).
Q: Django: Easily add extra manager for child class, still use default manager from AbstractBase This question is about the last example on Custom managers and model inheritance. I want to be able to do something similar to the following: class ExtraManagerModel(models.Model): # OtherManager class supplied by argument shall be set as manager here class Meta: abstract = True class ChildC(AbstractBase, ExtraManagerModel(OtherManager)): # That doesn't work, something like that ... # Default manager is CustomManager, but OtherManager is # also available via the "extra_manager" attribute. The whole purpose of this is that I don't want to write an ExtraManagerModel class for every overwritten manager in order to keep the default manager of the parent class (AbstractBase). Any ideas how this can be achieved? A: I am absolutely not sure that I understand your question. Your code snippet seems to be contradicting the comment underneath it. Your code snippet looks like you want to be able to have different ExtraManagerModel classes. If that is the case, you can use an abstract class that is implemented by those ExtraManagerModels, and you inherit the abstract parent of those classes for the ChildC class. Hope this helps.
Django: Easily add extra manager for child class, still use default manager from AbstractBase
This question is about the last example on Custom managers and model inheritance. I want to be able to do something similar to the following: class ExtraManagerModel(models.Model): # OtherManager class supplied by argument shall be set as manager here class Meta: abstract = True class ChildC(AbstractBase, ExtraManagerModel(OtherManager)): # That doesn't work, something like that ... # Default manager is CustomManager, but OtherManager is # also available via the "extra_manager" attribute. The whole purpose of this is that I don't want to write an ExtraManagerModel class for every overwritten manager in order to keep the default manager of the parent class (AbstractBase). Any ideas how this can be achieved?
[ "I am absolutely not sure that I understand your question. Your code snippet seems to be contradicting the comment underneath it.\nYour code snippet looks like you want to be able to have different ExtraManagerModel classes. If that is the case, you can use an abstract class that is implemented by those ExtraManagerModels, and you inherit the abstract parent of those classes for the childC class.\nHope this helps.\n" ]
[ 0 ]
[]
[]
[ "django", "django_models", "python" ]
stackoverflow_0001188899_django_django_models_python.txt
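A sketch of the class-factory approach the question is reaching for: a function that builds a throwaway abstract model exposing a given manager class, so no per-manager ExtraManagerModel has to be written by hand. The names extra_manager_model and extra_manager are hypothetical, not Django API; add_to_class is an internal Django hook, and whether the inherited default manager survives depends on your Django version's manager-inheritance rules.

    from django.db import models

    def extra_manager_model(manager_class, attr_name='extra_manager'):
        # Build an abstract model whose only job is to contribute the manager.
        class ExtraManagerModel(models.Model):
            class Meta:
                abstract = True
        # add_to_class runs the manager's contribute_to_class machinery.
        ExtraManagerModel.add_to_class(attr_name, manager_class())
        return ExtraManagerModel

    class ChildC(AbstractBase, extra_manager_model(OtherManager)):
        pass  # default manager still comes from AbstractBase;
              # OtherManager is reachable as ChildC.extra_manager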
Q: What are the Python thread + Unix signals semantics? What are the rules governing how Python threads and Unix signals interact? Is KeyboardInterrupt, which is triggered by SIGINT but handled internally by the Python runtime, handled differently? A: First, when setting up signal handlers using the signal module, you must create them in the main thread. You will receive an exception if you try to create them in a separate thread. Signal handlers registered via the signal.signal() function will always be called in the main thread. On architectures which support sending signals to threads, at the C level I believe the Python runtime ignores all signals on threads and has a signal handler on the main thread, which it uses to dispatch to your Python-code signal handler. The documentation for the thread module states that the KeyboardInterrupt exception (which is ordinarily triggered by SIGINT) can be delivered to an arbitrary thread unless you have the signal module available to you, which all Unix systems should have. In that case, it's delivered to the main thread. If you're on a system without signal, you'll have to catch KeyboardInterrupt in your thread and call thread.interrupt_main() to re-raise it in the main thread. More information can be found in the Python docs for the thread and signal modules. A: From the signal documentation: Some care must be taken if both signals and threads are used in the same program. The fundamental thing to remember in using signals and threads simultaneously is: always perform signal() operations in the main thread of execution. Any thread can perform an alarm(), getsignal(), pause(), setitimer() or getitimer(); only the main thread can set a new signal handler, and the main thread will be the only one to receive signals (this is enforced by the Python signal module, even if the underlying thread implementation supports sending signals to individual threads). This means that signals can’t be used as a means of inter-thread communication. Use locks instead.
What are the Python thread + Unix signals semantics?
What are the rules governing how Python threads and Unix signals interact? Is KeyboardInterrupt, which is triggered by SIGINT but handled internally by the Python runtime, handled differently?
[ "First, when setting up signal handlers using the signal module, you must create them in the main thread. You will receive an exception if you try to create them in a separate thread.\nSignal handlers registered via the signal.signal() function will always be called in the main thread. On architectures which support sending signals to threads, at the C level I believe the Python runtime ignores all signals on threads and has a signal handler on the main thread, which it uses to dispatch to your Python-code signal handler.\nThe documentation for the thread module states that the KeyboardInterrupt exception (which is ordinarily triggered by SIGINT) can be delivered to an arbitrary thread unless you have the signal module available to you, which all Unix systems should have. In that case, it's delivered to the main thread. If you're on a system without signal, you'll have to catch KeyboardInterrupt in your thread and call thread.interrupt_main() to re-raise it in the main thread.\nMore information can be found in the Python docs for the thread and signal modules.\n", "From the signal documentation:\n\nSome care must be taken if both signals and threads are used in the same program. The fundamental thing to remember in using signals and threads simultaneously is: always perform signal() operations in the main thread of execution. Any thread can perform an alarm(), getsignal(), pause(), setitimer() or getitimer(); only the main thread can set a new signal handler, and the main thread will be the only one to receive signals (this is enforced by the Python signal module, even if the underlying thread implementation supports sending signals to individual threads). This means that signals can’t be used as a means of inter-thread communication. Use locks instead.\n\n" ]
[ 9, 4 ]
[]
[]
[ "multithreading", "posix", "python", "signals", "unix" ]
stackoverflow_0001189072_multithreading_posix_python_signals_unix.txt
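A minimal sketch of the pattern both answers describe, using the Python 2 module names from the thread: register the handler in the main thread, and have worker threads forward KeyboardInterrupt with thread.interrupt_main(). The do_work function is a hypothetical stand-in for the thread's real job.

    import signal, thread, threading

    def on_sigint(signum, frame):
        # Runs in the main thread only; Python dispatches handlers there.
        print 'SIGINT caught in main thread'

    signal.signal(signal.SIGINT, on_sigint)  # must be called from the main thread

    def worker():
        try:
            do_work()  # hypothetical long-running job
        except KeyboardInterrupt:
            thread.interrupt_main()  # re-raise in the main thread

    threading.Thread(target=worker).start()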
Q: Reason for unintuitive UnboundLocalError behaviour Note: There is a very similar question here. Bear with me, however; my question is not "Why does the error happen," but "Why was Python implemented so as to throw an error in this case." I just stumbled over this: a = 5 def x(): print a a = 6 x() throws an UnboundLocalError. Now, I do know why that happens (later in this scope, a is bound, so a is considered local throughout the scope). In this case: a = 5 def x(): print b b = 6 x() this makes very much sense. But the first case has an intuitive logic to it, which is to mean this: a = 5 def x(): print globals()["a"] a = 6 # local assignment x() I guess there's a reason why the "intuitive" version is not allowed, but what is it? Although this might be a case of "Explicit is better than implicit," fiddling around with the globals() always feels a bit unclean to me. To put this into perspective, the actual case where this happened to me was someone else's script I had to change for one moment. In my (short-lived) change, I did some file renaming while the script was running, so I inserted import os os.rename("foo", "bar") into the script. This insertion happened inside a function. The module already imported os at the top level (I didn't check that), and some os.somefunction calls were made inside the function, but before my insert. These calls obviously triggered an UnboundLocalError. So, can someone explain the reasoning behind this implementation to me? Is it to prevent the user from making mistakes? Would the "intuitive" way just make things more complicated for the bytecode compiler? Or is there a possible ambiguity that I didn't think of? A: Having the same, identical name refer to completely different variables within the same flow of linear code is such a mind-boggling complexity that it staggers the mind. Consider: def aaaargh(alist): for x in alist: print a a = 23 what is THIS code supposed to do in your desired variant on Python? Have the a in the very same print statement refer to completely different and unrelated variables on the first leg of the loop vs the second one (assuming there IS a second one)? Have it work differently even for a one-item alist than the non-looping code would? Seriously, this way madness lies -- not even thinking of the scary implementation issues, just trying to document and teach this is something that would probably make me switch languages. And what would be the underpinning motivation for the language, its implementers, its teachers, its learners, its practitioners, to shoulder all of this conceptual burden -- to support and encourage the semi-hidden, non-explicit use of GLOBAL VARIABLES?! That hardly seems a worthwhile goal, does it now?! A: I believe there is a possible ambiguity. a = 5 def x(): print a a = 6 # could be local or trying to update the global variable x() It could be as you assumed: a = 5 def x(): print globals()["a"] a = 6 # local assignment x() Or it could be they want to update the global variable to 6: a = 5 def x(): global a print a a = 6 x() A: This is a basic side effect of scoping. The python developers decided that global variables shouldn't be available in the scope you are trying to use it in. Take this for example: a = 5 def x(): a = 6 print a x() print a This outputs 6 5. It is generally considered bad practice to have global variables anyway, so the python developers restricted this. You have to explicitly make the global variable accessible in order to access it. This is actually to prevent ambiguity. 
Consider this: a = 5 def x(): a = 6 print a y() def y(): global a a = a + 1 print a x() print a If x() considered a to be local, and did the assignment, this would output 6 6 7. Whoever wrote x() may not have considered that y() would use a global variable named a, thus causing y() to act abnormally. Luckily the Python scoping makes it so that the developer of x() doesn't have to worry about how the developer of y() implemented y(), only that it does what it is supposed to. As a result, this outputs 6 6 6 (figures), like it is supposed to. As a result, the UnboundLocalError is perfectly intuitive.
Reason for unintuitive UnboundLocalError behaviour
Note: There is a very similar question here. Bear with me, however; my question is not "Why does the error happen," but "Why was Python implemented so as to throw an error in this case." I just stumbled over this: a = 5 def x(): print a a = 6 x() throws an UnboundLocalError. Now, I do know why that happens (later in this scope, a is bound, so a is considered local throughout the scope). In this case: a = 5 def x(): print b b = 6 x() this makes very much sense. But the first case has an intuitive logic to it, which is to mean this: a = 5 def x(): print globals()["a"] a = 6 # local assignment x() I guess there's a reason why the "intuitive" version is not allowed, but what is it? Although this might be a case of "Explicit is better than implicit," fiddling around with the globals() always feels a bit unclean to me. To put this into perspective, the actual case where this happened to me was someone else's script I had to change for one moment. In my (short-lived) change, I did some file renaming while the script was running, so I inserted import os os.rename("foo", "bar") into the script. This insertion happened inside a function. The module already imported os at the top level (I didn't check that), and some os.somefunction calls were made inside the function, but before my insert. These calls obviously triggered an UnboundLocalError. So, can someone explain the reasoning behind this implementation to me? Is it to prevent the user from making mistakes? Would the "intuitive" way just make things more complicated for the bytecode compiler? Or is there a possible ambiguity that I didn't think of?
[ "Having the same, identical name refer to completely different variables within the same flow of linear code is such a mind-boggling complexity that it staggers the mind. Consider:\ndef aaaargh(alist):\n for x in alist:\n print a\n a = 23\n\nwhat is THIS code supposed to do in your desired variant on Python? Have the a in the very same print statement refer to completely different and unrelated variables on the first leg of the loop vs the second one (assuming there IS a second one)? Have it work differently even for a one-item alist than the non-looping code would? Seriously, this way madness lies -- not even thinking of the scary implementation issues, just trying to document and teach this is something that would probably make me switch languages.\nAnd what would be the underpinning motivation for the language, its implementers, its teachers, its learners, its practitioners, to shoulder all of this conceptual burden -- to support and encourage the semi-hidden, non-explicit use of GLOBAL VARIABLES?! That hardly seems a worthwhile goal, does it now?!\n", "I believe there is a possible ambiguity.\na = 5\ndef x():\n print a\n a = 6 # could be local or trying to update the global variable\nx()\n\nIt could be as you assumed:\na = 5\ndef x():\n print globals()[\"a\"]\n a = 6 # local assignment\nx()\n\nOr it could be they want to update the global variable to 6:\na = 5\ndef x():\n global a\n print a\n a = 6\nx()\n\n", "This is a basic side effect of scoping. The python developers decided that global variables shouldn't be available in the scope you are trying to use it in. Take this for example:\na = 5\ndef x():\n a = 6\n print a\nx()\nprint a\n\nThis outputs 6 5.\nIt is generally considered bad practice to have global variables anyway, so the python developers restricted this. You have to explicitly make the global variable accessible in order to access it. This is actually to prevent ambiguity. Consider this:\na = 5\ndef x():\n a = 6\n print a\n y()\ndef y():\n global a\n a = a + 1\n print a\nx()\nprint a\n\nIf x() considered a to be local, and did the assignment, this would output 6 6 7. Whoever wrote x() may not have considered that y() would use a global variable named a. Thus causing y() to act abnormally. Luckily the python scopping makes it so that the developer of x() doesn't have to worry about how the developer of y() implemented y(), only that it does what it is supposed to. As a result, this outputs 6 6 6 (figures), like it is supposed to.\nAs a result, the UnboundLocalException is perfectly intuitive.\n" ]
[ 6, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001188944_python.txt
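For completeness, the explicit fix that the second answer's third snippet shows: declaring the name global makes both the read and the assignment target the module-level variable, so no UnboundLocalError is raised.

    a = 5
    def x():
        global a  # both the read and the write now refer to module-level a
        print a   # prints 5
        a = 6     # rebinds the global
    x()
    print a       # prints 6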
Q: Good or bad practice in Python: import in the middle of a file Suppose I have a relatively long module, but need an external module or method only once. Is it considered OK to import that method or module in the middle of the module? Or should imports only be in the first part of the module? Example: import string, pythis, pythat ... ... ... ... def func(): blah blah blah from pysomething import foo foo() etc etc etc ... ... ... Please justify your answer and add links to PEPs or relevant sources A: PEP 8 authoritatively states: Imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants. PEP 8 should be the basis of any "in-house" style guide, since it summarizes what the core Python team has found to be the most effective style, overall (and with individual dissent of course, as on any other language, but consensus and the BDFL agree on PEP 8). A: There was a detailed discussion of this topic on the Python mailing list in 2001: https://mail.python.org/pipermail/python-list/2001-July/071567.html Here are some of the reasons discussed in that thread. From Peter Hansen, here are three reasons not to have imports all at the top of the file: Possible reasons to import in a function: Readability: if the import is needed in only one function and that's very unlikely ever to change, it might be clearer and cleaner to put it there only. Startup time: if you don't have the import outside of the function definitions, it will not execute when your module is first imported by another, but only when one of the functions is called. This delays the overhead of the import (or avoids it if the functions might never be called). There is always one more reason than the ones we've thought of until now. Just van Rossum chimed in with a fourth: Overhead: if the module imports a lot of modules, and there's a good chance only a few will actually be used. This is similar to the "Startup time" reason, but goes a little further. If a script using your module only uses a small subset of the functionality it can save quite some time, especially if the imports that can be avoided also import a lot of modules. A fifth was offered: local imports are a way to avoid the problem of circular imports. Feel free to read through that thread for the full discussion. A: Everyone else has already mentioned the PEPs, but also take care to not have import statements in the middle of critical code. At least under Python 2.6, there are several more bytecode instructions required when a function has an import statement. >>> def f(): from time import time print time() >>> dis.dis(f) 2 0 LOAD_CONST 1 (-1) 3 LOAD_CONST 2 (('time',)) 6 IMPORT_NAME 0 (time) 9 IMPORT_FROM 0 (time) 12 STORE_FAST 0 (time) 15 POP_TOP 3 16 LOAD_FAST 0 (time) 19 CALL_FUNCTION 0 22 PRINT_ITEM 23 PRINT_NEWLINE 24 LOAD_CONST 0 (None) 27 RETURN_VALUE >>> def g(): print time() >>> dis.dis(g) 2 0 LOAD_GLOBAL 0 (time) 3 CALL_FUNCTION 0 6 PRINT_ITEM 7 PRINT_NEWLINE 8 LOAD_CONST 0 (None) 11 RETURN_VALUE A: If the imported module is infrequently used and the import is expensive, the in-the-middle-import is OK. Otherwise, it is wise to follow Alex Martelli's suggestion. A: It's generally considered bad practice, but sometimes it's unavoidable (say when you have to avoid a circular import). An example of a time when it is necessary: I use Waf to build all our code. The system is split into tools, and each tool is implemented in its own module. 
Each tool module can implement a detect() method to detect if the pre-requisites are present. An example of one of these may do the following: def detect(self): import foobar If this works correctly, the tool is usable. Then later in the same module the foobar module may be needed, so you would have to import it again, at function level scope. Clearly if it was imported at module level things would blow up completely. A: It is considered "Good Form" to group all imports together at the start of the file. Modules can import other modules. It is customary but not required to place all import statements at the beginning of a module (or script, for that matter). The imported module names are placed in the importing module’s global symbol table. From here: http://docs.python.org/tutorial/modules.html A: 95% of the time, you should put all your imports at the top of the file. One case where you might want to do a function-local import is if you have to do it in order to avoid circular imports. Say foo.py imports bar.py, and a function in bar.py needs to import something from foo.py. If you put all your imports at the top, you could have unexpected problems importing files that rely on information that hasn't been compiled yet. In this case, having a function local import can allow your code to hold off on importing the other module until its code has been fully compiled, and you call the relevant function. However, it looks like your use-case is more about making it clear where foo() is coming from. In this case, I would far prefer one of two things: First, rather than from prerequisite import foo import prerequisite directly, and later on refer to it as prerequisite.foo. The added verbosity pays itself back in spades through increased code transparency. Alternatively, (or in conjunction with the above) if it's really such a long distance between your import and the place it's being used, it may be that your module is too big. The need for an import that nothing else uses might be an indication of a place where your code could stand to be refactored into a more manageably-sized chunk. A: PEP8: Imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants. It is not bad practice to have scoped imports, so that the import applies only to the function you used it in. I think the code would be more readable though if the imports were grouped together at the top of the block or, if you want it globally, at the top of the file. A: Well, I think it is a good practice to group all imports together at the start of the file since everyone knows where to look if they want to know which libs are loaded
Good or bad practice in Python: import in the middle of a file
Suppose I have a relatively long module, but need an external module or method only once. Is it considered OK to import that method or module in the middle of the module? Or should imports only be in the first part of the module? Example: import string, pythis, pythat ... ... ... ... def func(): blah blah blah from pysomething import foo foo() etc etc etc ... ... ... Please justify your answer and add links to PEPs or relevant sources
[ "PEP 8 authoritatively states:\n\nImports are always put at the top of\n the file, just after any module\n comments and docstrings, and before module globals and constants.\n\nPEP 8 should be the basis of any \"in-house\" style guide, since it summarizes what the core Python team has found to be the most effective style, overall (and with individual dissent of course, as on any other language, but consensus and the BDFL agree on PEP 8).\n", "There was a detailed discussion of this topic on the Python mailing list in 2001:\nhttps://mail.python.org/pipermail/python-list/2001-July/071567.html\nHere are some of the reasons discussed in that thread. From Peter Hansen, here are three reasons not to have imports all at the top of the file:\n\nPossible reasons to import in a function:\n\nReadability: if the import is needed in only one\n function and that's very unlikely ever to change,\n it might be clearer and cleaner to put it there only.\nStartup time: if you don't have the import outside\n of the function definitions, it will not execute\n when your module is first imported by another, but\n only when one of the functions is called. This\n delays the overhead of the import (or avoids it\n if the functions might never be called).\nThere is always one more reason than the ones\n we've thought of until now.\n\n\nJust van Rossum chimed in with a fourth:\n\n\nOverhead: if the module imports a lot of modules,\n and there's a good chance only a few will actually\n be used. This is similar to the \"Startup time\"\n reason, but goes a little further. If a script\n using your module only uses a small subset of the\n functionality it can save quite some time, especially\n if the imports that can be avoided also import a lot\n of modules.\n\n\nA fifth was offered as local imports are a way to avoid the problem of circular imports.\nFeel free to read through that thread for the full discussion.\n", "Everyone else has already mentioned the PEPs, but also take care to not have import statements in the middle of critical code. At least under Python 2.6, there are several more bytecode instructions required when a function has an import statement.\n>>> def f():\n from time import time\n print time()\n\n>>> dis.dis(f)\n 2 0 LOAD_CONST 1 (-1)\n 3 LOAD_CONST 2 (('time',))\n 6 IMPORT_NAME 0 (time)\n 9 IMPORT_FROM 0 (time)\n 12 STORE_FAST 0 (time)\n 15 POP_TOP \n\n 3 16 LOAD_FAST 0 (time)\n 19 CALL_FUNCTION 0\n 22 PRINT_ITEM \n 23 PRINT_NEWLINE \n 24 LOAD_CONST 0 (None)\n 27 RETURN_VALUE\n\n>>> def g():\n print time()\n\n>>> dis.dis(g)\n 2 0 LOAD_GLOBAL 0 (time)\n 3 CALL_FUNCTION 0\n 6 PRINT_ITEM \n 7 PRINT_NEWLINE \n 8 LOAD_CONST 0 (None)\n 11 RETURN_VALUE \n\n", "If the imported module is infrequently used and the import is expensive, the in-the-middle-import is OK.\nOtherwise, is it wise to follow Alex Martelli's suggestion.\n", "It's generally considered bad practice, but sometimes it's unavoidable (say when you have to avoid a circular import).\nAn example of a time when it is necessary: I use Waf to build all our code. The system is split into tools, and each tool is implemented in it's own module. Each tool module can implent a detect() method to detect if the pre-requisites are present. An example of one of these may do the following:\ndef detect(self):\n import foobar\n\nIf this works correctly, the tool is usable. Then later in the same module the foobar module may be needed, so you would have to import it again, at function level scope. 
Clearly if it was imported at module level things would blow up completely.\n", "It is considered \"Good Form\" to group all imports together at the start of the file.\n\nModules can import other modules. It is customary but not required to place all import statements at the beginning of a module (or script, for that matter). The imported module names are placed in the importing module’s global symbol table.\n\nFrom here: http://docs.python.org/tutorial/modules.html\n", "95% of the time, you should put all your imports at the top of the file. One case where you might want to do a function-local import is if you have to do it in order to avoid circular imports. Say foo.py imports bar.py, and a function in bar.py needs to import something from foo.py. If you put all your imports at the top, you could have unexpected problems importing files that rely on information that hasn't been compiled yet. In this case, having a function local import can allow your code to hold off on importing the other module until its code has been fully compiled, and you call the relevant function.\nHowever, it looks like your use-case is more about making it clear where foo() is coming from. In this case, I would far prefer one of two things: \nFirst, rather than\nfrom prerequisite import foo\n\nimport prerequisite directly, and later on refer to it as prerequisite.foo. The added verbosity pays itself back in spades through increased code transparency.\nAlternatively, (or in conjunction with the above) if it's really such a long distance between your import and the place it's being used, it may be that your module is too big. The need for an import that nothing else uses might be an indication of a place where your code could stand to be refactored into a more manageably-sized chunk.\n", "PEP8:\n\nImports are always put at the top of\n the file, just after any module\n comments and docstrings, and before module globals and constants.\n\nIt is not bad practice to have scoped imports, so that the import applies only to the function you used it in.\nI think the code would be more readable though if the imports were grouped together at the top of the block or, if you want it globally, at the top of the file. \n", "Well, I think it is a good practice to group all imports together at the start of the file since everyone knows where to look if they want to know which libs are loaded\n" ]
[ 65, 31, 20, 12, 8, 7, 6, 3, 2 ]
[]
[]
[ "python", "python_import" ]
stackoverflow_0001188640_python_python_import.txt
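A sketch of the circular-import escape hatch that several answers mention, using the hypothetical module names foo and bar from the seventh answer: the function-local import runs only when the function is called, after both modules have finished loading.

    # foo.py
    import bar  # safe: bar does not import foo at module level

    def helper():
        return 42

    # bar.py
    def needs_foo():
        import foo  # local import breaks the import cycle
        return foo.helper()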
Q: wxPython RichTextCtrl much slower than tkInter Text? I've made a small tool that parses a chunk of text, does some simple processing (retrieves values from a dictionary, a few regex, etc.) and then spits out the results. In order to make the results easier to read, I made two graphic ports, one with tkInter and the other with wxPython, so the output is nicely displayed in a Text Area with some words having different colours. The tkInter implementation uses the Tkinter.Text object and to apply the colours to the words uses tags (configured with the method Tkinter.Text.tag_config and passing them to Tkinter.Text.insert), and the measured time while outputting about 400 different coloured words is < 0.02s. The wxPython implementation uses wx.richtext.RichTextCtrl and to apply the colours to the words uses wx.richtext.RichTextCtrl.BeginTextColour and then wx.richtext.RichTextCtrl.AppendText; the performance is ridiculous, it takes about 1.4s to do the same job that only took 0.02s in the tkInter port. Is this an intrinsic problem of the RichTextCtrl widget, the wxPython bindings, or is there some way to speed it up? Thanks! A: I'm copying here the comment that solved the problem: Have you tried using Freeze() and Thaw() to only update the display after you are done appending the coloured text? – mghie Jun 30 at 7:20 A: It kind of avoids the question slightly, but could you use wxStyledTextCtrl instead?
wxPython RichTextCtrl much slower than tkInter Text?
I've made a small tool that parses a chunk of text, does some simple processing (retrieves values from a dictionary, a few regex, etc.) and then spits out the results. In order to make the results easier to read, I made two graphic ports, one with tkInter and the other with wxPython, so the output is nicely displayed in a Text Area with some words having different colours. The tkInter implementation uses the Tkinter.Text object and to apply the colours to the words uses tags (configured with the method Tkinter.Text.tag_config and passing them to Tkinter.Text.insert), and the measured time while outputting about 400 different coloured words is < 0.02s. The wxPython implementation uses wx.richtext.RichTextCtrl and to apply the colours to the words uses wx.richtext.RichTextCtrl.BeginTextColour and then wx.richtext.RichTextCtrl.AppendText; the performance is ridiculous, it takes about 1.4s to do the same job that only took 0.02s in the tkInter port. Is this an intrinsic problem of the RichTextCtrl widget, the wxPython bindings, or is there some way to speed it up? Thanks!
[ "I'm copying here the comment that solved the problem:\n\nHave you tried using Freeze() and\n Thaw() to only update the display\n after you are done appending the\n coloured text? – mghie Jun 30 at 7:20\n\n", "It kind of avoids the question slightly, but could you use wxStyledTextCtrl instead?\n" ]
[ 1, 0 ]
[]
[]
[ "performance", "python", "richtextediting", "tkinter", "wxpython" ]
stackoverflow_0001059214_performance_python_richtextediting_tkinter_wxpython.txt
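A sketch of the Freeze()/Thaw() fix from the accepted comment: suppress repaints while appending, then repaint once at the end. It assumes a wx.richtext.RichTextCtrl named ctrl and a hypothetical iterable of (colour, word) pairs.

    ctrl.Freeze()  # stop redrawing while we append
    try:
        for colour, word in words:  # hypothetical (colour, text) pairs
            ctrl.BeginTextColour(colour)
            ctrl.AppendText(word)
            ctrl.EndTextColour()
    finally:
        ctrl.Thaw()  # one repaint for the whole batch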
Q: Why does Python compile the source to bytecode before interpreting? Why does Python compile the source to bytecode before interpreting? Why not interpret from the source directly? A: Nearly no interpreter really interprets code directly, line by line – it's simply too inefficient. Almost all interpreters use some intermediate representation which can be executed easily. Also, small optimizations can be performed on this intermediate code. Python furthermore stores this code, which has a huge advantage for the next time this code gets executed: Python doesn't have to parse the code anymore; parsing is the slowest part in the compile process. Thus, a bytecode representation reduces execution overhead quite substantially. A: Because you can compile to a .pyc once and interpret from it many times. So if you're running a script many times you only have the overhead of parsing the source code once. A: Because interpreting from bytecode directly is faster. It avoids the need to do lexing, for one thing. A: Re-lexing and parsing the source code over and over, rather than doing it just once (most often on the first import), would obviously be a silly and pointless waste of effort. A: Although there is a small efficiency aspect to it (you can store the bytecode on disk or in memory), it's mostly engineering: it allows you to separate parsing from interpreting. Parsers can often be nasty creatures, full of edge-cases and having to conform to esoteric rules like using just the right amount of lookahead and resolving shift-reduce problems. By contrast, interpreting is really simple: it's just a big switch statement using the bytecode's opcode. A: I doubt very much that the reason is performance, albeit a nice side effect. I would say that it's only natural to think a VM built around some high-level assembly language would be more practical than to find and replace text in some source code string. Edit: Okay, clearly, whoever put a -1 vote on my post without leaving a reasonable comment to explain knows very little about virtual machines (run-time environments). http://channel9.msdn.com/shows/Going+Deep/Expert-to-Expert-Erik-Meijer-and-Lars-Bak-Inside-V8-A-Javascript-Virtual-Machine/
Why does Python compile the source to bytecode before interpreting?
Why does Python compile the source to bytecode before interpreting? Why not interpret from the source directly?
[ "Nearly no interpreter really interprets code directly, line by line – it's simply too inefficient. Almost all interpreters use some intermediate representation which can be executed easily. Also, small optimizations can be performed on this intermediate code.\nPython furthermore stores this code which has a huge advantage for the next time this code gets executed: Python doesn't have to parse the code anymore; parsing is the slowest part in the compile process. Thus, a bytecode representation reduces execution overhead quite substantially.\n", "Because you can compile to a .pyc once and interpret from it many times.\nSo if you're running a script many times you only have the overhead of parsing the source code once.\n", "Because interpretting from bytecode directly is faster. It avoids the need to do lexing, for one thing.\n", "Re-lexing and parsing the source code over and over, rather than doing it just once (most often on the first import), would obviously be a silly and pointless waste of effort.\n", "Although there is a small efficiency aspect to it (you can store the bytecode on disk or in memory), its mostly engineering: it allows you separate parsing from interpreting. Parsers can often be nasty creatures, full of edge-cases and having to conform to esoteric rules like using just the right amount of lookahead and resolving shift-reduce problems. By contrast, interpreting is really simple: its just a big switch statement using the bytecode's opcode.\n", "I doubt very much that the reason is performance, albeit be it a nice side effect. I would say that it's only natural to think a VM built around some high-level assembly language would be more practical than to find and replace text in some source code string.\nEdit:\nOkay, clearly, who ever put a -1 vote on my post without leaving a reasonable comment to explain knows very little about virtual machines (run-time environments).\nhttp://channel9.msdn.com/shows/Going+Deep/Expert-to-Expert-Erik-Meijer-and-Lars-Bak-Inside-V8-A-Javascript-Virtual-Machine/\n" ]
[ 39, 8, 7, 6, 3, 0 ]
[]
[]
[ "bytecode", "compiler_construction", "interpreter", "python" ]
stackoverflow_0000888100_bytecode_compiler_construction_interpreter_python.txt
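A quick way to look at the intermediate representation the answers describe: the dis module disassembles the same bytecode that CPython caches in .pyc files.

    import dis

    def f(x):
        return x + 1

    dis.dis(f)  # prints LOAD_FAST, LOAD_CONST, BINARY_ADD, RETURN_VALUE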
Q: Bizarre eclipse-pydev console behavior Stumbled upon some seemingly random character mangling in eclipse-pydev console: specific characters are read from stdout as '\xd0?' (first byte correct, second "?") Is there some solution to this? (PyDEV 1.4.6, Python 2.6, console encoding - inherited UTF-8, Eclipse 3.5, WinXP with UK locale) Code: import sys if __name__ == "__main__": for l in sys.stdin: print 'Byte: ', repr(l) try: u = repr(unicode(l)) print 'Unicode:', u except Exception, e: print 'Fail: ', e Input: йцукенгшщзхъ фывапролджэ ячсмитьбю ЙЦУКЕНГШЩЗХЪ ФЫВАПРОЛДЖЭ ЯЧСМИТЬБЮ and output: Byte: '\xd0\xb9\xd1\x86\xd1\x83\xd0\xba\xd0\xb5\xd0\xbd\xd0\xb3\xd1\x88\xd1\x89\xd0\xb7\xd1\x85\xd1\x8a\r\n' Unicode: u'\u0439\u0446\u0443\u043a\u0435\u043d\u0433\u0448\u0449\u0437\u0445\u044a\r\n' Byte: '\xd1\x84\xd1\x8b\xd0\xb2\xd0\xb0\xd0\xbf\xd1\x80\xd0\xbe\xd0\xbb\xd0\xb4\xd0\xb6\xd1?\r\n' Fail: 'utf8' codec can't decode bytes in position 20-21: invalid data Byte: '\xd1?\xd1\x87\xd1?\xd0\xbc\xd0\xb8\xd1\x82\xd1\x8c\xd0\xb1\xd1\x8e\r\n' Fail: 'utf8' codec can't decode bytes in position 0-1: invalid data Byte: '\xd0\x99\xd0\xa6\xd0\xa3\xd0\x9a\xd0\x95\xd0?\xd0\x93\xd0\xa8\xd0\xa9\xd0\x97\xd0\xa5\xd0\xaa\r\n' Fail: 'utf8' codec can't decode bytes in position 10-11: invalid data Byte: '\xd0\xa4\xd0\xab\xd0\x92\xd0?\xd0\x9f\xd0\xa0\xd0\x9e\xd0\x9b\xd0\x94\xd0\x96\xd0\xad\r\n' Fail: 'utf8' codec can't decode bytes in position 6-7: invalid data Byte: '\xd0\xaf\xd0\xa7\xd0\xa1\xd0\x9c\xd0\x98\xd0\xa2\xd0\xac\xd0\x91\xd0\xae\r\n' Unicode: u'\u042f\u0427\u0421\u041c\u0418\u0422\u042c\u0411\u042e\r\n' A: Well, I don't know how to fix it, but I have deduced the pattern in what goes wrong. The bytes that get replaced with "?" are precisely those bytes that are not defined in windows-1252 - that is, bytes 0x81, 0x8d, 0x8f, 0x90, and 0x9d. What this looks like to me is that somehow you're getting this series of translations: unicode input -> series of bytes in utf-8 utf-8 bytes -> read by something that expects the input to be Windows-1252, and so translates impossible bytes to "?" the characters are then converted back to bytes via windows-1252, and fed into your variable l. Does this version of pydev give sys.stdin.encoding a decent value? And how does sys.stdin.encoding compare to the result of sys.getdefaultencoding()? A: I'm not too sure about input encoding, but I've found that with output encoding to tty streams, an explicit encoding step was needed for Python 2.x but not for Python 3.x. So for input you may need an explicit decode step using e.g. l.decode(sys.stdin.encoding). Does it work OK in a vanilla Python console?
Bizarre eclipse-pydev console behavior
Stumbled upon some seemingly random character mangling in eclipse-pydev console: specific characters are read from stdout as '\xd0?' (first byte correct, second "?") Is there some solution to this? (PyDEV 1.4.6, Python 2.6, console encoding - inherited UTF-8, Eclipse 3.5, WinXP with UK locale) Code: import sys if __name__ == "__main__": for l in sys.stdin: print 'Byte: ', repr(l) try: u = repr(unicode(l)) print 'Unicode:', u except Exception, e: print 'Fail: ', e Input: йцукенгшщзхъ фывапролджэ ячсмитьбю ЙЦУКЕНГШЩЗХЪ ФЫВАПРОЛДЖЭ ЯЧСМИТЬБЮ and output: Byte: '\xd0\xb9\xd1\x86\xd1\x83\xd0\xba\xd0\xb5\xd0\xbd\xd0\xb3\xd1\x88\xd1\x89\xd0\xb7\xd1\x85\xd1\x8a\r\n' Unicode: u'\u0439\u0446\u0443\u043a\u0435\u043d\u0433\u0448\u0449\u0437\u0445\u044a\r\n' Byte: '\xd1\x84\xd1\x8b\xd0\xb2\xd0\xb0\xd0\xbf\xd1\x80\xd0\xbe\xd0\xbb\xd0\xb4\xd0\xb6\xd1?\r\n' Fail: 'utf8' codec can't decode bytes in position 20-21: invalid data Byte: '\xd1?\xd1\x87\xd1?\xd0\xbc\xd0\xb8\xd1\x82\xd1\x8c\xd0\xb1\xd1\x8e\r\n' Fail: 'utf8' codec can't decode bytes in position 0-1: invalid data Byte: '\xd0\x99\xd0\xa6\xd0\xa3\xd0\x9a\xd0\x95\xd0?\xd0\x93\xd0\xa8\xd0\xa9\xd0\x97\xd0\xa5\xd0\xaa\r\n' Fail: 'utf8' codec can't decode bytes in position 10-11: invalid data Byte: '\xd0\xa4\xd0\xab\xd0\x92\xd0?\xd0\x9f\xd0\xa0\xd0\x9e\xd0\x9b\xd0\x94\xd0\x96\xd0\xad\r\n' Fail: 'utf8' codec can't decode bytes in position 6-7: invalid data Byte: '\xd0\xaf\xd0\xa7\xd0\xa1\xd0\x9c\xd0\x98\xd0\xa2\xd0\xac\xd0\x91\xd0\xae\r\n' Unicode: u'\u042f\u0427\u0421\u041c\u0418\u0422\u042c\u0411\u042e\r\n'
[ "Well, I don't know how to fix it, but I have deduced the pattern in what goes wrong.\nThe bytes that get replaced with \"?\" are precisely those bytes that are not defined in windows-1252 - that is, bytes 0x81, 0x8d, 0x8f, 0x90, and 0x9d.\nWhat this looks like to me is that somehow you're getting this series of translations:\n\nunicode input -> series of bytes in utf-8\nutf-8 bytes -> read by something that expects the input to be Windows-1252, and so translates impossible bytes to \"?\"\nthe characters in converted back to bytes via windows-1252, and fed into your variable l.\n\nDoes this version of pydev give sys.stdin.encoding a decent value? And how does sys.stdin.encoding compare to the result of sys.getdefaultencoding()?\n", "I'm not too sure about input encoding, but I've found that with output encoding to tty streams, an explicit encoding step was needed for Python 2.x but not for Python 3.x.\nSo for input you may need an explicit decode step using e.g. l.decode(sys.stdin.encoding).\nDoes it work OK in a vanilla Python console?\n" ]
[ 2, 0 ]
[]
[]
[ "pydev", "python", "utf_8" ]
stackoverflow_0001188103_pydev_python_utf_8.txt
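A sketch of the explicit-decode suggestion from the second answer, with a fallback for consoles that do not report an encoding (whether this cures the PyDev mangling depends on what the console actually delivers):

    import sys

    enc = sys.stdin.encoding or 'utf-8'  # consoles may leave encoding unset
    for l in sys.stdin:
        try:
            print 'Unicode:', repr(l.decode(enc))
        except UnicodeDecodeError, e:
            print 'Fail: ', e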
Q: Does YouTube's API allow uploading and setting of thumbnails? I haven't found anything in their documentation or on the web that says yes or no. Using the Python library. A: Apparently thumbnails can't be updated, per the docs -- media:thumbnail is not listed among the tags you can set on update. On uploading the video, yes, you can have media:thumbnail tags as part of your media:group tag which gives the video's metadata.
Does YouTube's API allow uploading and setting of thumbnails?
I haven't found anything in their documentation or on the web that says yes or no. Using the Python library.
[ "Apparently thumbnails can't be updated, per the docs -- media:thumbnail is not listed among the tags you can set on update. On uploading the video, yes, you can have media:thumbnail tags as part of your media:group tag which gives the video's metadata.\n" ]
[ 2 ]
[]
[]
[ "api", "python", "youtube" ]
stackoverflow_0001189365_api_python_youtube.txt
Q: How can I launch a background process in Pylons? I am trying to write an application that will allow a user to launch a fairly long-running process (5-30 seconds). It should then allow the user to check the output of the process as it is generated. The output will only be needed for the user's current session so nothing needs to be stored long-term. I have two questions regarding how to accomplish this while taking advantage of the Pylons framework: What is the best way to launch a background process such as this with a Pylons controller? What is the best way to get the output of the background process back to the user? (Should I store the output in a database, in session data, etc.?) Edit: The problem is if I launch a command using subprocess in a controller, the controller waits for the subprocess to finish before continuing, showing the user a blank page that is just loading until the process is complete. I want to be able to redirect the user to a status page immediately after starting the subprocess, allowing it to complete on its own. A: I've handled this problem in the past (long-running process invoked over HTTP) by having my invoked 2nd process daemonize. Your Pylons controller makes a system call to your 2nd process (passing whatever data is needed) and the 2nd process immediately becomes a daemon. This ends the system call and your controller can return. My web-apps usually then issue AJAX requests to "check-on" the daemon process until it has completed. I've used both tmp files (cPickle works well) and databases to share information between the daemon and the web-app. Excellent python daemon recipe: http://code.activestate.com/recipes/278731/ A: I think this has little to do with pylons. I would do it (in whatever framework) in these steps: generate some ID for the new job, and add a record in the database. create a new process, e.g. through the subprocess module, and pass the ID on the command line (*). have the process write its output to /tmp/project/ID in pylons, implement URLs of the form /job/ID or /job?id=ID. That will look into the database whether the job is completed or not, and merge the temporary output into the page. (*) It might be better for the subprocess to create another process immediately, and have the pylons process wait for the first child, so that there will be no zombie processes.
How can I launch a background process in Pylons?
I am trying to write an application that will allow a user to launch a fairly long-running process (5-30 seconds). It should then allow the user to check the output of the process as it is generated. The output will only be needed for the user's current session so nothing needs to be stored long-term. I have two questions regarding how to accomplish this while taking advantage of the Pylons framework: What is the best way to launch a background process such as this with a Pylons controller? What is the best way to get the output of the background process back to the user? (Should I store the output in a database, in session data, etc.?) Edit: The problem is if I launch a command using subprocess in a controller, the controller waits for the subprocess to finish before continuing, showing the user a blank page that is just loading until the process is complete. I want to be able to redirect the user to a status page immediately after starting the subprocess, allowing it to complete on its own.
[ "I've handled this problem in the past (long-running process invoked over HTTP) by having my invoked 2nd process daemonize. Your Pylons controller makes a system call to your 2nd process (passing whatever data is needed) and the 2nd process immediately becomes a daemon. This ends the system call and your controller can return. \nMy web-apps usually then issue AJAX requests to \"check-on\" the daemon process until it has completed. I've used both tmp files (cPickle works well) and databases to share information between the daemon and the web-app.\nExcellent python daemon recipe: http://code.activestate.com/recipes/278731/\n", "I think this has little to do with pylons. I would do it (in whatever framework) in these steps:\n\ngenerate some ID for the new job, and add a record in the database.\ncreate a new process, e.g. through the subprocess module, and pass the ID on the command line (*).\nhave the process write its output to /tmp/project/ID\nin pylons, implement URLs of the form /job/ID or /job?id=ID. That will look into the database whether the job is completed or not, and merge the temporary output into the page.\n\n(*) It might be better for the subprocess to create another process immediately, and have the pylons process wait for the first child, so that there will be no zombie processes.\n" ]
[ 6, 1 ]
[]
[]
[ "background", "pylons", "python" ]
stackoverflow_0001182587_background_pylons_python.txt
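A sketch of the second answer's recipe: generate a job ID, start a detached worker process whose output lands in a per-job file, and return at once. The worker.py script and the database bookkeeping are hypothetical.

    import subprocess, uuid

    def launch_job():
        job_id = uuid.uuid4().hex
        out = open('/tmp/project/%s' % job_id, 'w')
        subprocess.Popen(['python', 'worker.py', job_id],  # hypothetical worker
                         stdout=out, stderr=subprocess.STDOUT)
        out.close()  # the child keeps its own copy of the file descriptor
        # record job_id in the database here, then redirect to /job/<job_id>
        return job_id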
Q: Event Handling in Chaco When hovering over a data point in Chaco, I would like a small text box to appear, with the text I desire. Also, when I click on a data point (or close enough), I would like my program to take a certain action. I have seen relevant parts of the Chaco documentation, but implementing them has proved to be difficult. Any help would be appreciated. A: Focusing first on the first (hover->textbox) issue, can you better explain what you've tried so far and how it's not working? E.g., from enthought.enable.tools import hover_tool tool = hover_tool.HoverTool(theplot, callback=showtext) etc? There's a more complex example of hover-tool use here (shows a PlotToolbar rather than just a textbox) which you might be able to adapt.
Event Handling in Chaco
When hovering over a data point in Chaco, I would like a small text box to appear, with the text I desire. Also, when I click on a data point (or close enough), I would like my program to take a certain action. I have seen relevant parts of the Chaco documentation, but implementing them has proved to be difficult. Any help would be appreciated.
[ "Focusing first on the first (hover->textbox) issue, can you better explain what you've tried so far and how it's not working? E.g.,\nfrom enthought.enable.tools import hover_tool\n\ntool = hover_tool.HoverTool(theplot, callback=showtext)\n\netc? There's a more complex example of hover-tool use here (shows a PlotToolbar rather than just a textbox) which you might be able to adapt.\n" ]
[ 0 ]
[]
[]
[ "chaco", "python" ]
stackoverflow_0001189864_chaco_python.txt
Q: Empty XML element handling in Python I'm puzzled by the minidom parser's handling of empty elements, as shown in the following code section. import xml.dom.minidom doc = xml.dom.minidom.parseString('<value></value>') print doc.firstChild.nodeValue.__repr__() # Out: None print doc.firstChild.toxml() # Out: <value/> doc = xml.dom.minidom.Document() v = doc.appendChild(doc.createElement('value')) v.appendChild(doc.createTextNode('')) print v.firstChild.nodeValue.__repr__() # Out: '' print doc.firstChild.toxml() # Out: <value></value> How can I get consistent behavior? I'd like to receive an empty string as the value of an empty element (which IS what I put in the XML structure in the first place). A: Cracking open xml.dom.minidom and searching for "/>", we find this: # Method of the Element(Node) class. def writexml(self, writer, indent="", addindent="", newl=""): # [snip] if self.childNodes: writer.write(">%s"%(newl)) for node in self.childNodes: node.writexml(writer,indent+addindent,addindent,newl) writer.write("%s</%s>%s" % (indent,self.tagName,newl)) else: writer.write("/>%s"%(newl)) We can deduce from this that the short-end-tag form only occurs when childNodes is an empty list. Indeed, this seems to be true: >>> doc = Document() >>> v = doc.appendChild(doc.createElement('v')) >>> v.toxml() '<v/>' >>> v.childNodes [] >>> v.appendChild(doc.createTextNode('')) <DOM Text node "''"> >>> v.childNodes [<DOM Text node "''">] >>> v.toxml() '<v></v>' As pointed out by Lloyd, the XML spec makes no distinction between the two. If your code does make the distinction, that means you need to rethink how you want to serialize your data. xml.dom.minidom simply displays something differently because it's easier to code. You can, however, get consistent output. Simply inherit the Element class and override the toxml method such that it will print out the short-end-tag form when there are no child nodes with non-empty text content. Then monkeypatch the module to use your new Element class. A: value = thing.firstChild.nodeValue or '' A: The XML spec does not distinguish these two cases.
Empty XML element handling in Python
I'm puzzled by the minidom parser's handling of empty elements, as shown in the following code section. import xml.dom.minidom doc = xml.dom.minidom.parseString('<value></value>') print doc.firstChild.nodeValue.__repr__() # Out: None print doc.firstChild.toxml() # Out: <value/> doc = xml.dom.minidom.Document() v = doc.appendChild(doc.createElement('value')) v.appendChild(doc.createTextNode('')) print v.firstChild.nodeValue.__repr__() # Out: '' print doc.firstChild.toxml() # Out: <value></value> How can I get consistent behavior? I'd like to receive an empty string as the value of an empty element (which IS what I put in the XML structure in the first place).
[ "Cracking open xml.dom.minidom and searching for \"/>\", we find this:\n# Method of the Element(Node) class.\ndef writexml(self, writer, indent=\"\", addindent=\"\", newl=\"\"):\n # [snip]\n if self.childNodes:\n writer.write(\">%s\"%(newl))\n for node in self.childNodes:\n node.writexml(writer,indent+addindent,addindent,newl)\n writer.write(\"%s</%s>%s\" % (indent,self.tagName,newl))\n else:\n writer.write(\"/>%s\"%(newl))\n\nWe can deduce from this that the short-end-tag form only occurs when childNodes is an empty list. Indeed, this seems to be true:\n>>> doc = Document()\n>>> v = doc.appendChild(doc.createElement('v'))\n>>> v.toxml()\n'<v/>'\n>>> v.childNodes\n[]\n>>> v.appendChild(doc.createTextNode(''))\n<DOM Text node \"''\">\n>>> v.childNodes\n[<DOM Text node \"''\">]\n>>> v.toxml()\n'<v></v>'\n\nAs pointed out by Lloyd, the XML spec makes no distinction between the two. If your code does make the distinction, that means you need to rethink how you want to serialize your data. \nxml.dom.minidom simply displays something differently because it's easier to code. You can, however, get consistent output. Simply inherit the Element class and override the toxml method such that it will print out the short-end-tag form when there are no child nodes with non-empty text content. Then monkeypatch the module to use your new Element class.\n", "value = thing.firstChild.nodeValue or ''\n\n", "Xml spec does not distinguish these two cases.\n" ]
[ 4, 1, 1 ]
[]
[]
[ "python", "string", "xml" ]
stackoverflow_0001187718_python_string_xml.txt
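A small helper capturing the first answer's advice: normalise both shapes (no text child at all vs. an empty text child) to an empty string instead of monkeypatching minidom.

    def element_text(elem):
        # Return the element's text content, with '' for both empty shapes.
        child = elem.firstChild
        if child is None or child.nodeValue is None:
            return ''
        return child.nodeValue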
Q: How can I get nose to find class attributes defined on a base test class? I'm getting some integration tests running against the database, and I'd like to have a structure that looks something like this: class OracleMixin(object): oracle = True # ... set up the oracle connection class SqlServerMixin(object): sql_server = True # ... set up the sql server connection class SomeTests(object): integration = True # ... define test methods here class test_OracleSomeTests(SomeTests, OracleMixin): pass class test_SqlServerSomeTests(SomeTests, SqlServerMixin): pass This way, I can run SQL Server tests and Oracle tests separately like this: nosetests -a oracle nosetests -a sql_server Or all integration tests like this: nosetests -a integration However, it appears that nose will only look for attributes on the subclass, not on the base class. Thus I have to define the test classes like this or the tests won't run: class test_OracleSomeTests(SomeTests, OracleMixin): oracle = True integration = True class test_SqlServerSomeTests(SomeTests, SqlServerMixin): sql_server = True integration = True This is a bit tedious to maintain. Any ideas how to get around this? If I was just dealing with one base class, I'd just use a metaclass and define the attributes on each class. But I get an uneasy feeling about having a metaclass for the test class, a metaclass for Oracle, and a metaclass for SQL Server. A: I do not think you can without making your own plugin. The code in the attrib plugin only looks at the class's __dict__. Here is the code def wantClass(self, cls): """Accept the class if the class or any method is wanted. """ cls_attr = cls.__dict__ if self.validateAttrib(cls_attr) is not False: return None ... You could hack the plugin to do something like (not tested). def wantClass(self, cls): """Accept the class if the class or any method is wanted. """ for class_ in cls.__mro__: cls_attr = class_.__dict__ if self.validateAttrib(cls_attr) is not False: return None cls_attr = cls.__dict__ ... However, I am not sure that this is better or worse than the metaclass option. A: If you want to find an attribute defined on a parent class, and you have an attribute of the same name in the subclass, you will need to add the name of the parent class to access the scope you want I believe this is what you want: class Parent: prop = 'a property' def self_prop(self): print self.prop # will always print 'a property' def parent_prop(self): print Parent.prop class Child(Parent): prop = 'child property' def access_eclipsed(self): print Parent.prop class Other(Child): pass >>> Parent().self_prop() "a property" >>> Parent().parent_prop() "a property" >>> Child().self_prop() "child property" >>> Child().parent_prop() "a property" >>> Child().access_eclipsed() "a property" >>> Other().self_prop() "child property" >>> Other().parent_prop() "a property" >>> Other().access_eclipsed() "a property" and in your case it looks like you have two different classes which define different variables so you can just have a try/except at the top of your test functions or maybe in the initializer and say try: isSQLServer = self.sql_server except AttributeError: isSQLServer = False (though really they should be defining the same variables so that the test class doesn't have to know anything about the subclasses)
How can I get nose to find class attributes defined on a base test class?
I'm getting some integration tests running against the database, and I'd like to have a structure that looks something like this: class OracleMixin(object): oracle = True # ... set up the oracle connection class SqlServerMixin(object): sql_server = True # ... set up the sql server connection class SomeTests(object): integration = True # ... define test methods here class test_OracleSomeTests(SomeTests, OracleMixin): pass class test_SqlServerSomeTests(SomeTests, SqlServerMixin): pass This way, I can run SQL Server tests and Oracle tests separately like this: nosetests -a oracle nosetests -a sql_server Or all integration tests like this: nosetests -a integration However, it appears that nose will only look for attributes on the subclass, not on the base class. Thus I have to define the test classes like this or the tests won't run: class test_OracleSomeTests(SomeTests, OracleMixin): oracle = True integration = True class test_SqlServerSomeTests(SomeTests, SqlServerMixin): sql_server = True integration = True This is a bit tedious to maintain. Any ideas how to get around this? If I was just dealing with one base class, I'd just use a metaclass and define the attributes on each class. But I get an uneasy feeling about having a metaclass for the test class, a metaclass for Oracle, and a metaclass for SQL Server.
[ "I do not think you can without making your own plugin. The the code in the attrib plugin only looks at the classes __dict__. Here is the code \ndef wantClass(self, cls):\n \"\"\"Accept the class if the class or any method is wanted.\n \"\"\"\n cls_attr = cls.__dict__\n if self.validateAttrib(cls_attr) is not False:\n return None\n ...\n\nYou could hack the plugin to do something like (not tested).\ndef wantClass(self, cls):\n \"\"\"Accept the class if the class or any method is wanted.\n \"\"\"\n for class_ in cls.__mro__: \n cls_attr = class_.__dict__\n if self.validateAttrib(cls_attr) is not False:\n return None\n cls_attr = cls.__dict__\n ...\n\nHowever, I am not sure that this is better or worse that the metaclass option.\n", "If you want to find an attribute defined on a parent class, and you have an attribute of the same name in the subclass you will need to add the name of the parent class to access the scope you want\nI believe this is what you want:\nclass Parent:\n prop = 'a property'\n\n def self_prop(self):\n print self.prop\n\n # will always print 'a property'\n def parent_prop(self):\n print Parent.prop\n\nclass Child(Parent):\n prop = 'child property'\n\n def access_eclipsed(self):\n print Parent.prop\n\nclass Other(Child):\n pass\n\n>>> Parent().self_prop()\n\"a property\"\n>>> Parent().parent_prop()\n\"a property\"\n>>> Child().self_prop()\n\"child property\"\n>>> Child().parent_prop()\n\"a property\"\n>>> Child().access_eclipsed()\n\"a property\"\n>>> Other().self_prop()\n\"child property\"\n>>> Other().parent_prop()\n\"a property\"\n>>> Other().access_eclipsed()\n\"a property\"\n\nand in your case it looks like you have two different classes which define different variables so you can just have a try: catch: at the top of your test functions or maybe in the initializer\nand say\ntry:\n isSQLServer = self.sql_server\nexcept AttributeError:\n isSQLServer = False\n\n(though really they should be defining the same variables so that the test class doesn't have to know anything about the subclasses)\n" ]
[ 4, 0 ]
[]
[]
[ "integration_testing", "mixins", "multiple_inheritance", "nose", "python" ]
stackoverflow_0001188922_integration_testing_mixins_multiple_inheritance_nose_python.txt
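A third route, which the question itself gestures at, is a single metaclass that hoists the attribute flags out of the base classes into each subclass's own __dict__, which is the only place the stock attrib plugin looks. This is only a sketch that has not been run against any particular nose version; ATTRIB_FLAGS and AttribHoister are names invented here:

ATTRIB_FLAGS = ('oracle', 'sql_server', 'integration')  # hypothetical flag list

class AttribHoister(type):
    def __new__(mcs, name, bases, namespace):
        cls = type.__new__(mcs, name, bases, namespace)
        for flag in ATTRIB_FLAGS:
            # copy a flag found anywhere on the MRO into this class's own __dict__
            if flag not in cls.__dict__ and hasattr(cls, flag):
                setattr(cls, flag, getattr(cls, flag))
        return cls

class OracleMixin(object):
    oracle = True

class SomeTests(object):
    __metaclass__ = AttribHoister  # Python 2 syntax, matching the question's era
    integration = True

class test_OracleSomeTests(SomeTests, OracleMixin):
    pass  # ends up with both integration and oracle in its own __dict__

Because the mixins carry plain class attributes, one metaclass on the shared test base is enough; there is no need for a metaclass per mixin.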
Q: In Python, what's the correct way to instantiate a class from a variable?
Suppose that I have class C. I can write o = C() to create an instance of C and assign it to o. However, what if I want to assign the class itself into a variable and then instantiate it? For example, suppose that I have two classes, such as C1 and C2, and I want to do something like:
if (something):
    classToUse = C1
else:
    classToUse = C2

o = classToUse.instantiate()

What's the syntax for the actual instantiate()? Is a call to __new__() enough?

A: o = C2()

This will accomplish what you want. Or, in case you meant to use classToUse, simply use:
o = classToUse()

Hope this helps.

A: You're almost there. Instead of calling an instantiate() method, just call the variable directly. It's assigned to the class, and classes are callable:
if (something):
    classToUse = C1
else:
    classToUse = C2

o = classToUse()

A: It's simple: Python doesn't care whether a variable holds a class or a function. It just calls that value.
class A:
    pass
B=A
b=B()

A: A class is an object just like anything else, like an instance, a function, a string... a class is an instance too. So you can store it in a variable (or anywhere else that you can store stuff), and call it with () no matter where it comes from.
def f(): print "foo"

class C: pass

x = f
x() # prints foo

x = C
instance = x() # instantiates C
In Python, what's the correct way to instantiate a class from a variable?
Suppose that I have class C. I can write o = C() to create an instance of C and assign it to o. However, what if I want to assign the class itself into a variable and then instantiate it? For example, suppose that I have two classes, such as C1 and C2, and I want to do something like:
if (something):
    classToUse = C1
else:
    classToUse = C2

o = classToUse.instantiate()

What's the syntax for the actual instantiate()? Is a call to __new__() enough?
[ "o = C2()\n\nThis will accomplish what you want. Or, in case you meant to use classToUse, simply use:\no = classToUse()\n\nHope this helps.\n", "You're almost there. Instead of calling an instantiate() method, just call the variable directly. It's assigned to the class, and classes are callable:\nif (something):\n classToUse = C1\nelse:\n classToUse = C2\n\no = classToUse()\n\n", "It's simple, Python don't recognize where a varible is a class or function. It's just call that value.\nclass A:\n pass\nB=A\nb=B()\n\n", "A class is an object just like anything else, like an instance, a function, a string... a class is an instance too. So you can store it in a variable (or anywhere else that you can store stuff), and call it with () no matter where it comes from. \ndef f(): print \"foo\"\n\nclass C: pass\n\nx = f\nx() # prints foo\n\nx = C\ninstance = x() # instanciates C\n\n" ]
[ 18, 12, 2, 0 ]
[]
[]
[ "instantiation", "python" ]
stackoverflow_0001189649_instantiation_python.txt
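When there are more than two candidate classes, the same callable-class idea above scales naturally to a plain dict used as a dispatch table; the key values here are purely illustrative:

class C1(object):
    pass

class C2(object):
    pass

classes = {'c1': C1, 'c2': C2}  # classes are ordinary objects, so a dict can hold them

key = 'c1'                      # e.g. read from config or user input
o = classes[key]()              # look the class up, then call it to instantiate
print(type(o).__name__)         # prints: C1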
Q: Executing current Python script in Emacs on Windows
I've just started learning Emacs, and decided to start writing Python in it. I tried using C-c C-c to execute the current buffer, but I get the message Searching for program: no such file or directory, python. I've looked on google, but I'm none the wiser as to how to sort this out (bear in mind I know next to nothing about Emacs!)

A: I managed to work it out, following the instructions here. I used python-mode.el, where before I had been using Emacs' built-in python.el, but according to emacswiki, "The version in Emacs 22 has a bunch of problems". Hope someone else running Emacs 22 on Windows XP finds this useful one day!

A: Try adding C:\Python26 (or whatever Python you have installed) to the PATH environment variable.
I find python-mode and yasnippet to be useful for writing Python in emacs.
Executing current Python script in Emacs on Windows
I've just started learning Emacs, and decided to start writing Python in it. I tried using C-c C-c to execute the current buffer, but I get the message Searching for program: no such file or directory, python. I've looked on google, but I'm none the wiser as to how to sort this out (bear in mind I know next to nothing about Emacs!)
[ "I managed to work it out, following the instructions here. I used python-mode.el, when before I had been using Emacs' built-in python.el, but according to emacswiki, \"The version in Emacs 22 has a bunch of problems\". Hope someone else running Emacs 22 on Windows XP finds this useful one day!\n", "Try adding C:\\Python26 (or whatever Python you have installed) to the PATH environment variable. \nI find python-mode and yasnippet to be useful for writing Python in emacs.\n" ]
[ 3, 2 ]
[]
[]
[ "emacs", "python" ]
stackoverflow_0001190595_emacs_python.txt
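Since the failure above is an executable lookup, a quick diagnostic is to scan PATH from Python the same way a program launcher would; this sketch is only for checking the environment, not something Emacs itself needs:

import os

def which(program):
    # minimal 'which': return the first PATH entry that contains the program
    for d in os.environ['PATH'].split(os.pathsep):
        for ext in ('', '.exe'):  # Windows needs the .exe suffix
            candidate = os.path.join(d, program + ext)
            if os.path.isfile(candidate):
                return candidate
    return None

print(which('python'))  # None here means Emacs will fail the same lookup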
Q: jQuery getJSON callback does not work - even with valid JSON - and seems to be using "OPTION" request not "GET"
The background is that I've got a celery distributed job server configured with a Django view that returns the status of a running job in JSON. The job server is located at celeryserver.mydomain.com and the page I'm executing the jQuery from is www.mydomain.com so I shouldn't need to consider JSONP for this should I, as the requests aren't being made to different domains?
Watching my server logs I see that jQuery is executing the getJSON call every 3 seconds as it should (with the Javascript setInterval). It does seem to use an OPTION request, but I've confirmed using curl that the JSON is still returned for these request types.
The issue is that the console.log() Firebug call in the jQuery below doesn't ever seem to run! The one before the getJSON call does. Not having a callback work is a problem for me because I was hoping to poll for a celery job status in this manner and do various things based on the status of the job.
<script type="text/javascript">
var job_id = 'a8f25420-1faf-4084-bf45-fe3f82200ccb';
// wait for the DOM to be loaded then start polling for conversion status
$(document).ready(function() {
    var getConvertStatus = function(){
        console.log('getting some status');
        $.getJSON("https://celeryserver.mydomain.com/done/" + job_id, function(data){
            console.log('callback works');
        });
    }
    setInterval(getConvertStatus, 3000);
});
</script>

I've used curl to make sure of what I'm receiving from the server:
$ curl -D - -k -X GET https://celeryserver.mydomain.com/done/a8f25420-1faf-4084-bf45-fe3f82200ccb
HTTP/1.1 200 OK
Server: nginx/0.6.35
Date: Mon, 27 Jul 2009 06:08:42 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Connection: close

{"task": {"executed": true, "id": "a8f25420-1faf-4084-bf45-fe3f82200ccb"}}

That JSON looks fine to me and JSONlint.com validates it for me right now... I also simulated the jQuery query with -X OPTION and got exactly the same data back from the server as with a GET (content-type of application/json etc.) I've been staring at this for ages now, any help greatly appreciated. I'm a pretty new jQuery user but this seems like it should be pretty problem-free so I have no idea what I'm doing wrong!

A: I think you have a cross-subdomain issue, sub.domain.tld and domain.tld are not the same.
I recommend you to install Firebug and check if your code is throwing a Permission denied exception when the request starts, if it's the case, go for JSONP...

A: change your url to something like:
"https://celeryserver.mydomain.com/done/" + job_id + "?callback=?"
and then on your django view result should be something to the effect of:
'{callback}({json})'.format(callback=request.GET['callback'], json=MyJSON)

...there are probably plenty of ways of doing that last line, but basically read in the callback parameter (or name it whatever you want) and then return it as calling your json object (jQuery takes care of creating a callback function (it replaces '?' with the generated function))

A: As several people stated, sub-domains count as domains and I had a cross-domain issue :)
I solved it by creating a little piece of Django Middleware that changes the response from my views if they're returning JSON and the request had a callback attached.
class JSONPMiddleware:
    def process_response(self, request, response):
        ctype = response.get('content-type', None)
        cback = request.GET.get('callback', None)

        if ctype == 'application/json' and cback:
            jsonp = '{callback}({json})'.format(callback=cback, json=response.content)
            return HttpResponse(content=jsonp, mimetype='application/javascript')
        return response

All is now working as planned. Thanks!

A: Are you fetching the JSON from another domain? If so, you're most likely running into cross-domain issues. You'll need to use JSONP. jQuery does this automatically, but the server needs to know that.
See:
http://www.ibm.com/developerworks/library/wa-aj-jsonp1/
jQuery getJSON callback does not work - even with valid JSON - and seems to be using "OPTION" request not "GET"
The background is that I've got a celery distributed job server configured with a Django view that returns the status of a running job in JSON. The job server is located at celeryserver.mydomain.com and the page I'm executing the jQuery from is www.mydomain.com so I shouldn't need to consider JSONP for this should I, as the requests aren't being made to different domains?
Watching my server logs I see that jQuery is executing the getJSON call every 3 seconds as it should (with the Javascript setInterval). It does seem to use an OPTION request, but I've confirmed using curl that the JSON is still returned for these request types.
The issue is that the console.log() Firebug call in the jQuery below doesn't ever seem to run! The one before the getJSON call does. Not having a callback work is a problem for me because I was hoping to poll for a celery job status in this manner and do various things based on the status of the job.
<script type="text/javascript">
var job_id = 'a8f25420-1faf-4084-bf45-fe3f82200ccb';
// wait for the DOM to be loaded then start polling for conversion status
$(document).ready(function() {
    var getConvertStatus = function(){
        console.log('getting some status');
        $.getJSON("https://celeryserver.mydomain.com/done/" + job_id, function(data){
            console.log('callback works');
        });
    }
    setInterval(getConvertStatus, 3000);
});
</script>

I've used curl to make sure of what I'm receiving from the server:
$ curl -D - -k -X GET https://celeryserver.mydomain.com/done/a8f25420-1faf-4084-bf45-fe3f82200ccb
HTTP/1.1 200 OK
Server: nginx/0.6.35
Date: Mon, 27 Jul 2009 06:08:42 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Connection: close

{"task": {"executed": true, "id": "a8f25420-1faf-4084-bf45-fe3f82200ccb"}}

That JSON looks fine to me and JSONlint.com validates it for me right now... I also simulated the jQuery query with -X OPTION and got exactly the same data back from the server as with a GET (content-type of application/json etc.) I've been staring at this for ages now, any help greatly appreciated. I'm a pretty new jQuery user but this seems like it should be pretty problem-free so I have no idea what I'm doing wrong!
[ "I think you have a cross-subdomain issue, sub.domain.tld and domain.ltd are not the same.\nI recommend you to install Firebug and check if your code is throwing an Permission denied Exception when the request starts, if it's the case, go for JSONP...\n", "change your url to something like:\n\"https://celeryserver.mydomain.com/done/\" + job_id + \"?callback=?\"\nand then on your django view result should be something to the effect of:\n'{callback}({json})'.format(callback=request.GET['callback'], json=MyJSON)\n\n...there are probably plenty of ways of doing that last line, but\nbasically read in the callback parameter (or name it whatever you want)\nand then return it as calling your json object\n(jQuery takes care of creating a callback function (it replaces '?' with the generated function))\n", "As several people stated, sub-domains count as domains and I had a cross-domain issue :)\nI solved it by creating a little piece of Django Middleware that changes the response from my views if they're returning JSON and the request had a callback attached.\nclass JSONPMiddleware:\n def process_response(self, request, response):\n ctype = response.get('content-type', None)\n cback = request.GET.get('callback', None)\n\n if ctype == 'application/json' and cback:\n jsonp = '{callback}({json})'.format(callback=cback, json=response.content)\n return HttpResponse(content=jsonp, mimetype='application/javascript')\n return response\n\nAll is now working as planned. Thanks!\n", "Are you fetching the JSON from another domain? If so, you're most likely running into cross-domain issues. You'll need to use JSONP. jQuery does this automatically, but the server needs to know that. \nSee:\nhttp://www.ibm.com/developerworks/library/wa-aj-jsonp1/\n" ]
[ 3, 1, 1, 0 ]
[]
[]
[ "django", "jquery", "json", "python" ]
stackoverflow_0001186827_django_jquery_json_python.txt
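For completeness, the same callback wrapping can be done per-view instead of in middleware. A sketch of what the status view might look like, assuming a Django of that era (where HttpResponse still takes mimetype, as in the middleware above) and reusing the JSON shape from the question; the view name is made up:

import json

from django.http import HttpResponse

def done(request, job_id):
    payload = json.dumps({'task': {'executed': True, 'id': job_id}})
    callback = request.GET.get('callback')
    if callback:
        # JSONP: wrap the JSON in the caller-supplied function name
        return HttpResponse('%s(%s)' % (callback, payload),
                            mimetype='application/javascript')
    return HttpResponse(payload, mimetype='application/json')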
Q: Hello World from cython wiki not working I'm trying to follow this tutorial from Cython: http://docs.cython.org/docs/tutorial.html#the-basics-of-cython and I'm having a problem. The files are very simple. I have a helloworld.pyx: print "Hello World" and a setup.py: from distutils.core import setup from distutils.extension import Extension from Cython.Distutils import build_ext setup( cmdclass = {'build_ext': build_ext}, ext_modules = [Extension("helloworld", ["helloworld.pyx"])] ) and I compile it with the standard command: python setup.py build_ext --inplace I got the following error: running build running build_ext building 'helloworld' extension gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.6 -c helloworld.c -o build/temp.linux-x86_64-2.6/helloworld.o helloworld.c:4:20: error: Python.h: No such file or directory helloworld.c:5:26: error: structmember.h: No such file or directory helloworld.c:34: error: expected specifier-qualifier-list before ‘PyObject’ helloworld.c:121: error: expected specifier-qualifier-list before ‘PyObject’ helloworld.c:139: error: expected ‘)’ before ‘*’ token helloworld.c:140: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘__pyx_PyInt_AsLongLong’ helloworld.c:141: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘__pyx_PyInt_AsUnsignedLongLong’ helloworld.c:142: error: expected ‘)’ before ‘*’ token helloworld.c:147: error: expected ‘)’ before ‘*’ token helloworld.c:148: error: expected ‘)’ before ‘*’ token helloworld.c:149: error: expected ‘)’ before ‘*’ token helloworld.c:150: error: expected ‘)’ before ‘*’ token helloworld.c:151: error: expected ‘)’ before ‘*’ token helloworld.c:152: error: expected ‘)’ before ‘*’ token helloworld.c:153: error: expected ‘)’ before ‘*’ token helloworld.c:154: error: expected ‘)’ before ‘*’ token helloworld.c:155: error: expected ‘)’ before ‘*’ token helloworld.c:156: error: expected ‘)’ before ‘*’ token helloworld.c:157: error: expected ‘)’ before ‘*’ token helloworld.c:172: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘*’ token helloworld.c:173: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘*’ token helloworld.c:174: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘*’ token helloworld.c:181: error: expected ‘)’ before ‘*’ token helloworld.c:198: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘*’ token helloworld.c:200: error: array type has incomplete element type helloworld.c:221: error: ‘__pyx_kp_1’ undeclared here (not in a function) helloworld.c:221: warning: excess elements in struct initializer helloworld.c:221: warning: (near initialization for ‘__pyx_string_tab[0]’) helloworld.c:221: warning: excess elements in struct initializer helloworld.c:221: warning: (near initialization for ‘__pyx_string_tab[0]’) helloworld.c:221: warning: excess elements in struct initializer helloworld.c:221: warning: (near initialization for ‘__pyx_string_tab[0]’) helloworld.c:221: warning: excess elements in struct initializer helloworld.c:221: warning: (near initialization for ‘__pyx_string_tab[0]’) helloworld.c:221: warning: excess elements in struct initializer helloworld.c:221: warning: (near initialization for ‘__pyx_string_tab[0]’) helloworld.c:221: warning: excess elements in struct initializer helloworld.c:221: warning: (near initialization for ‘__pyx_string_tab[0]’) helloworld.c:222: warning: excess elements in struct initializer helloworld.c:222: warning: (near 
initialization for ‘__pyx_string_tab[1]’) helloworld.c:222: warning: excess elements in struct initializer helloworld.c:222: warning: (near initialization for ‘__pyx_string_tab[1]’) helloworld.c:222: warning: excess elements in struct initializer helloworld.c:222: warning: (near initialization for ‘__pyx_string_tab[1]’) helloworld.c:222: warning: excess elements in struct initializer helloworld.c:222: warning: (near initialization for ‘__pyx_string_tab[1]’) helloworld.c:222: warning: excess elements in struct initializer helloworld.c:222: warning: (near initialization for ‘__pyx_string_tab[1]’) helloworld.c:222: warning: excess elements in struct initializer helloworld.c:222: warning: (near initialization for ‘__pyx_string_tab[1]’) helloworld.c:237: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘inithelloworld’ helloworld.c:238: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘inithelloworld’ helloworld.c:305: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘*’ token helloworld.c:313: error: expected ‘)’ before ‘*’ token helloworld.c:379:21: error: compile.h: No such file or directory helloworld.c:380:25: error: frameobject.h: No such file or directory helloworld.c:381:23: error: traceback.h: No such file or directory helloworld.c: In function ‘__Pyx_AddTraceback’: helloworld.c:384: error: ‘PyObject’ undeclared (first use in this function) helloworld.c:384: error: (Each undeclared identifier is reported only once helloworld.c:384: error: for each function it appears in.) helloworld.c:384: error: ‘py_srcfile’ undeclared (first use in this function) helloworld.c:385: error: ‘py_funcname’ undeclared (first use in this function) helloworld.c:386: error: ‘py_globals’ undeclared (first use in this function) helloworld.c:387: error: ‘empty_string’ undeclared (first use in this function) helloworld.c:388: error: ‘PyCodeObject’ undeclared (first use in this function) helloworld.c:388: error: ‘py_code’ undeclared (first use in this function) helloworld.c:389: error: ‘PyFrameObject’ undeclared (first use in this function) helloworld.c:389: error: ‘py_frame’ undeclared (first use in this function) helloworld.c:392: warning: implicit declaration of function ‘PyString_FromString’ helloworld.c:399: warning: implicit declaration of function ‘PyString_FromFormat’ helloworld.c:412: warning: implicit declaration of function ‘PyModule_GetDict’ helloworld.c:412: error: ‘__pyx_m’ undeclared (first use in this function) helloworld.c:415: warning: implicit declaration of function ‘PyString_FromStringAndSize’ helloworld.c:420: warning: implicit declaration of function ‘PyCode_New’ helloworld.c:429: error: ‘__pyx_empty_tuple’ undeclared (first use in this function) helloworld.c:440: warning: implicit declaration of function ‘PyFrame_New’ helloworld.c:441: warning: implicit declaration of function ‘PyThreadState_GET’ helloworld.c:448: warning: implicit declaration of function ‘PyTraceBack_Here’ helloworld.c:450: warning: implicit declaration of function ‘Py_XDECREF’ helloworld.c: In function ‘__Pyx_InitStrings’: helloworld.c:458: error: ‘__Pyx_StringTabEntry’ has no member named ‘p’ helloworld.c:460: error: ‘__Pyx_StringTabEntry’ has no member named ‘is_unicode’ helloworld.c:460: error: ‘__Pyx_StringTabEntry’ has no member named ‘is_identifier’ helloworld.c:461: error: ‘__Pyx_StringTabEntry’ has no member named ‘p’ helloworld.c:461: warning: implicit declaration of function ‘PyUnicode_DecodeUTF8’ helloworld.c:461: error: ‘__Pyx_StringTabEntry’ has no member named ‘s’ 
helloworld.c:461: error: ‘__Pyx_StringTabEntry’ has no member named ‘n’ helloworld.c:461: error: ‘NULL’ undeclared (first use in this function) helloworld.c:462: error: ‘__Pyx_StringTabEntry’ has no member named ‘intern’ helloworld.c:463: error: ‘__Pyx_StringTabEntry’ has no member named ‘p’ helloworld.c:463: warning: implicit declaration of function ‘PyString_InternFromString’ helloworld.c:463: error: ‘__Pyx_StringTabEntry’ has no member named ‘s’ helloworld.c:465: error: ‘__Pyx_StringTabEntry’ has no member named ‘p’ helloworld.c:465: error: ‘__Pyx_StringTabEntry’ has no member named ‘s’ helloworld.c:465: error: ‘__Pyx_StringTabEntry’ has no member named ‘n’ helloworld.c:476: error: ‘__Pyx_StringTabEntry’ has no member named ‘p’ helloworld.c: At top level: helloworld.c:485: error: expected ‘)’ before ‘*’ token helloworld.c:494: error: expected ‘)’ before ‘*’ token helloworld.c:500: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘__pyx_PyInt_AsLongLong’ helloworld.c:516: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘__pyx_PyInt_AsUnsignedLongLong’ helloworld.c:538: error: expected ‘)’ before ‘*’ token helloworld.c:553: error: expected ‘)’ before ‘*’ token helloworld.c:568: error: expected ‘)’ before ‘*’ token helloworld.c:583: error: expected ‘)’ before ‘*’ token helloworld.c:598: error: expected ‘)’ before ‘*’ token helloworld.c:613: error: expected ‘)’ before ‘*’ token helloworld.c:628: error: expected ‘)’ before ‘*’ token helloworld.c:643: error: expected ‘)’ before ‘*’ token helloworld.c:658: error: expected ‘)’ before ‘*’ token helloworld.c:673: error: expected ‘)’ before ‘*’ token helloworld.c:688: error: expected ‘)’ before ‘*’ token error: command 'gcc' failed with exit status 1 I have python and cython installed from Ubuntu 9.04 repositories. I can't figure out why the compiler can't find Python.h. I tried doing: cython helloworld.pyx and then compiling the result manually with gcc: gcc -shared -pthread -fPIC -fwrapv -O2 -Wall -fno-strict-aliasing -I/usr/include/python2.5 -o helloworld.so helloworld.c and got the same exact error message. Any clues? A: Looks like you're missing some package like python-dev or the like -- Debian and derivatives (including Ubuntu) have long preferred to isolate everything that could possibly be of "developer"'s use from the parts of a package that are for "everybody"... a philosophical stance I could debate against (and have debated against, without much practical success, in many fora), but one that, sadly, can't just be ignored:-( A: Oh damn... forget it... I forgot to install the dev packages. Duh. Stupid. Sorry guys.
Hello World from cython wiki not working
I'm trying to follow this tutorial from Cython: http://docs.cython.org/docs/tutorial.html#the-basics-of-cython and I'm having a problem. The files are very simple. I have a helloworld.pyx: print "Hello World" and a setup.py: from distutils.core import setup from distutils.extension import Extension from Cython.Distutils import build_ext setup( cmdclass = {'build_ext': build_ext}, ext_modules = [Extension("helloworld", ["helloworld.pyx"])] ) and I compile it with the standard command: python setup.py build_ext --inplace I got the following error: running build running build_ext building 'helloworld' extension gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.6 -c helloworld.c -o build/temp.linux-x86_64-2.6/helloworld.o helloworld.c:4:20: error: Python.h: No such file or directory helloworld.c:5:26: error: structmember.h: No such file or directory helloworld.c:34: error: expected specifier-qualifier-list before ‘PyObject’ helloworld.c:121: error: expected specifier-qualifier-list before ‘PyObject’ helloworld.c:139: error: expected ‘)’ before ‘*’ token helloworld.c:140: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘__pyx_PyInt_AsLongLong’ helloworld.c:141: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘__pyx_PyInt_AsUnsignedLongLong’ helloworld.c:142: error: expected ‘)’ before ‘*’ token helloworld.c:147: error: expected ‘)’ before ‘*’ token helloworld.c:148: error: expected ‘)’ before ‘*’ token helloworld.c:149: error: expected ‘)’ before ‘*’ token helloworld.c:150: error: expected ‘)’ before ‘*’ token helloworld.c:151: error: expected ‘)’ before ‘*’ token helloworld.c:152: error: expected ‘)’ before ‘*’ token helloworld.c:153: error: expected ‘)’ before ‘*’ token helloworld.c:154: error: expected ‘)’ before ‘*’ token helloworld.c:155: error: expected ‘)’ before ‘*’ token helloworld.c:156: error: expected ‘)’ before ‘*’ token helloworld.c:157: error: expected ‘)’ before ‘*’ token helloworld.c:172: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘*’ token helloworld.c:173: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘*’ token helloworld.c:174: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘*’ token helloworld.c:181: error: expected ‘)’ before ‘*’ token helloworld.c:198: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘*’ token helloworld.c:200: error: array type has incomplete element type helloworld.c:221: error: ‘__pyx_kp_1’ undeclared here (not in a function) helloworld.c:221: warning: excess elements in struct initializer helloworld.c:221: warning: (near initialization for ‘__pyx_string_tab[0]’) helloworld.c:221: warning: excess elements in struct initializer helloworld.c:221: warning: (near initialization for ‘__pyx_string_tab[0]’) helloworld.c:221: warning: excess elements in struct initializer helloworld.c:221: warning: (near initialization for ‘__pyx_string_tab[0]’) helloworld.c:221: warning: excess elements in struct initializer helloworld.c:221: warning: (near initialization for ‘__pyx_string_tab[0]’) helloworld.c:221: warning: excess elements in struct initializer helloworld.c:221: warning: (near initialization for ‘__pyx_string_tab[0]’) helloworld.c:221: warning: excess elements in struct initializer helloworld.c:221: warning: (near initialization for ‘__pyx_string_tab[0]’) helloworld.c:222: warning: excess elements in struct initializer helloworld.c:222: warning: (near initialization for ‘__pyx_string_tab[1]’) 
helloworld.c:222: warning: excess elements in struct initializer helloworld.c:222: warning: (near initialization for ‘__pyx_string_tab[1]’) helloworld.c:222: warning: excess elements in struct initializer helloworld.c:222: warning: (near initialization for ‘__pyx_string_tab[1]’) helloworld.c:222: warning: excess elements in struct initializer helloworld.c:222: warning: (near initialization for ‘__pyx_string_tab[1]’) helloworld.c:222: warning: excess elements in struct initializer helloworld.c:222: warning: (near initialization for ‘__pyx_string_tab[1]’) helloworld.c:222: warning: excess elements in struct initializer helloworld.c:222: warning: (near initialization for ‘__pyx_string_tab[1]’) helloworld.c:237: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘inithelloworld’ helloworld.c:238: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘inithelloworld’ helloworld.c:305: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘*’ token helloworld.c:313: error: expected ‘)’ before ‘*’ token helloworld.c:379:21: error: compile.h: No such file or directory helloworld.c:380:25: error: frameobject.h: No such file or directory helloworld.c:381:23: error: traceback.h: No such file or directory helloworld.c: In function ‘__Pyx_AddTraceback’: helloworld.c:384: error: ‘PyObject’ undeclared (first use in this function) helloworld.c:384: error: (Each undeclared identifier is reported only once helloworld.c:384: error: for each function it appears in.) helloworld.c:384: error: ‘py_srcfile’ undeclared (first use in this function) helloworld.c:385: error: ‘py_funcname’ undeclared (first use in this function) helloworld.c:386: error: ‘py_globals’ undeclared (first use in this function) helloworld.c:387: error: ‘empty_string’ undeclared (first use in this function) helloworld.c:388: error: ‘PyCodeObject’ undeclared (first use in this function) helloworld.c:388: error: ‘py_code’ undeclared (first use in this function) helloworld.c:389: error: ‘PyFrameObject’ undeclared (first use in this function) helloworld.c:389: error: ‘py_frame’ undeclared (first use in this function) helloworld.c:392: warning: implicit declaration of function ‘PyString_FromString’ helloworld.c:399: warning: implicit declaration of function ‘PyString_FromFormat’ helloworld.c:412: warning: implicit declaration of function ‘PyModule_GetDict’ helloworld.c:412: error: ‘__pyx_m’ undeclared (first use in this function) helloworld.c:415: warning: implicit declaration of function ‘PyString_FromStringAndSize’ helloworld.c:420: warning: implicit declaration of function ‘PyCode_New’ helloworld.c:429: error: ‘__pyx_empty_tuple’ undeclared (first use in this function) helloworld.c:440: warning: implicit declaration of function ‘PyFrame_New’ helloworld.c:441: warning: implicit declaration of function ‘PyThreadState_GET’ helloworld.c:448: warning: implicit declaration of function ‘PyTraceBack_Here’ helloworld.c:450: warning: implicit declaration of function ‘Py_XDECREF’ helloworld.c: In function ‘__Pyx_InitStrings’: helloworld.c:458: error: ‘__Pyx_StringTabEntry’ has no member named ‘p’ helloworld.c:460: error: ‘__Pyx_StringTabEntry’ has no member named ‘is_unicode’ helloworld.c:460: error: ‘__Pyx_StringTabEntry’ has no member named ‘is_identifier’ helloworld.c:461: error: ‘__Pyx_StringTabEntry’ has no member named ‘p’ helloworld.c:461: warning: implicit declaration of function ‘PyUnicode_DecodeUTF8’ helloworld.c:461: error: ‘__Pyx_StringTabEntry’ has no member named ‘s’ helloworld.c:461: error: 
‘__Pyx_StringTabEntry’ has no member named ‘n’ helloworld.c:461: error: ‘NULL’ undeclared (first use in this function) helloworld.c:462: error: ‘__Pyx_StringTabEntry’ has no member named ‘intern’ helloworld.c:463: error: ‘__Pyx_StringTabEntry’ has no member named ‘p’ helloworld.c:463: warning: implicit declaration of function ‘PyString_InternFromString’ helloworld.c:463: error: ‘__Pyx_StringTabEntry’ has no member named ‘s’ helloworld.c:465: error: ‘__Pyx_StringTabEntry’ has no member named ‘p’ helloworld.c:465: error: ‘__Pyx_StringTabEntry’ has no member named ‘s’ helloworld.c:465: error: ‘__Pyx_StringTabEntry’ has no member named ‘n’ helloworld.c:476: error: ‘__Pyx_StringTabEntry’ has no member named ‘p’ helloworld.c: At top level: helloworld.c:485: error: expected ‘)’ before ‘*’ token helloworld.c:494: error: expected ‘)’ before ‘*’ token helloworld.c:500: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘__pyx_PyInt_AsLongLong’ helloworld.c:516: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘__pyx_PyInt_AsUnsignedLongLong’ helloworld.c:538: error: expected ‘)’ before ‘*’ token helloworld.c:553: error: expected ‘)’ before ‘*’ token helloworld.c:568: error: expected ‘)’ before ‘*’ token helloworld.c:583: error: expected ‘)’ before ‘*’ token helloworld.c:598: error: expected ‘)’ before ‘*’ token helloworld.c:613: error: expected ‘)’ before ‘*’ token helloworld.c:628: error: expected ‘)’ before ‘*’ token helloworld.c:643: error: expected ‘)’ before ‘*’ token helloworld.c:658: error: expected ‘)’ before ‘*’ token helloworld.c:673: error: expected ‘)’ before ‘*’ token helloworld.c:688: error: expected ‘)’ before ‘*’ token error: command 'gcc' failed with exit status 1 I have python and cython installed from Ubuntu 9.04 repositories. I can't figure why the compiler can't find Python.h. I tried doing: cython helloworld.pyx and then compiling the result manually with gcc: gcc -shared -pthread -fPIC -fwrapv -O2 -Wall -fno-strict-aliasing -I/usr/include/python2.5 -o helloworld.so helloworld.c and got the same exact error message. Any clues?
[ "Looks like you're missing some package like python_dev or the like -- Debian and derivatives (including Ubuntu) have long preferred to isolate everything that could possibly be of \"developer\"'s use from the parts of a package that are for \"everybody\"... a philosophical stance I could debate against (and have debated against, without much practical success, in mahy fora), but one that, sadly, can't just be ignored:-(\n", "Oh damn... forget it... \nI forgot to install the dev packages. \nDuh. Stupid. Sorry guys.\n" ]
[ 4, 1 ]
[]
[]
[ "c++", "cython", "python" ]
stackoverflow_0001191600_c++_cython_python.txt
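A quick way to confirm the diagnosis before rerunning setup.py is to ask distutils where Python.h should live and check that it exists; this is purely a diagnostic sketch:

import os
from distutils import sysconfig

inc = sysconfig.get_python_inc()       # e.g. /usr/include/python2.6
header = os.path.join(inc, 'Python.h')
print(inc)
print(os.path.exists(header))          # False means the python-dev package is missing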
Q: Threading in Python
What are the modules used to write multi-threaded applications in Python? I'm aware of the basic concurrency mechanisms provided by the language and also of Stackless Python, but what are their respective strengths and weaknesses?

A: In order of increasing complexity:
Use the threading module
Pros:
It's really easy to run any function (any callable in fact) in its own thread.
Sharing data is if not easy (locks are never easy :), at least simple.

Cons:
As mentioned by Juergen, Python threads cannot actually concurrently access state in the interpreter (there's one big lock, the infamous Global Interpreter Lock.) What that means in practice is that threads are useful for I/O bound tasks (networking, writing to disk, and so on), but not at all useful for doing concurrent computation.

Use the multiprocessing module
In the simple use case this looks exactly like using threading except each task is run in its own process, not its own thread. (Almost literally: If you take Eli's example, and replace threading with multiprocessing, Thread with Process, and Queue (the module) with multiprocessing.Queue, it should run just fine.)
Pros:
Actual concurrency for all tasks (no Global Interpreter Lock).
Scales to multiple processors, can even scale to multiple machines.

Cons:
Processes are slower than threads.
Data sharing between processes is trickier than with threads.
Memory is not implicitly shared. You either have to explicitly share it or you have to pickle variables and send them back and forth. This is safer, but harder. (If it matters, increasingly the Python developers seem to be pushing people in this direction.)

Use an event model, such as Twisted
Pros:
You get extremely fine control over priority, over what executes when.

Cons:
Even with a good library, asynchronous programming is usually harder than threaded programming, hard both in terms of understanding what's supposed to happen and in terms of debugging what actually is happening.

In all cases I'm assuming you already understand many of the issues involved with multitasking, specifically the tricky issue of how to share data between tasks. If for some reason you don't know when and how to use locks and conditions you have to start with those. Multitasking code is full of subtleties and gotchas, and it's really best to have a good understanding of concepts before you start.

A: You've already gotten a fair variety of answers, from "fake threads" all the way to external frameworks, but I've seen nobody mention Queue.Queue -- the "secret sauce" of CPython threading.
To expand: as long as you don't need to overlap pure-Python CPU-heavy processing (in which case you need multiprocessing -- but it comes with its own Queue implementation, too, so you can with some needed cautions apply the general advice I'm giving;-), Python's built-in threading will do... but it will do it much better if you use it advisedly, e.g., as follows.
"Forget" shared memory, supposedly the main plus of threading vs multiprocessing -- it doesn't work well, it doesn't scale well, never has, never will. Use shared memory only for data structures that are set up once before you spawn sub-threads and never changed afterwards -- for everything else, make a single thread responsible for that resource, and communicate with that thread via Queue.
Devote a specialized thread to every resource you'd normally think to protect by locks: a mutable data structure or cohesive group thereof, a connection to an external process (a DB, an XMLRPC server, etc), an external file, etc, etc. Get a small thread pool going for general purpose tasks that don't have or need a dedicated resource of that kind -- don't spawn threads as and when needed, or the thread-switching overhead will overwhelm you.
Communication between two threads is always via Queue.Queue -- a form of message passing, the only sane foundation for multiprocessing (besides transactional-memory, which is promising but for which I know of no production-worthy implementations except in Haskell).
Each dedicated thread managing a single resource (or small cohesive set of resources) listens for requests on a specific Queue.Queue instance. Threads in a pool wait on a single shared Queue.Queue (Queue is solidly threadsafe and won't fail you in this).
Threads that just need to queue up a request on some queue (shared or dedicated) do so without waiting for results, and move on. Threads that eventually DO need a result or confirmation for a request queue a pair (request, receivingqueue) with an instance of Queue.Queue they just made, and eventually, when the response or confirmation is indispensable in order to proceed, they get (waiting) from their receivingqueue. Be sure you're ready to get error-responses as well as real responses or confirmations (Twisted's deferreds are great at organizing this kind of structured response, BTW!).
You can also use Queue to "park" instances of resources which can be used by any one thread but never be shared among multiple threads at one time (DB connections with some DBAPI components, cursors with others, etc) -- this lets you relax the dedicated-thread requirement in favor of more pooling (a pool thread that gets from the shared queue a request needing a queueable resource will get that resource from the appropriate queue, waiting if necessary, etc etc).
Twisted is actually a good way to organize this minuet (or square dance as the case may be), not just thanks to deferreds but because of its sound, solid, highly scalable base architecture: you may arrange things to use threads or subprocesses only when truly warranted, while doing most things normally considered thread-worthy in a single event-driven thread.
But, I realize Twisted is not for everybody -- the "dedicate or pool resources, use Queue up the wazoo, never do anything needing a Lock or, Guido forbid, any synchronization procedure even more advanced, such as semaphore or condition" approach can still be used even if you just can't wrap your head around async event-driven methodologies, and will still deliver more reliability and performance than any other widely-applicable threading approach I've ever stumbled upon.

A: It depends on what you're trying to do, but I'm partial to just using the threading module in the standard library because it makes it really easy to take any function and just run it in a separate thread.
from threading import Thread

def f():
    ...

def g(arg1, arg2, arg3=None):
    ....

Thread(target=f).start()
Thread(target=g, args=[5, 6], kwargs={"arg3": 12}).start()

And so on.
I often have a producer/consumer setup using a synchronized queue provided by the Queue module:
from Queue import Queue
from threading import Thread

q = Queue()
def consumer():
    while True:
        print sum(q.get())

def producer(data_source):
    for line in data_source:
        q.put( map(int, line.split()) )

Thread(target=producer, args=[SOME_INPUT_FILE_OR_SOMETHING]).start()
for i in range(10):
    Thread(target=consumer).start()

A: Kamaelia is a python framework for building applications with lots of communicating processes.
(source: kamaelia.org) Kamaelia - Concurrency made useful, fun
In Kamaelia you build systems from simple components that talk to each other. This speeds development, massively aids maintenance and also means you build naturally concurrent software. It's intended to be accessible by any developer, including novices. It also makes it fun :)
What sort of systems? Network servers, clients, desktop applications, pygame based games, transcode systems and pipelines, digital TV systems, spam eradicators, teaching tools, and a fair amount more :)
Here's a video from Pycon 2009. It starts by comparing Kamaelia to Twisted and Parallel Python and then gives a hands on demonstration of Kamaelia.
Easy Concurrency with Kamaelia - Part 1 (59:08)
Easy Concurrency with Kamaelia - Part 2 (18:15)

A: Regarding Kamaelia, the answer above doesn't really cover the benefit here. Kamaelia's approach provides a unified interface, which is pragmatic not perfect, for dealing with threads, generators & processes in a single system for concurrency.
Fundamentally it provides a metaphor of a running thing which has inboxes, and outboxes. You send messages to outboxes, and when wired together, messages flow from outboxes to inboxes. This metaphor/API remains the same whether you're using generators, threads or processes, or speaking to other systems.
The "not perfect" part is due to syntactic sugar not being added as yet for inboxes and outboxes (though this is under discussion) - there is a focus on safety/usability in the system.
Taking the producer consumer example using bare threading above, this becomes this in Kamaelia:
Pipeline(Producer(), Consumer() )

In this example it doesn't matter if these are threaded components or otherwise; the only difference between them from a usage perspective is the baseclass for the component. Generator components communicate using lists, threaded components using Queue.Queues and process based using os.pipes.
The reason behind this approach though is to make it harder to make hard-to-debug bugs. In threading - or any shared memory concurrency you have - the number one problem you face is accidentally broken shared data updates. By using message passing you eliminate one class of bugs.
If you use bare threading and locks everywhere you're generally working on the assumption that when you write code you won't make any mistakes. Whilst we all aspire to that, it's very rare that will happen. By wrapping up the locking behaviour in one place you simplify where things can go wrong. (Context handlers help, but don't help with accidental updates outside the context handler)
Obviously not every piece of code can be written as message passing and shared style which is why Kamaelia also has a simple software transactional memory (STM), which is a really neat idea with a nasty name - it's more like version control for variables - ie check out some variables, update them and commit back. If you get a clash you rinse and repeat.
Relevant links:
Europython 09 tutorial
Monthly releases
Mailing list
Examples
Example Apps
Reusable components (generator & thread)

Anyway, I hope that's a useful answer. FWIW, the core reason behind Kamaelia's setup is to make concurrency safer & easier to use in python systems, without the tail wagging the dog. (ie the big bucket of components
I can understand why the other Kamaelia answer was modded down, since even to me it looks more like an ad than an answer. As the author of Kamaelia it's nice to see enthusiasm though I hope this contains a bit more relevant content :-)
And that's my way of saying, please take the caveat that this answer is by definition biased, but for me, Kamaelia's aim is to try and wrap what is IMO best practice. I'd suggest trying a few systems out, and seeing which works for you. (also if this is inappropriate for stack overflow, sorry - I'm new to this forum :-)

A: I would use the Microthreads (Tasklets) of Stackless Python, if I had to use threads at all.
A whole online game (massively multiplayer) is built around Stackless and its multithreading principle -- since the original is just too slow for the massively multiplayer property of the game.
Threads in CPython are widely discouraged. One reason is the GIL -- a global interpreter lock -- that serializes threading for many parts of the execution. My experience is that it is really difficult to create fast applications this way. My example programs were all slower with threading -- with one core (but many waits for input should have made some performance boosts possible).
With CPython, rather use separate processes if possible.

A: If you really want to get your hands dirty, you can try using generators to fake coroutines. It probably isn't the most efficient in terms of work involved, but coroutines do offer you very fine control of co-operative multitasking rather than the pre-emptive multitasking you'll find elsewhere.
One advantage you'll find is that by and large, you will not need locks or mutexes when using co-operative multitasking, but the more important advantage for me was the nearly-zero switching speed between "threads". Of course, Stackless Python is said to be very good for that as well; and then there's Erlang, if it doesn't have to be Python.
Probably the biggest disadvantage in co-operative multitasking is the general lack of workaround for blocking I/O. And in the faked coroutines, you'll also encounter the issue that you can't switch "threads" from anything but the top level of the stack within a thread.
After you've made an even slightly complex application with fake coroutines, you'll really begin to appreciate the work that goes into process scheduling at the OS level.
Threading in Python
What are the modules used to write multi-threaded applications in Python? I'm aware of the basic concurrency mechanisms provided by the language and also of Stackless Python, but what are their respective strengths and weaknesses?
[ "In order of increasing complexity:\nUse the threading module\nPros:\n\nIt's really easy to run any function (any callable in fact) in its\nown thread.\nSharing data is if not easy (locks are never easy :), at\nleast simple.\n\nCons:\n\nAs mentioned by Juergen Python threads cannot actually concurrently access state in the interpreter (there's one big lock, the infamous Global Interpreter Lock.) What that means in practice is that threads are useful for I/O bound tasks (networking, writing to disk, and so on), but not at all useful for doing concurrent computation.\n\nUse the multiprocessing module\nIn the simple use case this looks exactly like using threading except each task is run in its own process not its own thread. (Almost literally: If you take Eli's example, and replace threading with multiprocessing, Thread, with Process, and Queue (the module) with multiprocessing.Queue, it should run just fine.)\nPros:\n\nActual concurrency for all tasks (no Global Interpreter Lock).\nScales to multiple processors, can even scale to multiple machines.\n\nCons:\n\nProcesses are slower than threads.\nData sharing between processes is trickier than with threads.\nMemory is not implicitly shared. You either have to explicitly share it or you have to pickle variables and send them back and forth. This is safer, but harder. (If it matters increasingly the Python developers seem to be pushing people in this direction.)\n\nUse an event model, such as Twisted\nPros:\n\nYou get extremely fine control over priority, over what executes when.\n\nCons:\n\nEven with a good library, asynchronous programming is usually harder than threaded programming, hard both in terms of understanding what's supposed to happen and in terms of debugging what actually is happening.\n\n\nIn all cases I'm assuming you already understand many of the issues involved with multitasking, specifically the tricky issue of how to share data between tasks. If for some reason you don't know when and how to use locks and conditions you have to start with those. Multitasking code is full of subtleties and gotchas, and it's really best to have a good understanding of concepts before you start.\n", "You've already gotten a fair variety of answers, from \"fake threads\" all the way to external frameworks, but I've seen nobody mention Queue.Queue -- the \"secret sauce\" of CPython threading.\nTo expand: as long as you don't need to overlap pure-Python CPU-heavy processing (in which case you need multiprocessing -- but it comes with its own Queue implementation, too, so you can with some needed cautions apply the general advice I'm giving;-), Python's built-in threading will do... but it will do it much better if you use it advisedly, e.g., as follows.\n\"Forget\" shared memory, supposedly the main plus of threading vs multiprocessing -- it doesn't work well, it doesn't scale well, never has, never will. Use shared memory only for data structures that are set up once before you spawn sub-threads and never changed afterwards -- for everything else, make a single thread responsible for that resource, and communicate with that thread via Queue.\nDevote a specialized thread to every resource you'd normally think to protect by locks: a mutable data structure or cohesive group thereof, a connection to an external process (a DB, an XMLRPC server, etc), an external file, etc, etc. 
Get a small thread pool going for general purpose tasks that don't have or need a dedicated resource of that kind -- don't spawn threads as and when needed, or the thread-switching overhead will overwhelm you.\nCommunication between two threads is always via Queue.Queue -- a form of message passing, the only sane foundation for multiprocessing (besides transactional-memory, which is promising but for which I know of no production-worthy implementations except in Haskell).\nEach dedicated thread managing a single resource (or small cohesive set of resources) listens for requests on a specific Queue.Queue instance. Threads in a pool wait on a single shared Queue.Queue (Queue is solidly threadsafe and won't fail you in this).\nThreads that just need to queue up a request on some queue (shared or dedicated) do so without waiting for results, and move on. Threads that eventually DO need a result or confirmation for a request queue a pair (request, receivingqueue) with an instance of Queue.Queue they just made, and eventually, when the response or confirmation is indispensable in order to proceed, they get (waiting) from their receivingqueue. Be sure you're ready to get error-responses as well as real responses or confirmations (Twisted's deferreds are great at organizing this kind of structured response, BTW!).\nYou can also use Queue to \"park\" instances of resources which can be used by any one thread but never be shared among multiple threads at one time (DB connections with some DBAPI components, cursors with others, etc) -- this lets you relax the dedicated-thread requirement in favor of more pooling (a pool thread that gets from the shared queue a request needing a queueable resource will get that resource from the appropriate queue, waiting if necessary, etc etc).\nTwisted is actually a good way to organize this minuet (or square dance as the case may be), not just thanks to deferreds but because of its sound, solid, highly scalable base architecture: you may arrange things to use threads or subprocesses only when truly warranted, while doing most things normally considered thread-worthy in a single event-driven thread.\nBut, I realize Twisted is not for everybody -- the \"dedicate or pool resources, use Queue up the wazoo, never do anything needing a Lock or, Guido forbid, any synchronization procedure even more advanced, such as semaphore or condition\" approach can still be used even if you just can't wrap your head around async event-driven methodologies, and will still deliver more reliability and performance than any other widely-applicable threading approach I've ever stumbled upon.\n", "It depends on what you're trying to do, but I'm partial to just using the threading module in the standard library because it makes it really easy to take any function and just run it in a separate thread.\nfrom threading import Thread\n\ndef f():\n ...\n\ndef g(arg1, arg2, arg3=None):\n ....\n\nThread(target=f).start()\nThread(target=g, args=[5, 6], kwargs={\"arg3\": 12}).start()\n\nAnd so on. 
I often have a producer/consumer setup using a synchronized queue provided by the Queue module\nfrom Queue import Queue\nfrom threading import Thread\n\nq = Queue()\ndef consumer():\n while True:\n print sum(q.get())\n\ndef producer(data_source):\n for line in data_source:\n q.put( map(int, line.split()) )\n\nThread(target=producer, args=[SOME_INPUT_FILE_OR_SOMETHING]).start()\nfor i in range(10):\n Thread(target=consumer).start()\n\n", "Kamaelia is a python framework for building applications with lots of communicating processes.\n\n\n(source: kamaelia.org) Kamaelia - Concurrency made useful, fun \nIn Kamaelia you build systems from simple components that talk to each other. This speeds development, massively aids maintenance and also means you build naturally concurrent software. It's intended to be accessible by any developer, including novices. It also makes it fun :) \nWhat sort of systems? Network servers, clients, desktop applications, pygame based games, transcode systems and pipelines, digital TV systems, spam eradicators, teaching tools, and a fair amount more :) \n\n\nHere's a video from Pycon 2009. It starts by comparing Kamaelia to Twisted and Parallel Python and then gives a hands on demonstration of Kamaelia.\n\nEasy Concurrency with Kamaelia - Part 1 (59:08)\nEasy Concurrency with Kamaelia - Part 2 (18:15) \n", "Regarding Kamaelia, the answer above doesn't really cover the benefit here. Kamaelia's approach provides a unified interface, which is pragmatic not perfect, for dealing with threads, generators & processes in a single system for concurrency.\nFundamentally it provides a metaphor of a running thing which has inboxes, and outboxes. You send messages to outboxes, and when wired together, messages flow from outboxes to inboxes. This metaphor/API remains the same whether you're using generators, threads or processes, or speaking to other systems.\nThe \"not perfect\" part is due to syntactic sugar not being added as yet for inboxes and outboxes (though this is under discussion) - there is a focus on safety/usability in the system.\nTaking the producer consumer example using bare threading above, this becomes this in Kamaelia:\nPipeline(Producer(), Consumer() )\n\nIn this example it doesn't matter if these are threaded components or otherwise; the only difference between them from a usage perspective is the baseclass for the component. Generator components communicate using lists, threaded components using Queue.Queues and process based using os.pipes.\nThe reason behind this approach though is to make it harder to make hard-to-debug bugs. In threading - or any shared memory concurrency you have - the number one problem you face is accidentally broken shared data updates. By using message passing you eliminate one class of bugs.\nIf you use bare threading and locks everywhere you're generally working on the assumption that when you write code you won't make any mistakes. Whilst we all aspire to that, it's very rare that will happen. By wrapping up the locking behaviour in one place you simplify where things can go wrong. (Context handlers help, but don't help with accidental updates outside the context handler)\nObviously not every piece of code can be written as message passing and shared style which is why Kamaelia also has a simple software transactional memory (STM), which is a really neat idea with a nasty name - it's more like version control for variables - ie check out some variables, update them and commit back. 
If you get a clash you rinse and repeat.\nRelevant links:\n\nEuropython 09 tutorial\nMonthly releases\nMailing list\nExamples\nExample Apps\nReusable components (generator & thread)\n\nAnyway, I hope that's a useful answer. FWIW, the core reason behind Kamaelia's setup is to make concurrency safer & easier to use in python systems, without the tail wagging the dog. (ie the big bucket of components)\nI can understand why the other Kamaelia answer was modded down, since even to me it looks more like an ad than an answer. As the author of Kamaelia it's nice to see enthusiasm though I hope this contains a bit more relevant content :-)\nAnd that's my way of saying, please take the caveat that this answer is by definition biased, but for me, Kamaelia's aim is to try and wrap what is IMO best practice. I'd suggest trying a few systems out, and seeing which works for you. (also if this is inappropriate for stack overflow, sorry - I'm new to this forum :-)\n", "I would use the Microthreads (Tasklets) of Stackless Python, if I had to use threads at all.\nA whole online game (massively multiplayer) is built around Stackless and its multithreading principle -- since the original is just too slow for the massively multiplayer property of the game.\nThreads in CPython are widely discouraged. One reason is the GIL -- a global interpreter lock -- that serializes threading for many parts of the execution. My experience is that it is really difficult to create fast applications this way. My example codings were all slower with threading -- with one core (but many waits for input should have made some performance boosts possible).\nWith CPython, rather use separate processes if possible.\n", "If you really want to get your hands dirty, you can try using generators to fake coroutines. It probably isn't the most efficient in terms of work involved, but coroutines do offer you very fine control of co-operative multitasking rather than the pre-emptive multitasking you'll find elsewhere. \nOne advantage you'll find is that by and large, you will not need locks or mutexes when using co-operative multitasking, but the more important advantage for me was the nearly-zero switching speed between \"threads\". Of course, Stackless Python is said to be very good for that as well; and then there's Erlang, if it doesn't have to be Python.\nProbably the biggest disadvantage in co-operative multitasking is the general lack of a workaround for blocking I/O. And in the faked coroutines, you'll also encounter the issue that you can't switch \"threads\" from anything but the top level of the stack within a thread.\nAfter you've made an even slightly complex application with fake coroutines, you'll really begin to appreciate the work that goes into process scheduling at the OS level.\n" ]
[ 120, 104, 22, 13, 6, 4, 3 ]
[]
[]
[ "multithreading", "python", "python_stackless" ]
stackoverflow_0001190206_multithreading_python_python_stackless.txt
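A runnable sketch of the Queue-based producer/consumer pattern the answers above converge on, written for Python 3's queue/threading modules rather than the Python 2 Queue module used in the thread; the sentinel-based shutdown and the worker count are assumptions added for completeness, not details taken from the answers.

import queue
import threading

SENTINEL = object()  # assumed shutdown marker; not part of the original answers

def worker(q):
    while True:
        item = q.get()
        if item is SENTINEL:
            q.task_done()
            return
        print(sum(item))  # stand-in for real per-item work
        q.task_done()

q = queue.Queue()
threads = [threading.Thread(target=worker, args=(q,)) for _ in range(4)]
for t in threads:
    t.start()
for line in ["1 2 3", "4 5 6"]:
    q.put([int(tok) for tok in line.split()])
for _ in threads:
    q.put(SENTINEL)  # one sentinel per worker so every thread exits
q.join()

The join() call returns once every queued item (sentinels included) has been marked done, which is what makes the sentinel-per-worker shutdown safe.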
Q: Python classes for simple GTD app I'm trying to code a very rudimentary GTD app for myself, not only to get organized, but to get better at coding and get better at Python. I'm having a bit of trouble with the classes however. Here are the classes I have so far: class Project: def __init__(self, name, actions=[]): self.name = name self.actions = actions def add(self, action): self.actions.append(action) class Action: def __init__(self, do='', context=''): self.do = do self.context = context Each project has actions to it, however I want to make it so that projects can also consist of other projects. Say daily I wanted to print out a list of everything. I'm having trouble coming up with how I would construct a list that looked like this > Project A > Actions for Project A > Project B > Sub project A > Actions for Sub project A > Sub project B > Actions for Sub project B > Sub project C > Sub sub project A > Actions for sub sub project A > Sub sub project B > Actions for sub sub project B > Actions for Sub project C > Actions for Project B It's quite clear to me that recursion is going to be used. I'm struggling with whether to create another class called SubProject and subclass Project to it. Something there just makes my brain raise an exception. I have been able to take projects and add them to the actions attribute in the Project class, however then I run into where MegaProject.actions.action.actions.action situations start popping up. If anyone could help out with the class structures, it would be greatly appreciated! A: You could create a subprojects member, similar to your actions list, and assign projects to it in a similar way. No subclassing of Project is necessary. class Project: def __init__(self, name, actions=[], subprojects=[]): self.name = name self.actions = actions self.subprojects = subprojects def add(self, action): self.actions.append(action) def add_project(self, project): self.subprojects.append(project) Even better, you may want to implement a composite pattern, where Projects are composites and Actions are leaves. class Project: def __init__(self, name, children=[]): self.name = name self.children = children def add(self, object): self.children.append(object) def mark_done(self): for c in self.children: c.mark_done() class Action: def __init__(self, do): self.do = do self.done = False def mark_done(self): self.done = True The key here is that the projects have the same interface as the actions (with the exception of the add/delete methods). This allows you to call methods on the entire tree recursively. If you had a complex nested structure, you can call a method on the top level, and have it filter down to the bottom. If you'd like a method to get a flat list of all leaf nodes in the tree (Actions) you can implement a method like this in the Project class. def get_action_list(self): actions = [] for c in self.children: if c.__class__ == self.__class__: actions += c.get_action_list() else: actions.append(c) return actions A: I suggest you look at the composite pattern which can be applied to the "Project" class. If you make your structure correctly, you should be able to make action be a leaf of that tree, pretty much like you described in your example. You could, for instance, do a Project class (abstract), a ProjectComposite class (concrete) and your action class as a leaf. A: Have you evaluated existing GTD tools? I'd look at file formats used by existing GTD tools, esp. those that save to XML. 
That would give you an idea about which ways to organize this kind of data tend to work. Structure ranges from rather simplistic (TaskPaper) to baroque (OmniFocus). Look around and learn what others did before you.
Python classes for simple GTD app
I'm trying to code a very rudimentary GTD app for myself, not only to get organized, but to get better at coding and get better at Python. I'm having a bit of trouble with the classes however. Here are the classes I have so far: class Project: def __init__(self, name, actions=[]): self.name = name self.actions = actions def add(self, action): self.actions.append(action) class Action: def __init__(self, do='', context=''): self.do = do self.context = context Each project has actions to it, however I want to make it so that projects can also consist of other projects. Say daily I wanted to print out a list of everything. I'm having trouble coming up with how I would construct a list that looked like this > Project A > Actions for Project A > Project B > Sub project A > Actions for Sub project A > Sub project B > Actions for Sub project B > Sub project C > Sub sub project A > Actions for sub sub project A > Sub sub project B > Actions for sub sub project B > Actions for Sub project C > Actions for Project B It's quite clear to me that recursion is going to be used. I'm struggling with whether to create another class called SubProject and subclass Project to it. Something there just makes my brain raise an exception. I have been able to take projects and add them to the actions attribute in the Project class, however then I run into where MegaProject.actions.action.actions.action situations start popping up. If anyone could help out with the class structures, it would be greatly appreciated!
[ "You could create a subprojects member, similar to your actions list, and assign projects to it in a similar way. No subclassing of Project is necessary.\nclass Project:\n def __init__(self, name, actions=[], subprojects=[]):\n self.name = name\n self.actions = actions\n self.subprojects = subprojects\n\n def add(self, action):\n self.actions.append(action)\n\n def add_project(self, project)\n self.subprojects.append(project)\n\nEven better, you may want to implement a composite pattern, where Projects are composites and Actions are leaves.\nclass Project:\n def __init__(self, name, children=[]):\n self.name = name\n self.children = children\n\n def add(self, object):\n self.children.append(object)\n\n def mark_done(self):\n for c in self.children:\n c.mark_done()\n\nclass Action:\n def __init__(self, do):\n self.do = do\n self.done = False\n\n def mark_done(self):\n self.done = True\n\nThey key here is that the projects have the same interface as the actions (with the exception of the add/delete methods). This allows to to call methods on entire tree recursively. If you had a complex nested structure, you can call a method on the top level, and have it filter down to the bottom.\nIf you'd like a method to get a flat list of all leaf nodes in the tree (Actions) you can implement a method like this in the Project class.\ndef get_action_list(self):\n actions = []\n for c in self.children:\n if c.__class__ == self.__class__:\n actions += c.get_action_list()\n else:\n actions.append(c)\n return actions\n\n", "I suggest you look at the composite pattern which can be applied to the \"Project\" class. If you make your structure correctly, you should be able to make action be a leaf of that tree, pretty much like you described in your example.\nYou could, for instance, do a Project class (abstract), a ProjectComposite class (concrete) and your action class as a leaf.\n", "Have you evaluated existing GTD tools? I'd look at file formats used by existing GTD tools, esp. those that save to XML. That would give you an idea about which ways to organize this kind of data tend to work.\nStructure ranges from rather simplistic (TaskPaper) to baroque (OmniFocus). Look around and learn what others did before you.\n" ]
[ 3, 0, 0 ]
[]
[]
[ "gtd", "python", "recursion" ]
stackoverflow_0001175110_gtd_python_recursion.txt
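A small Python 3 sketch of the composite pattern the answers recommend, producing the indented listing the asker wanted; the display method and the avoidance of mutable default arguments (the thread's code uses actions=[], which is shared between instances) are additions of mine, not part of the accepted answer.

class Action:
    def __init__(self, do):
        self.do = do

    def display(self, indent=0):
        print(" " * indent + self.do)

class Project:
    def __init__(self, name, children=None):
        self.name = name
        # A fresh list per instance avoids the shared mutable-default pitfall.
        self.children = children if children is not None else []

    def add(self, node):
        self.children.append(node)

    def display(self, indent=0):
        # Recursion handles arbitrarily deep sub-projects.
        print(" " * indent + self.name)
        for child in self.children:
            child.display(indent + 2)

root = Project("Project A")
root.add(Action("Actions for Project A"))
sub = Project("Sub project A")
sub.add(Action("Actions for Sub project A"))
root.add(sub)
root.display()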
Q: What's a good way to replace international characters with their base Latin counterparts using Python? Say I have the string "blöt träbåt" which has a few a and o with umlaut and ring above. I want it to become "blot trabat" as simply as possible. I've done some digging and found the following method: import unicodedata unicode_string = unicodedata.normalize('NFKD', unicode(string)) This will give me the string in unicode format with the international characters split into base letter and combining character (\u0308 for umlauts.) Now to get this back to an ASCII string I could do ascii_string = unicode_string.encode('ASCII', 'ignore') and it'll just ignore the combining characters, resulting in the string "blot trabat". The question here is: is there a better way to do this? It feels like a roundabout way, and I was thinking there might be something I don't know about. I could of course wrap it up in a helper function, but I'd rather check if this doesn't exist in Python already. A: It would be better if you created an explicit table, and then used the unicode.translate method. The advantage would be that transliteration is more precise, e.g. transliterating "ö" to "oe" and "ß" to "ss", as should be done in German. There are several transliteration packages on PyPI: translitcodec, Unidecode, and trans.
What's a good way to replace international characters with their base Latin counterparts using Python?
Say I have the string "blöt träbåt" which has a few a and o with umlaut and ring above. I want it to become "blot trabat" as simply as possible. I've done some digging and found the following method: import unicodedata unicode_string = unicodedata.normalize('NFKD', unicode(string)) This will give me the string in unicode format with the international characters split into base letter and combining character (\u0308 for umlauts.) Now to get this back to an ASCII string I could do ascii_string = unicode_string.encode('ASCII', 'ignore') and it'll just ignore the combining characters, resulting in the string "blot trabat". The question here is: is there a better way to do this? It feels like a roundabout way, and I was thinking there might be something I don't know about. I could of course wrap it up in a helper function, but I'd rather check if this doesn't exist in Python already.
[ "It would be better if you created an explicit table, and then used the unicode.translate method. The advantage would be that transliteration is more precise, e.g. transliterating \"ö\" to \"oe\" and \"ß\" to \"ss\", as should be done in German.\nThere are several transliteration packages on PyPI: translitcodec, Unidecode, and trans.\n" ]
[ 7 ]
[]
[]
[ "internationalization", "python", "string" ]
stackoverflow_0001192367_internationalization_python_string.txt
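A Python 3 sketch combining the two ideas in the answer -- an explicit override table applied via str.translate for language-specific cases, then NFKD decomposition to strip the remaining accents. The particular override entries are illustrative assumptions; Unidecode or translitcodec would handle this more thoroughly.

import unicodedata

# Ordinal -> replacement overrides for cases where plain accent stripping
# is wrong (e.g. German); these specific mappings are examples only.
OVERRIDES = {ord("ö"): "oe", ord("ß"): "ss"}

def to_ascii(text):
    text = text.translate(OVERRIDES)
    decomposed = unicodedata.normalize("NFKD", text)
    return decomposed.encode("ascii", "ignore").decode("ascii")

print(to_ascii("blöt träbåt"))  # -> "bloet trabat" with the German-style override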
Q: To set up environmental variables for a Python web application I need to set up the following env variables such that I can run a database program which uses PostgreSQL export PGDATA="/home/masi/postgres/var" export PGPORT="12428" I know that the problem may be solved by adding the lines to .zshrc. However, I am not sure whether it is the right way to go. How can you add env variables? A: You only need to set the PGDATA variable in the script that starts the server. The client only cares about the port. You do have to set the port value if you must run it on a non-standard port. I assume you have a good reason to not just run it on the default port? If you do run it on the default port (5432), it will just work without any parameters for it at all. If you are running it on a different port, you should make two changes: In postgresql.conf, set the port= value to the new port you want, and restart the database server. In your settings.py in django, set the DATABASE_PORT value to the new port you want. You should definitely not need to use environment variables for simple configuration options like these - avoiding them will make your life easier. A: Put this somewhere in the main page of your app : import os os.environ["PGDATA"] = "/home/masi/postgres/var" os.environ["PGPORT"] = "12428" however, isn't there a better way to set that in the framework you use?
To set up environmental variables for a Python web application
I need to set up the following env variables such that I can run a database program which uses PostgreSQL export PGDATA="/home/masi/postgres/var" export PGPORT="12428" I know that the problem may be solved by adding the lines to .zshrc. However, I am not sure whether it is the right way to go. How can you add env variables?
[ "You only need to set the PGDATA variable in the script that starts the server. The client only cares about the port.\nYou do have to set the port value if you must run it on a non-standard port. I assume you have a good reason to not just run it on the default port? If you do run it on the default port (5432), it will just work without any parameters for it at all.\nIf you are running it on a different port, you should make two changes:\n\nIn postgresql.conf, set the port= value to the new port you want, and restart the database server.\nIn your settings.py in django, set the DATABASE_PORT value to the new port you want.\n\nYou should definitely not need to use environment variables for simple configuration options like these - avoiding them will make your life easier.\n", "Put this somewhere in the main page of your app :\nimport os\nos.environ[\"PGDATA\"] = \"/home/masi/postgres/var\"\nos.environ[\"PGPORT\"] = 12428\n\nhowever, isn't there a better way to set that in the framework you use?\n" ]
[ 4, 3 ]
[]
[]
[ "postgresql", "python" ]
stackoverflow_0001187716_postgresql_python.txt
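If the environment-variable route is taken anyway, a sketch of doing it safely from Python: values in os.environ must be strings (the thread's original os.environ["PGPORT"] = 12428 raises TypeError), and the variables have to be set before the server process is spawned. The pg_ctl invocation is an assumption about how the server is started.

import os
import subprocess

# Copy the current environment and override the PostgreSQL settings;
# os.environ values must be strings, so the port is quoted.
env = dict(os.environ, PGDATA="/home/masi/postgres/var", PGPORT="12428")

# Hypothetical server start -- the pg_ctl path and arguments are assumptions.
subprocess.Popen(["pg_ctl", "-D", env["PGDATA"], "start"], env=env)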
Q: Unicode to UTF8 for CSV Files - Python via xlrd I'm trying to translate an Excel spreadsheet to CSV using the Python xlrd and csv modules, but am getting hung up on encoding issues. Xlrd produces output from Excel in Unicode, and the CSV module requires UTF-8. I imagine that this has nothing to do with the xlrd module: everything works fine outputting to stdout or other outputs that don't require a specific encoding. The worksheet is encoded as UTF-16-LE, according to book.encoding. The simplified version of what I'm doing is: from xlrd import * import csv b = open_workbook('file.xls') s = b.sheet_by_name('Export') bc = open('file.csv','w') bcw = csv.writer(bc,csv.excel,b.encoding) for row in range(s.nrows): this_row = [] for col in range(s.ncols): this_row.append(s.cell_value(row,col)) bcw.writerow(this_row) This produces the following error, about 740 lines in: UnicodeEncodeError: 'ascii' codec can't encode character u'\xed' in position 5: ordinal not in range(128) The value it seems to be getting hung up on is "516-777316" -- the text in the original Excel sheet is "516-7773167" (with a 7 on the end) I'll be the first to admit that I have only a vague sense of how character encoding works, so most of what I've tried so far are various fumbling permutations of .encode and .decode on the s.cell_value(row,col) If someone could suggest a solution I would appreciate it -- even better if you could provide an explanation of what's not working and why, so that I can more easily debug these problems myself in the future. Thanks in advance! EDIT: Thanks for the comments so far. When I use this_row.append(s.cell(row,col)) (e.g. s.cell instead of s.cell_value) the entire document writes without errors. The output isn't particularly desirable (text:u'516-7773167'), but it avoids the error even though the offending characters are still in the output. This makes me think that the challenge might be in xlrd after all. Thoughts? A: I expect the cell_value return value is the unicode string that's giving you problems (please print its type() to confirm that), in which case you should be able to solve it by changing this one line: this_row.append(s.cell_value(row,col)) to: this_row.append(s.cell_value(row,col).encode('utf8')) If cell_value is returning multiple different types, then you need to encode if and only if it's returning a unicode string; so you'd split this line into a few lines: val = s.cell_value(row, col) if isinstance(val, unicode): val = val.encode('utf8') this_row.append(val) A: You asked for explanations, but some of the phenomena are inexplicable without your help. (A) Strings in XLS files created by Excel 97 onwards are encoded in Latin1 if possible otherwise in UTF16LE. Each string carries a flag telling which was used. Earlier Excels encoded strings according to the user's "codepage". In any case, xlrd produces unicode objects. The file encoding is of interest only when the XLS file has been created by 3rd party software which either omits the codepage or lies about it. See the Unicode section up the front of the xlrd docs. (B) Unexplained phenomenon: This code: bcw = csv.writer(bc,csv.excel,b.encoding) causes the following error with Python 2.5, 2.6 and 3.1: TypeError: expected at most 2 arguments, got 3 -- this is about what I'd expect given the docs on csv.writer; it's expecting a filelike object followed by either (1) nothing (2) a dialect or (3) one or more formatting parameters. You gave it a dialect, and csv.writer has no encoding argument, so splat. 
What version of Python are you using? Or did you not copy/paste the script that you actually ran? (C) Unexplained phenomena around traceback and what the actual offending data was: "the_script.py", line 40, in <module> this_row.append(str(s.cell_value(row,col))) UnicodeEncodeError: 'ascii' codec can't encode character u'\xed' in position 5: ordinal not in range(128) FIRSTLY, there's a str() in the offending code line that wasn't in the simplified script -- did you not copy/paste the script that you actually ran? In any case, you shouldn't use str in general -- you won't get the full precision on your floats; just let the csv module convert them. SECONDLY, you say """The value is seems to be getting hung up on is "516-777316" -- the text in the original Excel sheet is "516-7773167" (with a 7 on the end)""" --- it's difficult to imagine how the 7 gets lost off the end. I'd use something like this to find out exactly what the problematic data was: try: str_value = str(s.cell_value(row, col)) except: print "row=%d col=%d cell_value=%r" % (row, col, s.cell_value(row, col)) raise That %r saves you from typing cell_value=%s ... repr(s.cell_value(row, col)) ... the repr() produces an unambiguous representation of your data. Learn it. Use it. How did you arrive at "516-777316"? THIRDLY, the error message is actually complaining about a unicode character u'\xed' at offset 5 (i.e. the sixth character). U+00ED is LATIN SMALL LETTER I WITH ACUTE, and there's nothing like that at all in "516-7773167" FOURTHLY, the error location seems to be a moving target -- you said in a comment on one of the solutions: "The error is on bcw.writerow." Huh? (D) Why you got that error message (with str()): str(a_unicode_object) attempts to convert the unicode object to a str object and in the absence of any encoding information uses ascii, but you have non-ascii data, so splat. Note that your object is to produce a csv file encoded in utf8, but your simplified script doesn't mention utf8 anywhere. (E) """... s.cell(row,col)) (e.g. s.cell instead of s.cell_value) the entire document writes without errors. The output isn't particularly desirable (text:u'516-7773167')""" That's happening because the csv writer is calling the __str__ method of your Cell object, and this produces <type>:<repr(value)> which may be useful for debugging but as you say not so great in your csv file. (F) Alex Martelli's solution is great in that it got you going. However you should read the section on the Cell class in the xlrd docs: types of cell are text, number, boolean, date, error, blank and empty. If you have dates, you are going to want to format them as dates not numbers, so you can't use isinstance() (and you may not want the function call overhead anyway) ... this is what the Cell.ctype attribute and Sheet.cell_type() and Sheet.row_types() methods are for. (G) UTF8 is not Unicode. UTF16LE is not Unicode. UTF16 is not Unicode ... and the idea that individual strings would waste 2 bytes each on a UTF16 BOM is too preposterous for even MS to contemplate :-) (H) Further reading (apart from the xlrd docs): http://www.joelonsoftware.com/articles/Unicode.html http://www.amk.ca/python/howto/unicode A: There appear to be two possibilities. One is that you have not perhaps opened the output file correctly: "If csvfile is a file object, it must be opened with the ‘b’ flag on platforms where that makes a difference." 
( http://docs.python.org/library/csv.html#module-csv ) If that is not the problem, then another option for you is to use codecs.EncodedFile(file, input[, output[, errors]]) as a wrapper to output your .csv: http://docs.python.org/library/codecs.html#module-codecs This will allow you to have the file object filter from incoming UTF16 to UTF8. While both of them are technically "unicode", the way they encode is very different. Something like this: rbc = open('file.csv','w') bc = codecs.EncodedFile(rbc, "UTF16", "UTF8") bcw = csv.writer(bc,csv.excel) may resolve the problem for you, assuming I understood the problem right, and assuming that the error is thrown when writing to the file. A: Looks like you've got 2 problems. There's something screwed up in that cell - '7' should be encoded as u'x37' I think, since it's within the ASCII-range. More importantly though, the fact that you're getting an error message specifying that the ascii codec can't be used suggests something's wrong with your encoding into unicode - it thinks you're trying to encode a value 0xed that can't be represented in ASCII, but you said you're trying to represent it in unicode. I'm not smart enough to work out what particular line is causing the problem - if you edit your question to tell me what line's causing that error message I might be able to help a bit more (I guess it's either this_row.append(s.cell_value(row,col)) or bcw.writerow(this_row), but would appreciate you confirming).
Unicode to UTF8 for CSV Files - Python via xlrd
I'm trying to translate an Excel spreadsheet to CSV using the Python xlrd and csv modules, but am getting hung up on encoding issues. Xlrd produces output from Excel in Unicode, and the CSV module requires UTF-8. I imagine that this has nothing to do with the xlrd module: everything works fine outputting to stdout or other outputs that don't require a specific encoding. The worksheet is encoded as UTF-16-LE, according to book.encoding. The simplified version of what I'm doing is: from xlrd import * import csv b = open_workbook('file.xls') s = b.sheet_by_name('Export') bc = open('file.csv','w') bcw = csv.writer(bc,csv.excel,b.encoding) for row in range(s.nrows): this_row = [] for col in range(s.ncols): this_row.append(s.cell_value(row,col)) bcw.writerow(this_row) This produces the following error, about 740 lines in: UnicodeEncodeError: 'ascii' codec can't encode character u'\xed' in position 5: ordinal not in range(128) The value it seems to be getting hung up on is "516-777316" -- the text in the original Excel sheet is "516-7773167" (with a 7 on the end) I'll be the first to admit that I have only a vague sense of how character encoding works, so most of what I've tried so far are various fumbling permutations of .encode and .decode on the s.cell_value(row,col) If someone could suggest a solution I would appreciate it -- even better if you could provide an explanation of what's not working and why, so that I can more easily debug these problems myself in the future. Thanks in advance! EDIT: Thanks for the comments so far. When I use this_row.append(s.cell(row,col)) (e.g. s.cell instead of s.cell_value) the entire document writes without errors. The output isn't particularly desirable (text:u'516-7773167'), but it avoids the error even though the offending characters are still in the output. This makes me think that the challenge might be in xlrd after all. Thoughts?
[ "I expect the cell_value return value is the unicode string that's giving you problems (please print its type() to confirm that), in which case you should be able to solve it by changing this one line:\nthis_row.append(s.cell_value(row,col))\n\nto:\nthis_row.append(s.cell_value(row,col).encode('utf8'))\n\nIf cell_value is returning multiple different types, then you need to encode if and only if it's returning a unicode string; so you'd split this line into a few lines:\nval = s.cell_value(row, col)\nif isinstance(val, unicode):\n val = val.encode('utf8')\nthis_row.append(val)\n\n", "You asked for explanations, but some of the phenomena are inexplicable without your help.\n(A) Strings in XLS files created by Excel 97 onwards are encoded in Latin1 if possible otherwise in UTF16LE. Each string carries a flag telling which was used. Earlier Excels encoded strings according to the user's \"codepage\". In any case, xlrd produces unicode objects. The file encoding is of interest only when the XLS file has been created by 3rd party software which either omits the codepage or lies about it. See the Unicode section up the front of the xlrd docs.\n(B) Unexplained phenomenon:\nThis code:\nbcw = csv.writer(bc,csv.excel,b.encoding)\n\ncauses the following error with Python 2.5, 2.6 and 3.1: TypeError: expected at most 2 arguments, got 3 -- this is about what I'd expect given the docs on csv.writer; it's expecting a filelike object followed by either (1) nothing (2) a dialect or (3) one or more formatting parameters. You gave it a dialect, and csv.writer has no encoding argument, so splat. What version of Python are you using? Or did you not copy/paste the script that you actually ran?\n(C) Unexplained phenomena around traceback and what the actual offending data was:\n\"the_script.py\", line 40, in <module>\nthis_row.append(str(s.cell_value(row,col)))\nUnicodeEncodeError: 'ascii' codec can't encode character u'\\xed' in position 5: ordinal not in range(128) \n\nFIRSTLY, there's a str() in the offending code line that wasn't in the simplified script -- did you not copy/paste the script that you actually ran? In any case, you shouldn't use str in general -- you won't get the full precision on your floats; just let the csv module convert them.\nSECONDLY, you say \"\"\"The value is seems to be getting hung up on is \"516-777316\" -- the text in the original Excel sheet is \"516-7773167\" (with a 7 on the end)\"\"\" --- it's difficult to imagine how the 7 gets lost off the end. I'd use something like this to find out exactly what the problematic data was:\ntry:\n str_value = str(s.cell_value(row, col))\nexcept:\n print \"row=%d col=%d cell_value=%r\" % (row, col, s.cell_value(row, col))\n raise\n\nThat %r saves you from typing cell_value=%s ... repr(s.cell_value(row, col)) ... the repr() produces an unambiguous representation of your data. Learn it. Use it.\nHow did you arrive at \"516-777316\"?\nTHIRDLY, the error message is actually complaining about a unicode character u'\\xed' at offset 5 (i.e. the sixth character). U+00ED is LATIN SMALL LETTER I WITH ACUTE, and there's nothing like that at all in \"516-7773167\"\nFOURTHLY, the error location seems to be a moving target -- you said in a comment on one of the solutions: \"The error is on bcw.writerow.\" Huh?\n(D) Why you got that error message (with str()): str(a_unicode_object) attempts to convert the unicode object to a str object and in the absence of any encoding information uses ascii, but you have non-ascii data, so splat. 
Note that your object is to produce a csv file encoded in utf8, but your simplified script doesn't mention utf8 anywhere.\n(E) \"\"\"... s.cell(row,col)) (e.g. s.cell instead of s.cell_value) the entire document writes without errors. The output isn't particularly desirable (text:u'516-7773167')\"\"\"\nThat's happening because the csv writer is calling the __str__ method of your Cell object, and this produces <type>:<repr(value)> which may be useful for debugging but as you say not so great in your csv file.\n(F) Alex Martelli's solution is great in that it got you going. However you should read the section on the Cell class in the xlrd docs: types of cell are text, number, boolean, date, error, blank and empty. If you have dates, you are going to want to format them as dates not numbers, so you can't use isinstance() (and you may not want the function call overhead anyway) ... this is what the Cell.ctype attribute and Sheet.cell_type() and Sheet.row_types() methods are for.\n(G) UTF8 is not Unicode. UTF16LE is not Unicode. UTF16 is not Unicode ... and the idea that individual strings would waste 2 bytes each on a UTF16 BOM is too preposterous for even MS to contemplate :-)\n(H) Further reading (apart from the xlrd docs):\nhttp://www.joelonsoftware.com/articles/Unicode.html\nhttp://www.amk.ca/python/howto/unicode\n\n", "There appear to be two possibilities. One is that you have not perhaps opened the output file correctly:\n\"If csvfile is a file object, it must be opened with the ‘b’ flag on platforms where that makes a difference.\" ( http://docs.python.org/library/csv.html#module-csv )\nIf that is not the problem, then another option for you is to use codecs.EncodedFile(file, input[, output[, errors]]) as a wrapper to output your .csv:\nhttp://docs.python.org/library/codecs.html#module-codecs\nThis will allow you to have the file object filter from incoming UTF16 to UTF8. While both of them are technically \"unicode\", the way they encode is very different. \nSomething like this:\nrbc = open('file.csv','w')\nbc = codecs.EncodedFile(rbc, \"UTF16\", \"UTF8\")\nbcw = csv.writer(bc,csv.excel)\n\nmay resolve the problem for you, assuming I understood the problem right, and assuming that the error is thrown when writing to the file.\n", "Looks like you've got 2 problems.\nThere's something screwed up in that cell - '7' should be encoded as u'x37' I think, since it's within the ASCII-range.\nMore importantly though, the fact that you're getting an error message specifying that the ascii codec can't be used suggests something's wrong with your encoding into unicode - it thinks you're trying to encode a value 0xed that can't be represented in ASCII, but you said you're trying to represent it in unicode.\nI'm not smart enough to work out what particular line is causing the problem - if you edit your question to tell me what line's causing that error message I might be able to help a bit more (I guess it's either this_row.append(s.cell_value(row,col)) or bcw.writerow(this_row), but would appreciate you confirming).\n" ]
[ 26, 9, 0, 0 ]
[]
[]
[ "csv", "encoding", "python", "unicode", "xlrd" ]
stackoverflow_0001189111_csv_encoding_python_unicode_xlrd.txt
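A Python 2 sketch pulling together the advice above: open the CSV in binary mode as the csv module docs require, drop the stray third argument to csv.writer, and encode only text cells, using Sheet.cell_type as the second answer suggests. The file and sheet names are the asker's own.

import csv
import xlrd

book = xlrd.open_workbook('file.xls')
sheet = book.sheet_by_name('Export')

out = open('file.csv', 'wb')  # binary mode per the csv module docs
writer = csv.writer(out, csv.excel)
for row in range(sheet.nrows):
    encoded = []
    for col in range(sheet.ncols):
        value = sheet.cell_value(row, col)
        if sheet.cell_type(row, col) == xlrd.XL_CELL_TEXT:
            value = value.encode('utf8')  # only text cells need encoding
        encoded.append(value)
    writer.writerow(encoded)
out.close()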
Q: User in Form-Class I have a form like this: class MyForm(forms.Form): [...] which is rendered in my view: if request.method == 'GET': form = MyForm(request.GET) Now, I want to add a form field which contains a set of values in a select-field, and the queryset must be filtered by the currently logged in user. So I changed the method signature so that it contains the request-object: class MyForm(forms.Form): def __init__(self, user, *args, **kwargs): super(MyForm, self).__init__(*args, **kwargs) self.myfield = forms.ChoiceField([('%s' % d.id, '%s' % d.name) for d in MyModel.objects.filter(owners = user)]) but all I get after rendering the form is an object-reference as string instead of the select-widget. It seems that a form field, once declared, cannot be modified. Any ideas? A: Replace your last line of code with this: self.fields['myfield'].choices = [('%s' % d.id, '%s' % d.name) for d in MyModel.objects.filter(owners = user)] A: I think here's what you're after: class MyForm(forms.Form): def __init__(self, user, *args, **kwargs): super(MyForm, self).__init__(*args, **kwargs) self.fields['myfield'] = forms.ModelChoiceField(MyModel.objects.filter(owners=user)) That's assuming that the __unicode__ method of your MyModel returns self.name. If this isn't the case, then you can subclass ModelChoiceField and override the label_from_instance method.
User in Form-Class
I have a form like this: class MyForm(forms.Form): [...] which is rendered in my view: if request.method == 'GET': form = MyForm(request.GET) Now, I want to add a form field which contains a set of values in a select-field, and the queryset must be filtered by the currently logged in user. So I changed the method signature so that it contains the request-object: class MyForm(forms.Form): def __init__(self, user, *args, **kwargs): super(MyForm, self).__init__(*args, **kwargs) self.myfield = forms.ChoiceField([('%s' % d.id, '%s' % d.name) for d in MyModel.objects.filter(owners = user)]) but all I get after rendering the form is an object-reference as string instead of the select-widget. It seems that a form field, once declared, cannot be modified. Any ideas?
[ "Replace your last line of code with this:\nself.fields['myfield'].choices = [('%s' % d.id, '%s' % d.name) for d in MyModel.objects.filter(owners = user)]\n\n", "I think here's what you're after:\nclass MyForm(forms.Form):\n def __init__(self, user, *args, **kwargs):\n super(MyForm, self).__init__(*args, **kwargs)\n self.fields['myfield'] = forms.ModelChoiceField(MyModel.objects.filter(owners=user))\n\nThat's assuming that the __unicode__ method of your MyModel returns self.name.\nIf this isn't the case, then you can subclass ModelChoiceField and override the label_from_instance method.\n" ]
[ 2, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001193084_django_python.txt
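A sketch of how the view side would pass the logged-in user into the second answer's form; MyModel stands in for whatever model owns the choices (an assumption, as in the thread), and request.user is assumed to be authenticated.

from django import forms

class MyForm(forms.Form):
    def __init__(self, user, *args, **kwargs):
        super(MyForm, self).__init__(*args, **kwargs)
        # MyModel is the asker's model, assumed importable here.
        self.fields['myfield'] = forms.ModelChoiceField(
            queryset=MyModel.objects.filter(owners=user))

def my_view(request):
    # The user comes first, before the usual data argument.
    form = MyForm(request.user, request.GET or None)
    # ... render the form as usual ...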
Q: Python's unittest and dynamic creation of test cases Possible Duplicate: How do you generate dynamic (parameterized) unit tests in Python? Is there a way to dynamically create unittest test cases? I have tried the following... class test_filenames(unittest.TestCase): def setUp(self): for category, testcases in files.items(): for testindex, curtest in enumerate(testcases): def thetest(): parser = FileParser(curtest['input']) theep = parser.parse() self.assertEquals(theep.episodenumber, curtest['episodenumber']) setattr(self, 'test_%s_%02d' % (category, testindex), thetest) ...which creates all the methods correctly (they show up in dir() and are callable), but neither unittest's test detector nor nosetest executes them ("Ran 0 tests in ...") Since I may be asking the wrong question - what I am trying to achieve: I have a file containing test data, a list of input filenames, and expected data (simplified to episodenumber in the above code), stored in a Python dictionary. The key is the category, the value is a list of test cases, for example... test_cases = {} test_cases['example_1'] = [ {'input': 'test.01', 'episodenumber': 1}, {'input': 'test.02', 'episodenumber': 2} ] test_cases['example_2'] = [ {'input': 'another.123', 'episodenumber': 123}, {'input': 'test.e42', 'episodenumber': 32} ] Currently I just loop over all the data, call self.assertEquals on each test. The problem is, if one fails, I don't see the rest of the failures as they are also grouped into one test, which aborts when an assertion fails. The way around this, I thought, would be to (dynamically) create a function for each test case, perhaps there is a better way? A: In the following solution, the class Tests contains the helper method check and no test cases statically defined. Then, to dynamically add test cases, I use setattr to define functions in the class. In the following example, I generate test cases test_<i>_<j> with i and j spanning [1,3] and [2,5] respectively, which use the helper method check with different values of i and j. class Tests(unittest.TestCase): def check(self, i, j): self.assertNotEquals(0, i-j) for i in xrange(1, 4): for j in xrange(2, 6): def ch(i, j): return lambda self: self.check(i, j) setattr(Tests, "test_%r_%r" % (i, j), ch(i, j)) A: For this you should use test generators in nose. All you need to do is yield a tuple, with the first being a function and the rest being the args. From the docs here is the example. def test_evens(): for i in range(0, 5): yield check_even, i, i*3 def check_even(n, nn): assert n % 2 == 0 or nn % 2 == 0
Python's unittest and dynamic creation of test cases
Possible Duplicate: How do you generate dynamic (parameterized) unit tests in Python? Is there a way to dynamically create unittest test cases? I have tried the following... class test_filenames(unittest.TestCase): def setUp(self): for category, testcases in files.items(): for testindex, curtest in enumerate(testcases): def thetest(): parser = FileParser(curtest['input']) theep = parser.parse() self.assertEquals(theep.episodenumber, curtest['episodenumber']) setattr(self, 'test_%s_%02d' % (category, testindex), thetest) ...which creates all the methods correctly (they show up in dir() and are callable), but neither unittest's test detector nor nosetest executes them ("Ran 0 tests in ...") Since I may be asking the wrong question - what I am trying to achieve: I have a file containing test data, a list of input filenames, and expected data (simplified to episodenumber in the above code), stored in a Python dictionary. The key is the category, the value is a list of test cases, for example... test_cases = {} test_cases['example_1'] = [ {'input': 'test.01', 'episodenumber': 1}, {'input': 'test.02', 'episodenumber': 2} ] test_cases['example_2'] = [ {'input': 'another.123', 'episodenumber': 123}, {'input': 'test.e42', 'episodenumber': 32} ] Currently I just loop over all the data, call self.assertEquals on each test. The problem is, if one fails, I don't see the rest of the failures as they are also grouped into one test, which aborts when an assertion fails. The way around this, I thought, would be to (dynamically) create a function for each test case, perhaps there is a better way?
[ "In the following solution, the class Tests contains the helper method check and no test cases statically defined. Then, to dynamically add a test cases, I use setattr to define functions in the class. In the following example, I generate test cases test_<i>_<j> with i and j spanning [1,3] and [2,5] respectively, which use the helper method check with different values of i and j.\nclass Tests(unittest.TestCase):\n def check(self, i, j):\n self.assertNotEquals(0, i-j)\n\n\n\nfor i in xrange(1, 4):\n for j in xrange(2, 6):\n def ch(i, j):\n return lambda self: self.check(i, j)\n setattr(Tests, \"test_%r_%r\" % (i, j), ch(i, j))\n\n", "For this you should use test generators in nose. All you need to do is yield a tuple, with the first being a function and the rest being the args. From the docs here is the example.\ndef test_evens():\n for i in range(0, 5):\n yield check_even, i, i*3\n\ndef check_even(n, nn):\n assert n % 2 == 0 or nn % 2 == 0\n\n" ]
[ 25, 12 ]
[]
[]
[ "dynamic", "python", "unit_testing" ]
stackoverflow_0001193909_dynamic_python_unit_testing.txt
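A sketch of the accepted answer's setattr approach applied to the asker's test_cases dictionary; note the factory function, which captures each case by value (closing directly over the loop variable would make every test see the last case). FileParser is the asker's class and is stubbed out here.

import unittest

test_cases = {
    'example_1': [{'input': 'test.01', 'episodenumber': 1}],
    'example_2': [{'input': 'another.123', 'episodenumber': 123}],
}

class TestFilenames(unittest.TestCase):
    pass

def make_test(curtest):
    def test(self):
        # A real test would call FileParser(curtest['input']).parse() here.
        parsed_episode = curtest['episodenumber']  # stub for the parser result
        self.assertEqual(parsed_episode, curtest['episodenumber'])
    return test

for category, cases in test_cases.items():
    for index, curtest in enumerate(cases):
        setattr(TestFilenames, 'test_%s_%02d' % (category, index),
                make_test(curtest))

if __name__ == '__main__':
    unittest.main()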
Q: Nested Set Model and SQLAlchemy -- Adding New Nodes How should new nodes be added with SQLAlchemy to a tree implemented using the Nested Set Model? class Category(Base): __tablename__ = 'categories' id = Column(Integer, primary_key=True) name = Column(String(128), nullable=False) lft = Column(Integer, nullable=False, unique=True) rgt = Column(Integer, nullable=False, unique=True) I would need a trigger on the table to assign lft and rgt for the new node and update all other affected nodes, but what is the best way to define the position of the node? I can pass the parent_id of the new node to the constructor, but how would I then communicate the parent_id to the trigger? A: You might want to look at the nested sets example in the examples directory of SQLAlchemy. This implements the model at the Python level. Doing it at the database level with triggers would need some way to communicate the desired parent, either as an extra column or as a stored procedure.
Nested Set Model and SQLAlchemy -- Adding New Nodes
How should new nodes be added with SQLAlchemy to a tree implemented using the Nested Set Model? class Category(Base): __tablename__ = 'categories' id = Column(Integer, primary_key=True) name = Column(String(128), nullable=False) lft = Column(Integer, nullable=False, unique=True) rgt = Column(Integer, nullable=False, unique=True) I would need a trigger on the table to assign lft and rgt for the new node and update all other affected nodes, but what is the best way to define the position of the node? I can pass the parent_id of the new node to the constructor, but how would I then communicate the parent_id to the trigger?
[ "You might want to look at the nested sets example in the examples directory of SQLAlchemy. This implements the model at the Python level.\nDoing it at the database level with triggers would need some way to communicate the desired parent, either as an extra column or as a stored procedure.\n" ]
[ 6 ]
[]
[]
[ "nested_sets", "python", "sql", "sqlalchemy", "tree" ]
stackoverflow_0001186086_nested_sets_python_sql_sqlalchemy_tree.txt
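A rough sketch of inserting a new rightmost child at the Python level with SQLAlchemy, instead of a database trigger: widen the gap at the parent's rgt, then slot the node in. It assumes a session and the Category model from the question, and it ignores the ordering problems the unique constraints on lft/rgt can cause mid-update, so treat it as an illustration of the nested-set arithmetic, not production code.

def add_child(session, parent, name):
    right = parent.rgt
    # Make room: everything at or beyond the parent's right edge shifts by 2.
    session.query(Category).filter(Category.rgt >= right).update(
        {Category.rgt: Category.rgt + 2}, synchronize_session=False)
    session.query(Category).filter(Category.lft > right).update(
        {Category.lft: Category.lft + 2}, synchronize_session=False)
    session.add(Category(name=name, lft=right, rgt=right + 1))
    session.flush()
    session.expire_all()  # lft/rgt changed behind the ORM's back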
Q: SQLAlchemy: Operating on results I'm trying to do something relatively simple, spit out the column names and respective column values, and possibly filter out some columns so they aren't shown. This is what I attempted ( after the initial connection of course ): metadata = MetaData(engine) users_table = Table('fusion_users', metadata, autoload=True) s = users_table.select(users_table.c.user_name == username) results = s.execute() if results.rowcount != 1: return 'Sorry, user not found.' else: for result in results: for x, y in result.items(): print x, y I looked at the API on SQLAlchemy ( v.5 ) but was rather confused. My 'result' in 'results' is a RowProxy, yet I don't think it's returning the right object for the .items() invocation. Let's say my table structure is so: user_id user_name user_password user_country 0 john a9fu93f39uf usa I want to filter and specify the column names to show ( I don't want to show the user_password obviously ) - how can I accomplish this? A: A SQLAlchemy RowProxy object has dict-like methods -- .items() to get all name/value pairs, .keys() to get just the names (e.g. to display them as a header line, then use .values() for the corresponding values or use each key to index into the RowProxy object, etc, etc -- so it being a "smart object" rather than a plain dict shouldn't inconvenience you unduly. A: You can use results instantly as an iterator. results = s.execute() for row in results: print row Selecting specific columns is done the following way: from sqlalchemy.sql import select s = select([users_table.c.user_name, users_table.c.user_country], users_table.c.user_name == username) for user_name, user_country in s.execute(): print user_name, user_country To print the column names additional to the values the way you have done it in your question should be the best because RowProxy is really nothing more than an ordered dictionary. IMO the API documentation for SqlAlchemy is not really helpful to learn how to use it. I would suggest you read the SQL Expression Language Tutorial. It contains the most vital information about basic querying with SqlAlchemy.
SQLAlchemy: Operating on results
I'm trying to do something relatively simple, spit out the column names and respective column values, and possibly filter out some columns so they aren't shown. This is what I attempted ( after the initial connection of course ): metadata = MetaData(engine) users_table = Table('fusion_users', metadata, autoload=True) s = users_table.select(users_table.c.user_name == username) results = s.execute() if results.rowcount != 1: return 'Sorry, user not found.' else: for result in results: for x, y in result.items(): print x, y I looked at the API on SQLAlchemy ( v.5 ) but was rather confused. My 'result' in 'results' is a RowProxy, yet I don't think it's returning the right object for the .items() invocation. Let's say my table structure is so: user_id user_name user_password user_country 0 john a9fu93f39uf usa I want to filter and specify the column names to show ( I don't want to show the user_password obviously ) - how can I accomplish this?
[ "A SQLAlchemy RowProxy object has dict-like methods -- .items() to get all name/value pairs, .keys() to get just the names (e.g. to display them as a header line, then use .values() for the corresponding values or use each key to index into the RowProxy object, etc, etc -- so it being a \"smart object\" rather than a plain dict shouldn't inconvenience you unduly.\n", "You can use results instantly as an iterator.\nresults = s.execute()\n\nfor row in results:\n print row\n\nSelecting specific columns is done the following way:\nfrom sqlalchemy.sql import select\n\ns = select([users_table.c.user_name, users_table.c.user_country], users_table.c.user_name == username)\n\nfor user_name, user_country in s.execute():\n print user_name, user_country\n\nTo print the column names additional to the values the way you have done it in your question should be the best because RowProxy is really nothing more than a ordered dictionary.\nIMO the API documentation for SqlAlchemy is not really helpfull to learn how to use it. I would suggest you to read the SQL Expression Language Tutorial. It contains the most vital information about basic querying with SqlAlchemy.\n" ]
[ 16, 15 ]
[]
[]
[ "python", "sql", "sqlalchemy" ]
stackoverflow_0001192269_python_sql_sqlalchemy.txt
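A Python 2 sketch of the column filtering asked about, in the SQLAlchemy 0.5-era select() style the thread uses: build the select from the table's columns minus a deny-list, so user_password never reaches the result rows at all. users_table and username are the objects already built in the question; the HIDDEN set is an assumption.

from sqlalchemy.sql import select

HIDDEN = set(['user_password'])  # columns to suppress -- an assumption

visible = [c for c in users_table.columns if c.name not in HIDDEN]
s = select(visible, users_table.c.user_name == username)
for row in s.execute():
    for name, value in row.items():
        print name, value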
Q: What's wrong with this bit of python code using lambda? Some python code that keeps throwing up an invalid syntax error: stat.sort(lambda x1, y1: 1 if x1.created_at < y1.created_at else -1) A: This is a better solution: stat.sort(key=lambda x: x.created_at, reverse=True) Or, to avoid the lambda altogether: from operator import attrgetter stat.sort(key=attrgetter('created_at'), reverse=True) A: Try the and-or trick: lambda x1, y1: x1.created_at < y1.created_at and 1 or -1
What's wrong with this bit of python code using lambda?
Some python code that keeps throwing up an invalid syntax error: stat.sort(lambda x1, y1: 1 if x1.created_at < y1.created_at else -1)
[ "This is a better solution:\nstat.sort(key=lambda x: x.created_at, reverse=True)\n\nOr, to avoid the lambda altogether:\nfrom operator import attrgetter\nstat.sort(key=attrgetter('created_at'), reverse=True)\n\n", "Try the and-or trick:\nlambda x1, y1: x1.created_at < y1.created_at and 1 or -1\n\n" ]
[ 8, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001194543_python.txt
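For the record, a self-contained Python 2 check of the accepted answer; the Item class and the dates are made up purely to have something sortable. The original SyntaxError is most likely an interpreter older than Python 2.5, where the conditional expression 1 if ... else -1 did not exist yet.

import datetime
from operator import attrgetter

class Item(object):
    def __init__(self, created_at):
        self.created_at = created_at

stat = [Item(datetime.date(2009, 7, day)) for day in (1, 3, 2)]

# Newest first, no cmp-style lambda needed:
stat.sort(key=attrgetter('created_at'), reverse=True)
print [item.created_at for item in stat]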
Q: Forward declaration - no admin page in django? This is probably a db design issue, but I couldn't figure out any better. Among several others, I have these models: class User(models.Model): name = models.CharField( max_length=40 ) # some fields omitted bands = models.ManyToManyField( Band ) and class Band(models.Model): creator = models.ForeignKey( User ) # some fields omitted name = models.CharField( max_length=40 ) So basically, I've got a user entity, which has a many-to-many relation with a band entity. The twist is that I want a special user, who "created" the band on the site to have special editing capabilities. So I went forward, and added a ForeignKey called creator. The code could not run, because Band came after User in the source. So I forward declared class Band(models.Model): pass. Sadly, this does not seem to be exactly a good idea, because now Band is the only model that doesn't show any interface elements in the django admin (Bands model is there, it just can't be edited). My question is: what change should I make in the models to get this to work properly? (if any) A: See: http://docs.djangoproject.com/en/dev/ref/models/fields/#foreignkey, which says: If you need to create a relationship on a model that has not yet been defined, you can use the name of the model, rather than the model object itself: class Car(models.Model): manufacturer = models.ForeignKey('Manufacturer') # ... class Manufacturer(models.Model): # ...
Forward declaration - no admin page in django?
This is probably a db design issue, but I couldn't figure out any better. Among several others, I have these models: class User(models.Model): name = models.CharField( max_length=40 ) # some fields omitted bands = models.ManyToManyField( Band ) and class Band(models.Model): creator = models.ForeignKey( User ) # some fields omitted name = models.CharField( max_length=40 ) So basically, I've got a user entity, which has a many-to-many relation with a band entity. The twist is that I want a special user, who "created" the band on the site to have special editing capabilities. So I went forward, and added a ForeignKey called creator. The code could not run, because Band came after User in the source. So I forward declared class Band(models.Model): pass. Sadly, this does not seem to be exactly a good idea, because now Band is the only model that doesn't show any interface elements in the django admin (Bands model is there, it just can't be edited). My question is: what change should I make in the models to get this to work properly? (if any)
[ "See: http://docs.djangoproject.com/en/dev/ref/models/fields/#foreignkey, which says:\n\nIf you need to create a relationship on a model that has not \n yet been defined, you can use the name of the model, rather \n than the model object itself:\n\n class Car(models.Model):\n manufacturer = models.ForeignKey('Manufacturer')\n # ...\n\n class Manufacturer(models.Model):\n # ...\n\n" ]
[ 12 ]
[]
[]
[ "circular_dependency", "database_design", "django", "forward_declaration", "python" ]
stackoverflow_0001194723_circular_dependency_database_design_django_forward_declaration_python.txt
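Applied to the asker's models, the string-reference fix from the answer looks like this; ManyToManyField accepts the same lazy string form as ForeignKey, so neither class needs a stub forward declaration.

from django.db import models

class User(models.Model):
    name = models.CharField(max_length=40)
    # The string is resolved after Band is defined, so order no longer matters.
    bands = models.ManyToManyField('Band')

class Band(models.Model):
    creator = models.ForeignKey(User)
    name = models.CharField(max_length=40)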
Q: Python scoping problem I have a trivial example: def func1(): local_var = None def func(args): print args, print "local_var:", local_var local_var = "local" func("first") func("second") func1() I expect the output to be: first local_var: None second local_var: local However, my actual output is: first local_var: Traceback (most recent call last): File "test.py", line 13, in func1() File "test.py", line 10, in func1 func("first") File "test.py", line 6, in func print "local_var:", local_var UnboundLocalError: local variable 'local_var' referenced before assignment My understanding of python scoping rules dictate that this should work as expected. I have other code where this works as expected, but reducing one non-working code fragment to it's trivial case above also doesn't work. So I'm stumped. A: The assignment to local_var in func makes it local to func -- so the print statement references that "very very local" variable before it's ever assigned to, as the exception says. As jtb says, in Python 3 you can solve this with nonlocal, but it's clear from your code, using print statements, that you're working in Python 2. The traditional solution in Python 2 is to make sure that the assignment is not to a barename and thus does not make the variable more local than you wish, e.g.: def func1(): local_var = [None] def func(args): print args, print "local_var:", local_var[0] local_var[0] = "local" func("first") func("second") func1() the assignment to the indexing is not to a barename and therefore doesn't affect locality, and since Python 2.2 it's perfectly acceptable for nested inner functions to refer to variables that are locals in outer containing functions, which is all this version does (assigning to barenames is a different issue than referring to variables). A: Before Python 3.0, functions couldn't write to non-global variables in outer scopes. Python3 has introduced the nonlocal keyword that lets this work. You'd add nonlocal local_var at the top of func()'s definition to get the output you expected. See PEP 3104. If you're not working in Python 3 you'll have to make the variable global, or pass it into the function somehow. A: The standard way to solve this problem pre-3.0 would be def func1(): local_var = [None] def func(args): print args, print "local_var:", local_var[0] local_var[0] = "local" func("first") func("second") func1() A: Python's scoping rules are discussed and explained in this related question: Reason for unintuitive UnboundLocalError behaviour
Python scoping problem
I have a trivial example: def func1(): local_var = None def func(args): print args, print "local_var:", local_var local_var = "local" func("first") func("second") func1() I expect the output to be: first local_var: None second local_var: local However, my actual output is: first local_var: Traceback (most recent call last): File "test.py", line 13, in func1() File "test.py", line 10, in func1 func("first") File "test.py", line 6, in func print "local_var:", local_var UnboundLocalError: local variable 'local_var' referenced before assignment My understanding of python scoping rules dictate that this should work as expected. I have other code where this works as expected, but reducing one non-working code fragment to it's trivial case above also doesn't work. So I'm stumped.
[ "The assignment to local_var in func makes it local to func -- so the print statement references that \"very very local\" variable before it's ever assigned to, as the exception says. As jtb says, in Python 3 you can solve this with nonlocal, but it's clear from your code, using print statements, that you're working in Python 2. The traditional solution in Python 2 is to make sure that the assignment is not to a barename and thus does not make the variable more local than you wish, e.g.:\ndef func1():\n local_var = [None]\n\n def func(args):\n print args,\n print \"local_var:\", local_var[0]\n\n local_var[0] = \"local\"\n\n func(\"first\")\n func(\"second\")\n\nfunc1()\n\nthe assignment to the indexing is not to a barename and therefore doesn't affect locality, and since Python 2.2 it's perfectly acceptable for nested inner functions to refer to variables that are locals in outer containing functions, which is all this version does (assigning to barenames is a different issue than referring to variables).\n", "Before Python 3.0, functions couldn't write to non-global variables in outer scopes. Python3 has introduced the nonlocal keyword that lets this work. You'd add nonlocal local_var at the top of func()'s definition to get the output you expected. See PEP 3104.\nIf you're not working in Python 3 you'll have to make the variable global, or pass it into the function somehow.\n", "The standard way to solve this problem pre-3.0 would be\ndef func1():\n local_var = [None]\n\n def func(args):\n print args,\n print \"local_var:\", local_var[0]\n\n local_var[0] = \"local\"\n\n func(\"first\")\n func(\"second\")\n\nfunc1()\n\n", "Python's scoping rules are discussed and explained in this related question:\nReason for unintuitive UnboundLocalError behaviour\n" ]
[ 9, 1, 1, 1 ]
[]
[]
[ "python", "scoping" ]
stackoverflow_0001195577_python_scoping.txt
Q: Load non-uniform data from a txt file into a MySQL database I have text files with a lot of uniform rows that I'd like to load into a mysql database, but the files are not completely uniform. There are several rows at the beginning for some miscellaneous information, and there are timestamps about every 6 lines. "LOAD DATA INFILE" doesn't seem like the answer here because of my file format. It doesn't seem flexible enough. Note: The header of the file takes up a pre-determined number of lines. The timestamp is predictable, but there are some other random notes that can pop up that need to be ignored. They always start with several keywords that I can check for though. A sample of my file in the middle: 103.3 .00035 103.4 .00035 103.5 .00035 103.6 .00035 103.7 .00035 103.8 .00035 103.9 .00035 Time: 07-15-2009 13:37 104.0 .00035 104.1 .00035 104.2 .00035 104.3 .00035 104.4 .00035 104.5 .00035 104.6 .00035 104.7 .00035 104.8 .00035 104.9 .00035 Time: 07-15-2009 13:38 105.0 .00035 105.1 .00035 105.2 .00035 From this I need to load information into three fields. The first field needs to be the filename, and the others are present in the example. I could add the filename to be in front of each data line, but this may not be necessary if I use a script to load the data. If required, I can change the file format, but I don't want to lose the timestamps and header information. SQLAlchemy seems like a possible good choice for python, which I'm fairly familiar with. I have thousands of lines of data, so loading all my files that I already have may be slow at first, but afterwards, I just want to load in the new lines of the file. So, I'll need to be selective about what I load in because I don't want duplicate information. Any suggestions on a selective data loading method from a text file to a mysql database? And beyond that, what do you suggest for only loading in lines of the file that are not already in the database? Thanks all. Meanwhile, I'll look into SQLAlchemy a bit more and see if I get somewhere with that. A: LOAD DATA INFILE has an IGNORE LINES option which you can use to skip the header. According to the docs, it also has a " LINES STARTING BY 'prefix_string'" option which you could use since all of your data lines seem to start with two blanks, while your timestamps start at the beginning of the line. A: Another way to do this is to just have Python transform the files for you. You could have it filter the input file to an output file based on the criteria that you specify pretty easily. This code assumes you have some function is_data(line) that checks line for the criteria you specify and returns true if it is data. with file("output", "w") as out: for line in file("input"): if is_data(line): out.write(line) Additionally, if your files just continue to grow by appending, you could have it store and read the last recorded offset (this code may not be 100% right, I haven't tested it. But you get the idea): if os.path.exists("filter_settings.txt"): start=long(file("filter_settings.txt").read()) else: start=0 with file("output", "w") as out: input = file("input") input.seek(start) for line in input: if is_data(line): out.write(line) file("filter_settings.txt", "w").write(str(input.tell()))
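A sketch of the LOAD DATA INFILE statement the first answer describes, driven from Python with MySQLdb. Hedged: the table name readings, its columns (filename, position, value), the 5-line header, the single-space field delimiter, and the two-blank line prefix are all assumptions to adjust to the real file and schema, and the SET clause needs MySQL 5.0.3 or later:

import MySQLdb

conn = MySQLdb.connect(user='me', passwd='secret', db='mydb')
cur = conn.cursor()
cur.execute("""
    LOAD DATA INFILE '/path/to/input.txt'
    INTO TABLE readings
    FIELDS TERMINATED BY ' '
    LINES STARTING BY '  '      -- keep only rows beginning with two blanks
    IGNORE 5 LINES              -- skip the fixed-size header
    (position, value)
    SET filename = 'input.txt'  -- first field is supplied by the script
""")
conn.commit()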
Load non-uniform data from a txt file into a MySQL database
I have text files with a lot of uniform rows that I'd like to load into a mysql database, but the files are not completely uniform. There are several rows at the beginning for some miscellaneous information, and there are timestamps about every 6 lines. "LOAD DATA INFILE" doesn't seem like the answer here because of my file format. It doesn't seem flexible enough. Note: The header of the file takes up a pre-determined number of lines. The timestamp is predictable, but there are some other random notes that can pop up that need to be ignored. They always start with several keywords that I can check for though. A sample of my file in the middle: 103.3 .00035 103.4 .00035 103.5 .00035 103.6 .00035 103.7 .00035 103.8 .00035 103.9 .00035 Time: 07-15-2009 13:37 104.0 .00035 104.1 .00035 104.2 .00035 104.3 .00035 104.4 .00035 104.5 .00035 104.6 .00035 104.7 .00035 104.8 .00035 104.9 .00035 Time: 07-15-2009 13:38 105.0 .00035 105.1 .00035 105.2 .00035 From this I need to load information into three fields. The first field needs to be the filename, and the others are present in the example. I could add the filename to be in front of each data line, but this may not be necessary if I use a script to load the data. If required, I can change the file format, but I don't want to lose the timestamps and header information. SQLAlchemy seems like a possible good choice for python, which I'm fairly familiar with. I have thousands of lines of data, so loading all my files that I already have may be slow at first, but afterwards, I just want to load in the new lines of the file. So, I'll need to be selective about what I load in because I don't want duplicate information. Any suggestions on a selective data loading method from a text file to a mysql database? And beyond that, what do you suggest for only loading in lines of the file that are not already in the database? Thanks all. Meanwhile, I'll look into SQLAlchemy a bit more and see if I get somewhere with that.
[ "LOAD DATA INFILE has an IGNORE LINES option which you can use to skip the header. According to the docs, it also has a \" LINES STARTING BY 'prefix_string'\" option which you could use since all of your data lines seem to start with two blanks, while your timestamps start at the beginning of the line.\n", "Another way to do this is to just have Python transform the files for you. You could have it filter the input file to an output file based on the criteria that you specify pretty easily. This code assumes you have some function is_data(line) that checks line for the criteria you specify and returns true if it is data.\nwith file(\"output\", \"w\") as out:\n for line in file(\"input\"):\n if is_data(line):\n out.write(line)\n\nAdditionally, if you files just continue to concat you could have it store and read the last recorded offset (this code may not be 100% right, I haven't test it. But you get the idea):\nif os.path.exists(\"filter_settings.txt\"):\n start=long(file(\"filter_settings.txt\").read())\nelse:\n start=0\n\nwith file(\"output\", \"w\") as out:\n input = file(\"input\")\n input.seek(start)\n for line in input:\n if is_data(line):\n out.write(line)\n file(\"filter_settings.txt\", \"w\").write(input.tell())\n\n" ]
[ 2, 2 ]
[]
[]
[ "file_io", "load_data_infile", "mysql", "python", "sqlalchemy" ]
stackoverflow_0001195753_file_io_load_data_infile_mysql_python_sqlalchemy.txt
Q: django auth User truncating email field I have an issue with the django.contrib.auth User model where the email max_length is 75. I am receiving email addresses that are longer than 75 characters from the facebook api, and I need to (would really like to) store them in the user for continuity among users that are from facebook connect and others. I am able to solve the problem of "Data truncated for column 'email' at row 1" by manually editing the field in our mySql database, but is there a better way to solve this? Preferably one that does not involve me manually editing the database every time I reset it for a schema change? I am ok with editing the database as long as I can add it to the reset script, or the initial_data.json file. A: EmailField 75 chars length is hardcoded in django. You can fix this like that: from django.db.models.fields import CharField, EmailField def email_field_init(self, *args, **kwargs): kwargs['max_length'] = kwargs.get('max_length', 200) CharField.__init__(self, *args, **kwargs) EmailField.__init__ = email_field_init but this will change ALL EmailField fields lengths, so you could also try: from django.contrib.auth.models import User from django.utils.translation import ugettext as _ from django.db import models User.email = models.EmailField(_('e-mail address'), blank=True, max_length=200) both ways it'd be best to put this code in init of any module BEFORE django.contrib.auth in your INSTALLED_APPS Since Django 1.5 you can use your own custom model based on AbstractUser model, therefore you can use your own fields & lengths. In your models: from django.contrib.auth.models import AbstractUser from django.utils.translation import ugettext_lazy as _ from django.db import models class User(AbstractUser): email = models.EmailField(_('e-mail address'), blank=True, max_length=200) In settings: AUTH_USER_MODEL = 'your_app.User' A: There is now a ticket to increase the length of the email field in Django: http://code.djangoproject.com/ticket/11579
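Since the poster wants the column change scripted rather than clicked, here is a minimal sketch that a reset script could call after syncdb. Hedged: auth_user and email are Django's default names, the MODIFY syntax is MySQL-specific, and 200 is an arbitrary width:

from django.db import connection

def widen_email_column(length=200):
    # auth_user is the default table for django.contrib.auth's User model
    cursor = connection.cursor()
    cursor.execute("ALTER TABLE auth_user MODIFY email VARCHAR(%d)" % length)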
django auth User truncating email field
I have an issue with the django.contrib.auth User model where the email max_length is 75. I am receiving email addresses that are longer than 75 characters from the facebook api, and I need to (would really like to) store them in the user for continuity among users that are from facebook connect and others. I am able to solve the problem of "Data truncated for column 'email' at row 1" by manually editing the field in our mySql database, but is there a better way to solve this? Preferably one that does not involve me manually editing the database every time I reset it for a schema change? I am ok with editing the database as long as I can add it to the reset script, or the initial_data.json file.
[ "EmailField 75 chars length is hardcoded in django. You can fix this like that:\nfrom django.db.models.fields import EmailField\ndef email_field_init(self, *args, **kwargs):\n kwargs['max_length'] = kwargs.get('max_length', 200)\n CharField.__init__(self, *args, **kwargs)\nEmailField.__init__ = email_field_init\n\nbut this will change ALL EmailField fields lengths, so you could also try:\nfrom django.contrib.auth.models import User\nfrom django.utils.translation import ugettext as _\nfrom django.db import models\nUser.email = models.EmailField(_('e-mail address'), blank=True, max_length=200)\n\nboth ways it'd be best to put this code in init of any module BEFORE django.contrib.auth in your INSTALLED_APPS\nSince Django 1.5 you can use your own custom model based on AbstractUser model, therefore you can use your own fields & lengths.\nIn your models:\nfrom django.contrib.auth.models import AbstractUser\nfrom django.db import models\n\nclass User(AbstractUser):\n email = models.EmailField(_('e-mail address'), blank=True, max_length=200)\n\nIn settings:\nAUTH_USER_MODEL = 'your_app.models.User'\n\n", "There is now a ticket to increase the length of the email field in Django: http://code.djangoproject.com/ticket/11579\n" ]
[ 12, 2 ]
[]
[]
[ "authentication", "django", "mysql", "python" ]
stackoverflow_0000915910_authentication_django_mysql_python.txt
Q: How to manage many to one relationship in Django I am trying to make a many to one relationship and want to be able to control it (add, remove, etc.) via the admin panel. So this is my model.py: from django.db import models class Office(models.Model): name = models.CharField(max_length=30) class Province(models.Model): numberPlate = models.IntegerField(primary_key=True) name = models.CharField(max_length=20) office = models.ForeignKey(Office) I want my model to allow a province to have several Offices. So inside my admin.py: class ProvinceCreator(admin.ModelAdmin): list_filter = ['numberPlate'] list_display = ['name', 'numberPlate','office'] class OfficeCreator(admin.ModelAdmin): list_display = ['name'] This seems correct to me, however when I try to add a new province with the admin panel, I get this: TemplateSyntaxError at /admin/haritaapp/province/ Caught an exception while rendering: no such column: haritaapp_province.office_id Thanks A: It seems that you have your models set up backwards. If you want province to have many offices, then province should be a foreign key in the Office model. from django.db import models class Province(models.Model): numberPlate = models.IntegerField(primary_key=True) name = models.CharField(max_length=20) class Office(models.Model): name = models.CharField(max_length=30) province = models.ForeignKey(Province) This would be a straightforward and very intuitive way to implement a one-to-many relationship. As for the error that you are getting "no such column: haritaapp_province.office_id", when you add a new attribute (in your case office) to the model, you should either manually add the column to the table, or drop the table and re-run syncdb: python manage.py syncdb Django will not automatically add new columns to the table when you add new fields to the model. A: Have you looked at the docs for doing Inlines? In your admin.py class OfficeInline(admin.TabularInline): model = Office class ProvinceAdmin(admin.ModelAdmin): inlines = [ OfficeInline, ]
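Neither answer shows the admin registration step; a minimal sketch, assuming the class names from the question and answers, placed at the bottom of admin.py:

from django.contrib import admin

# assumes Province, Office, ProvinceAdmin, OfficeCreator are defined above
admin.site.register(Province, ProvinceAdmin)   # ProvinceAdmin carries the inline
admin.site.register(Office, OfficeCreator)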
How to manage many to one relationship in Django
I am trying to make a many to one relationship and want to be able to control it (add, remove, etc.) via the admin panel. So this is my model.py: from django.db import models class Office(models.Model): name = models.CharField(max_length=30) class Province(models.Model): numberPlate = models.IntegerField(primary_key=True) name = models.CharField(max_length=20) office = models.ForeignKey(Office) I want my model to allow a province to have several Offices. So inside my admin.py: class ProvinceCreator(admin.ModelAdmin): list_filter = ['numberPlate'] list_display = ['name', 'numberPlate','office'] class OfficeCreator(admin.ModelAdmin): list_display = ['name'] This seems correct to me, however when I try to add a new province with the admin panel, I get this: TemplateSyntaxError at /admin/haritaapp/province/ Caught an exception while rendering: no such column: haritaapp_province.office_id Thanks
[ "It seams that you have your models setup backwards. If you want province to have many offices, then province should be a foreign key in the Office model.\nfrom django.db import models\n\nclass Province(models.Model):\n numberPlate = models.IntegerField(primary_key=True)\n name = models.CharField(max_length=20)\n\nclass Office(models.Model):\n name = models.CharField(max_length=30)\n province = models.ForeignKey(Province)\n\nThis would be straightforward and very intuitive way to implement one-to-many relationsship\nAs for the error that you are getting \"no such column: haritaapp_province.office_id\", when you add a new attribute (in your case office) to the model, you should either manually add column to the table. Or drop the table and re-run the syncdb:\n python manage.py syncdb\n\nDjango will not automatically add new columns to the table when you add new fields to the model.\n", "Have you looked at the docs for doing Inlines?\nIn your admin.py\nclass Office(admin.TabularInline):\n model = Office\n\nclass ProvinceAdmin(admin.ModelAdmin):\n inlines = [\n Office,\n ]\n\n" ]
[ 6, 1 ]
[]
[]
[ "django", "django_admin", "django_models", "python" ]
stackoverflow_0001195911_django_django_admin_django_models_python.txt
Q: Python memory footprint vs. heap size I'm having some memory issues while using a python script to issue a large solr query. I'm using the solrpy library to interface with the solr server. The query returns approximately 80,000 records. Immediately after issuing the query the python memory footprint as viewed through top balloons to ~190MB. PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 8225 root 16 0 193m 189m 3272 S 0.0 11.2 0:11.31 python ... At this point, the heap profile as viewed through heapy looks like this: Partition of a set of 163934 objects. Total size = 14157888 bytes. Index Count % Size % Cumulative % Kind (class / dict of class) 0 80472 49 7401384 52 7401384 52 unicode 1 44923 27 3315928 23 10717312 76 str ... The unicode objects represent the unique identifiers of the records from the query. One thing to note is that the total heap size is only 14MB while python is occupying 190MB of physical memory. Once the variable storing the query results falls out of scope, the heap profile correctly reflects the garbage collection: Partition of a set of 83586 objects. Total size = 6437744 bytes. Index Count % Size % Cumulative % Kind (class / dict of class) 0 44928 54 3316108 52 3316108 52 str However, the memory footprint remains unchanged: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 8225 root 16 0 195m 192m 3432 S 0.0 11.3 0:13.46 python ... Why is there such a large disparity between python's physical memory footprint and the size of the python heap? A: Python allocates Unicode objects from the C heap. So when you allocate many of them (along with other malloc blocks), then release most of them except for the very last one, C malloc will not return any memory to the operating system, as the C heap will only shrink on the end (not in the middle). Releasing the last Unicode object will release the block at the end of the C heap, which then allows malloc to return it all to the system. On top of these problems, Python also maintains a pool of freed unicode objects, for faster allocation. So when the last Unicode object is freed, it isn't returned to malloc right away, making all the other pages stuck. A: CPython implementation only exceptionally free's allocated memory. This is a widely known bug, but it isn't receiving much attention by CPython developers. The recommended workaround is to "fork and die" the process that consumes lots RAM. A: What version of python are you using? I am asking because older version of CPython did not release the memory and this was fixed in Python 2.5. A: I've implemented hruske's advice of "fork and die". I'm using os.fork() to execute the memory intensive section of code in a child process, then I let the child process exit. The parent process executes an os.waitpid() on the child so that only one thread is executing at a given time. If anyone sees any pitfalls with this solution, please chime in.
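A minimal sketch of the "fork and die" pattern the poster describes. Hedged: do_work is a placeholder, and because the child's memory is simply discarded at exit, any results must come back through a file, a pipe, or the database:

import os

def run_in_child(do_work):
    pid = os.fork()
    if pid == 0:
        try:
            do_work()        # the memory-hungry query runs here
        finally:
            os._exit(0)      # child dies; the OS reclaims every page
    os.waitpid(pid, 0)       # parent blocks so only one child runs at a time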
Python memory footprint vs. heap size
I'm having some memory issues while using a python script to issue a large solr query. I'm using the solrpy library to interface with the solr server. The query returns approximately 80,000 records. Immediately after issuing the query the python memory footprint as viewed through top balloons to ~190MB. PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 8225 root 16 0 193m 189m 3272 S 0.0 11.2 0:11.31 python ... At this point, the heap profile as viewed through heapy looks like this: Partition of a set of 163934 objects. Total size = 14157888 bytes. Index Count % Size % Cumulative % Kind (class / dict of class) 0 80472 49 7401384 52 7401384 52 unicode 1 44923 27 3315928 23 10717312 76 str ... The unicode objects represent the unique identifiers of the records from the query. One thing to note is that the total heap size is only 14MB while python is occupying 190MB of physical memory. Once the variable storing the query results falls out of scope, the heap profile correctly reflects the garbage collection: Partition of a set of 83586 objects. Total size = 6437744 bytes. Index Count % Size % Cumulative % Kind (class / dict of class) 0 44928 54 3316108 52 3316108 52 str However, the memory footprint remains unchanged: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 8225 root 16 0 195m 192m 3432 S 0.0 11.3 0:13.46 python ... Why is there such a large disparity between python's physical memory footprint and the size of the python heap?
[ "Python allocates Unicode objects from the C heap. So when you allocate many of them (along with other malloc blocks), then release most of them except for the very last one, C malloc will not return any memory to the operating system, as the C heap will only shrink on the end (not in the middle). Releasing the last Unicode object will release the block at the end of the C heap, which then allows malloc to return it all to the system.\nOn top of these problems, Python also maintains a pool of freed unicode objects, for faster allocation. So when the last Unicode object is freed, it isn't returned to malloc right away, making all the other pages stuck.\n", "CPython implementation only exceptionally free's allocated memory. This is a widely known bug, but it isn't receiving much attention by CPython developers. The recommended workaround is to \"fork and die\" the process that consumes lots RAM.\n", "What version of python are you using?\nI am asking because older version of CPython did not release the memory and this was fixed in Python 2.5.\n", "I've implemented hruske's advice of \"fork and die\". I'm using os.fork() to execute the memory intensive section of code in a child process, then I let the child process exit. The parent process executes an os.waitpid() on the child so that only one thread is executing at a given time.\nIf anyone sees any pitfalls with this solution, please chime in.\n" ]
[ 6, 2, 1, 0 ]
[]
[]
[ "memory_leaks", "python", "solr" ]
stackoverflow_0001194416_memory_leaks_python_solr.txt
Q: Using Eval in Python to create class variables I wrote a class that lets me pass in a list of variable types, variable names, prompts, and default values. The class creates a wxPython panel, which is displayed in a frame that lets the user set the input values before pressing the calculate button and getting the results back as a plot. I add all of the variables to the class using exec statements. This keeps all of the variables together in one class, and I can refer to them by name. light = Variables( frame , [ ['f','wavelength','Wavelength (nm)',632.8] ,\ ['f','n','Index of Refraction',1.0],]) Inside the class I create and set the variables with statements like: for variable in self.variable_list: var_type,var_text_ctrl,var_name = variable if var_type == 'f' : exec( 'self.' + var_name + ' = ' + var_text_ctrl.GetValue() ) When I need to use the variables, I can just refer to them by name: wl = light.wavelength n = light.n Then I read on SO that there is rarely a need to use exec in Python. Is there a problem with this approach? Is there a better way to create a class that holds variables that should be grouped together, that you want to be able to edit, and also has the code and wxPython calls for displaying, editing, (and also saving all the variables to a file or reading them back again)? Curt A: You can use the setattr function, which takes three arguments: the object, the name of the attribute, and its value. For example, setattr(self, 'wavelength', wavelength_val) is equivalent to: self.wavelength = wavelength_val So you could do something like this: for variable in self.variable_list: var_type,var_text_ctrl,var_name = variable if var_type == 'f' : setattr(self, var_name, var_text_ctrl.GetValue()) A: I agree with mipadi's answer, but wanted to add one more answer, since the Original Post asked if there's a problem using exec. I'd like to address that. Think like a criminal. If your malicious adversary knew you had code that read: exec( 'self.' + var_name + ' = ' + var_text_ctrl.GetValue() ) then he or she may try to inject values for var_name and var_text_ctrl that hacks your code. Imagine if a malicious user could get var_name to be this value: var_name = """ a = 1 # some bogus assignment to complete "self." statement import os # malicious code starts here os.rmdir('/bin') # do some evil # end it with another var_name # ("a" alone, on the next line) a """ All of a sudden, the malicious adversary was able to get YOU to exec[ute] code to delete your /bin directory (or whatever evil they want). Now your exec statement roughly reads the equivalent of: exec ("self.a=1 \n import os \n os.rmdir('/bin') \n\n " "a" + ' = ' + var_text_ctrl.GetValue() ) Not good!!! As you can imagine, it's possible to construct all sorts of malicious code injections when exec is used. This puts the burden onto the developer to think of any way that the code can be hacked - and adds unnecessary risk, when a risk-free alternative is available. A: For the security conscious, there might be an acceptable alternative. There used to be a module called rexec that allowed "restricted" execution of arbitrary python code. This module was removed from recent python versions. http://pypi.python.org/pypi/RestrictedPython is another implementation by the Zope people that creates a "restricted" environment for arbitrary python code. A: The module was removed because it had security issues. Very difficult to provide an environment where any code can be executed in a restricted environment, with all the introspection that Python has. A better bet is to avoid eval and exec. A really off-the-wall idea is to use Google App Engine, and let them worry about malicious code.
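The question also asks about saving all the variables to a file and reading them back, which none of the answers touch. A minimal sketch using getattr, the read-side twin of setattr. Hedged: the variable_list layout is copied from the question's loop, and the name=value file format is an invented convention:

def save_variables(self, path):
    # dump each managed variable as one "name=value" line
    f = open(path, 'w')
    for var_type, var_text_ctrl, var_name in self.variable_list:
        f.write('%s=%r\n' % (var_name, getattr(self, var_name)))
    f.close()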
Using Eval in Python to create class variables
I wrote a class that lets me pass in a list of variable types, variable names, prompts, and default values. The class creates a wxPython panel, which is displayed in a frame that lets the user set the input values before pressing the calculate button and getting the results back as a plot. I add all of the variables to the class using exec statements. This keeps all of the variables together in one class, and I can refer to them by name. light = Variables( frame , [ ['f','wavelength','Wavelength (nm)',632.8] ,\ ['f','n','Index of Refraction',1.0],]) Inside the class I create and set the variables with statements like: for variable in self.variable_list: var_type,var_text_ctrl,var_name = variable if var_type == 'f' : exec( 'self.' + var_name + ' = ' + var_text_ctrl.GetValue() ) When I need to use the variables, I can just refer to them by name: wl = light.wavelength n = light.n Then I read on SO that there is rarely a need to use exec in Python. Is there a problem with this approach? Is there a better way to create a class that holds variables that should be grouped together, that you want to be able to edit, and also has the code and wxPython calls for displaying, editing, (and also saving all the variables to a file or reading them back again)? Curt
[ "You can use the setattr function, which takes three arguments: the object, the name of the attribute, and it's value. For example,\nsetattr(self, 'wavelength', wavelength_val)\n\nis equivalent to:\nself.wavelength = wavelength_val\n\nSo you could do something like this:\nfor variable in self.variable_list:\n var_type,var_text_ctrl,var_name = variable\n if var_type == 'f' :\n setattr(self, var_name, var_text_ctrl.GetValue())\n\n", "I agree with mipadi's answer, but wanted to add one more answer, since the Original Post asked if there's a problem using exec. I'd like to address that.\nThink like a criminal.\nIf your malicious adversary knew you had code that read:\n\nexec( 'self.' + var_name + ' = ' + var_text_ctrl.GetValue() )\n\nthen he or she may try to inject values for var_name and var_text_ctrl that hacks your code. \nImagine if a malicious user could get var_name to be this value:\n\nvar_name = \"\"\"\na = 1 # some bogus assignment to complete \"self.\" statement\nimport os # malicious code starts here\nos.rmdir('/bin') # do some evil\n # end it with another var_name \n # (\"a\" alone, on the next line)\na \n\"\"\"\n\nAll of the sudden, the malicious adversary was able to get YOU to exec[ute] code to delete your /bin directory (or whatever evil they want). Now your exec statement roughly reads the equivalent of:\n\nexec (\"self.a=1 \\n import os \\n os.rmdir('/bin') \\n\\n \"\n \"a\" + ' = ' + var_text_ctrl.GetValue() ) \n\nNot good!!!\nAs you can imagine, it's possible to construct all sorts of malicious code injections when exec is used. This puts the burden onto the developer to think of any way that the code can be hacked - and adds unnecessary risk, when a risk-free alternative is available.\n", "For the security conscious, there might be an acceptable alternative. There used to be a module call rexec that allowed \"restricted\" execution of arbitrary python code. This module was removed from recent python versions. http://pypi.python.org/pypi/RestrictedPython is another implementation by the Zope people that creates a \"restricted\" environment for arbitrary python code.\n", "The module was removed because it had security issues. Very difficult to provide an environment where any code can be executed in a restricted environment, with all the introspection that Python has.\nA better bet is to avoid eval and exec.\nA really off-the-wall idea is to use Google App Engine, and let them worry about malicious code.\n" ]
[ 18, 1, 0, 0 ]
[]
[]
[ "python", "scientific_computing", "wxpython" ]
stackoverflow_0001144702_python_scientific_computing_wxpython.txt
Q: Running programs w/ a GUI over a remote connection I'm trying to start perfmon and another program that have GUIs through a python script that uses a PKA ssh connection. Is it possible to do this? If so, could anyone point me in the right direction? A: I've found a program called psexec that will open a program remotely on another windows machine. http://technet.microsoft.com/en-us/sysinternals/bb897553.aspx There are options or flags that you can use with this command line program to open a program with a GUI and view it on a remote machine. A: If you mean this perfmon (the one that runs under Linux &c -- I believe there's a homonym program that's Windows-only and would behave very differently), ssh -X or ssh -Y let you open an ssh connection which tunnels an X11 (GUI) connection (if server and client are both configured to allow that, of course). Here are copious details of how to do it "the old way" (with -p etc); here, the explanation of -X and the more secure -Y modern options. As long as the app is running on a Linux box, you can have the display ("X Server") just about anywhere, with a proper ssh tunnel securely connecting them. If it's Windows you're talking about (i.e. running the perfmon app on a Windows box, wherever it is you want the GUI), I don't know how to tunnel a GUI over ssh (it may not be possible). One possibility is VNC (there are several implementations of the protocol, both commercial and free) but I'm not all that experienced with it.
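For the Linux case in the second answer, a minimal Python sketch that launches a remote GUI program with X11 forwarding. Hedged: user@host and xclock are placeholders, and X11 forwarding must be enabled on both ends, with an X server running locally:

import subprocess

# -X tunnels the remote program's X11 display back over the ssh connection
subprocess.call(['ssh', '-X', 'user@host', 'xclock'])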
Running programs w/ a GUI over a remote connection
I'm trying to start perfmon and another program that have GUIs through a python script that uses a PKA ssh connection. Is it possible to do this? If so, could anyone point me in the right direction?
[ "I've found a program called psexec that will open a program remotely on another windows machine. http://technet.microsoft.com/en-us/sysinternals/bb897553.aspx\nThere are options or flags that you can use with this command line program to open a program with a GUI and view it on a remote machine.\n", "If you mean this perfmon (the one that runs under Linux &c == I believe there's a honomym program that's Windows-only and would behave very differently), ssh -X or ssh -Y let you open an ssh connection which tunnels an X11 (GUI) connection (if server and client are both configured to allow that, of course).\nHere are copious details of how to do it \"the old way\" (with -p etc); here, the explanation of -X and the more secure -Y modern options. As long as the app is running on a Linux box, you can have the display (\"X Server\") just about anywhere, with a proper ssh tunnel securely connecting them.\nIf it's Windows you're talking about (i.e. running the perfmon app on a Windows box, wherever it is you want the GUI), I don't know how to tunnel a GUI over ssh (it may not be possible). One possibility is VNC (there are several implementations of the protocol, both commercial and free) but I'm not all that experienced with it.\n" ]
[ 6, 2 ]
[]
[]
[ "perfmon", "python", "ssh", "user_interface" ]
stackoverflow_0001125894_perfmon_python_ssh_user_interface.txt
Q: Python Hangs When Importing Swig Generated Wrapper Python is 'hanging' when I try to import a c++ shared library into the windows version of python 2.5 and I have no clue why. On Linux, everything works fine. We can compile all of our C++ code, generate swig wrapper classes. They compile and can be imported and used in either python 2.5 or 2.6. Now, we are trying to port the code to Windows using Cygwin. We are able to compile each of the C++ libraries to shared dlls using -mno-cygwin, which removes the dependency on cygwin1.dll. Essentially this causes the gcc target to be MinGW instead of Cygwin, enabling the resulting binaries to be run in Windows without any dependency on Cygwin. Moreover, each of these shared libraries can be linked into c++ binaries and run successfully. Once this was done, we used swig to generate wrappers for each of the shared libraries. These wrappers are generated, compiled, and linked without problem. The next step, then, was to import the generated python wrapper into python. We are able to import all but two of our libraries. For the two that do not work, when we try to import either the .py or .pyd files into Windows python (the version compiled with Visual C++), python hangs. We cannot kill python with ctrl+c or ctrl+d, the only recourse is to kill it via task manager. If we attach gdb to the python process and print a stack trace, we mostly get garbage, nothing useful. Next, we tried ifdef'ing out blocks of code in the *.i files and recreating the swig wrappers. This process at least allowed me to import the libraries into Windows python, but the problem is we had to comment out too many functions which are necessary for the software to run. In general, there were three types of functions that had to be commented out: static functions, virtual const functions, and regular public functions that were NOT declared as const. This is reproducible too, if we uncomment any one of these functions, then the import hangs again. Next, we tried extracting the functions into a simple hello world program, generate a swig wrapper and import them into python. This worked. We copied functions exactly from the header files. They work in the very small test program, but not in the larger shared library. We are building them the exact same way. So, any ideas about why this is happening or even just better debugging techniques would be very helpful. These work fine on Linux with gcc 3 and 4 and python 2.5 and 2.6. On Windows, this is the software I am using: gcc 3.4.4 swig 1.39 (Windows binaries from swig.org) python 2.5.4 (Windows binaries and includes/libs from python.org) These are the commands I am using for building the simple hello world program (the full library uses the same options, it's just a lot longer because of extra -I, -L, and -l options) swig -c++ -python -o test_wrap.cc test.i gcc -c -mno-cygwin test.cc gcc -c -mno-cygwin test_wrap.cc -I/usr/python25/include dlltool --export-all --output-def _test.def test.o gcc -mno-cygwin -shared -s test_wrap.o test.o -L/usr/python25/libs -lpython25 -lstdc++ -o _TestModule.pyd Thanks, AJ A: A technique I've used is to insert a "hard" breakpoint (__asm int 3) in the module init function. Then either run it through a debugger or just run it and let the windows debugger pop when the interrupt is called. You can download a nice windows debugger from Microsoft here.
Python Hangs When Importing Swig Generated Wrapper
Python is 'hanging' when I try to import a c++ shared library into the windows version of python 2.5 and I have no clue why. On Linux, everything works fine. We can compile all of our C++ code, generate swig wrapper classes. They compile and can be imported and used in either python 2.5 or 2.6. Now, we are trying to port the code to Windows using Cygwin. We are able to compile each of the C++ libraries to shared dlls using -mno-cygwin, which removes the dependency on cygwin1.dll. Essentially this causes the gcc target to be MinGW instead of Cygwin, enabling the resulting binaries to be run in Windows without any dependency on Cygwin. Moreover, each of these shared libraries can be linked into c++ binaries and run successfully. Once this was done, we used swig to generate wrappers for each of the shared libraries. These wrappers are generated, compiled, and linked without problem. The next step, then, was to import the generated python wrapper into python. We are able to import all but two of our libraries. For the two that do not work, when we try to import either the .py or .pyd files into Windows python (the version compiled with Visual C++), python hangs. We cannot kill python with ctrl+c or ctrl+d, the only recourse is to kill it via task manager. If we attach gdb to the python process and print a stack trace, we mostly get garbage, nothing useful. Next, we tried ifdef'ing out blocks of code in the *.i files and recreating the swig wrappers. This process at least allowed me to import the libraries into Windows python, but the problem is we had to comment out too many functions which are necessary for the software to run. In general, there were three types of functions that had to be commented out: static functions, virtual const functions, and regular public functions that were NOT declared as const. This is reproducible too, if we uncomment any one of these functions, then the import hangs again. Next, we tried extracting the functions into a simple hello world program, generate a swig wrapper and import them into python. This worked. We copied functions exactly from the header files. They work in the very small test program, but not in the larger shared library. We are building them the exact same way. So, any ideas about why this is happening or even just better debugging techniques would be very helpful. These work fine on Linux with gcc 3 and 4 and python 2.5 and 2.6. On Windows, this is the software I am using: gcc 3.4.4 swig 1.39 (Windows binaries from swig.org) python 2.5.4 (Windows binaries and includes/libs from python.org) These are the commands I am using for building the simple hello world program (the full library uses the same options, it's just a lot longer because of extra -I, -L, and -l options) swig -c++ -python -o test_wrap.cc test.i gcc -c -mno-cygwin test.cc gcc -c -mno-cygwin test_wrap.cc -I/usr/python25/include dlltool --export-all --output-def _test.def test.o gcc -mno-cygwin -shared -s test_wrap.o test.o -L/usr/python25/libs -lpython25 -lstdc++ -o _TestModule.pyd Thanks, AJ
[ "A technique I've used is to insert a \"hard\" breakpoint (__asm int 3) in the module init function. Then either run it through a debugger or just run it and let the windows debugger pop when the interrupt is called.\nYou can download a nice windows debugger from Microsoft here.\n" ]
[ 1 ]
[]
[]
[ "cygwin", "python", "swig" ]
stackoverflow_0001162461_cygwin_python_swig.txt
Q: How to open python source files using the IDLE shell? If I import a module in IDLE using: import <module_name> print <module_name>.__file__ how can I open it without going through the Menu->File->Open multiple-step procedure? It would be nice to open it via a command that takes the path and opens it in a separate editor like IDLE. A: You can use ALT-M and write the name of the module in the popup box You can use CTRL-O to open a file A: I don't think IDLE will do it for you, but if you use Wing, you can mouse over the name of the module, and do a <Ctrl>-<Left Click>, or open the right click context menu on the module name, and select "Goto definition." Simple. You don't even have to know the module's file location; you just have to have it on your PYTHONPATH. Limitation: It doesn't work if you add it to your PYTHONPATH using sys.path.append(). It has to be on the path that exists when the file is opened, though you can configure that on a per-project or per-file basis within Wing.
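A minimal sketch of a command that opens a module's source in a separate IDLE editor window. Hedged: some_module is a placeholder for the real module, and launching IDLE via the idlelib.idle entry point is version-dependent:

import subprocess, sys
import some_module                      # placeholder for the module in question

path = some_module.__file__
if path.endswith('.pyc'):               # point at the source, not the bytecode
    path = path[:-1]
subprocess.Popen([sys.executable, '-m', 'idlelib.idle', path])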
How to open python source files using the IDLE shell?
If I import a module in IDLE using: import <module_name> print <module_name>.__file__ how can I open it without going through the Menu->File->Open multiple-step procedure? It would be nice to open it via a command that takes the path and opens it in a separate editor like IDLE.
[ "\nYou can use ALT-M and write the name of the module in the popup box\nYou can use CTRL-O to open a file\n\n", "I don't think IDLE will do it for you, but if you use Wing, you can mouse over the name of the module, and do a <Ctrl>-<Left Click>, or open the right click context menu on the module name, and select \"Goto definition.\" Simple. You don't even have to know the module's file location; you just have to have it on your PYTHONPATH.\nLimitation: It doesn't work if you add it to your PYTHONPATH using sys.path.append(). It has to be on the path that exists when the file is opened, though you can configure that on a per-project or per-file basis within Wing.\n" ]
[ 2, 0 ]
[]
[]
[ "file", "ide", "python" ]
stackoverflow_0001186884_file_ide_python.txt
Q: How do I change the directory of InMemoryUploadedFile? It seems that if I do not create a ModelForm from a model, and create a new object and save it, it will not respect the field's upload directory. How do I change the directory of an InMemoryUploadedFile so I can manually implement the upload dir? Because the InMemoryUploadedFile obj is just the filename, and I would like to add the upload dir parameters. Thank you! def add_image(self, image): pi = ProductImages() pi.image = image pi.save() self.o_gallery_images.add(pi) return True (Code that does not respect the upload dir of ProductImages "image" field) A: How did you define the attribute of image in your ProductImages model? Did you have an upload_to argument in your FileField? class ProductImages(models.Model): image = models.FileField(upload_to="images/")
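If the destination should vary per object, upload_to can also be a callable (supported since Django 1.0). A minimal sketch, assuming the ProductImages model from the question and an invented products/ prefix:

import os
from django.db import models

def product_image_path(instance, filename):
    # called per upload; return the path relative to MEDIA_ROOT
    return os.path.join('products', filename)

class ProductImages(models.Model):
    image = models.FileField(upload_to=product_image_path)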
How do I change the directory of InMemoryUploadedFile?
It seems that if I do not create a ModelForm from a model, and create a new object and save it, it will not respect the field's upload directory. How do I change the directory of an InMemoryUploadedFile so I can manually implement the upload dir? Because the InMemoryUploadedFile obj is just the filename, and I would like to add the upload dir parameters. Thank you! def add_image(self, image): pi = ProductImages() pi.image = image pi.save() self.o_gallery_images.add(pi) return True (Code that does not respect the upload dir of ProductImages "image" field)
[ "How did you define the attribute of image in your ProductImages model? Did you have upload_to argument in your FileField? \nclass ProductImages(models.Model):\n image = models.FileField(upload_to=\"images/\")\n\n" ]
[ 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001194963_django_python.txt
Q: FormEncode, pylons, and mako example I'm working in pylons with mako, and I'd like to create forms and validations with FormEncode for several parts of my application. I can't seem to find any good examples of the whole process. My question is twofold: Technical FancyValidators and Schemas - Their relationship and syntax Pylons controllers and mako templates - how to collect, handle, and validate the data Stylistic Best practices for controller methods Easing the reuse of forms (for update vs create, for example) So if you know of any complete examples, it would be much appreciated. I would think this would be a common combination with more examples/tutorials out there. A: I don't know if you've gone through the pylons book, but I found chapter 6 to be very thorough in regards to forms. As far as best practices go, I'm not exactly sure what you are looking for. A controller method maps to a url and needs to return a string-like object. How you arrive at that is largely application specific and you are free to choose how you structure the application. For form reuse, I don't know if it would be considered a best practice but tw.forms I find pretty useful for just that (and toscawidgets for general html snippet reuse). If you anticipate having to reuse fields in forms, you may have some success with fieldsets. If you are looking for complete examples, I would consider turbogears2 a good resource. It's built on top of pylons so any information on tg2 is equally applicable to pylons. You can also look at the reddit source code And finally, someone will suggest django. :)
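Since the question asks how schemas, controllers, and templates fit together and no answer shows code, a minimal sketch of the usual Pylons-era pattern. Hedged: CommentForm, its field names, and /form.mako are invented, and BaseController and render come from the standard Pylons project template:

import formencode
from formencode import validators
from pylons.decorators import validate

class CommentForm(formencode.Schema):
    allow_extra_fields = True
    filter_extra_fields = True
    name = validators.String(not_empty=True)
    email = validators.Email(not_empty=True)

class CommentController(BaseController):
    def form(self):
        return render('/form.mako')        # the template holds the <form>

    @validate(schema=CommentForm(), form='form')
    def submit(self):
        # on failure the user is re-shown 'form' with errors filled in;
        # on success the converted values land in self.form_result
        return 'Thanks, %s' % self.form_result['name']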
FormEncode, pylons, and mako example
I'm working in pylons with mako, and I'd like to create forms and validations with FormEncode for several parts of my application. I can't seem to find any good examples of the whole process. My question is twofold: Technical FancyValidators and Schemas - Their relationship and syntax Pylons controllers and mako templates - how to collect, handle, and validate the data Stylistic Best practices for controller methods Easing the reuse of forms (for update vs create, for example) So if you know of any complete examples, it would be much appreciated. I would think this would be a common combination with more examples/tutorials out there.
[ "I don't know if you've gone through the pylons book, but I found chapter 6 to be very thorough in regards to forms. \nAs far as best practices go, I'm not exactly sure what you are looking for. A controller method maps to a url and needs to return a string-like object. How you arrive at that is largely application specific and you are free to choose how you structure the application.\nFor form reuse, I don't know if it would be considered a best practice but tw.forms I find pretty useful for just that (and toscawidgets for general html snippet reuse). If you anticipate having to reuse fields in forms, you may have some success with fieldsets.\nIf you are looking for complete examples, I would consider turbogears2 a good resource. It's built on top of pylons so any information on tg2 is equally applicable to pylons. \nYou can also look at the reddit source code\nAnd finally, someone will suggest django. :) \n" ]
[ 1 ]
[]
[]
[ "formencode", "mako", "pylons", "python", "validation" ]
stackoverflow_0001191265_formencode_mako_pylons_python_validation.txt
Q: Passing a Django model attribute name to a function I'd like to build a function in Django that iterates over a set of objects in a queryset and does something based on the value of an arbitrary attribute. The type of the objects is fixed; let's say they're guaranteed to be from the Comment model, which looks like this: class Comment(models.Model): name = models.CharField(max_length=255) text = models.TextField() email = models.EmailField() Sometimes I'll want to run the function over the names, but other times the emails. I'd like to know how to write and call a function that looks like this: def do_something(attribute, objects): for object in objects: # do something with the object based on object.attribute return results A: def do_something(attribute, objects): results = [] for object in objects: if hasattr(object, attribute): results.append(getattr(object, attribute)) return results Or, more succinctly, def do_something(attribute, objects): return [getattr(o, attribute) for o in objects if hasattr(o, attribute)] A: If you're only doing stuff with a single attribute, you can use .values_list(), which is more performant since you're not instantiating whole objects, and you're only pulling the specific value you're using from the database. >>> def do_something(values): ... for value in values: ... print value ... return something ... >>> emails = Comment.objects.values_list('email', flat=True) >>> names = Comment.objects.values_list('name', flat=True) >>> do_something(emails) # Prints all email addresses >>> do_something(names) # Prints all names A: You don't make clear what you want to return from your function, so substitute a suitable return statement. I assume attribute will be set to one of "name", "text" or "email". def do_something(attribute, objects): for o in objects: print getattr(o, attribute) return something Update: OK, you've updated the question. Cide's answer makes the most sense now.
Passing a Django model attribute name to a function
I'd like to build a function in Django that iterates over a set of objects in a queryset and does something based on the value of an arbitrary attribute. The type of the objects is fixed; let's say they're guaranteed to be from the Comment model, which looks like this: class Comment(models.Model): name = models.CharField(max_length=255) text = models.TextField() email = models.EmailField() Sometimes I'll want to run the function over the names, but other times the emails. I'd like to know how to write and call a function that looks like this: def do_something(attribute, objects): for object in objects: # do something with the object based on object.attribute return results
[ "def do_something(attribute, objects):\n results = []\n for object in objects:\n if hasattr(object, attribute):\n results.append(getattr(object, attribute))\n return results\n\nOr, more succinctly,\ndef do_something(attribute, objects):\n return [getattr(o, attribute) for o in objects if hasattr(o, attribute)]\n\n", "If you're only doing stuff with a single attribute, you can use .values_list(), which is more performant since you're not instantiating whole objects, and you're only pulling the specific value you're using from the database.\n>>> def do_something(values):\n... for value in values:\n... print value\n... return something\n...\n>>> emails = Comment.objects.values_list('email', flat=True)\n>>> names = Comment.objects.values_list('name', flat=True)\n>>> do_something(emails) # Prints all email addresses\n>>> do_something(names) # Prints all names\n\n", "You don't make clear what you want to return from your function, so substitute a suitable return statement. I assume attribute will be set to one of \"name\", \"text\" or \"email\".\ndef do_something(attribute, objects):\n for o in objects:\n print getattr(o, attribute)\n return something\n\nUpdate: OK, you've updated the question. Cide's answer makes the most sense now.\n" ]
[ 4, 4, 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001197042_django_python.txt
Q: Is it possible to overload ++ operators in Python? Is it possible to overload ++ operators in Python? A: There is no ++ operator in Python (nor '--'). Incrementing is usually done with the += operator instead. A: Nope, it is not possible to overload the unary ++ operator, because it is not an operator at all in Python. Only (a subset of) the operators that are allowed by the Python syntax (those operators that already have one or more uses in the language) may be overloaded. These are valid Python operators, and this page lists the methods that you can define to overload them (the ones with two leading and trailing underscores). Instead of i++ as commonly used in other languages, in Python one writes i += 1. In python the + sign needs an operand to its right. It may also have an operand to its left, in which case it will be interpreted as a binary instead of a unary operator. +5, ++5, ..., ++++++5 are all valid Python expressions (all evaluating to 5), as are 7 + 5, 7 ++ 5, ..., 7 ++++++++ 5 (all evaluating to 7 + (+...+5) = 12). 5+ is not valid Python. See also this question. Alternative idea: Depending on what you actually wanted to use the ++ operator for, you may want to consider overloading the unary (prefix) plus operator. Note, though, that this may lead to some odd-looking code. Other people looking at your code would probably assume it's a no-op and be confused. A: Everyone makes good points, I'd just like to clear up one other thing. Open up a Python interpreter and check this out: >>> i = 1 >>> ++i 1 >>> i 1 There is no ++ (or --) operator in Python. The reason it behaves as it did (instead of a syntax error) is that + and - are valid unary operators, acting basically like a sign would on digits. You can think of ++i as a "+(+i)", and --i as "-(-i)". Expecting ++i to work like in any other language leads to absolutely insidious bug-hunts. C programmers: ye be warned. A straight i++ or i-- does fail adequately, for what it's worth. A: Well, the ++ operator doesn't exist in Python, so you really can't overload it. What happens when you do something like: 1 ++ 2 is actually 1 + (+2) A: You could hack it, though this introduces some undesirable consequences: class myint_plus: def __init__(self,myint_instance): self.myint_instance = myint_instance def __pos__(self): self.myint_instance.i += 1 return self.myint_instance class myint: def __init__(self,i): self.i = i def __pos__(self): return myint_plus(self) def __repr__(self): return self.i.__repr__() x = myint(1) print x ++x print x the output is: 1 2
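Since the answers point at += as Python's spelling of ++, a minimal sketch of overloading it via __iadd__. Hedged: Counter is a made-up class, written in the Python 2 style the thread uses:

class Counter(object):
    def __init__(self, i):
        self.i = i

    def __iadd__(self, other):   # invoked for the += operator
        self.i += other
        return self              # must return the object to rebind

c = Counter(1)
c += 1          # the idiomatic stand-in for c++
print c.i       # 2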
Is it possible to overload ++ operators in Python?
Is it possible to overload ++ operators in Python?
[ "There is no ++ operator in Python (nor '--'). Incrementing is usually done with the += operator instead.\n", "Nope, it is not possible to overload the unary ++ operator, because it is not an operator at all in Python.\nOnly (a subset of) the operators that are allowed by the Python syntax (those operators that already have one or more uses in the language) may be overloaded.\nThese are valid Python operators, and this page lists the methods that you can define to overload them (the ones with two leading and trailing underscores).\nInstead of i++ as commonly used in other languages, in Python one writes i += 1.\nIn python the + sign needs an operand to its right. It may also have an operand to its left, in which case it will be interpreted as a binary instead of a unary operator. +5, ++5, ..., ++++++5 are all valid Python expressions (all evaluating to 5), as are 7 + 5, 7 ++ 5, ..., 7 ++++++++ 5 (all evaluating to 7 + (+...+5) = 12). 5+ is not valid Python. See also this question.\nAlternative idea: Depending on what you actually wanted to use the ++ operator for, you may want to consider overloading the unary (prefix) plus operator. Note, thought, that this may lead to some odd looking code. Other people looking at your code would probably assume it's a no-op and be confused.\n", "Everyone makes good points, I'd just like to clear up one other thing. Open up a Python interpreter and check this out:\n>>> i = 1\n>>> ++i\n1\n>>> i\n1\n\nThere is no ++ (or --) operator in Python. The reason it behaves as it did (instead of a syntax error) is that + and - are valid unary operators, acting basically like a sign would on digits. You can think of ++i as a \"+(+i)\", and --i as \"-(-i)\". Expecting ++i to work like in any other language leads to absolutely insidious bug-hunts. C programmers: ye be warned.\nA straight i++ or i-- does fail adequately, for what it's worth.\n", "Well, the ++ operator doesn't exist in Python, so you really can't overload it.\nWhat happens when you do something like:\n1 ++ 2\nis actually\n1 + (+2)\n", "You could hack it, though this introduces some undesirable consequences:\nclass myint_plus:\n def __init__(self,myint_instance):\n self.myint_instance = myint_instance\n\n def __pos__(self):\n self.myint_instance.i += 1\n return self.myint_instance\n\nclass myint:\n def __init__(self,i):\n self.i = i\n\n def __pos__(self):\n return myint_plus(self)\n\n def __repr__(self):\n return self.i.__repr__()\n\n\nx = myint(1)\nprint x\n++x\nprint x\n\nthe output is:\n1\n2\n\n" ]
[ 20, 18, 7, 5, 5 ]
[]
[]
[ "operator_overloading", "python" ]
stackoverflow_0000774784_operator_overloading_python.txt
Q: Installing Django with mod_wsgi I wrote an application using Django 1.0. It works fine with the django test server. But when I tried to get it into a more likely production environment the Apache server fails to run the app. The server I use is WAMP2.0. I've been a PHP programmer for years now and I've been using WAMPServer since long ago. I installed the mod_wsgi.so and it seems to work just fine (no services error) but I can't configure the httpd.conf to look at my python scripts located outside the server root. For now, I'm cool with overriding the document root and serving the django app from the document root instead so the httpd.conf line should look like this: WSGIScriptAlias / C:/Users/Marcos/Documents/mysite/apache/django.wsgi but the server's response is a 403 Forbidden A: You have: WSGIScriptAlias / /C:/Users/Marcos/Documents/mysite/apache/django.wsgi That is wrong as RHS is not a valid Windows pathname. Use: WSGIScriptAlias / C:/Users/Marcos/Documents/mysite/apache/django.wsgi That is, no leading slash before the Windows drive specifier. Other than that, follow the mod_wsgi documentation others have pointed out. Poster edited question to change what now would appear to be a typo in the post and not a problem with his configuration. If that is the case, next causes for a 403 are as follows. First is that you need to also have: <Directory C:/Users/Marcos/Documents/mysite/apache> Order deny,allow Allow from all </Directory> If you don't have that then Apache isn't being granted rights to serve a script from that directory and so will return FORBIDDEN (403). Second is that you do have that, but don't acknowledge that you do, and that that directory or the WSGI script file is not readable by the user that the Apache service runs as under Windows. A: Have you seen http://code.google.com/p/modwsgi/wiki/IntegrationWithDjango ? You need more than one line to assure that Apache will play nicely. Alias /media/ /usr/local/django/mysite/media/ <Directory /usr/local/django/mysite/media> Order deny,allow Allow from all </Directory> WSGIScriptAlias / /usr/local/django/mysite/apache/django.wsgi <Directory /usr/local/django/mysite/apache> Order deny,allow Allow from all </Directory> The <Directory>, as well as appropriate file system ownership and permissions are essential. The usr/local/django/mysite/apache directory has your Python/Django app and the all-important django.wsgi file. You must provide permissions on this directory. A: mod_wsgi's documentation is very good. Try using their quick configuration guide and go from there: http://code.google.com/p/modwsgi/wiki/QuickConfigurationGuide
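The answers lean on the django.wsgi file but never show its contents; a minimal Django 1.0-era sketch. Hedged: the path comes from the question, and mysite.settings assumes the project package is the mysite folder under Documents:

# C:/Users/Marcos/Documents/mysite/apache/django.wsgi
import os, sys

sys.path.append('C:/Users/Marcos/Documents')   # parent of the project package
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'

import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()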
Installing Django with mod_wsgi
I wrote an application using Django 1.0. It works fine with the django test server. But when I tried to get it into a more likely production environment, the Apache server fails to run the app. The server I use is WAMP2.0. I've been a PHP programmer for years now and I've been using WAMPServer since long ago. I installed the mod_wsgi.so and it seems to work just fine (no services error) but I can't configure the httpd.conf to look at my python scripts located outside the server root. For now, I'm cool with overriding the document root and serving the django app from the document root instead, so the httpd.conf line should look like this:

WSGIScriptAlias / C:/Users/Marcos/Documents/mysite/apache/django.wsgi

but the server's response is a 403 Forbidden
[ "You have:\nWSGIScriptAlias / /C:/Users/Marcos/Documents/mysite/apache/django.wsgi\n\nThat is wrong as RHS is not a valid Windows pathname. Use:\nWSGIScriptAlias / C:/Users/Marcos/Documents/mysite/apache/django.wsgi\n\nThat is, no leading slash before the Windows drive specifier.\nOther than that, follow the mod_wsgi documentation others have pointed out.\n\nPoster edited question to change what now would appear to be a typo in the post and not a problem with his configuration.\nIf that is the case, next causes for a 403 are as follows.\nFirst is that you need to also have:\n<Directory C:/Users/Marcos/Documents/mysite/apache>\nOrder deny,allow\nAllow from all\n</Directory>\n\nIf you don't have that then Apache isn't being granted rights to serve a script from that directory and so will return FORBIDDEN (403).\nSecond is that you do have that, but don't acknowledge that you do, and that that directory or the WSGI script file is not readable by the user that the Apache service runs as under Windows.\n", "Have you seen http://code.google.com/p/modwsgi/wiki/IntegrationWithDjango ?\nYou need more than one line to assure that Apache will play nicely.\nAlias /media/ /usr/local/django/mysite/media/\n\n<Directory /usr/local/django/mysite/media>\nOrder deny,allow\nAllow from all\n</Directory>\n\nWSGIScriptAlias / /usr/local/django/mysite/apache/django.wsgi\n\n<Directory /usr/local/django/mysite/apache>\nOrder deny,allow\nAllow from all\n</Directory>\n\nThe <Directory>, as well as appropriate file system ownership and permissions are essential.\nThe usr/local/django/mysite/apache directory has your Python/Django app and the all-important django.wsgi file. You must provide permissions on this directory.\n", "mod_wsgi's documentation is very good. Try using their quick configuration guide and go from there:\nhttp://code.google.com/p/modwsgi/wiki/QuickConfigurationGuide\n" ]
[ 7, 6, 1 ]
[]
[]
[ "apache", "django", "mod_wsgi", "python" ]
stackoverflow_0001195260_apache_django_mod_wsgi_python.txt
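None of the answers show the WSGI script itself; a minimal Django 1.0-era django.wsgi might look like this (the paths and settings module name are assumptions based on the question):

import os
import sys

# make the project and its parent directory importable
sys.path.append('C:/Users/Marcos/Documents')
sys.path.append('C:/Users/Marcos/Documents/mysite')
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'

import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()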
Q: Processing chunked encoded HTTP POST requests in python (or generic CGI under apache) I have a j2me client that would post some chunked encoded data to a webserver. I'd like to process the data in python. The script is being run as a CGI one, but apparently apache will refuse a chunked encoded post request to a CGI script. As far as I could see mod_python, WSGI and FastCGI are no go too. I'd like to know if there is a way to have a python script process this kind of input. I'm open to any suggestion (e.g. a confoguration setting in apache2 that would assemble the chunks, a standalone python server that would do the same, etc.) I did quite a bit of googling and didn't find anything usable, which is quite strange. I know that resorting to java on the server side would be a solution, but I just can't imagine that this can't be solved with apache + python. A: I had the exact same problem a year ago with a J2ME client talking to a Python/Ruby backend. The only solution I found which doesn't require application or infrastructure level changes was to use a relatively unknown feature of mod_proxy. Mod_proxy has the ability to buffer incoming (chunked) requests, and then rewrite them as a single request with a Content-Length header before passing them on to a proxy backend. The neat trick is that you can create a tiny proxy configuration which passes the request back to the same Apache server. i.e. Take an incoming chunked request on port 80, "dechunk" it, and then pass it on to your non-HTTP 1.1 compliant server on port 81. I used this configuration in production for a little over a year with no problems. It looks a little something like this: ProxyRequests Off <Proxy http://example.com:81> Order deny,allow Allow from all </Proxy> <VirtualHost *:80> SetEnv proxy-sendcl 1 ProxyPass / http://example.com:81/ ProxyPassReverse / http://example.com:81/ ProxyPreserveHost On ProxyVia Full <Directory proxy:*> Order deny,allow Allow from all </Directory> </VirtualHost> Listen 81 <VirtualHost *:81> ServerName example.com # Your Python application configuration goes here </VirtualHost> I've also got a full writeup of the problem and my solution detailed on my blog. A: I'd say use the twisted framework for building your http listener. Twisted supports chunked encoding. http://python.net/crew/mwh/apidocs/twisted.web.http._ChunkedTransferEncoding.html Hope this helps. A: Apache 2.2 mod_cgi works fine for me, Apache transparently unchunks the request as it is passed to the CGI application. WSGI currently disallows chunked requests, and mod_wsgi does indeed block them with a 411 response. It's on the drawing board for WSGI 2.0. But congratulations on finding something that does chunk requests, I've never seen one before! A: You can't do what you want with mod_python. You can do it with mod_wsgi if you are using version 3.0. You do however have to step outside of the WSGI 1.0 specification as WSGI effectively prohibits chunked request content. Search for WSGIChunkedRequest in http://code.google.com/p/modwsgi/wiki/ChangesInVersion0300 for what is required. A: Maybe it is a configuration issue? Django can be fronted with Apache by mod_python, WSGI and FastCGI and it can accept file uploads.
Processing chunked encoded HTTP POST requests in python (or generic CGI under apache)
I have a j2me client that would post some chunked encoded data to a webserver. I'd like to process the data in python. The script is being run as a CGI one, but apparently apache will refuse a chunked encoded post request to a CGI script. As far as I could see mod_python, WSGI and FastCGI are no go too. I'd like to know if there is a way to have a python script process this kind of input. I'm open to any suggestion (e.g. a configuration setting in apache2 that would assemble the chunks, a standalone python server that would do the same, etc.) I did quite a bit of googling and didn't find anything usable, which is quite strange. I know that resorting to java on the server side would be a solution, but I just can't imagine that this can't be solved with apache + python.
[ "I had the exact same problem a year ago with a J2ME client talking to a Python/Ruby backend. The only solution I found which doesn't require application or infrastructure level changes was to use a relatively unknown feature of mod_proxy.\nMod_proxy has the ability to buffer incoming (chunked) requests, and then rewrite them as a single request with a Content-Length header before passing them on to a proxy backend. The neat trick is that you can create a tiny proxy configuration which passes the request back to the same Apache server. i.e. Take an incoming chunked request on port 80, \"dechunk\" it, and then pass it on to your non-HTTP 1.1 compliant server on port 81.\nI used this configuration in production for a little over a year with no problems. It looks a little something like this:\nProxyRequests Off\n\n<Proxy http://example.com:81>\n Order deny,allow\n Allow from all\n</Proxy>\n\n<VirtualHost *:80>\n SetEnv proxy-sendcl 1\n ProxyPass / http://example.com:81/\n ProxyPassReverse / http://example.com:81/\n ProxyPreserveHost On\n ProxyVia Full\n\n <Directory proxy:*>\n Order deny,allow\n Allow from all\n </Directory>\n\n</VirtualHost>\n\nListen 81\n\n<VirtualHost *:81>\n ServerName example.com\n # Your Python application configuration goes here\n</VirtualHost>\n\nI've also got a full writeup of the problem and my solution detailed on my blog.\n", "I'd say use the twisted framework for building your http listener.\nTwisted supports chunked encoding.\nhttp://python.net/crew/mwh/apidocs/twisted.web.http._ChunkedTransferEncoding.html\nHope this helps.\n", "Apache 2.2 mod_cgi works fine for me, Apache transparently unchunks the request as it is passed to the CGI application.\nWSGI currently disallows chunked requests, and mod_wsgi does indeed block them with a 411 response. It's on the drawing board for WSGI 2.0. But congratulations on finding something that does chunk requests, I've never seen one before!\n", "You can't do what you want with mod_python. You can do it with mod_wsgi if you are using version 3.0. You do however have to step outside of the WSGI 1.0 specification as WSGI effectively prohibits chunked request content.\nSearch for WSGIChunkedRequest in http://code.google.com/p/modwsgi/wiki/ChangesInVersion0300 for what is required.\n", "Maybe it is a configuration issue? Django can be fronted with Apache by mod_python, WSGI and FastCGI and it can accept file uploads. \n" ]
[ 6, 2, 2, 2, 1 ]
[]
[]
[ "http", "java_me", "midlet", "post", "python" ]
stackoverflow_0000284741_http_java_me_midlet_post_python.txt
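Once the mod_proxy trick from the first answer is in place, the backend sees an ordinary request carrying a Content-Length header, so even a plain WSGI handler can read the body; a minimal sketch (the response text is illustrative):

def application(environ, start_response):
    # the front-end proxy has already 'dechunked' the request
    length = int(environ.get('CONTENT_LENGTH') or 0)
    body = environ['wsgi.input'].read(length)
    # ... process the posted data here ...
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['received %d bytes' % length]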
Q: CopyBlock tag for Django How can I write a tag "copyblock" for Django templates? For such a function: <title> {% block title %} some title... {% endblock %} </title> <h1>{% copyblock title %}</h1> A: Take a look at the solutions mentioned in this question: How to repeat a "block" in a django template A: Django's template parser doesn't expose blocks by name. Instead, they are organized into a tree structure in the Django Template's nodelist, with rendering pushing and popping on the stack of template nodes. You'll have a nearly impossible time accessing them in the way your example indicates. The SO link that ars references provides suggestions on the best solutions. Of those solutions, defining a variable in the context (ie: {{ title }} in the spirit of your example) that can be reused is probably the most straightforward and maintainable approach. If the piece you want to duplicate goes beyond a simple variable, a custom template tag is probably the most appealing option.
CopyBlock tag for Django
How can I write a tag "copyblock" for Django templates? For such a function:

<title>
{% block title %} some title... {% endblock %}
</title>

<h1>{% copyblock title %}</h1>
[ "Take a look at the solutions mentioned in this question:\n\nHow to repeat a \"block\" in a django template\n\n", "Django's template parser doesn't expose blocks by name. Instead, they are organized into a tree structure in the Django Template's nodelist, with rendering pushing and popping on the stack of template nodes. You'll have a nearly impossible time accessing them in the way your example indicates.\nThe SO link that ars references provides suggestions on the best solutions. Of those solutions, defining a variable in the context (ie: {{ title }} in the spirit of your example) that can be reused is probably the most straightforward and maintainable approach. If the piece you want to duplicate goes beyond a simple variable, a custom template tag is probably the most appealing option.\n" ]
[ 1, 1 ]
[]
[]
[ "django", "django_templates", "python", "templates" ]
stackoverflow_0001197650_django_django_templates_python_templates.txt
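A minimal sketch of the context-variable approach the second answer recommends; the view and template names are made up:

# views.py
from django.shortcuts import render_to_response

def page(request):
    return render_to_response('page.html', {'title': 'some title...'})

The template then reuses the same variable in both places, with no copyblock tag needed:

<title>{{ title }}</title>
<h1>{{ title }}</h1>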
Q: Web Services with Google App Engine I see that Google App Engine can host web applications that will return html, etc. But what about web services that communicate over http and accept / return xml? Does anyone know how this is being done in Goggle App Engine with Python or for that matter in Java (JAS-WX is not supported)? Any links o samples or articles is greatly appreciated. Thanks // :) A: Google App Engine allows you to write web services that return any type of HTTP response content. This includes xml, json, text, etc. For instance, take a look at the guestbook sample project offered by Google which shows the HTTP response coming back as text/plain: public class GuestbookServlet extends HttpServlet { public void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException { UserService userService = UserServiceFactory.getUserService(); User user = userService.getCurrentUser(); if (user != null) { resp.setContentType("text/plain"); resp.getWriter().println("Hello, " + user.getNickname()); } else { resp.sendRedirect(userService.createLoginURL(req.getRequestURI())); } } } Additionally, the app engine google group is a great place to learn more, ask questions, and see sample code. A: Most python apps just write a handler that outputs the shaped xml directly... this example serves any GET requests submitted to the root url ("/"): import wsgiref.handlers from google.appengine.ext import webapp class MainHandler(webapp.RequestHandler): def get(self): self.response.out.write('<myXml><node id=1 /></myXml>') def main(): application = webapp.WSGIApplication([('/', MainHandler)], debug=True) wsgiref.handlers.CGIHandler().run(application) if __name__ == '__main__': main() A: It's definitely possible (and not too hard) to use GAE to host "web services that communicate over http and accept / return xml". To parse XML requests (presumably coming in as the body of HTTP POST or PUT requests), you have several options, e.g. pyexpat or minidom on top of it, see this thread for example (especially the last post on it). If you wish, you could also use minidom to construct the XML response and write it back (e.g. using a StringIO instance to hold the formatted response and its write method as the argument to your minidom instance's writexml method, then turning around and using the getvalue of that instance to get the needed result as a string). Even though you're limited to pure Python and a few "whiteslisted" C-coded extensions such as pyexpat, that doesn't really limit your choices all that much, nor make your life substantially harder. Just do remember to set your response's content type header to text/xml (or some media type that's even more specific and appropriate, if any, of course!) -- and, I recommend, use UTF-8 (the standard text encoding that lets you express all of Unicode while being plain ASCII if your data do happen to be plain ASCII!-), not weird "code pages" or regionally limited codes such as the ISO-8859 family.
Web Services with Google App Engine
I see that Google App Engine can host web applications that will return html, etc. But what about web services that communicate over http and accept / return xml? Does anyone know how this is being done in Google App Engine with Python or, for that matter, in Java (JAX-WS is not supported)? Any links to samples or articles are greatly appreciated. Thanks // :)
[ "Google App Engine allows you to write web services that return any type of HTTP response content. This includes xml, json, text, etc.\nFor instance, take a look at the guestbook sample project offered by Google which shows the HTTP response coming back as text/plain:\n public class GuestbookServlet extends HttpServlet {\n public void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {\n UserService userService = UserServiceFactory.getUserService();\n User user = userService.getCurrentUser();\n\n if (user != null) {\n resp.setContentType(\"text/plain\");\n resp.getWriter().println(\"Hello, \" + user.getNickname());\n } else {\n resp.sendRedirect(userService.createLoginURL(req.getRequestURI()));\n }\n }\n }\n\nAdditionally, the app engine google group is a great place to learn more, ask questions, and see sample code.\n", "Most python apps just write a handler that outputs the shaped xml directly... this example serves any GET requests submitted to the root url (\"/\"): \nimport wsgiref.handlers\n\nfrom google.appengine.ext import webapp\n\nclass MainHandler(webapp.RequestHandler):\n\n def get(self):\n self.response.out.write('<myXml><node id=1 /></myXml>')\n\ndef main():\n application = webapp.WSGIApplication([('/', MainHandler)],\n debug=True)\n wsgiref.handlers.CGIHandler().run(application)\n\nif __name__ == '__main__':\n main()\n\n", "It's definitely possible (and not too hard) to use GAE to host \"web services that communicate over http and accept / return xml\".\nTo parse XML requests (presumably coming in as the body of HTTP POST or PUT requests), you have several options, e.g. pyexpat or minidom on top of it, see this thread for example (especially the last post on it).\nIf you wish, you could also use minidom to construct the XML response and write it back (e.g. using a StringIO instance to hold the formatted response and its write method as the argument to your minidom instance's writexml method, then turning around and using the getvalue of that instance to get the needed result as a string). Even though you're limited to pure Python and a few \"whiteslisted\" C-coded extensions such as pyexpat, that doesn't really limit your choices all that much, nor make your life substantially harder.\nJust do remember to set your response's content type header to text/xml (or some media type that's even more specific and appropriate, if any, of course!) -- and, I recommend, use UTF-8 (the standard text encoding that lets you express all of Unicode while being plain ASCII if your data do happen to be plain ASCII!-), not weird \"code pages\" or regionally limited codes such as the ISO-8859 family.\n" ]
[ 9, 3, 2 ]
[]
[]
[ "google_app_engine", "java", "python", "web_services" ]
stackoverflow_0001195893_google_app_engine_java_python_web_services.txt
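A short sketch of the minidom approach from the last answer, wrapped in a webapp handler; the handler name and XML shape are illustrative:

from xml.dom import minidom
from google.appengine.ext import webapp

class XmlHandler(webapp.RequestHandler):
    def get(self):
        doc = minidom.Document()
        root = doc.createElement('myXml')
        doc.appendChild(root)
        node = doc.createElement('node')
        node.setAttribute('id', '1')
        root.appendChild(node)
        # text/xml with UTF-8, as the answer recommends
        self.response.headers['Content-Type'] = 'text/xml; charset=utf-8'
        self.response.out.write(doc.toxml('utf-8'))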
Q: Python Printing StdOut As It Received I'm trying to run wrap a simple (windows) command line tool up in a PyQt GUI app that I am writing. The problem I have is that the command line tool throws it's progress out to stdout (it's a server reset command so you get "Attempting to stop" and "Restarting" type output. What I am trying to do is capture the output so I can display it as part of my app. I assumed it would be quite simple to do something like the following : import os import subprocess as sub cmd = "COMMAND LINE APP NAME -ARGS" proc = sub.Popen(cmd, shell=True, stdout=sub.PIPE).stdout while 1: line = proc.readline() if not line: break print line This partially works in that I do get the contents of StdOut but instead of as the progress messages are sent I get it once the command line application exits and it seems to flush StdOut in one go. Is there a simple answer? A: Interactive communication through stdin/stdout is a common problem. You're in luck though, with PyQt you can use QProcess, as described here: http://diotavelli.net/PyQtWiki/Capturing_Output_from_a_Process A: Do I understand the question? I believe you're running something like "echo first; sleep 60; echo second" and you want see the "first" well-ahead of the "second", but they're both spitting out at the same time. The reason you're having issues is that the operating system stores the output of processes in its memory. It will only take the trouble of sending the output to your program if the buffer has filled, or the other program has ended. So, we need to dig into the O/S and figure out how to tell it "Hey, gimme that!" This is generally known as asynchronous or non-blocking mode. Luckily someone has done the hard work for us. This guy has added a send() and recv() method to the python built-in Popen class. It also looks like he fixed the bugs that people found in the comments. Try it out: http://code.activestate.com/recipes/440554/
Python Printing StdOut As It Received
I'm trying to wrap a simple (windows) command line tool up in a PyQt GUI app that I am writing. The problem I have is that the command line tool throws its progress out to stdout (it's a server reset command, so you get "Attempting to stop" and "Restarting" type output). What I am trying to do is capture the output so I can display it as part of my app. I assumed it would be quite simple to do something like the following:

import os
import subprocess as sub

cmd = "COMMAND LINE APP NAME -ARGS"
proc = sub.Popen(cmd, shell=True, stdout=sub.PIPE).stdout
while 1:
    line = proc.readline()
    if not line:
        break
    print line

This partially works in that I do get the contents of stdout, but instead of receiving the progress messages as they are sent, I get them all at once when the command line application exits; it seems to flush stdout in one go. Is there a simple answer?
[ "Interactive communication through stdin/stdout is a common problem.\nYou're in luck though, with PyQt you can use QProcess, as described here:\nhttp://diotavelli.net/PyQtWiki/Capturing_Output_from_a_Process\n", "Do I understand the question?\nI believe you're running something like \"echo first; sleep 60; echo second\" and you want see the \"first\" well-ahead of the \"second\", but they're both spitting out at the same time.\nThe reason you're having issues is that the operating system stores the output of processes in its memory. It will only take the trouble of sending the output to your program if the buffer has filled, or the other program has ended. So, we need to dig into the O/S and figure out how to tell it \"Hey, gimme that!\" This is generally known as asynchronous or non-blocking mode. \nLuckily someone has done the hard work for us.\nThis guy has added a send() and recv() method to the python built-in Popen class.\nIt also looks like he fixed the bugs that people found in the comments.\nTry it out:\nhttp://code.activestate.com/recipes/440554/\n" ]
[ 1, 0 ]
[]
[]
[ "python", "stdout" ]
stackoverflow_0001152160_python_stdout.txt
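For the PyQt question above, a rough sketch of the QProcess route from the first answer; it assumes the Qt event loop is running (as it would be in the asker's GUI app), and the command string is the placeholder from the question:

from PyQt4 import QtCore

proc = QtCore.QProcess()

def on_output():
    # fires each time the child writes to stdout, so lines arrive as they are sent
    print str(proc.readAllStandardOutput())

QtCore.QObject.connect(proc, QtCore.SIGNAL('readyReadStandardOutput()'), on_output)
proc.start('COMMAND LINE APP NAME -ARGS')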
Q: Python: How to get rid of circular dependency involving decorator? I had got a following case of circular import (here severly simplified): array2image.py conversion module: import tuti @tuti.log_exec_time # can't do that, evaluated at definition time def convert(arr): '''Convert array to image.''' return image.fromarray(arr) tuti.py test utils module: import array2image def log_exec_time(f): '''A small decorator not using array2image''' def debug_image(arr): image = array2image.convert(arr) image = write('somewhere') It failed with NameError. This didn't look right to me, as there was really no circular dependency there. I was looking for a neat way to avoid that or an explanation... and half way through writing this question I found it. Moving the import below the decorator in tuti.py resolves NameError: def log_exec_time(f): '''A small decorator not using array2image''' import array2image def debug_image(arr): image = array2image.convert(arr) image = write('somewhere') A: The answer you came up with is a valid solution. However, if you were that worried about circular dependencies, I would say that log_exec_time would belong in its own file since its not dependent on anything else in tuti.py.
Python: How to get rid of circular dependency involving decorator?
I ran into the following case of circular import (here severely simplified):

array2image.py conversion module:

import tuti

@tuti.log_exec_time  # can't do that, evaluated at definition time
def convert(arr):
    '''Convert array to image.'''
    return image.fromarray(arr)

tuti.py test utils module:

import array2image

def log_exec_time(f):
    '''A small decorator not using array2image'''

def debug_image(arr):
    image = array2image.convert(arr)
    image.write('somewhere')

It failed with NameError. This didn't look right to me, as there was really no circular dependency there. I was looking for a neat way to avoid that or an explanation... and half way through writing this question I found it. Moving the import below the decorator in tuti.py resolves the NameError:

def log_exec_time(f):
    '''A small decorator not using array2image'''

import array2image

def debug_image(arr):
    image = array2image.convert(arr)
    image.write('somewhere')
[ "The answer you came up with is a valid solution. \nHowever, if you were that worried about circular dependencies, I would say that log_exec_time would belong in its own file since its not dependent on anything else in tuti.py.\n" ]
[ 4 ]
[]
[]
[ "circular_dependency", "decorator", "python" ]
stackoverflow_0001198188_circular_dependency_decorator_python.txt
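A sketch of the layout the answer suggests, with the decorator moved into its own module; the file name timing.py and the wrapper body are made up:

# timing.py -- depends on nothing else in the project
def log_exec_time(f):
    '''A small decorator not using array2image'''
    def wrapper(*args, **kwargs):
        # timing/logging logic would go here
        return f(*args, **kwargs)
    return wrapper

# array2image.py
import timing

@timing.log_exec_time
def convert(arr):
    '''Convert array to image.'''
    return image.fromarray(arr)  # as in the question's sketch

# tuti.py -- test utils, now free to import array2image at the top
import array2image

def debug_image(arr):
    image = array2image.convert(arr)
    image.write('somewhere')

With this split, tuti.py no longer participates in the import cycle at all.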
Q: Tkinter locks Python when an icon is loaded and tk.mainloop is in a thread Here's the test case... import Tkinter as tk import thread from time import sleep if __name__ == '__main__': t = tk.Tk() thread.start_new_thread(t.mainloop, ()) # t.iconbitmap('icon.ico') b = tk.Button(text='test', command=exit) b.grid(row=0) while 1: sleep(1) This code works. Uncomment the t.iconbitmap line and it locks. Re-arrange it any way you like; it will lock. How do I prevent tk.mainloop locking the GIL when there is an icon present? The target is win32 and Python 2.6.2. A: I believe you should not execute the main loop on a different thread. AFAIK, the main loop should be executed on the same thread that created the widget. The GUI toolkits that I am familiar with (Tkinter, .NET Windows Forms) are that way: You can manipulate the GUI from one thread only. On Linux, your code raises an exception: self.tk.mainloop(n) RuntimeError: Calling Tcl from different appartment One of the following will work (no extra threads): if __name__ == '__main__': t = tk.Tk() t.iconbitmap('icon.ico') b = tk.Button(text='test', command=exit) b.grid(row=0) t.mainloop() With extra thread: def threadmain(): t = tk.Tk() t.iconbitmap('icon.ico') b = tk.Button(text='test', command=exit) b.grid(row=0) t.mainloop() if __name__ == '__main__': thread.start_new_thread(threadmain, ()) while 1: sleep(1) If you need to do communicate with tkinter from outside the tkinter thread, I suggest you set up a timer that checks a queue for work. Here is an example: import Tkinter as tk import thread from time import sleep import Queue request_queue = Queue.Queue() result_queue = Queue.Queue() def submit_to_tkinter(callable, *args, **kwargs): request_queue.put((callable, args, kwargs)) return result_queue.get() t = None def threadmain(): global t def timertick(): try: callable, args, kwargs = request_queue.get_nowait() except Queue.Empty: pass else: print "something in queue" retval = callable(*args, **kwargs) result_queue.put(retval) t.after(500, timertick) t = tk.Tk() t.configure(width=640, height=480) b = tk.Button(text='test', name='button', command=exit) b.place(x=0, y=0) timertick() t.mainloop() def foo(): t.title("Hello world") def bar(button_text): t.children["button"].configure(text=button_text) def get_button_text(): return t.children["button"]["text"] if __name__ == '__main__': thread.start_new_thread(threadmain, ()) trigger = 0 while 1: trigger += 1 if trigger == 3: submit_to_tkinter(foo) if trigger == 5: submit_to_tkinter(bar, "changed") if trigger == 7: print submit_to_tkinter(get_button_text) sleep(1)
Tkinter locks Python when an icon is loaded and tk.mainloop is in a thread
Here's the test case...

import Tkinter as tk
import thread
from time import sleep

if __name__ == '__main__':
    t = tk.Tk()
    thread.start_new_thread(t.mainloop, ())
    # t.iconbitmap('icon.ico')
    b = tk.Button(text='test', command=exit)
    b.grid(row=0)
    while 1:
        sleep(1)

This code works. Uncomment the t.iconbitmap line and it locks. Re-arrange it any way you like; it will lock. How do I prevent tk.mainloop locking the GIL when there is an icon present? The target is win32 and Python 2.6.2.
[ "I believe you should not execute the main loop on a different thread. AFAIK, the main loop should be executed on the same thread that created the widget. \nThe GUI toolkits that I am familiar with (Tkinter, .NET Windows Forms) are that way: You can manipulate the GUI from one thread only.\nOn Linux, your code raises an exception:\n\nself.tk.mainloop(n)\nRuntimeError: Calling Tcl from different appartment\n\nOne of the following will work (no extra threads):\nif __name__ == '__main__':\n t = tk.Tk()\n t.iconbitmap('icon.ico')\n\n b = tk.Button(text='test', command=exit)\n b.grid(row=0)\n\n t.mainloop()\n\nWith extra thread:\ndef threadmain():\n t = tk.Tk()\n t.iconbitmap('icon.ico')\n b = tk.Button(text='test', command=exit)\n b.grid(row=0)\n t.mainloop()\n\n\nif __name__ == '__main__':\n thread.start_new_thread(threadmain, ())\n\n while 1:\n sleep(1)\n\n\nIf you need to do communicate with tkinter from outside the tkinter thread, I suggest you set up a timer that checks a queue for work.\nHere is an example:\nimport Tkinter as tk\nimport thread\nfrom time import sleep\nimport Queue\n\nrequest_queue = Queue.Queue()\nresult_queue = Queue.Queue()\n\ndef submit_to_tkinter(callable, *args, **kwargs):\n request_queue.put((callable, args, kwargs))\n return result_queue.get()\n\nt = None\ndef threadmain():\n global t\n\n def timertick():\n try:\n callable, args, kwargs = request_queue.get_nowait()\n except Queue.Empty:\n pass\n else:\n print \"something in queue\"\n retval = callable(*args, **kwargs)\n result_queue.put(retval)\n\n t.after(500, timertick)\n\n t = tk.Tk()\n t.configure(width=640, height=480)\n b = tk.Button(text='test', name='button', command=exit)\n b.place(x=0, y=0)\n timertick()\n t.mainloop()\n\ndef foo():\n t.title(\"Hello world\")\n\ndef bar(button_text):\n t.children[\"button\"].configure(text=button_text)\n\ndef get_button_text():\n return t.children[\"button\"][\"text\"]\n\nif __name__ == '__main__':\n thread.start_new_thread(threadmain, ())\n\n trigger = 0\n while 1:\n trigger += 1\n\n if trigger == 3:\n submit_to_tkinter(foo)\n\n if trigger == 5:\n submit_to_tkinter(bar, \"changed\")\n\n if trigger == 7:\n print submit_to_tkinter(get_button_text)\n\n sleep(1)\n\n" ]
[ 22 ]
[]
[]
[ "green_threads", "python", "tkinter", "winapi" ]
stackoverflow_0001198262_green_threads_python_tkinter_winapi.txt
Q: django one to many issue at the admin panel Greetings, I have these 2 models: from django.db import models class Office(models.Model): name = models.CharField(max_length=30) person = models.CharField(max_length=30) phone = models.CharField(max_length=20) fax = models.CharField(max_length=20) address = models.CharField(max_length=100) def __unicode__(self): return self.name class Province(models.Model): numberPlate = models.IntegerField(primary_key=True) name = models.CharField(max_length=20) content = models.TextField() office = models.ForeignKey(Office) def __unicode__(self): return self.name I want to be able to select several Offices for Provinces, which is a one to many model. Here is my admin.py: from harita.haritaapp.models import Province, Office from django.contrib import admin class ProvinceCreator(admin.ModelAdmin): list_display = ['name', 'numberPlate','content','office'] class OfficeCreator(admin.ModelAdmin): list_display = ['name','person','phone','fax','address'] admin.site.register(Province, ProvinceCreator) admin.site.register(Office, OfficeCreator) Right now, I am able to select one Office per Province at the admin panel while creating a new Province but I want to be able to select more than one. How can I achieve this? Regards A: I'm not sure if I'm misunderstanding you, but your models currently say "an office can be associated with many provinces, but each province may only have one office". This contradicts what you want. Use a ManyToMany field instead: class Province(models.Model): numberPlate = models.IntegerField(primary_key=True) name = models.CharField(max_length=20) content = models.TextField() office = models.ManyToManyField(Office) def __unicode__(self): return self.name
django one to many issue at the admin panel
Greetings, I have these 2 models:

from django.db import models

class Office(models.Model):
    name = models.CharField(max_length=30)
    person = models.CharField(max_length=30)
    phone = models.CharField(max_length=20)
    fax = models.CharField(max_length=20)
    address = models.CharField(max_length=100)
    def __unicode__(self):
        return self.name

class Province(models.Model):
    numberPlate = models.IntegerField(primary_key=True)
    name = models.CharField(max_length=20)
    content = models.TextField()
    office = models.ForeignKey(Office)
    def __unicode__(self):
        return self.name

I want to be able to select several Offices for Provinces, which is a one to many relation. Here is my admin.py:

from harita.haritaapp.models import Province, Office
from django.contrib import admin

class ProvinceCreator(admin.ModelAdmin):
    list_display = ['name', 'numberPlate','content','office']

class OfficeCreator(admin.ModelAdmin):
    list_display = ['name','person','phone','fax','address']

admin.site.register(Province, ProvinceCreator)
admin.site.register(Office, OfficeCreator)

Right now, I am able to select one Office per Province at the admin panel while creating a new Province, but I want to be able to select more than one. How can I achieve this? Regards
[ "I'm not sure if I'm misunderstanding you, but your models currently say \"an office can be associated with many provinces, but each province may only have one office\". This contradicts what you want. Use a ManyToMany field instead:\nclass Province(models.Model):\n numberPlate = models.IntegerField(primary_key=True)\n name = models.CharField(max_length=20)\n content = models.TextField()\n office = models.ManyToManyField(Office)\n def __unicode__(self):\n return self.name\n\n" ]
[ 2 ]
[]
[]
[ "django", "django_admin", "django_models", "python" ]
stackoverflow_0001198589_django_django_admin_django_models_python.txt
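If the ManyToManyField from the answer is adopted, the admin class needs a matching tweak: a ManyToManyField cannot be listed in list_display, and filter_horizontal gives a friendlier multi-select widget. A sketch:

class ProvinceCreator(admin.ModelAdmin):
    list_display = ['name', 'numberPlate', 'content']  # 'office' dropped: M2M fields are not allowed here
    filter_horizontal = ('office',)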
Q: Provide discount to preferred customer with Satchmo? I am new to Satchmo -- picked it up because I needed payment processing for site subscriptions and physical product. My site will have two classes of users: paid subscribers and free users. Both can order a physical product. Paid subscribers get an automatic discount on all orders. I don't see a configuration for this in the admin. (Discount looks like it would apply to all users. If I'm missing something here, let me know.) So what's the best place to automatically override the price depending on the user class? The displayed price should show up, say, 10% less for subscribers everywhere in the site, not just at the checkout. Thanks. A: Check out the tiered pricing module
Provide discount to preferred customer with Satchmo?
I am new to Satchmo -- picked it up because I needed payment processing for site subscriptions and physical product. My site will have two classes of users: paid subscribers and free users. Both can order a physical product. Paid subscribers get an automatic discount on all orders. I don't see a configuration for this in the admin. (Discount looks like it would apply to all users. If I'm missing something here, let me know.) So what's the best place to automatically override the price depending on the user class? The displayed price should show up, say, 10% less for subscribers everywhere in the site, not just at the checkout. Thanks.
[ "Check out the tiered pricing module\n" ]
[ 2 ]
[]
[]
[ "django", "python", "satchmo" ]
stackoverflow_0000647257_django_python_satchmo.txt
Q: How to set the encoding for the tables' char columns in django? I have a project written in Django. All fields that are supposed to store some strings are supposed to be in UTF-8, however, when I run manage.py syncdb all respective columns are created with cp1252 character set (where did it get that -- I have no idea) and I have to manually update every column... Is there a way to tell Django to create all those columns with UTF-8 encoding in the first place? BTW, I use MySQL. A: Django does not specify charset and collation in CREATE TABLE statements. Everything is determined by database charset. Doing ALTER DATABASE ... CHARACTER SET utf8 COLLATE utf8_general_ci before running syncdb should help. For connection, Django issues SET NAMES utf8 automatically, so you don't need to worry about default connection charset settings. A: Django’s database backends automatically handles Unicode strings into the appropriate encoding and talk to the database. You don’t need to tell Django what encoding your database uses. It handles it well, by using you database's encoding. I don't see any way you can tell django to create a column, using some specific encoding. As it appears to me, there is absolutely some previous MySQL configuration affecting you. And despite of doing it manually for all column, use these. CREATE DATABASE db_name [[DEFAULT] CHARACTER SET charset_name] [[DEFAULT] COLLATE collation_name] ALTER DATABASE db_name [[DEFAULT] CHARACTER SET charset_name] [[DEFAULT] COLLATE collation_name] A: What is your MySQL encoding set to? For example, try the following from the command line: mysqld --verbose --help | grep character-set If it doesn't output utf8, then you'll need to set the output in my.cnf: [mysqld] character-set-server=utf8 default-collation=utf8_unicode_ci [client] default-character-set=utf8 This page has some more information: http://www.zulutown.com/blog/tag/character-set/
How to set the encoding for the tables' char columns in django?
I have a project written in Django. All fields that are supposed to store some strings are supposed to be in UTF-8, however, when I run manage.py syncdb all respective columns are created with cp1252 character set (where did it get that -- I have no idea) and I have to manually update every column... Is there a way to tell Django to create all those columns with UTF-8 encoding in the first place? BTW, I use MySQL.
[ "Django does not specify charset and collation in CREATE TABLE statements. Everything is determined by database charset. Doing ALTER DATABASE ... CHARACTER SET utf8 COLLATE utf8_general_ci before running syncdb should help.\nFor connection, Django issues SET NAMES utf8 automatically, so you don't need to worry about default connection charset settings.\n", "Django’s database backends automatically handles Unicode strings into the appropriate encoding and talk to the database. You don’t need to tell Django what encoding your database uses. It handles it well, by using you database's encoding.\nI don't see any way you can tell django to create a column, using some specific encoding.\nAs it appears to me, there is absolutely some previous MySQL configuration affecting you.\nAnd despite of doing it manually for all column, use these.\nCREATE DATABASE db_name\n [[DEFAULT] CHARACTER SET charset_name]\n [[DEFAULT] COLLATE collation_name]\n\nALTER DATABASE db_name\n [[DEFAULT] CHARACTER SET charset_name]\n [[DEFAULT] COLLATE collation_name]\n\n", "What is your MySQL encoding set to?\nFor example, try the following from the command line:\n mysqld --verbose --help | grep character-set\n\nIf it doesn't output utf8, then you'll need to set the output in my.cnf:\n[mysqld]\ncharacter-set-server=utf8\ndefault-collation=utf8_unicode_ci\n\n[client]\ndefault-character-set=utf8\n\nThis page has some more information: \n\nhttp://www.zulutown.com/blog/tag/character-set/\n\n" ]
[ 20, 4, 2 ]
[]
[]
[ "django", "mysql", "python" ]
stackoverflow_0001198486_django_mysql_python.txt
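For the charset question above, the connection side can also be pinned explicitly from Django. This is a sketch of the old-style (Django 1.0-era) settings; I believe DATABASE_OPTIONS is forwarded to MySQLdb.connect(), but treat that as an assumption to verify against your Django version:

# settings.py (hypothetical excerpt)
DATABASE_ENGINE = 'mysql'
DATABASE_OPTIONS = {'charset': 'utf8'}  # assumed to be passed through to MySQLdb.connect()

As the first answer notes, Django normally issues SET NAMES utf8 itself, so this is only a belt-and-braces measure.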
Q: What is the multiplatform alternative to subprocess.getstatusoutput (older commands.setstatusoutput() from Python? The code below is outdated in Python 3.0 by being replaced by subprocess.getstatusoutput(). import commands (ret, out) = commands.getstatusoutput('some command') print ret print out The real question is what's the multiplatform alternative to this command from Python because the above code does fail ugly under Windows because getstatusoutput is supported only under Unix and Python does not tell you this, instead you get something like: >test.py 1 '{' is not recognized as an internal or external command, operable program or batch file. A: I wouldn't really consider this multiplatform, but you can use subprocess.Popen: import subprocess pipe = subprocess.Popen('dir', stdout=subprocess.PIPE, shell=True, universal_newlines=True) output = pipe.stdout.readlines() sts = pipe.wait() print sts print output Here's a drop-in replacement for getstatusoutput: def getstatusoutput(cmd): """Return (status, output) of executing cmd in a shell.""" """This new implementation should work on all platforms.""" import subprocess pipe = subprocess.Popen(cmd, shell=True, universal_newlines=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) output = str.join("", pipe.stdout.readlines()) sts = pipe.wait() if sts is None: sts = 0 return sts, output This snippet was proposed by the original poster. I made some changes since getstatusoutput duplicates stderr onto stdout. The problem is that dir isn't really a multiplatform call but subprocess.Popen allows you to execute shell commands on any platform. I would steer clear of using shell commands unless you absolutely need to. Investigate the contents of the os, os.path, and shutil packages instead. import os import os.path for rel_name in os.listdir(os.curdir): abs_name = os.path.join(os.curdir, rel_name) if os.path.isdir(abs_name): print('DIR: ' + rel_name) elif os.path.isfile(abs_name): print('FILE: ' + rel_name) else: print('UNK? ' + rel_name) A: This would be the multiplatform implementation for getstatusoutput(): def getstatusoutput(cmd): """Return (status, output) of executing cmd in a shell.""" """This new implementation should work on all platforms.""" import subprocess pipe = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True, universal_newlines=True) output = "".join(pipe.stdout.readlines()) sts = pipe.returncode if sts is None: sts = 0 return sts, output A: getstatusoutput docs say it runs the command like so: { cmd } 2>&1 Which obviously doesn't work with cmd.exe (the 2>&1 works fine if you need it though). You can use Popen as above, but also include the parameter 'stderr=subprocess.STDOUT' to get the same behaviour as getstatusoutput. My tests on Windows had returncode set to None though, which is not ideal if you're counting on the return value.
What is the multiplatform alternative to subprocess.getstatusoutput (the older commands.getstatusoutput()) from Python?
The code below is outdated in Python 3.0, having been replaced by subprocess.getstatusoutput().

import commands
(ret, out) = commands.getstatusoutput('some command')
print ret
print out

The real question is: what is the multiplatform alternative to this command in Python? The above code fails in an ugly way under Windows, because getstatusoutput is supported only under Unix, and Python does not tell you this; instead you get something like:

>test.py
1
'{' is not recognized as an internal or external command, operable program or batch file.
[ "I wouldn't really consider this multiplatform, but you can use subprocess.Popen:\nimport subprocess\npipe = subprocess.Popen('dir', stdout=subprocess.PIPE, shell=True, universal_newlines=True)\noutput = pipe.stdout.readlines()\nsts = pipe.wait()\nprint sts\nprint output\n\n\nHere's a drop-in replacement for getstatusoutput:\ndef getstatusoutput(cmd): \n \"\"\"Return (status, output) of executing cmd in a shell.\"\"\"\n \"\"\"This new implementation should work on all platforms.\"\"\"\n import subprocess\n pipe = subprocess.Popen(cmd, shell=True, universal_newlines=True,\n stdout=subprocess.PIPE, stderr=subprocess.STDOUT)\n output = str.join(\"\", pipe.stdout.readlines()) \n sts = pipe.wait()\n if sts is None:\n sts = 0\n return sts, output\n\nThis snippet was proposed by the original poster. I made some changes since getstatusoutput duplicates stderr onto stdout. \n\nThe problem is that dir isn't really a multiplatform call but subprocess.Popen allows you to execute shell commands on any platform. I would steer clear of using shell commands unless you absolutely need to. Investigate the contents of the os, os.path, and shutil packages instead.\nimport os\nimport os.path\nfor rel_name in os.listdir(os.curdir):\n abs_name = os.path.join(os.curdir, rel_name)\n if os.path.isdir(abs_name):\n print('DIR: ' + rel_name)\n elif os.path.isfile(abs_name):\n print('FILE: ' + rel_name)\n else:\n print('UNK? ' + rel_name)\n\n", "This would be the multiplatform implementation for getstatusoutput(): \ndef getstatusoutput(cmd): \n \"\"\"Return (status, output) of executing cmd in a shell.\"\"\"\n \"\"\"This new implementation should work on all platforms.\"\"\"\n import subprocess\n pipe = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True, universal_newlines=True) \n output = \"\".join(pipe.stdout.readlines()) \n sts = pipe.returncode\n if sts is None: sts = 0\n return sts, output\n\n", "getstatusoutput docs say it runs the command like so:\n{ cmd } 2>&1\nWhich obviously doesn't work with cmd.exe (the 2>&1 works fine if you need it though).\nYou can use Popen as above, but also include the parameter 'stderr=subprocess.STDOUT' to get the same behaviour as getstatusoutput.\nMy tests on Windows had returncode set to None though, which is not ideal if you're counting on the return value.\n" ]
[ 8, 8, 1 ]
[]
[]
[ "process", "python", "redirect" ]
stackoverflow_0001193583_process_python_redirect.txt
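A quick usage check for the drop-in replacement above; the command is picked per platform:

import os
sts, output = getstatusoutput('dir' if os.name == 'nt' else 'ls -l')
print sts     # 0 on success on both platforms
print output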
Q: installed module on python editor How can I import modules/packages in the Python 3.0 editor IDLE? And can we see all the modules contained in/included with it? A: >>> import idle >>> dir(idle)
installed module on python editor
How can I import modules/packages in the Python 3.0 editor IDLE? And can we see all the modules contained in/included with it?
[ ">>> import idle\n>>> dir(idle)\n\n" ]
[ 1 ]
[]
[]
[ "editor", "python" ]
stackoverflow_0001199249_editor_python.txt
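An alternative worth knowing for the question above: at the IDLE (or any Python 3.0) prompt, help('modules') lists every importable module on the system, which answers the "can we see all the modules" part directly; note it can take a while to run:

>>> help('modules')   # enumerate all installed/importable modules
>>> import tkinter    # then import whichever module you need
>>> dir(tkinter)      # and inspect its contents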
Q: Structuring model layout with regards to parents? How does one go about structuring his db.Models effectively? For instance, lets say I have a model for Countries, with properties like "name, northern_hemisphere(boolean), population, states (list of states), capital(boolean). And another model called State or county or something with properties "name, population, cities(list of cities). And another model called Cities, with properties "name, capital(boolean), distance_from_capital, population. Obviously I need the Cities to store data related to certain States, and thus States needs data related to the specific Country. In my States model I would have California, Colorado etc, and each of those has to have a specific list of Cities. How does one structure his models so they are related somehow? I am very new to MVC so am struggling conceptually. Is it possible to use the class(parent) constructor? A: If you want to store relational data in the datastore of Google App Engine, this is a great article to start out with: Modeling Entity Relationships. You use ReferenceProperty to specify a relationship between two models: class Country(db.Model): name = db.StringProperty(required=True) class State(db.Model): country = db.ReferenceProperty(Country, collection_name='states') name = db.StringProperty(required=True) class City(db.Model): state = db.ReferenceProperty(State, collection_name='cities') name = db.StringProperty(required=True) Instances of your Country model will automatically get a new property called states which will be a query to get all related State entities. Same thing with the State model for cities. Its automatically created cities property will be a query to get all related City entities. How to use: # Create a new country: us = Country(name='USA') us.put() # Create a new state ca = State(name='California', country=us) ca.put() # Create a new city la = City(name='Los Angeles', state=ca) la.put() # And another sf = City(name='San Francisco', state=ca) sf.put() # Print states for state in us.states: print state.name # Print cities for city in state.cities: print ' ' + city.name Should output: California Los Angeles San Francisco
Structuring model layout with regards to parents?
How does one go about structuring his db.Models effectively? For instance, let's say I have a model for Countries, with properties like "name, northern_hemisphere(boolean), population, states (list of states), capital(boolean)". And another model called State or county or something with properties "name, population, cities(list of cities)". And another model called Cities, with properties "name, capital(boolean), distance_from_capital, population". Obviously I need the Cities to store data related to certain States, and thus States needs data related to the specific Country. In my States model I would have California, Colorado etc, and each of those has to have a specific list of Cities. How does one structure his models so they are related somehow? I am very new to MVC so am struggling conceptually. Is it possible to use the class(parent) constructor?
[ "If you want to store relational data in the datastore of Google App Engine, this is a great article to start out with: Modeling Entity Relationships.\nYou use ReferenceProperty to specify a relationship between two models:\nclass Country(db.Model):\n name = db.StringProperty(required=True)\n\nclass State(db.Model):\n country = db.ReferenceProperty(Country, collection_name='states')\n name = db.StringProperty(required=True)\n\nclass City(db.Model):\n state = db.ReferenceProperty(State, collection_name='cities')\n name = db.StringProperty(required=True)\n\nInstances of your Country model will automatically get a new property called states which will be a query to get all related State entities. Same thing with the State model for cities. Its automatically created cities property will be a query to get all related City entities.\nHow to use:\n# Create a new country:\nus = Country(name='USA')\nus.put()\n\n# Create a new state\nca = State(name='California', country=us)\nca.put()\n\n# Create a new city\nla = City(name='Los Angeles', state=ca)\nla.put()\n\n# And another\nsf = City(name='San Francisco', state=ca)\nsf.put()\n\n# Print states\nfor state in us.states:\n print state.name\n\n # Print cities\n for city in state.cities:\n print ' ' + city.name\n\nShould output:\nCalifornia\n Los Angeles\n San Francisco\n\n" ]
[ 4 ]
[]
[]
[ "django_models", "google_app_engine", "python" ]
stackoverflow_0001199713_django_models_google_app_engine_python.txt
Q: How do I use Python serverside with shared hosting? I've been told by my hosting company that Python is installed on their servers. How would I go about using it to output a simple HTML page? This is just as a learning exercise at the moment, but one day I'd like to use Python in the same way as I currently use PHP. A: When I used shared hosting I found that if I renamed the file to .py and prefixed it with a shebang line then it would be executed as Python. #!/usr/bin/python Was probably pretty bad practice, but it did work. Don't expect to be able to spit out any extensive web apps with it though. A: There are many ways you could do this: assuming the server architecture supports the WSGI standard, then you could do something as simple as plugging in your own handler to generate the HTML (this could just build a string manually if you want to be really straightforward, or else use a template engine like Mako. An example of a WSGI handler might look like this: class HelloWorldHandler(object): def __call__(self, environ, start_response): """ Processes a request for a webpage """ start_response('200 OK', [('Content-Type', 'text/html')]) return "<p>Hello world!</p>" Obviously it is up to you what you return or how you generate it: as I say, for more complex pages a templating engine might be useful. The more involved way would be to leverage a more complete framework, such as paste or turbogears but if you only want to output a static page or two this could be overkill. A: If your server is running Apache HTTP server, then you need something like mod_wsgi or mod_python installed and running as a module (your server signature may tell you this). Once running, you may need to add a handler to your apache config, or a default may be setup. After that, look at the documentation for the middleware of the module you are running, then maybe go on and use something like Django. A: You can't use it "the same way" as PHP. There are however tons of ways of doing it like Python. Look into the likes of Turbogears or Django. Or BFG of you want something minimalistic, or WSGI (via mod_wsgi) if you want to go directly to the basics. http://www.djangoproject.com/ http://bfg.repoze.org/ http://turbogears.org/ http://www.wsgi.org/wsgi/
How do I use Python serverside with shared hosting?
I've been told by my hosting company that Python is installed on their servers. How would I go about using it to output a simple HTML page? This is just as a learning exercise at the moment, but one day I'd like to use Python in the same way as I currently use PHP.
[ "When I used shared hosting I found that if I renamed the file to .py and prefixed it with a shebang line then it would be executed as Python.\n#!/usr/bin/python\n\nWas probably pretty bad practice, but it did work. Don't expect to be able to spit out any extensive web apps with it though.\n", "There are many ways you could do this: assuming the server architecture supports the WSGI standard, then you could do something as simple as plugging in your own handler to generate the HTML (this could just build a string manually if you want to be really straightforward, or else use a template engine like Mako.\nAn example of a WSGI handler might look like this:\nclass HelloWorldHandler(object):\n\n def __call__(self, environ, start_response):\n \"\"\"\n Processes a request for a webpage\n \"\"\"\n start_response('200 OK', [('Content-Type', 'text/html')])\n return \"<p>Hello world!</p>\"\n\nObviously it is up to you what you return or how you generate it: as I say, for more complex pages a templating engine might be useful.\nThe more involved way would be to leverage a more complete framework, such as paste or turbogears but if you only want to output a static page or two this could be overkill.\n", "If your server is running Apache HTTP server, then you need something like mod_wsgi or mod_python installed and running as a module (your server signature may tell you this).\nOnce running, you may need to add a handler to your apache config, or a default may be setup.\nAfter that, look at the documentation for the middleware of the module you are running, then maybe go on and use something like Django.\n", "You can't use it \"the same way\" as PHP. There are however tons of ways of doing it like Python.\nLook into the likes of Turbogears or Django. Or BFG of you want something minimalistic, or WSGI (via mod_wsgi) if you want to go directly to the basics.\nhttp://www.djangoproject.com/\nhttp://bfg.repoze.org/\nhttp://turbogears.org/\nhttp://www.wsgi.org/wsgi/\n" ]
[ 3, 2, 2, 1 ]
[]
[]
[ "python", "server_side" ]
stackoverflow_0001199703_python_server_side.txt
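A minimal CGI script matching the first answer's shebang approach; Python 2 syntax, since most 2009 shared hosts ran Python 2, and the interpreter path may differ per host:

#!/usr/bin/python
# save e.g. as hello.py and make it executable where the host allows CGI

print "Content-Type: text/html"
print
print "<html><body><p>Hello from Python!</p></body></html>"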
Q: Python: Mapping from intervals to values I'm refactoring a function that, given a series of endpoints that implicitly define intervals, checks if a number is included in the interval, and then return a corresponding (not related in any computable way). The code that is now handling the work is: if p <= 100: return 0 elif p > 100 and p <= 300: return 1 elif p > 300 and p <= 500: return 2 elif p > 500 and p <= 800: return 3 elif p > 800 and p <= 1000: return 4 elif p > 1000: return 5 Which is IMO quite horrible, and lacks in that both the intervals and the return values are hardcoded. Any use of any data structure is of course possible. A: import bisect bisect.bisect_left([100,300,500,800,1000], p) here the docs: bisect A: You could try a take on this: def check_mapping(p): mapping = [(100, 0), (300, 1), (500, 2)] # Add all your values and returns here for check, value in mapping: if p <= check: return value print check_mapping(12) print check_mapping(101) print check_mapping(303) produces: 0 1 2 As always in Python, there will be any better ways to do it. A: It is indeed quite horrible. Without a requirement to have no hardcoding, it should have been written like this: if p <= 100: return 0 elif p <= 300: return 1 elif p <= 500: return 2 elif p <= 800: return 3 elif p <= 1000: return 4 else: return 5 Here are examples of creating a lookup function, both linear and using binary search, with the no-hardcodings requirement fulfilled, and a couple of sanity checks on the two tables: def make_linear_lookup(keys, values): assert sorted(keys) == keys assert len(values) == len(keys) + 1 def f(query): return values[sum(1 for key in keys if query > key)] return f import bisect def make_bisect_lookup(keys, values): assert sorted(keys) == keys assert len(values) == len(keys) + 1 def f(query): return values[bisect.bisect_left(keys, query)] return f A: Try something along the lines of: d = {(None,100): 0, (100,200): 1, ... (1000, None): 5} value = 300 # example value for k,v in d.items(): if (k[0] is None or value > k[0]) and (k[1] is None or value <= k[1]): return v A: def which_interval(endpoints, number): for n, endpoint in enumerate(endpoints): if number <= endpoint: return n previous = endpoint return n + 1 Pass your endpoints as a list in endpoints, like this: which_interval([100, 300, 500, 800, 1000], 5) Edit: The above is a linear search. Glenn Maynard's answer will have better performance, since it uses a bisection algorithm. A: Another way ... def which(lst, p): return len([1 for el in lst if p > el]) lst = [100, 300, 500, 800, 1000] which(lst, 2) which(lst, 101) which(lst, 1001)
Python: Mapping from intervals to values
I'm refactoring a function that, given a series of endpoints that implicitly define intervals, checks if a number is included in the interval, and then returns a corresponding value (not related in any computable way). The code that is now handling the work is:
if p <= 100:
    return 0
elif p > 100 and p <= 300:
    return 1
elif p > 300 and p <= 500:
    return 2
elif p > 500 and p <= 800:
    return 3
elif p > 800 and p <= 1000:
    return 4
elif p > 1000:
    return 5

Which is IMO quite horrible, and inflexible in that both the intervals and the return values are hardcoded. Any use of any data structure is of course possible.
[ "import bisect\nbisect.bisect_left([100,300,500,800,1000], p)\n\nhere the docs: bisect\n", "You could try a take on this:\ndef check_mapping(p):\n mapping = [(100, 0), (300, 1), (500, 2)] # Add all your values and returns here\n\n for check, value in mapping:\n if p <= check:\n return value\n\nprint check_mapping(12)\nprint check_mapping(101)\nprint check_mapping(303)\n\nproduces:\n0\n1\n2\n\nAs always in Python, there will be any better ways to do it.\n", "It is indeed quite horrible. Without a requirement to have no hardcoding, it should have been written like this:\nif p <= 100:\n return 0\nelif p <= 300:\n return 1\nelif p <= 500:\n return 2\nelif p <= 800:\n return 3\nelif p <= 1000:\n return 4\nelse:\n return 5\n\nHere are examples of creating a lookup function, both linear and using binary search, with the no-hardcodings requirement fulfilled, and a couple of sanity checks on the two tables:\ndef make_linear_lookup(keys, values):\n assert sorted(keys) == keys\n assert len(values) == len(keys) + 1\n def f(query):\n return values[sum(1 for key in keys if query > key)]\n return f\n\nimport bisect\ndef make_bisect_lookup(keys, values):\n assert sorted(keys) == keys\n assert len(values) == len(keys) + 1\n def f(query):\n return values[bisect.bisect_left(keys, query)]\n return f\n\n", "Try something along the lines of:\nd = {(None,100): 0, \n (100,200): 1,\n ...\n (1000, None): 5}\nvalue = 300 # example value\nfor k,v in d.items():\n if (k[0] is None or value > k[0]) and (k[1] is None or value <= k[1]):\n return v\n\n", "def which_interval(endpoints, number):\n for n, endpoint in enumerate(endpoints):\n if number <= endpoint:\n return n\n previous = endpoint\n return n + 1\n\nPass your endpoints as a list in endpoints, like this:\nwhich_interval([100, 300, 500, 800, 1000], 5)\n\nEdit:\nThe above is a linear search. Glenn Maynard's answer will have better performance, since it uses a bisection algorithm.\n", "Another way ...\ndef which(lst, p): \n return len([1 for el in lst if p > el])\n\nlst = [100, 300, 500, 800, 1000]\nwhich(lst, 2)\nwhich(lst, 101)\nwhich(lst, 1001)\n\n" ]
[ 53, 3, 3, 0, 0, 0 ]
[]
[]
[ "intervals", "python", "range" ]
stackoverflow_0001199053_intervals_python_range.txt
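To make the accepted bisect answer concrete, here is a small runnable sketch (variable names are illustrative) that maps the question's breakpoints onto its return values:
import bisect

# One more value than breakpoints, since bisect_left can return
# len(breakpoints) when p exceeds the last endpoint.
breakpoints = [100, 300, 500, 800, 1000]
values = [0, 1, 2, 3, 4, 5]

def grade(p):
    # bisect_left finds the index of the first breakpoint >= p, which
    # reproduces the original "p <= 100 -> 0, ..., p > 1000 -> 5" chain.
    return values[bisect.bisect_left(breakpoints, p)]

assert grade(100) == 0 and grade(101) == 1
assert grade(1000) == 4 and grade(1001) == 5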
Q: Django (?) really slow with large datasets after doing some python profiling I was comparing an old PHP script of mine versus the newer, fancier Django version and the PHP one, with full spitting out of HTML and all, was functioning faster. MUCH faster, to the point that something has to be wrong on the Django one.
First, some context: I have a page that spits out reports of sales data. The data can be filtered by a number of things but is mostly filtered by date. This makes it a bit hard to cache it as the possibilities for results are nearly endless. There are a lot of numbers and calculations done but it was never much of a problem to handle within PHP.
UPDATES:
After some additional testing there is nothing within my view that is causing the slowdown. If I am simply number-crunching the data and spitting out 5 rows of rendered HTML, it's not that slow (still slower than PHP), but if I am rendering a lot of data, it's VERY slow.
Whenever I ran a large report (e.g. all sales for the year), the CPU usage of the machine goes to 100%. Don't know if this means much. I am using mod_python and Apache. Perhaps switching to WSGI may help?
My template tags that show the subtotals/totals process anywhere from 0.1 seconds to 1 second for really large sets. I call them about 6 times within the report so they don't seem like the biggest issue.
Now, I ran a Python profiler and came back with these results:
Ordered by: internal time
List reduced from 3074 to 20 due to restriction

  ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
 2939417   26.290    0.000   44.857    0.000  /usr/lib/python2.5/tokenize.py:212(generate_tokens)
 2822655   17.049    0.000   17.049    0.000  {built-in method match}
 1689928   15.418    0.000   23.297    0.000  /usr/lib/python2.5/decimal.py:515(__new__)
12289605   11.464    0.000   11.464    0.000  {isinstance}
  882618    9.614    0.000   25.518    0.000  /usr/lib/python2.5/decimal.py:1447(_fix)
   17393    8.742    0.001   60.798    0.003  /usr/lib/python2.5/tokenize.py:158(tokenize_loop)
      11    7.886    0.717    7.886    0.717  {method 'accept' of '_socket.socket' objects}
  365577    7.854    0.000   30.233    0.000  /usr/lib/python2.5/decimal.py:954(__add__)
 2922024    7.199    0.000    7.199    0.000  /usr/lib/python2.5/inspect.py:571(tokeneater)
  438750    5.868    0.000   31.033    0.000  /usr/lib/python2.5/decimal.py:1064(__mul__)
   60799    5.666    0.000    9.377    0.000  /usr/lib/python2.5/site-packages/django/db/models/base.py:241(__init__)
   17393    4.734    0.000    4.734    0.000  {method 'query' of '_mysql.connection' objects}
 1124348    4.631    0.000    8.469    0.000  /usr/lib/python2.5/site-packages/django/utils/encoding.py:44(force_unicode)
  219076    4.139    0.000  156.618    0.001  /usr/lib/python2.5/site-packages/django/template/__init__.py:700(_resolve_lookup)
 1074478    3.690    0.000   11.096    0.000  /usr/lib/python2.5/decimal.py:5065(_convert_other)
 2973281    3.424    0.000    3.424    0.000  /usr/lib/python2.5/decimal.py:718(__nonzero__)
  759014    2.962    0.000    3.371    0.000  /usr/lib/python2.5/decimal.py:4675(__init__)
  381756    2.806    0.000  128.447    0.000  /usr/lib/python2.5/site-packages/django/db/models/fields/related.py:231(__get__)
  842130    2.764    0.000    3.557    0.000  /usr/lib/python2.5/decimal.py:3339(_dec_from_triple)

tokenize.py comes out on top, which can make some sense as I am doing a lot of number formatting. Decimal.py makes sense since the report is essentially 90% numbers. I have no clue what the built-in method match is as I am not doing any Regex or similar in my own code (something Django is doing?). The closest thing is I am using itertools ifilter.
It seems those are the main culprits and if I could figure out how to reduce the processing time of those then I would have a much much faster page.
Does anyone have any suggestions on how I could start on reducing this? I don't really know how I would fix the tokenize/decimal issues without simply removing them.
Update: I ran some tests with/without filters on most of the data and the result times pretty much came back the same, the latter being a bit faster but not much to be the cause of the issue. What exactly is going on in tokenize.py?
A: There are a lot of things to assume about your problem as you don't have any type of code sample.
Here are my assumptions: You are using Django's built-in ORM tools and models (i.e. sales-data = modelobj.objects().all()) and on the PHP side you are dealing with direct SQL queries and working with a query_set.
Django is doing a lot of type converting and casting to datatypes going from a database query into the ORM/Model object and the associated manager (objects() by default).
In PHP you are controlling the conversions and know exactly how to cast from one data type to another, you are saving some execution time based on that issue alone.
I would recommend trying to move some of that fancy number work into the database, especially if you are doing record-set based processing - databases eat that kind of processing for breakfast. In Django you can send RAW SQL over to the database: http://docs.djangoproject.com/en/dev/topics/db/sql/#topics-db-sql
I hope this at least can get you pointed in the right direction...
A: "tokenize.py comes out on top, which can make some sense as I am doing a lot of number formatting."
Makes no sense at all.
See http://docs.python.org/library/tokenize.html.
The tokenize module provides a lexical scanner for Python source code, implemented in Python.
Tokenize coming out on top means that you have dynamic code parsing going on.
AFAIK (doing a search on the Django repository) Django does not use tokenize. So that leaves your program doing some kind of dynamic code instantiation. Or, you're only profiling the first time your program is loaded, parsed and run, leading to false assumptions about where the time is going.
You should not ever do calculation in template tags -- it's slow. It involves a complex meta-evaluation of the template tag. You should do all calculations in the view in simple, low-overhead Python. Use the templates for presentation only.
Also, if you're constantly doing queries, filters, sums, and what-not, you have a data warehouse. Get a book on data warehouse design, and follow the data warehouse design patterns.
You must have a central fact table, surrounded by dimension tables. This is very, very efficient.
Sums, group bys, etc., can be done as defaultdict operations in Python. Bulk fetch all the rows, building the dictionary with the desired results. If this is too slow, then you have to use data warehousing techniques of saving persistent sums and groups separate from your fine-grained facts. Often this involves stepping outside the Django ORM and using RDBMS features like views or tables of derived data.
A: When dealing with large sets of data, you can also save a lot of CPU and memory by using the ValuesQuerySet that accesses the query results more directly instead of creating a model object instance for each row in the result.
Its usage looks a bit like this:
Blog.objects.order_by('id').values()

A: In such a scenario the database is often the bottleneck. Also, using an ORM might result in sub-optimal SQL queries.
As some pointed out it's not possible to tell what the problem really is, just with the information you provided.
I just can give you some general advice:
If your view is working with related model objects, consider using select_related(). This simple method might speed up the queries generated by the ORM considerably.
Use the Debug Footer Middleware to see what SQL queries are generated by your views and what time they took to execute.
PS: Just fyi, I once had a fairly simple view which was very slow. After installing the Debug Footer Middleware I saw that around 500! sql queries were executed in that single view. Just using select_related() brought that down to 5 queries and the view performed as expected.
Django (?) really slow with large datasets after doing some python profiling
I was comparing an old PHP script of mine versus the newer, fancier Django version and the PHP one, with full spitting out of HTML and all, was functioning faster. MUCH faster, to the point that something has to be wrong on the Django one.
First, some context: I have a page that spits out reports of sales data. The data can be filtered by a number of things but is mostly filtered by date. This makes it a bit hard to cache it as the possibilities for results are nearly endless. There are a lot of numbers and calculations done but it was never much of a problem to handle within PHP.
UPDATES:
After some additional testing there is nothing within my view that is causing the slowdown. If I am simply number-crunching the data and spitting out 5 rows of rendered HTML, it's not that slow (still slower than PHP), but if I am rendering a lot of data, it's VERY slow.
Whenever I ran a large report (e.g. all sales for the year), the CPU usage of the machine goes to 100%. Don't know if this means much. I am using mod_python and Apache. Perhaps switching to WSGI may help?
My template tags that show the subtotals/totals process anywhere from 0.1 seconds to 1 second for really large sets. I call them about 6 times within the report so they don't seem like the biggest issue.
Now, I ran a Python profiler and came back with these results:
Ordered by: internal time
List reduced from 3074 to 20 due to restriction

  ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
 2939417   26.290    0.000   44.857    0.000  /usr/lib/python2.5/tokenize.py:212(generate_tokens)
 2822655   17.049    0.000   17.049    0.000  {built-in method match}
 1689928   15.418    0.000   23.297    0.000  /usr/lib/python2.5/decimal.py:515(__new__)
12289605   11.464    0.000   11.464    0.000  {isinstance}
  882618    9.614    0.000   25.518    0.000  /usr/lib/python2.5/decimal.py:1447(_fix)
   17393    8.742    0.001   60.798    0.003  /usr/lib/python2.5/tokenize.py:158(tokenize_loop)
      11    7.886    0.717    7.886    0.717  {method 'accept' of '_socket.socket' objects}
  365577    7.854    0.000   30.233    0.000  /usr/lib/python2.5/decimal.py:954(__add__)
 2922024    7.199    0.000    7.199    0.000  /usr/lib/python2.5/inspect.py:571(tokeneater)
  438750    5.868    0.000   31.033    0.000  /usr/lib/python2.5/decimal.py:1064(__mul__)
   60799    5.666    0.000    9.377    0.000  /usr/lib/python2.5/site-packages/django/db/models/base.py:241(__init__)
   17393    4.734    0.000    4.734    0.000  {method 'query' of '_mysql.connection' objects}
 1124348    4.631    0.000    8.469    0.000  /usr/lib/python2.5/site-packages/django/utils/encoding.py:44(force_unicode)
  219076    4.139    0.000  156.618    0.001  /usr/lib/python2.5/site-packages/django/template/__init__.py:700(_resolve_lookup)
 1074478    3.690    0.000   11.096    0.000  /usr/lib/python2.5/decimal.py:5065(_convert_other)
 2973281    3.424    0.000    3.424    0.000  /usr/lib/python2.5/decimal.py:718(__nonzero__)
  759014    2.962    0.000    3.371    0.000  /usr/lib/python2.5/decimal.py:4675(__init__)
  381756    2.806    0.000  128.447    0.000  /usr/lib/python2.5/site-packages/django/db/models/fields/related.py:231(__get__)
  842130    2.764    0.000    3.557    0.000  /usr/lib/python2.5/decimal.py:3339(_dec_from_triple)

tokenize.py comes out on top, which can make some sense as I am doing a lot of number formatting. Decimal.py makes sense since the report is essentially 90% numbers. I have no clue what the built-in method match is as I am not doing any Regex or similar in my own code (something Django is doing?). The closest thing is I am using itertools ifilter.
It seems those are the main culprits and if I could figure out how to reduce the processing time of those then I would have a much much faster page.
Does anyone have any suggestions on how I could start on reducing this? I don't really know how I would fix the tokenize/decimal issues without simply removing them.
Update: I ran some tests with/without filters on most of the data and the result times pretty much came back the same, the latter being a bit faster but not much to be the cause of the issue. What exactly is going on in tokenize.py?
[ "There is a lot of things to assume about your problem as you don't have any type of code sample.\nHere are my assumptions: You are using Django's built-in ORM tools and models (i.e. sales-data = modelobj.objects().all() ) and on the PHP side you are dealing with direct SQL queries and working with a query_set.\nDjango is doing a lot of type converting and casting to datatypes going from a database query into the ORM/Model object and the associated manager (objects() by default).\nIn PHP you are controlling the conversions and know exactly how to cast from one data type to another, you are saving some execution time based on that issue alone.\nI would recommend trying to move some of that fancy number work into the database, especially if you are doing record-set based processing - databases eat that kind of processing from breakfast. In Django you can send RAW SQL over to the database: http://docs.djangoproject.com/en/dev/topics/db/sql/#topics-db-sql\nI hope this at least can get you pointed in the right direction...\n", "\"tokenize.py comes out on top, which can make some sense as I am doing a lot of number formatting. \"\nMakes no sense at all.\nSee http://docs.python.org/library/tokenize.html.\n\nThe tokenize module provides a lexical\n scanner for Python source code,\n implemented in Python\n\nTokenize coming out on top means that you have dynamic code parsing going on.\nAFAIK (doing a search on the Django repository) Django does not use tokenize. So that leaves your program doing some kind of dynamic code instantiation. Or, you're only profiling the first time your program is loaded, parsed and run, leading to false assumptions about where the time is going.\nYou should not ever do calculation in template tags -- it's slow. It involves a complex meta-evaluation of the template tag. You should do all calculations in the view in simple, low-overhead Python. Use the templates for presentation only.\nAlso, if you're constantly doing queries, filters, sums, and what-not, you have a data warehouse. Get a book on data warehouse design, and follow the data warehouse design patterns.\nYou must have a central fact table, surrounded by dimension tables. This is very, very efficient. \nSums, group bys, etc., are can be done as defaultdict operations in Python. Bulk fetch all the rows, building the dictionary with the desired results. If this is too slow, then you have to use data warehousing techniques of saving persistent sums and groups separate from your fine-grained facts. Often this involves stepping outside the Django ORM and using RDBMS features like views or tables of derived data.\n", "When dealing with large sets of data, you can also save a lot of CPU and memory by using the ValuesQuerySet that accesses the query results more directly instead of creating a model object instance for each row in the result.\nIt's usage looks a bit like this:\nBlog.objects.order_by('id').values()\n\n", "In such a scenario the database is often the bottleneck. Also, using an ORM might result in sub-optimal SQL queries.\nAs some pointed out it's not possible to tell what the probem really is, just with the information you provided.\nI just can give you some general advice:\n\nIf your view is working with related model objects, consider using select_related(). 
This simple method might speed up the queries generated by the ORM considerably.\nUse the Debug Footer Middleware to see what SQL queries are generated by your views and what time they took to execute.\n\nPS: Just fyi, I had once a fairly simple view which was very slow. After installing the Debug Footer Middleware I saw that around 500! sql queries were executed in that single view. Just using select_related() brought that down to 5 queries and the view performed as expected.\n" ]
[ 7, 2, 2, 1 ]
[]
[]
[ "django", "optimization", "python" ]
stackoverflow_0001173798_django_optimization_python.txt
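A rough sketch of the bulk-fetch-and-aggregate advice from the answers above. Sale, company and amount are hypothetical names standing in for the asker's sales model; the point is the shape of the approach, not a drop-in fix:
from collections import defaultdict

# Fetch only the needed columns (no model instances get built),
# then do the summing in plain Python instead of in template tags.
totals = defaultdict(int)
for company, amount in Sale.objects.values_list('company', 'amount'):
    totals[company] += amount  # int + Decimal stays a Decimal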
Q: How to import python module in a shared folder? I have some python modules in a shared folder on a Windows machine. The file is \mtl12366150\test\mymodule.py. os.path.exists tells me this path is valid. I appended to sys.path the folder \mtl12366150\test (and os.path.exists tells me this path is valid). When I try to import mymodule I get an error saying the module doesn't exist. Is there a way to import modules that are located in a shared path?
A: Did you forget to use a raw string, or escape the backslashes, in your additional sys.path component? Remember that "\t" is a tab, whereas r"\t" or "\\t" are a backslash followed by a tab.
In most applications you are actually better off using forward slashes rather than backslashes even for Windows paths, and most Windows APIs will accept them just fine. Otherwise, be careful to use raw strings!
[There is no need to add __init__.py files in the directories above a simple Python module]
A: You say os.path.exists() says the path is there, but are you absolutely sure you escaped the \? Try this:
sys.path.append('\\mtl12366150\\test')

A: To import a python item there needs to be an __init__.py file in each of the folders above it to show that it is a valid python package.
The __init__.py files can be empty, they are there just to show structure.
\mtl12366150
    __init__.py
    \test
        __init__.py
        \mymodule.py

A: As S.Lott said, the best approach is to set the PYTHONPATH environment variable. I don't have a windows box handy, but it would look something like this from your command prompt:
c:> SET PYTHONPATH=c:\mtl12366150\test
c:> python
>>> import mymodule
>>>

A: I think I found the answer. I was using Python 2.6.1 and with Python 2.6.2 it now works. I had the same faulty behavior with python 2.5.4.
How to import python module in a shared folder?
I have some python modules in a shared folder on a Windows machine. The file is \mtl12366150\test\mymodule.py. os.path.exists tells me this path is valid. I appended to sys.path the folder \mtl12366150\test (and os.path.exists tells me this path is valid). When I try to import mymodule I get an error saying the module doesn't exist. Is there a way to import modules that are located in a shared path?
[ "Did you forget to use a raw string, or escape the backslashes, in your additional sys.path component? Remember that \"\\t\" is a tab, whereas r\"\\t\" or \"\\t\" are a backslash followed by a tab.\nIn most applications you are actually better off using forward slashes rather than backslashes even for Windows paths, and most Windows APIs will accept them just fine. Otherwise, be careful to use raw strings!\n[There is no need to add __init__.py files in the directories above a simple Python module]\n", "You say os.path.exists() says the path is there, but are you absolutely sure you escaped the \\? Try this:\nsys.path.append('\\\\mtl12366150\\\\tes')\n\n", "To import a python item there needs to be a __init__.py file in each of the folder above it to show that it is a valid python package.\nThe __init__.py files can be empty, they are there just to show structure.\n\\mtl12366150\n __init__.py\n \\test\n __init__.py\n \\mymodule.py\n\n", "As S.Lott said, the best approach is to set the PYTHONPATH environment variable. I don't have a windows box handy, but it would look something like this from your command prompt:\nc:> SET PYTHONPATH=c:\\mtl12366150\\test\nc:> python\n>>> import mymodule\n>>>\n\n", "I think I found the answer. I was using Python 2.6.1 and with Python 2.6.2 it now works. I had the same faulty behavior with python 2.5.4.\n" ]
[ 1, 1, 0, 0, 0 ]
[ "\"I appended to sys.path ...\"\nPlease don't.\nSet the PYTHONPATH environment variable from outside your application.\n" ]
[ -2 ]
[ "directory", "import", "python", "shared" ]
stackoverflow_0001196708_directory_import_python_shared.txt
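A short sketch of the raw-string advice from the first answer. The share path is the asker's; mymodule is their module, so the import at the end is only illustrative:
import os, sys

# '\t' in a normal string literal is a tab character, not backslash-t.
# Raw strings (or doubled backslashes) keep the path intact:
shared = r'\mtl12366150\test'    # raw string
same = '\\mtl12366150\\test'     # escaped backslashes
assert shared == same

if os.path.exists(shared) and shared not in sys.path:
    sys.path.append(shared)
    import mymodule  # the asker's module living on the share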
Q: Using Storm: ImportError: No module named local As stated in the Storm documentation, I am doing the following to import the necessary symbols for using Storm:
from storm.locals import *

I'm using it alongside Pylons, and storm is indeed installed as an egg in the virtual Python environment which Pylons set up for me, and it also searches the correct paths. However, when the import code above is evaluated, the following exception is being thrown:
ImportError: No module named local

But I'm not explicitly including anything from a module named 'local', but 'locals'.
Update (traceback)
URL: http://localhost:5000/characters/index
File '/home/andy/pylon-env/lib/python2.6/site-packages/WebError-0.10.1-py2.6.egg/weberror/evalexception.py', line 431 in respond
  app_iter = self.application(environ, detect_start_response)
File '/home/andy/pylon-env/lib/python2.6/site-packages/Beaker-1.3.1-py2.6.egg/beaker/middleware.py', line 70 in __call__
  return self.app(environ, start_response)
File '/home/andy/pylon-env/lib/python2.6/site-packages/Beaker-1.3.1-py2.6.egg/beaker/middleware.py', line 149 in __call__
  return self.wrap_app(environ, session_start_response)
File '/home/andy/pylon-env/lib/python2.6/site-packages/Routes-1.10.3-py2.6.egg/routes/middleware.py', line 130 in __call__
  response = self.app(environ, start_response)
File '/home/andy/pylon-env/lib/python2.6/site-packages/Pylons-0.9.7-py2.6.egg/pylons/wsgiapp.py', line 124 in __call__
  controller = self.resolve(environ, start_response)
File '/home/andy/pylon-env/lib/python2.6/site-packages/Pylons-0.9.7-py2.6.egg/pylons/wsgiapp.py', line 263 in resolve
  return self.find_controller(controller)
File '/home/andy/pylon-env/lib/python2.6/site-packages/Pylons-0.9.7-py2.6.egg/pylons/wsgiapp.py', line 284 in find_controller
  __import__(full_module_name)
File '/home/andy/projects/evecharacters/evecharacters/controllers/characters.py', line 9 in <module>
  from storm.local import *
ImportError: No module named local
A: Here's the code that's failing.
File '/home/andy/projects/evecharacters/evecharacters/controllers/characters.py', line 9 in <module>
  from storm.local import *
ImportError: No module named local

You claim your snippet is
from storm.locals import *

But the error traceback says
from storm.local import *

I'm betting the traceback is right and the file
/home/andy/projects/evecharacters/evecharacters/controllers/characters.py', line 9
has the incorrect code from storm.local import *. Not the code you wish that it had.
Using Storm: ImportError: No module named local
As stated in the Storm documentation, I am doing the following to import the necessary symbols for using Storm:
from storm.locals import *

I'm using it alongside Pylons, and storm is indeed installed as an egg in the virtual Python environment which Pylons set up for me, and it also searches the correct paths. However, when the import code above is evaluated, the following exception is being thrown:
ImportError: No module named local

But I'm not explicitly including anything from a module named 'local', but 'locals'.
Update (traceback)
URL: http://localhost:5000/characters/index
File '/home/andy/pylon-env/lib/python2.6/site-packages/WebError-0.10.1-py2.6.egg/weberror/evalexception.py', line 431 in respond
  app_iter = self.application(environ, detect_start_response)
File '/home/andy/pylon-env/lib/python2.6/site-packages/Beaker-1.3.1-py2.6.egg/beaker/middleware.py', line 70 in __call__
  return self.app(environ, start_response)
File '/home/andy/pylon-env/lib/python2.6/site-packages/Beaker-1.3.1-py2.6.egg/beaker/middleware.py', line 149 in __call__
  return self.wrap_app(environ, session_start_response)
File '/home/andy/pylon-env/lib/python2.6/site-packages/Routes-1.10.3-py2.6.egg/routes/middleware.py', line 130 in __call__
  response = self.app(environ, start_response)
File '/home/andy/pylon-env/lib/python2.6/site-packages/Pylons-0.9.7-py2.6.egg/pylons/wsgiapp.py', line 124 in __call__
  controller = self.resolve(environ, start_response)
File '/home/andy/pylon-env/lib/python2.6/site-packages/Pylons-0.9.7-py2.6.egg/pylons/wsgiapp.py', line 263 in resolve
  return self.find_controller(controller)
File '/home/andy/pylon-env/lib/python2.6/site-packages/Pylons-0.9.7-py2.6.egg/pylons/wsgiapp.py', line 284 in find_controller
  __import__(full_module_name)
File '/home/andy/projects/evecharacters/evecharacters/controllers/characters.py', line 9 in <module>
  from storm.local import *
ImportError: No module named local
[ "Here's the code that's failing.\nFile '/home/andy/projects/evecharacters/evecharacters/controllers/characters.py', line 9 in <module>\n from storm.local import *\nImportError: No module named local\n\nYou claim your snippet is\nfrom storm.locals import *\n\nBut the error traceback says\nfrom storm.local import *\n\nI'm betting the traceback is right and the file\n/home/andy/projects/evecharacters/evecharacters/controllers/characters.py', line 9\nhas the incorrect code from storm.local import *. Not the code you wish that it had.\n" ]
[ 1 ]
[]
[]
[ "python", "storm_orm" ]
stackoverflow_0001199415_python_storm_orm.txt
Q: pygame is screwing up ctypes import mymodule, ctypes
#import pygame

foo = ctypes.cdll.MyDll.foo
print 'success'

If I uncomment the import pygame this fails with WindowsError: [Errno 182] The operating system cannot load %1. The stack frame is in ctypes python code, trying to load MyDll. Win32 error code 182 is ERROR_INVALID_ORDINAL. If the pygame import is not there, the script runs successfully.
Update: If I run it outside the debugger, the %1 is filled with 'libpng13.dll', which is in the working directory and referenced by MyDll, and pygame is certainly loading some version of libpng. I have no idea how I would resolve this.
A: This sounds like a dll conflict. It seems that import pygame loads some dll that is not compatible with a dll that MyDll needs.
You should try to debug this with sysinternals ProcessExplorer, it can show which dlls a process has loaded; look for different dlls in both cases.
Another useful tool to debug dll problems is the dependencywalker, from www.dependencywalker.com
A: Update for the record: I believe there were multiple versions of libpng being loaded by different modules (pygame, and mydll). I used multiprocessing to separate the two modules and everything's dandy.
pygame is screwing up ctypes
import mymodule, ctypes
#import pygame

foo = ctypes.cdll.MyDll.foo
print 'success'

If I uncomment the import pygame this fails with WindowsError: [Errno 182] The operating system cannot load %1. The stack frame is in ctypes python code, trying to load MyDll. Win32 error code 182 is ERROR_INVALID_ORDINAL. If the pygame import is not there, the script runs successfully.
Update: If I run it outside the debugger, the %1 is filled with 'libpng13.dll', which is in the working directory and referenced by MyDll, and pygame is certainly loading some version of libpng. I have no idea how I would resolve this.
[ "This sounds like a dll conflict. It seems that import pygame loads some dll that is not compatible with a dll that MyDll needs.\nYou should try to debug this with sysinternals ProcessExplorer, it can show which dlls a process has loaded; look for different dlls in both cases.\nAnother usefull tool to debug dll problems is the dependencywalker, from www.dependencywalker.com\n", "Update for the record: I believe there were multiple versions of libpng being loaded by different modules (pygame, and mydll). I used multiprocessing to separate the two modules and everything's dandy.\n" ]
[ 2, 2 ]
[]
[]
[ "ctypes", "pygame", "python", "winapi" ]
stackoverflow_0000686798_ctypes_pygame_python_winapi.txt
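A rough sketch of the multiprocessing workaround mentioned in the update: load the conflicting DLL in a child process, so pygame's libpng and MyDll's libpng never share one address space. MyDll and foo are the asker's names; everything else is illustrative:
import ctypes
from multiprocessing import Process, Queue

def call_foo(q):
    # Runs in a fresh process: only MyDll's libpng gets loaded here.
    q.put(ctypes.cdll.MyDll.foo())

if __name__ == '__main__':
    # pygame is imported only in the parent, inside the main guard, so the
    # child process (which re-imports this module on Windows) never loads it.
    import pygame

    q = Queue()
    p = Process(target=call_foo, args=(q,))
    p.start()
    result = q.get()  # foo()'s return value, marshalled back to the parent
    p.join()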
Q: What's the best way of putting tabular data into Python? I have a CSV file which I am processing and putting the processed data into a text file. The entire data that goes into the text file is one big table (comma separated instead of space). My problem is: how do I remember the column into which a piece of data goes in the text file? For example, assume there is a column called 'col'. I just put some data under col. Now after a few iterations, I want to put some other piece of data under col again (in a different row). How do I know where exactly col comes? (And there are a lot of columns like this.) Hope I am not too vague...
A: Go with a list of lists. That is:
[[col1, col2, col3, col4], # Row 1
 [col1, col2, col3, col4], # Row 2
 [col1, col2, col3, col4], # Row 3
 [col1, col2, col3, col4]] # Row 4

To modify a specific column, you can transform this into a list of columns with a single statement:
>>> cols = zip(*rows)
>>> cols
[[row1, row2, row3, row4], # Col 1
 [row1, row2, row3, row4], # Col 2
 [row1, row2, row3, row4], # Col 3
 [row1, row2, row3, row4]] # Col 4

A: Python's CSV library has a function named DictReader that allows you to view and manipulate the data as a Python dictionary, which allows you to use normal iterative tools.
A: Is SQLite an option for you? I know that you have CSV input and output. However, you can import all the data into the SQLite database. Then do all the necessary processing with the power of SQL. Then you can export the results as CSV.
A: Probably either a dict of lists or a list of dicts. Personally, I'd go with the former. So, parse the heading row of the CSV to get a dict from column heading to column index. Then when you're reading through each row, work out what index you're at, grab the column heading, and then append to the end of the list for that column heading.
A: Good question, I have this problem very frequently.
In general, to handle csv files like that, I prefer to use R, which has a data.frame object specifically designed for this.
In python, you can have a look at this library called datamatrix:
http://github.com/cswegger/datamatrix/tree/master
Or maybe at numpy/scipy's matrixes.
Named tuples are another alternative that has been thought of for parsing csv files, but they are not based on the concept of a matrix:
http://code.activestate.com/recipes/500261/
A: Your situation is kind of vague, but I'll try to answer your question, "How do I remember the column into which a piece of data goes in the text file?"
One way is to store a list of rows as dictionaries.
Note: I usually use tab-delimited text files, so forgive me if I'm forgetting something about csv formatting.
input_file = open('input.csv', 'r')

# ['col1', 'col2', 'col3']
headers = input_file.readline().strip().split(',')
stored_rows = []
for line in input_file:
    row_data = line.strip().split(',')
    stored_rows.append(dict(zip(headers, row_data)))

Now each row has a value for each column, which you can then process and output in whatever order you need.
output_headers = ['col3', 'col1', 'col2']
output_file = open('output.csv', 'w')
output_file.write(','.join(output_headers) + '\n')
for row in stored_rows:
    # do any processing you need here
    row['col1'] = row['col1'].strip().lower() #for example

    # write the data to your output file in the order you want it
    output_file.write(','.join(map(row.get, output_headers)) + '\n')
What's the best way of putting tabular data into Python?
I have a CSV file which I am processing and putting the processed data into a text file. The entire data that goes into the text file is one big table (comma separated instead of space). My problem is: how do I remember the column into which a piece of data goes in the text file? For example, assume there is a column called 'col'. I just put some data under col. Now after a few iterations, I want to put some other piece of data under col again (in a different row). How do I know where exactly col comes? (And there are a lot of columns like this.) Hope I am not too vague...
[ "Go with a list of lists. That is:\n[[col1, col2, col3, col4], # Row 1\n [col1, col2, col3, col4], # Row 2\n [col1, col2, col3, col4], # Row 3\n [col1, col2, col3, col4]] # Row 4\n\nTo modify a specific column, you can transform this into a list of columns with a single statement:\n>>> cols = zip(*rows)\n>>> cols\n[[row1, row2, row3, row4], # Col 1\n [row1, row2, row3, row4], # Col 2\n [row1, row2, row3, row4], # Col 3\n [row1, row2, row3, row4]] # Col 4\n\n", "Python's CSV library has a function named DictReader that allow you to view and manipulate the data as a Python dictionary, which allows you to use normal iterative tools.\n", "Is SQLite an option for you? I know that you have CSV input and output. However, you can import all the data into the SQLite database. Then do all the necessary processing with the power of SQL. Then you can export the results as CSV. \n", "Probably either a dict of list or a list of dict. Personally, I'd go with the former. So, parse the heading row of the CSV to get a dict from column heading to column index. Then when you're reading through each row, work out what index you're at, grab the column heading, and then append to the end of the list for that column heading.\n", "Good question, I have this problem very frequently.\nIn general, to handle csv files like that, I prefer to use R which is a data.frame object specifically designed for this.\nIn python, you can have a look at this library called datamatrix:\n\nhttp://github.com/cswegger/datamatrix/tree/master\n\nOr maybe at numpy/scipy's matrixes.\nNamed tuples are another alternative which has been tought for parsing csv files, but they are not pbased on the concept of a matrix:\n\nhttp://code.activestate.com/recipes/500261/\n\n", "Your situation is kind of vague, but I'll try to answer your question, \"How do I remember the column into which a piece of data goes in the text file?\"\nOne way is to store a list of rows as dictionaries. \nNote: I usually use tab-delimited text files, so forgive me if I'm forgetting something about csv formatting.\ninput_file = open('input.csv', 'r')\n\n# ['col1', 'col2', 'col3']\nheaders = input_file.readline().strip().split(',')\nstored_rows = []\nfor line in input_file:\n row_data = line.strip().split(',')\n stored_rows.append(dict(zip(headers, row_data)))\n\nNow each row has a value for each column, which you can then process and output in whatever order you need.\noutput_headers = ['col3', 'col1', 'col2']\noutput_file = open('ouput.csv', 'w')\noutput_file.write(','.join(output_headers) + '\\n')\nfor row in stored_rows:\n # do any processing you need here\n row['col1'] = row['col1'].strip().lower() #for example\n\n # write the data to your output file in the order you want it\n output_file.write(','.join(map(row.get, output_headers)) + '\\n')\n\n" ]
[ 2, 1, 1, 0, 0, 0 ]
[]
[]
[ "csv", "file", "python" ]
stackoverflow_0001199350_csv_file_python.txt
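For completeness, a runnable sketch of the csv.DictReader suggestion in Python 3 style (the file and column names are made up; the original answers predate DictWriter.writeheader):
import csv

# DictReader keys each row by the header line, so the code never has to
# remember which position 'col' occupies.
with open('input.csv', newline='') as f:
    rows = list(csv.DictReader(f))

for row in rows:
    row['col'] = row['col'].strip()  # columns are addressed by name

with open('output.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)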
Q: Python interpolation I have a set of data that looks like:
Table-1
X1    | Y1
------+--------
0.1   | 0.52147
0.02  | 0.8879
0.08  | 0.901
0.11  | 1.55
0.15  | 1.82
0.152 | 1.95

Table-2
X2   | Y2
-----+------
0.2  | 0.11
0.21 | 0.112
0.34 | 0.120
0.33 | 1.121

I have to interpolate the Y2 value from Table-2 for the X1 values from Table-1, i.e., I need to find the values of Y2 for the following values of X:
X1     | Y2
-------+-------
0.1    |
0.02   |
0.08   |
0.11   |
0.15   |
0.152  |

Note: Both Table-1 & 2 have unequal intervals. The number of (X, Y) entries will differ, e.g., here we have 6 (X1, Y1) entries in Table-1 and only 4 (X2, Y2) in Table-2. Which interpolation algorithm should I use in Numpy, and how do I proceed?
A: numpy.interp seems to be the function you want: pass your X1 as the first argument x, your X2 as the second argument xp, your Y2 as the third argument fp, and you'll get the Y values corresponding to the X1 coordinates.
Y2_at_X1 = np.interp(X1, X2, Y2)

I'm assuming you want to completely ignore the existing Y1 values. This is what the above snippet does. Otherwise you'll have to clarify your question to explain what role you might have for Y1!
If you want more than linear interpolation, I suggest you look at scipy.interpolate and its tutorial rather than trying to stretch numpy beyond its simplicity ;-).
Python interpolation
I have a set of data that looks like:
Table-1
X1    | Y1
------+--------
0.1   | 0.52147
0.02  | 0.8879
0.08  | 0.901
0.11  | 1.55
0.15  | 1.82
0.152 | 1.95

Table-2
X2   | Y2
-----+------
0.2  | 0.11
0.21 | 0.112
0.34 | 0.120
0.33 | 1.121

I have to interpolate the Y2 value from Table-2 for the X1 values from Table-1, i.e., I need to find the values of Y2 for the following values of X:
X1     | Y2
-------+-------
0.1    |
0.02   |
0.08   |
0.11   |
0.15   |
0.152  |

Note: Both Table-1 & 2 have unequal intervals. The number of (X, Y) entries will differ, e.g., here we have 6 (X1, Y1) entries in Table-1 and only 4 (X2, Y2) in Table-2. Which interpolation algorithm should I use in Numpy, and how do I proceed?
[ "numpy.interp seems to be the function you want: pass your X1 as the first argument x, your X2 as the second argument xp, your Y2 as the third argument fp, and you'll get the Y values corresponding to the X1 coordinates.\nY2_at_X1 = np.interp(X1, X2, Y2)\n\nI'm assuming you want to completely ignore the existing Y1 values. This is what the above snippet does. Otherwise you'll have to clarify your question to explain what role you might have for Y1!\nIf you want more than linear interpolation, I suggest you look at scipy.interpolate and its tutorial rather than trying to stretch numpy beyond its simplicity ;-).\n" ]
[ 31 ]
[]
[]
[ "interpolation", "numpy", "python" ]
stackoverflow_0001200644_interpolation_numpy_python.txt
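The accepted answer run against the actual tables from the question. Note that np.interp requires xp to be increasing, and Table-2 lists 0.34 before 0.33, so the last two rows are reordered here:
import numpy as np

X1 = [0.1, 0.02, 0.08, 0.11, 0.15, 0.152]
X2 = [0.2, 0.21, 0.33, 0.34]      # sorted ascending, as np.interp requires
Y2 = [0.11, 0.112, 1.121, 0.120]  # reordered to keep each (X2, Y2) pair intact

print(np.interp(X1, X2, Y2))
# Points below min(X2) are clamped to Y2[0], so with this data every X1
# value (all below 0.2) maps to 0.11 -- np.interp does not extrapolate.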
Q: Which events can be bound to a Tkinter Frame? I am making a small application with Tkinter. I would like to clean up a few things in a function called when my window is closed. I am trying to bind the close event of my window with that function. I don't know if it is possible and what is the corresponding sequence. The Python documentation says:
See the bind man page and page 201 of John Ousterhout's book for details.

Unfortunately, I don't have these resources in my hands. Does anybody know the list of events that can be bound?
An alternative solution would be to clean everything in the __del__ of my Frame class. For an unknown reason it seems that it is never called. Does anybody know what the cause could be? Some circular dependencies? As soon as I add a control (uncomment in the code below), the __del__ is not called anymore. Any solution for that problem?
from tkinter import *

class MyDialog(Frame):
    def __init__(self):
        print("hello")
        self.root = Tk()
        self.root.title("Test")
        Frame.__init__(self, self.root)
        self.list = Listbox(self, selectmode=BROWSE)
        self.list.pack(fill=BOTH, expand=1)
        self.pack(fill=BOTH, expand=1)

    def __del__(self):
        print("bye-bye")

dialog = MyDialog()
dialog.root.mainloop()

A: I believe this is the bind man page you may have been looking for; I believe the event you're trying to bind is Destroy. __del__ is not to be relied on (just too hard to know when a circular reference loop, e.g. parent to child widget and back, will stop it from triggering!), using event binding is definitely preferable.
A: A more-or-less definitive resource for events is the bind man page for Tk. I'm not exactly clear what you're wanting to do, but binding on "<Destroy>" is probably the event you are looking for. Whether it does what you really need, I don't know.
 ...
 self.bind("<Destroy>", self.callback)
 ...
 def callback(self, event):
     print("callback called")
Which events can be bound to a Tkinter Frame?
I am making a small application with Tkinter. I would like to clean up a few things in a function called when my window is closed. I am trying to bind the close event of my window with that function. I don't know if it is possible and what is the corresponding sequence. The Python documentation says:
See the bind man page and page 201 of John Ousterhout's book for details.

Unfortunately, I don't have these resources in my hands. Does anybody know the list of events that can be bound?
An alternative solution would be to clean everything in the __del__ of my Frame class. For an unknown reason it seems that it is never called. Does anybody know what the cause could be? Some circular dependencies? As soon as I add a control (uncomment in the code below), the __del__ is not called anymore. Any solution for that problem?
from tkinter import *

class MyDialog(Frame):
    def __init__(self):
        print("hello")
        self.root = Tk()
        self.root.title("Test")
        Frame.__init__(self, self.root)
        self.list = Listbox(self, selectmode=BROWSE)
        self.list.pack(fill=BOTH, expand=1)
        self.pack(fill=BOTH, expand=1)

    def __del__(self):
        print("bye-bye")

dialog = MyDialog()
dialog.root.mainloop()
[ "I believe this is the bind man page you may have been looking for; I believe the event you're trying to bind is Destroy. __del__ is not to be relied on (just too hard to know when a circular reference loop, e.g. parent to child widget and back, will stop it from triggering!), using event binding is definitely preferable.\n", "A more-or-less definitive resource for events is the bind man page for Tk. I'm not exactly clear what you're wanting to do, but binding on \"<Destroy>\" is probably the event you are looking for. Whether it does what you really need, I don't know. \n ...\n self.bind(\"<Destroy>\", self.callback)\n ...\n def callback(self, event):\n print(\"callback called\")\n\n" ]
[ 3, 3 ]
[]
[]
[ "python", "python_3.x", "tkinter" ]
stackoverflow_0001200592_python_python_3.x_tkinter.txt
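A minimal runnable sketch of the <Destroy> binding both answers point to. The guard on event.widget is a precaution: depending on where the binding lives (e.g. on a toplevel), <Destroy> can fire once per child widget being torn down:
import tkinter as tk

class MyDialog(tk.Frame):
    def __init__(self, root):
        super().__init__(root)
        self.pack(fill=tk.BOTH, expand=1)
        self.bind('<Destroy>', self.on_destroy)

    def on_destroy(self, event):
        # Only clean up when it is this frame being destroyed.
        if event.widget is self:
            print('cleaning up')  # close files, save state, etc.

root = tk.Tk()
MyDialog(root)
root.mainloop()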
Q: pylint false positive for superclass __init__ If I derive a class from ctypes.BigEndianStructure, pylint warns if I don't call BigEndianStructure.__init__(). Great, but if I fix my code, pylint still warns:
import ctypes

class Foo(ctypes.BigEndianStructure):
    def __init__(self):
        ctypes.BigEndianStructure.__init__(self)

$ pylint mymodule.py
C:  1: Missing docstring
C:  3:Foo: Missing docstring
W:  4:Foo.__init__: __init__ method from base class 'Structure' is not called
W:  4:Foo.__init__: __init__ method from base class 'BigEndianStructure' is not called
R:  3:Foo: Too few public methods (0/2)

At first I thought this was because Structure comes from a C module. I don't get the warning if I subclass from one of my classes or, say, SocketServer.BaseServer which is pure python. But I also don't get the warning if I subclass from smbus.SMBus, which is in a C module. Anyone know of a workaround other than disabling W0231?
A: Try using the new-style super calls:
class Foo(ctypes.BigEndianStructure):
    def __init__(self):
        super(Foo, self).__init__()
pylint false positive for superclass __init__
If I derive a class from ctypes.BigEndianStructure, pylint warns if I don't call BigEndianStructure.__init__(). Great, but if I fix my code, pylint still warns:
import ctypes

class Foo(ctypes.BigEndianStructure):
    def __init__(self):
        ctypes.BigEndianStructure.__init__(self)

$ pylint mymodule.py
C:  1: Missing docstring
C:  3:Foo: Missing docstring
W:  4:Foo.__init__: __init__ method from base class 'Structure' is not called
W:  4:Foo.__init__: __init__ method from base class 'BigEndianStructure' is not called
R:  3:Foo: Too few public methods (0/2)

At first I thought this was because Structure comes from a C module. I don't get the warning if I subclass from one of my classes or, say, SocketServer.BaseServer which is pure python. But I also don't get the warning if I subclass from smbus.SMBus, which is in a C module. Anyone know of a workaround other than disabling W0231?
[ "Try using the new-style super calls:\nclass Foo(ctypes.BigEndianStructure):\n def __init__(self):\n super(Foo, self).__init__()\n\n" ]
[ 6 ]
[]
[]
[ "pylint", "python" ]
stackoverflow_0001201094_pylint_python.txt
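If the warning still fires after switching to super(), a per-line pragma is a narrower workaround than disabling W0231 project-wide. The pragma syntax is real pylint message control; the explanation in the comment is an assumption about why the checker misfires:
import ctypes

class Foo(ctypes.BigEndianStructure):
    def __init__(self):  # pylint: disable=W0231
        # Structure's __init__ lives in C code that pylint cannot
        # introspect, which is presumably why the check misfires here.
        ctypes.BigEndianStructure.__init__(self)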
Q: Importing files in Python from __init__.py Suppose I have the following structure:
app/
    __init__.py
foo/
    a.py
    b.py
    c.py
    __init__.py

a.py, b.py and c.py share some common imports (logging, os, re, etc). Is it possible to import these three or four common modules from the __init__.py file so I don't have to import them in every one of the files?
Edit: My goal is to avoid having to import 5-6 modules in each file and it's not related to performance reasons.
A: You can do this using a common file such as include.py, but it goes against recommended practices because it involves a wildcard import. Consider the following files:
app/
    __init__.py
foo/
    a.py
    b.py
    c.py
    include.py  <- put the includes here.
    __init__.py

Now, in a.py, etc., do:
from include import *

As stated above, it's not recommended because wildcard-imports are discouraged.
A: No, they have to be put in each module's namespace, so you have to import them somehow (unless you pass logging around as a function argument, which would be a weird way to do things, to say the least).
But the modules are only imported once anyway (and then put into the a, b, and c namespaces), so don't worry about using too much memory or something like that.
You can of course put them into a separate module and import that into each a, b, and c, but this separate module would still have to be imported every time.
A: Yes, but don't do it. Seriously, don't. But if you still want to know how to do it, it'd look like this:
import __init__

re = __init__.re
logging = __init__.logging
os = __init__.os

I say not to do it not only because it's messy and pointless, but also because your package isn't really supposed to use __init__.py like that. It's package initialization code.
Importing files in Python from __init__.py
Suppose I have the following structure:
app/
    __init__.py
foo/
    a.py
    b.py
    c.py
    __init__.py

a.py, b.py and c.py share some common imports (logging, os, re, etc). Is it possible to import these three or four common modules from the __init__.py file so I don't have to import them in every one of the files?
Edit: My goal is to avoid having to import 5-6 modules in each file and it's not related to performance reasons.
[ "You can do this using a common file such as include.py, but it goes against recommended practices because it involves a wildcard import. Consider the following files:\napp/\n __init__.py\nfoo/\n a.py\n b.py\n c.py\n include.py <- put the includes here.\n __init__.py\n\nNow, in a.py, etc., do:\nfrom include import *\n\nAs stated above, it's not recommended because wildcard-imports are discouraged.\n", "No, they have to be put in each module's namespace, so you have to import them somehow (unless you pass logging around as a function argument, which would be a weird way to do things, to say the least).\nBut the modules are only imported once anyway (and then put into the a, b, and c namespaces), so don't worry about using too much memory or something like that.\nYou can of course put them into a separate module and import that into each a, b, and c, but this separate module would still have to be imported everytime.\n", "Yes, but don't do it. Seriously, don't. But if you still want to know how to do it, it'd look like this:\nimport __init__\n\nre = __init__.re\nlogging = __init__.logging\nos = __init__.os\n\nI say not to do it not only because it's messy and pointless, but also because your package isn't really supposed to use __init__.py like that. It's package initialization code.\n" ]
[ 14, 11, 6 ]
[]
[]
[ "import", "module", "python" ]
stackoverflow_0001201115_import_module_python.txt
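If the goal is just to cut repetition without a wildcard import, one middle ground is a tiny shared module that re-exports the common names explicitly. A sketch, with a made-up module name (the two files are shown together for brevity):
# foo/common.py -- one place that names the shared dependencies
import logging
import os
import re

# foo/a.py (and b.py, c.py) -- one import line instead of five or six,
# and the names stay explicit and greppable, unlike "from include import *".
from foo.common import logging, os, re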
Q: Cannot insert data into an sqlite3 database using Python I can successfully use Python to create a database and run the execute() method to create 2 new tables and specify the column names. However, I cannot insert data into the database. This is the code that I am trying to use to insert the data into the database:
#! /usr/bin/env python
import sqlite3

companies = ('GOOG', 'AAPL', 'MSFT')

db = sqlite3.connect('data.db')
c = db.cursor()

for company in companies:
    c.execute('INSERT INTO companies VALUES (?)', (company,))

Here is the code that I use to successfully create the database with:
#! /usr/bin/env python
import sqlite3

db = sqlite3.connect('data.db')

db.execute('CREATE TABLE companies ' \
           '( '\
           'company varchar(255) '\
           ')')

db.execute('CREATE TABLE data ' \
           '( '\
           'timestamp int, '\
           'company int, '\
           'shares_held_by_all_insider int, '\
           'shares_held_by_institutional int, '\
           'float_held_by_institutional int, '\
           'num_institutions int '\
           ')')

A: Try to add
db.commit()

after the inserting.
A: To insert the data you don't need a cursor
just use the db
db.execute() instead of c.execute() and get rid of the c = db.cursor() line
Cursors aren't used to insert data, but usually to read data, or update data in place.
Cannot insert data into an sqlite3 database using Python
I can successfully use Python to create a database and run the execute() method to create 2 new tables and specify the column names. However, I cannot insert data into the database. This is the code that I am trying to use to insert the data into the database:
#! /usr/bin/env python
import sqlite3

companies = ('GOOG', 'AAPL', 'MSFT')

db = sqlite3.connect('data.db')
c = db.cursor()

for company in companies:
    c.execute('INSERT INTO companies VALUES (?)', (company,))

Here is the code that I use to successfully create the database with:
#! /usr/bin/env python
import sqlite3

db = sqlite3.connect('data.db')

db.execute('CREATE TABLE companies ' \
           '( '\
           'company varchar(255) '\
           ')')

db.execute('CREATE TABLE data ' \
           '( '\
           'timestamp int, '\
           'company int, '\
           'shares_held_by_all_insider int, '\
           'shares_held_by_institutional int, '\
           'float_held_by_institutional int, '\
           'num_institutions int '\
           ')')
[ "Try to add\ndb.commit()\n\nafter the inserting.\n", "To insert the data you don't need a cursor\njust use the db\ndb.execute() instead of c.execute() and get rid of the c = db.cursor() line\nCursors aren't used to insert data, but usually to read data, or update data in place.\n" ]
[ 21, 4 ]
[]
[]
[ "python", "sqlite" ]
stackoverflow_0001201522_python_sqlite.txt
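Putting the accepted answer together with the asker's code, a runnable sketch: the sqlite3 connection can also be used as a context manager, so the commit happens automatically:
import sqlite3

companies = ('GOOG', 'AAPL', 'MSFT')

db = sqlite3.connect('data.db')
with db:  # commits on success, rolls back if an exception is raised
    db.executemany('INSERT INTO companies VALUES (?)',
                   [(c,) for c in companies])
db.close()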
Q: Django and File Permissions: Best Practices? I am integrating "legacy" code with Django, and have problems when the process executing Django must write to legacy code directories where it lacks write permissions. (The legacy code is a Python backend to a Tkinter GUI, which I'm repurposing to a browser-based UI.) I could:

Make the legacy directory writeable to all, but this seems like bad practice.
Find the userid of the Django execution process, assign that to a group and give that group write permissions to the whole legacy directory. (I suspect this is the user running apache.) This too seems bad -- if that user is compromised, the whole directory is at risk.
Isolate the "write" calls in the code, ensure they all go somewhere in a designated subdirectory tree, and make that tree world (or Django user group) writeable. This seems the least risky, but also the most work.

Any other ideas? Am I missing some obvious fix? I'm completely new to this.
A: I'd go with option number 2. I don't think your django user is any more likely to get compromised than your Tkinter user. If there's something else under apache that you're worried about, run it under a separate apache with the right user.
Django and File Permissions: Best Practices?
I am integrating "legacy" code with Django, and have problems when the process executing Django must write to legacy code directories where it lacks write permissions. (The legacy code is a Python backend to a Tkinter GUI, which I'm repurposing to a browser-based UI.) I could: Make the legacy directory writeable to all, but this seems like bad practice. Find the userid of the Django execution process, assign that to a group and give that group write permissions to the whole legacy directory. (I suspect this is the user running apache.) This too seems bad -- if that user is compromised,the whole directory is at risk. Isolate the "write" calls in the code, ensure they all go somewhere in a designated subdirectory tree, and make that tree world (or Django user group) writeable. This seems the least risky, but also the most work. Any other ideas? Am I missing some obvious fix? I'm completely new to this.
[ "I'd go with option number 2. I don't think your django user is any more likely to get compromised than your Tkinter user. If there's something else under apache that you're worried about, run it under a separate apache with the right user.\n" ]
[ 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001201785_django_python.txt
Q: How to identify the currently available wireless networks with Python in Windows? Is there a way to get a list of wireless networks (SSIDs) that are currently available? And a way of seeing which network is currently connected? It doesn't need to be exactly the SSID, I just need to identify the current wireless network.
A: You can use the netsh command. I don't remember the exact syntax used to invoke cli commands from within python, but I'm sure it should be fairly easy to locate.
The article below has more information about how to use netsh itself:
http://technet.microsoft.com/en-us/library/cc755301%28WS.10%29.aspx#bkmk_wlanShowNetworks
How to identify the currently available wireless networks with Python in Windows?
Is there a way to get a list of wireless networks (SSIDs) that are currently available? And a way of seeing which network is currently connected? It doesn't need to be exactly the SSID, I just need to identify the current wireless network.
[ "You can use the netsh command. I don't remember the exact syntax used to invoke cli commands from within python, but I'm sure it should be fairly easy to locate.\nThe article below has more information about how to use netsh itself:\nhttp://technet.microsoft.com/en-us/library/cc755301%28WS.10%29.aspx#bkmk_wlanShowNetworks\n" ]
[ 2 ]
[]
[]
[ "python", "ssid", "windows", "windows_xp", "wireless" ]
stackoverflow_0001201749_python_ssid_windows_windows_xp_wireless.txt
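A sketch of driving netsh from Python 3 with subprocess. The netsh wlan subcommands are real Windows commands (they require the WLAN AutoConfig service, i.e. Vista or later rather than stock XP), but the output parsing here is deliberately naive and locale-dependent:
import subprocess

def visible_ssids():
    # 'netsh wlan show networks' lists the SSIDs of networks in range.
    out = subprocess.check_output(
        ['netsh', 'wlan', 'show', 'networks'], text=True)
    return [line.split(':', 1)[1].strip()
            for line in out.splitlines()
            if line.strip().startswith('SSID')]

def current_ssid():
    # 'netsh wlan show interfaces' includes the connected SSID, if any.
    out = subprocess.check_output(
        ['netsh', 'wlan', 'show', 'interfaces'], text=True)
    for line in out.splitlines():
        if line.strip().startswith('SSID'):
            return line.split(':', 1)[1].strip()
    return None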
Q: Check if an element exists I'm trying to find out if an element in a Django model exists. I think that should be very easy to do, but couldn't find any elegant way in the Making queries section of the Django documentation.
The problem I have is that I have thousands of screenshots in a directory and need to check if they are in the database that is supposed to store them. So I'm iterating over the filenames and want to see for each of them if a corresponding element exists. Having a model called Screenshot, the only way I could come up with is
filenames = os.listdir(settings.SCREENSHOTS_ON_DISC)
for filename in filenames:
    exists = Screenshot.objects.filter(filename=filename)
    if exists:
        ...

Is there a nicer/faster way to do this? Note that a screenshot can be in the database more than once (thus I didn't use .get).
A: If your Screenshot model has a lot of attributes, then the code you showed is doing unnecessary work for your specific need. For example, you can do something like this:
files_in_db = Screenshot.objects.values_list('filename', flat=True).distinct()

which will give you a list of all filenames in the database, and generate SQL to only fetch the filenames. It won't try to create and populate Screenshot objects. If you have
files_on_disc = os.listdir(settings.SCREENSHOTS_ON_DISC)

then you can iterate over one list looking for membership in the other, or make one or both lists into sets to find common members etc.
A: You could try:
Screenshot.objects.filter(filename__in = filenames)

That will give you a list of all the screenshots you do have. You could compare the two lists and see what doesn't exist between the two. That should get you started, but you might want to tweak the query for performance/use.
A: This query gets you all the files that are in your database and filesystem:
discfiles = os.listdir(settings.SCREENSHOTS_ON_DISC)

filenames = (Screenshot.objects.filter(filename__in=discfiles)
             .values_list('filename', flat=True)
             .order_by('filename')
             .distinct())

Note the order_by. If you have an ordering specified in your model definition, then using distinct may not return what you expect. This is documented here:
http://docs.djangoproject.com/en/dev/ref/models/querysets/#distinct
So make the ordering explicit, then execute the query.
Check if an element exists
I'm trying to find out if an Element in a Django model exists. I think that should be very easy to do, but couldn't find any elegant way in the Making queries section of the Django documentation. The problem I have is that I've thousands of screenshots in a directory and need to check if they are in the database that is supposed to store them. So I'm iterating over the filenames and want to see for each of them if a corresponding element exists. Having a model called Screenshot, the only way I could come up with is filenames = os.listdir(settings.SCREENSHOTS_ON_DISC) for filename in filenames: exists = Screenshot.objects.filter(filename=filename) if exists: ... Is there a nicer/ faster way to do this? Note that a screenshot can be in the database more than once (thus I didn't use .get).
[ "If your Screenshot model has a lot of attributes, then the code you showed is doing unnecessary work for your specific need. For example, you can do something like this:\nfiles_in_db = Screenshot.objects.values_list('filename', flat=True).distinct()\n\nwhich will give you a list of all filenames in the database, and generate SQL to only fetch the filenames. It won't try to create and populate Screenshot objects. If you have\nfiles_on_disc = os.listdir(settings.SCREENSHOTS_ON_DISC)\n\nthen you can iterate over one list looking for membership in the other, or make one or both lists into sets to find common members etc.\n", "You could try:\nScreenshot.objects.filter(filename__in = filenames)\n\nThat will give you a list of all the screenshots you do have. You could compare the two lists and see what doesnt exist between the two. That should get you started, but you might want to tweak the query for performance/use.\n", "This query gets you all the files that are in your database and filesystem:\ndiscfiles = os.listdir(settings.SCREENSHOTS_ON_DISC)\n\nfilenames = (Screenshot.objects.filter(filename__in=discfiles)\n .values_list('filename', flat=True)\n .order_by('filename')\n .distinct())\n\nNote the order_by. If you have an ordering specified in your model definition, then using distinct may not return what you expect. This is documented here:\n\nhttp://docs.djangoproject.com/en/dev/ref/models/querysets/#distinct\n\nSo make the ordering explicit, then execute the query.\n" ]
[ 2, 1, 1 ]
[]
[]
[ "django", "performance", "python" ]
stackoverflow_0001201381_django_performance_python.txt
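The first answer suggests turning both listings into sets without spelling the comparison out; a short sketch of that final step (the app import path is hypothetical, the other names come from the question):

import os
from django.conf import settings
from screenshots.models import Screenshot  # hypothetical app path

on_disc = set(os.listdir(settings.SCREENSHOTS_ON_DISC))
in_db = set(Screenshot.objects.values_list('filename', flat=True))

missing_from_db = on_disc - in_db    # files with no Screenshot row yet
orphaned_rows = in_db - on_disc      # rows whose file is gone from disc

This issues one query instead of one query per file, which matters with thousands of screenshots.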
Q: Windows build for PyLucene+JCC on python 2.6 Where can I download a PyLucene+JCC Windows build compiled for python 2.6? Jose A: I ended up using solr and interfacing it using XML/JSON. A: Not tested but there appears to be an egg here: http://code.google.com/p/pylucene-win32-binary/ A: you might want to see this recent mailing list post and the related thread. There's also a pylucene-dev thread that seems apropos. Unfortunately, none of these things have what you want; you'll still have to build it yourself. If you're really feeling charitable, you can publish the steps + binaries once you figure it out, but I don't know.
Windows build for PyLucene+JCC on python 2.6
Where can I download a PyLucene+JCC Windows build compiled for python 2.6? Jose
[ "I ended up using solr and interfacing it using XML/JSON.\n", "Not tested but there appears to be an egg here:\nhttp://code.google.com/p/pylucene-win32-binary/\n", "you might want to see this recent mailing list post and the related thread. There's also a pylucene-dev thread that seems apropos. Unfortunately, none of these things have what you want; you'll still have to build it yourself. \nIf you're really feeling charitable, you can publish the steps + binaries once you figure it out, but I don't know. \n" ]
[ 3, 2, 1 ]
[]
[]
[ "jcc", "pylucene", "python", "windows" ]
stackoverflow_0000338008_jcc_pylucene_python_windows.txt
Q: OpenSocial Win32 compatibility I am planning to develop an application in Python on the Win32 platform. Does the OpenSocial API work upon the Win32 platform as well? To make things more clear, I need to use information from the OpenSocial API to conduct certain things in the application. A: Yes the general idea of an API is it uses a standard language like XML or JSON or whatever. You can easily find libraries that read/write those formats in most languages, no matter the platform. If you're lucky someone will have written a library for the specific API you need. Which in this case, they have :) http://code.google.com/p/opensocial-python-client/ Hope that helps!
OpenSocial Win32 compatibility
I am planning to develop an application in Python on the Win32 platform. Does the OpenSocial API work upon the Win32 platform as well? To make things more clear, I need to use information from the OpenSocial API to conduct certain things in the application.
[ "Yes the general idea of an API is it uses a standard language like XML or JSON or whatever. You can easily find libraries that read/write those formats in most languages, no matter the platform. If you're lucky someone will have written a library for the specific API you need.\nWhich in this case, they have :)\nhttp://code.google.com/p/opensocial-python-client/\nHope that helps!\n" ]
[ 1 ]
[]
[]
[ "opensocial", "python" ]
stackoverflow_0001202086_opensocial_python.txt
Q: random.sample return only characters instead of strings This is a kind of newbie question, but I couldn't find a solution. I read a list of strings from a file, and try to get a random, 5 element sample with random.sample, but the resulting list only contains characters. Why is that? How can I get a random sample list of strings? This is what I do: names = random.sample( open('names.txt').read(), 5 ) print names This gives a five element character list like: ['\x91', 'h', 'l', 'n', 's'] If I omit the random.sample part, and print the list, it prints out every line of the file, which is the expected behaviour, and proves that the file is read OK. A: If the names are all on separate lines, try the following: names = random.sample(open('names.txt').readlines(), count) print names Essentially you are going wrong because you need to pass an iterable to random.sample(). When you pass a string it treats it like a list. If your names are all on one line, you need to use split() to pull them apart and pass that list to random.sample().
random.sample return only characters instead of strings
This is a kind of newbie question, but I couldn't find a solution. I read a list of strings from a file, and try to get a random, 5 element sample with random.sample, but the resulting list only contains characters. Why is that? How can I get a random sample list of strings? This is what I do: names = random.sample( open('names.txt').read(), 5 ) print names This gives a five element character list like: ['\x91', 'h', 'l', 'n', 's'] If I omit the random.sample part, and print the list, it prints out every line of the file, which is the expected behaviour, and proves that the file is read OK.
[ "If the names are all on seperate lines, ry the following:\nnames = random.sample(open('names.txt').readlines(), count)\nprint names\n\nEssentially you are going wrong because you need to pass an interable to random.sample(). When you pass a string it treats it like a list. If you're names are all on one line, you need to use split() to pull them apart and pass that list to random.sample().\n" ]
[ 3 ]
[]
[]
[ "character", "python", "random", "string" ]
stackoverflow_0001202251_character_python_random_string.txt
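Building on the answer: readlines() keeps the trailing newline on every name, so splitting the file contents is usually cleaner. A small sketch that handles both one-name-per-line files and all-names-on-one-line files, assuming the names themselves contain no spaces:

import random

names = open('names.txt').read().split()  # splits on any whitespace
print random.sample(names, 5)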
Q: raw_input without leaving a history in readline Is there a way of using raw_input without leaving a trace in the readline history, so that it doesn't show when tab-completing? A: You could make a function something like import readline def raw_input_no_history(): input = raw_input() readline.remove_history_item(readline.get_current_history_length()-1) return input and call that function instead of raw_input. You may not need the minus 1 depending on where you call it from.
raw_input without leaving a history in readline
Is there a way of using raw_input without leaving a trace in the readline history, so that it doesn't show when tab-completing?
[ "You could make a function something like\nimport readline\n\ndef raw_input_no_history():\n input = raw_input()\n readline.remove_history_item(readline.get_current_history_length()-1)\n return input\n\nand call that function instead of raw_input. You may not need the minus 1 dependent on where you call it from.\n" ]
[ 7 ]
[]
[]
[ "history", "python", "readline", "tab_completion" ]
stackoverflow_0001202127_history_python_readline_tab_completion.txt
Q: How do I do cross-project refactorings with ropemacs? I have a file structure that looks something like this: project1_root/ tests/ ... src/ .ropeproject/ project1/ ... (project1 source code) project2_root/ tests/ ... src/ .ropeproject/ project2/ ... (project2 source) I'm frequently switching back and forth between these two projects, and project2 depends on project1. What is the best way to set up ropemacs to handle this? It would be nice if I could facilitate cross-project refactorings (which I see mentioned in the rope library reference), but I'll be happy if I can at least keep both projects open at once without having to switch back and forth. A: The documentation on ropemacs and ropemode seems to be very sparse (the homepage http://rope.sourceforge.net/ropemacs.html only points to the mercurial repos, which I checked out and read through the code), but it seems you can give a specific .ropeproject to use, and it may guess it (ropemode/interfaces.py:_guess_project) by searching up in the directory tree for a .ropeproject directory. So it should be fairly easy to hack around the issue by creating a (new) .ropeproject which covers both projects if you create a specific .ropeproject for project1/ and project2/ . Disadvantages that I see might be that you might have to move the original .ropeproject dirs out of the way, and it needs some extra scripting to manage ropeproject directories over more than 2 projects.
How do I do cross-project refactorings with ropemacs?
I have a file structure that looks something like this: project1_root/ tests/ ... src/ .ropeproject/ project1/ ... (project1 source code) project2_root/ tests/ ... src/ .ropeproject/ project2/ ... (project2 source) I'm frequently switching back and forth between these two projects, and project2 depends on project1. What is the best way to set up ropemacs to handle this? It would be nice if I could facilitate cross-project refactorings (which I see mentioned in the rope library reference), but I'll be happy if I can at least keep both projects open at once without having to switch back and forth.
[ "The documention on ropemacs and ropemode seems to be very sparse (the homepage http://rope.sourceforge.net/ropemacs.html only point to the mercurial repos, which I checked out and read through the code), but it seems you can give a specific .ropeproject to use, and it may be guess it (ropemode/interfaces.py:_guess_project) by searching up in the directory tree for a .ropeproject directory.\nSo it should be fairly easy to hack around the issue by creating a (new) .ropeproject which covers both projects if you create a specific .ropeproject for project1/ and project2/ .\nDisadvantages that I see might be that you might have to move the orignal .ropeproject dirs out of the way, and it needs some extra scripting to manage ropeproject directories over more than 2 projects.\n" ]
[ 3 ]
[]
[]
[ "elisp", "emacs", "python", "rope", "ropemacs" ]
stackoverflow_0001160057_elisp_emacs_python_rope_ropemacs.txt
Q: Sorting a list of dictionaries of objects by dictionary values This is related to the various other questions about sorting values of dictionaries that I have read here, but I have not found the answer. I'm a newbie and maybe I just didn't see the answer as it concerns my problem. I have this function, which I'm using as a Django custom filter to sort results from a list of dictionaries. Actually, the main part of this function was answered in a related question on stackoverflow. def multikeysorting(dict_list, sortkeys): from operator import itemgetter def multikeysort(items, columns): comparers = [ ((itemgetter(col[1:]), -1) if col.startswith('-') else (itemgetter(col), 1)) for col in columns] def sign(a, b): if a < b: return -1 elif a > b: return 1 else: return 0 def comparer(left,right): for fn, mult in comparers: result = sign(fn(left), fn(right)) if result: return mult * result else: return 0 return sorted(items, cmp=comparer) keys_list = sortkeys.split(",") return multikeysort(dict_list, keys_list) This filter is called as follows in Django: {% for item in stats|statleaders_has_stat:"TOT_PTS_Misc"|multikeysorting:"-TOT_PTS_Misc.value,TOT_PTS_Misc.player.last_name" %} This means that there are two dictionary values passed to the function to sort the list of dictionaries. The sort works with the dictionary keys, but not the values. How can I sort and return the list of dictionaries by more than one value? In the example above, first by the value, then by the last_name. Here is an example of the data: [{u'TOT_PTS_Misc': < StatisticPlayerRollup: DeWitt, Ash Total Points : 6.0>, 'player': < Player: DeWitt, Ash>}, {u'TOT_PTS_Misc': < StatisticPlayerRollup: Ackerman, Luke Total Points : 18.0>, 'player': < Player: Ackerman, Luke>}, {u'TOT_PTS_Misc': < StatisticPlayerRollup: Wise, Dan Total Points : 19.0>, 'player': < Player: Wise, Dan>}, {u'TOT_PTS_Misc': < StatisticPlayerRollup: Allison, Mike Total Points : 18.0>, 'player': < Player: Allison, Mike>}, {u'TOT_PTS_Misc': < StatisticPlayerRollup: Wolford, Alex Total Points : 18.0>, 'player': < Player: Wolford, Alex>}, {u'TOT_PTS_Misc': < StatisticPlayerRollup: Okes, Joe Total Points : 18.0>, 'player': < Player: Okes, Joe>}, {u'TOT_PTS_Misc': < StatisticPlayerRollup: Grattan, Paul Total Points : 18.0>, 'player': < Player: Grattan, Paul>}] The listing should be sorted as follows: LastName Points Wise 19.0 Ackerman 18.0 Allison 18.0 Grattan 18.0 Okes 18.0 Wolford 18.0 Hagg 6.0 DeWitt 6.0 The TOT_PTS_Misc is an object that contains the player name as well as the number of points. (I hope I am explaining this correctly.) But, there should be an arbitrary sort of values, either ascending or descending. Not always the same values and possibly more than two. So I came up with this solution, but wanted to know if it makes sense and if there is anything that should be changed. def multikeysorting(dict_list, sortkeys): from operator import itemgetter, attrgetter klist = sortkeys.split(",") vlist = [] for i in klist: vlist.append(tuple(i.split("."))) def getkeyvalue(val_list): result = [] for id,val in enumerate(val_list): if val[0].startswith('-'): if len(val) == 2: result.append((itemgetter(val[0][1:]).attrgetter(val[1]), -1)) else: att = val[1] for j in val[2:]: att = att + "." + j result.append((itemgetter(val[0][1:]).attrgetter(att), -1)) else: if len(val) == 2: result.append((itemgetter(val[0]).attrgetter(val[1]), 1)) else: att = val[1] for j in val[2:]: att = att + "." + j result.append((itemgetter(val[0]).attrgetter(att), 1)) return result return sorted(dict_list, key=getkeyvalue(vlist)) A: You gain access to the keys using itemgetter and to the value attributes using attrgetter. So, once you've extracted the key, value names you're interested in, you can construct your key function: from operator import attrgetter, itemgetter itmget = itemgetter('TOT_PTS_Misc') attget_v = attrgetter('value') attget_l = attrgetter('last_name') def keyfunc(x): itm = itmget(x) return (-attget_v(itm), attget_l(itm)) sorted(dictlist, key=keyfunc) This seems to work. Is it what you're asking? Or am I missing something? A: As far as I can see, there are two things you need to do. First, parse the call path out of the sort key, that is: turn 'TOT_PTS_Misc.value' to ('TOT_PTS_Misc','value') Second, use attrgetter in a similar way to the use of itemgetter, for the callable part. If I'm not mistaken, itemgetter('TOT_PTS_Misc').attrgetter('value') SHOULD be equal to dict['TOT_PTS_Misc'].value
Sorting a list of dictionaries of objects by dictionary values
This is related to the various other questions about sorting values of dictionaries that I have read here, but I have not found the answer. I'm a newbie and maybe I just didn't see the answer as it concerns my problem. I have this function, which I'm using as a Django custom filter to sort results from a list of dictionaries. Actually, the main part of this function was answered in a related question on stackoverflow. def multikeysorting(dict_list, sortkeys): from operator import itemgetter def multikeysort(items, columns): comparers = [ ((itemgetter(col[1:]), -1) if col.startswith('-') else (itemgetter(col), 1)) for col in columns] def sign(a, b): if a < b: return -1 elif a > b: return 1 else: return 0 def comparer(left,right): for fn, mult in comparers: result = sign(fn(left), fn(right)) if result: return mult * result else: return 0 return sorted(items, cmp=comparer) keys_list = sortkeys.split(",") return multikeysort(dict_list, keys_list) This filter is called as follows in Django: {% for item in stats|statleaders_has_stat:"TOT_PTS_Misc"|multikeysorting:"-TOT_PTS_Misc.value,TOT_PTS_Misc.player.last_name" %} This means that there are two dictionary values passed to the function to sort the list of dictionaries. The sort works with the dictionary keys, but not the values. How can I sort and return the list of dictionaries by more than one value? In the example above, first by the value, then by the last_name. Here is an example of the data: [{u'TOT_PTS_Misc': < StatisticPlayerRollup: DeWitt, Ash Total Points : 6.0>, 'player': < Player: DeWitt, Ash>}, {u'TOT_PTS_Misc': < StatisticPlayerRollup: Ackerman, Luke Total Points : 18.0>, 'player': < Player: Ackerman, Luke>}, {u'TOT_PTS_Misc': < StatisticPlayerRollup: Wise, Dan Total Points : 19.0>, 'player': < Player: Wise, Dan>}, {u'TOT_PTS_Misc': < StatisticPlayerRollup: Allison, Mike Total Points : 18.0>, 'player': < Player: Allison, Mike>}, {u'TOT_PTS_Misc': < StatisticPlayerRollup: Wolford, Alex Total Points : 18.0>, 'player': < Player: Wolford, Alex>}, {u'TOT_PTS_Misc': < StatisticPlayerRollup: Okes, Joe Total Points : 18.0>, 'player': < Player: Okes, Joe>}, {u'TOT_PTS_Misc': < StatisticPlayerRollup: Grattan, Paul Total Points : 18.0>, 'player': < Player: Grattan, Paul>}] The listing should be sorted as follows: LastName Points Wise 19.0 Ackerman 18.0 Allison 18.0 Grattan 18.0 Okes 18.0 Wolford 18.0 Hagg 6.0 DeWitt 6.0 The TOT_PTS_Misc is an object that contains the player name as well as the number of points. (I hope I am explaining this correctly.) But, there should be an arbitrary sort of values, either ascending or descending. Not always the same values and possibly more than two. So I came up with this solution, but wanted to know if it makes sense and if there is anything that should be changed. def multikeysorting(dict_list, sortkeys): from operator import itemgetter, attrgetter klist = sortkeys.split(",") vlist = [] for i in klist: vlist.append(tuple(i.split("."))) def getkeyvalue(val_list): result = [] for id,val in enumerate(val_list): if val[0].startswith('-'): if len(val) == 2: result.append((itemgetter(val[0][1:]).attrgetter(val[1]), -1)) else: att = val[1] for j in val[2:]: att = att + "." + j result.append((itemgetter(val[0][1:]).attrgetter(att), -1)) else: if len(val) == 2: result.append((itemgetter(val[0]).attrgetter(val[1]), 1)) else: att = val[1] for j in val[2:]: att = att + "." + j result.append((itemgetter(val[0]).attrgetter(att), 1)) return result return sorted(dict_list, key=getkeyvalue(vlist))
[ "You gain access to the keys using itemgetter and to the value attributes using attrgetter.\nSo, once you've extracted the key, value names you're interested in, you can construct your key function:\nfrom operator import attrgetter, itemgetter\nitmget = itemgetter('TOT_PTS_Misc')\nattget_v = attrgetter('value')\nattget_l = attrgetter('last_name')\ndef keyfunc(x):\n itm = itmget(x)\n return (-attget_v(itm), attget_n(itm))\nsorted(dictlist, key=keyfunc)\n\nThis seems to work. Is it what you're asking? Or am I missing something?\n", "As far as I can see, there are two things you need to do.\nFirst, parse the call path out of the sort key, that is: turn 'TOT_PTS_Misc.value' to ('TOT_PTS_Misc','value')\nSecond, use attrgetter in a similar way to the use of itemgetter, for the callable part.\nIf i'm not mistaken, itemgetter('TOT_PTS_Misc').attrgetter('value') SHOULD be equal to dict['TOT_PTS_Misc'].value\n" ]
[ 2, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001202088_django_python.txt
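As both answers point out, itemgetter('TOT_PTS_Misc').attrgetter('value') cannot actually be chained — itemgetter returns a plain callable with no attrgetter method — so the two lookups have to be composed explicitly. A sketch generalized to the question's key-spec format; note that attrgetter accepts dotted paths like 'player.last_name' only in Python 2.6+, and that negating the value for a '-' prefix only works for numeric fields (descending strings would still need a comparer like the original one):

from operator import attrgetter, itemgetter

def make_keyfunc(sortkeys):
    # e.g. "-TOT_PTS_Misc.value,TOT_PTS_Misc.player.last_name"
    specs = []
    for spec in sortkeys.split(","):
        descending = spec.startswith("-")
        dictkey, _, attrpath = spec.lstrip("-").partition(".")
        specs.append((itemgetter(dictkey), attrgetter(attrpath), descending))

    def keyfunc(d):
        parts = []
        for get_item, get_attrs, descending in specs:
            value = get_attrs(get_item(d))  # d[dictkey].attr.path
            parts.append(-value if descending else value)
        return tuple(parts)
    return keyfunc

stats.sort(key=make_keyfunc("-TOT_PTS_Misc.value,TOT_PTS_Misc.player.last_name"))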
Q: Python - tempfile.TemporaryFile cannot be read; why? The official documentation for TemporaryFile reads: The mode parameter defaults to 'w+b' so that the file created can be read and written without being closed. Yet, the below code does not work as expected: import tempfile def play_with_fd(): with tempfile.TemporaryFile() as f: f.write('test data\n') f.write('most test data\n') print 'READ:', f.read() f.write('further data') print 'READ:', f.read() f.write('even more') print 'READ:', f.read() print 'READ:', f.read() print 'READ:', f.read() if __name__ == '__main__': play_with_fd() The output I get is: > python play.py READ: READ: READ: READ: READ: Can anyone explain this behavior? Is there a way to read from temporary files at all? (without having to use the low-level mkstemp that wouldn't automatically delete the files; and I don't care about named files) A: You must put f.seek(0) before trying to read the file (this will send you to the beginning of the file), and f.seek(0, 2) to return to the end so you can assure you won't overwrite it. A: read() does not return anything because you are at the end of the file. You need to call seek() first before read() will return anything. For example, put this line in front of the first read(): f.seek(-10, 1) Of course, before writing again, be sure to seek() to the end (if that is where you want to continue writing to).
Python - tempfile.TemporaryFile cannot be read; why?
The official documentation for TemporaryFile reads: The mode parameter defaults to 'w+b' so that the file created can be read and written without being closed. Yet, the below code does not work as expected: import tempfile def play_with_fd(): with tempfile.TemporaryFile() as f: f.write('test data\n') f.write('most test data\n') print 'READ:', f.read() f.write('further data') print 'READ:', f.read() f.write('even more') print 'READ:', f.read() print 'READ:', f.read() print 'READ:', f.read() if __name__ == '__main__': play_with_fd() The output I get is: > python play.py READ: READ: READ: READ: READ: Can anyone explain this behavior? Is there a way to read from temporary files at all? (without having to use the low-level mkstemp that wouldn't automatically delete the files; and I don't care about named files)
[ "You must put \nf.seek(0)\n\nbefore trying to read the file (this will send you to the beginning of the file), and\nf.seek(0, 2)\n\nto return to the end so you can assure you won't overwrite it.\n", "read() does not return anything because you are at the end of the file. You need to call seek() first before read() will return anything. For example, put this line in front of the first read():\nf.seek(-10, 1)\n\nOf course, before writing again, be sure to seek() to the end (if that is where you want to continue writing to).\n" ]
[ 39, 7 ]
[]
[]
[ "file", "io", "python", "temporary_files" ]
stackoverflow_0001202848_file_io_python_temporary_files.txt
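Putting the two answers together, a complete sketch of the write/seek/read cycle:

import tempfile

with tempfile.TemporaryFile() as f:
    f.write('test data\n')
    f.write('more test data\n')
    f.seek(0)        # rewind to the start before reading
    print 'READ:', f.read()
    f.seek(0, 2)     # jump back to the end before writing again
    f.write('further data')
    f.seek(0)
    print 'READ:', f.read()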
Q: Constructor specialization in python Class hierarchies and constructors are related. Parameters from a child class need to be passed to their parent. So, in Python, we end up with something like this: class Parent(object): def __init__(self, a, b, c, ka=None, kb=None, kc=None): # do something with a, b, c, ka, kb, kc class Child(Parent): def __init__(self, a, b, c, d, e, f, ka=None, kb=None, kc=None, kd=None, ke=None, kf=None): super(Child, self).__init__(a, b, c, ka=ka, kb=kb, kc=kc) # do something with d, e, f, kd, ke, kf Imagine this with a dozen child classes and lots of parameters. Adding new parameters becomes very tedious. Of course one can dispense with named parameters completely and use *args and **kwargs, but that makes the method declarations ambiguous. Is there a pattern for elegantly dealing with this in Python (2.6)? By "elegantly" I mean I would like to reduce the number of times the parameters appear. a, b, c, ka, kb, kc all appear 3 times: in the Child constructor, in the super() call to Parent, and in the Parent constructor. Ideally, I'd like to specify the parameters for Parent's init once, and in Child's init only specify the additional parameters. I'd like to do something like this: class Parent(object): def __init__(self, a, b, c, ka=None, kb=None, kc=None): print 'Parent: ', a, b, c, ka, kb, kc class Child(Parent): def __init__(self, d, e, f, kd='d', ke='e', kf='f', *args, **kwargs): super(Child, self).__init__(*args, **kwargs) print 'Child: ', d, e, f, kd, ke, kf x = Child(1, 2, 3, 4, 5, 6, ka='a', kb='b', kc='c', kd='d', ke='e', kf='f') This unfortunately doesn't work, since 4, 5, 6 end up assigned to kd, ke, kf. Is there some elegant python pattern for accomplishing the above? A: "dozen child classes and lots of parameters" sounds like a problem irrespective of parameter naming. I suspect that a little refactoring can peel out some Strategy objects that would simplify this hierarchy and make the super-complex constructors go away. A: Well, the only solution I could see is using a mixture of listed variables as well as *args and **kwargs, as such: class Parent(object): def __init__(self, a, b, c, ka=None, kb=None, kc=None): pass class Child(Parent): def __init__(self, d, e, f, *args, kd=None, ke=None, kf=None, **kwargs): Parent.__init__(self, *args, **kwargs) pass This way, you could see which parameters are required by each of the classes, but without having to re-type them. One thing to note is that you lose your desired ordering (a, b, c, d, e, f) as it becomes (d, e, f, a, b, c). I'm not sure if there's a way to have the *args before the other non-named parameters. A: I try to group the parameters into their own objects, e.g, instead of passing sourceDirectory, targetDirectory, temporaryDirectory, serverName, serverPort, I'd have a DirectoryContext and ServerContext objects. If the context objects start having more behavior or logic it might lead to the strategy objects mentioned in here.
Constructor specialization in python
Class hierarchies and constructors are related. Parameters from a child class need to be passed to their parent. So, in Python, we end up with something like this: class Parent(object): def __init__(self, a, b, c, ka=None, kb=None, kc=None): # do something with a, b, c, ka, kb, kc class Child(Parent): def __init__(self, a, b, c, d, e, f, ka=None, kb=None, kc=None, kd=None, ke=None, kf=None): super(Child, self).__init__(a, b, c, ka=ka, kb=kb, kc=kc) # do something with d, e, f, kd, ke, kf Imagine this with a dozen child classes and lots of parameters. Adding new parameters becomes very tedious. Of course one can dispense with named parameters completely and use *args and **kwargs, but that makes the method declarations ambiguous. Is there a pattern for elegantly dealing with this in Python (2.6)? By "elegantly" I mean I would like to reduce the number of times the parameters appear. a, b, c, ka, kb, kc all appear 3 times: in the Child constructor, in the super() call to Parent, and in the Parent constructor. Ideally, I'd like to specify the parameters for Parent's init once, and in Child's init only specify the additional parameters. I'd like to do something like this: class Parent(object): def __init__(self, a, b, c, ka=None, kb=None, kc=None): print 'Parent: ', a, b, c, ka, kb, kc class Child(Parent): def __init__(self, d, e, f, kd='d', ke='e', kf='f', *args, **kwargs): super(Child, self).__init__(*args, **kwargs) print 'Child: ', d, e, f, kd, ke, kf x = Child(1, 2, 3, 4, 5, 6, ka='a', kb='b', kc='c', kd='d', ke='e', kf='f') This unfortunately doesn't work, since 4, 5, 6 end up assigned to kd, ke, kf. Is there some elegant python pattern for accomplishing the above?
[ "\"dozen child classes and lots of parameters\" sounds like a problem irrespective of parameter naming.\nI suspect that a little refactoring can peel out some Strategy objects that would simplify this hierarchy and make the super-complex constructors go away.\n", "Well, the only solution I could see is using a mixture of listed variables as well as *args and **kwargs, as such:\nclass Parent(object):\n def __init__(self, a, b, c, ka=None, kb=None, kc=None):\n pass\n\nclass Child(Parent):\n def __init__(self, d, e, f, *args, kd=None, ke=None, kf=None, **kwargs):\n Parent.__init__(self, *args, **kwargs)\n pass\n\nThis way, you could see which parameters are required by each of the classes, but without having to re-type them.\nOne thing to note is that you lose your desired ordering (a, b, c, d, e, f) as it becomes (d, e, f, a, b, c). I'm not sure if there's a way to have the *args before the other non-named parameters.\n", "I try to group the parameters into their own objects, e.g, instead of passing\nsourceDirectory, targetDirectory, temporaryDirectory, serverName, serverPort, I'd have a \nDirectoryContext and ServerContext objects. \nIf the context objects start having more \nbehavior or logic it might lead to the strategy objects mentioned in here.\n" ]
[ 8, 3, 1 ]
[]
[]
[ "anti_patterns", "class", "design_patterns", "oop", "python" ]
stackoverflow_0001202711_anti_patterns_class_design_patterns_oop_python.txt
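The second answer's signature (keyword-only arguments after *args) is Python 3 syntax and is a SyntaxError on the question's Python 2.6; the same idea can be expressed there by popping the child's keywords out of **kwargs. A sketch using the question's names (note the child's positionals now come before the parent's, the same reordering caveat that answer mentions):

class Parent(object):
    def __init__(self, a, b, c, ka=None, kb=None, kc=None):
        print 'Parent:', a, b, c, ka, kb, kc

class Child(Parent):
    def __init__(self, d, e, f, *args, **kwargs):
        # Pull out the child-only keywords before delegating the rest.
        kd = kwargs.pop('kd', None)
        ke = kwargs.pop('ke', None)
        kf = kwargs.pop('kf', None)
        super(Child, self).__init__(*args, **kwargs)
        print 'Child:', d, e, f, kd, ke, kf

x = Child(4, 5, 6, 1, 2, 3, ka='a', kb='b', kc='c', kd='d', ke='e', kf='f')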
Q: Good python library for designing a mmo? Actor based design I'm trying to design an MMO game using python... I have evaluated stackless and since it is not the general python and it is a fork, I don't want to use it. I am trying to choose between pysage candygram dramatis and parley Has anyone tried any of these libraries? Thanks a lot for your responses A: I would go for pysage. It has the highest level of abstraction and a lightweight messaging API which will give you lots of flexibility. I would imagine when designing an MMO you will want as much flexibility as possible. It also takes a page from Erlang's Actor model which is really solid. That's great you are trying to build an MMO via python! It has great OpenGL bindings when you want to add graphics which is great! Hope that helps. A: Initially Twisted Python was designed to write MMOs, but it's not really easy to use. I don't know if there is an Actor implementation for it, perhaps in the tx project in Launchpad ? A: I tried to write an MMO in Python, it was horrible. Now I have switched to Erlang and it's light years ahead of other languages in terms of developing server software. You can check my project at: http://www.next-gen.cc. Btw writing the client graphics in OpenGL is a huge task, you probably want something like Ogre3d (there are python bindings).
Good python library for designing a mmo? Actor based design
I'm trying to design an MMO game using python... I have evaluated stackless and since it is not the general python and it is a fork, I don't want to use it. I am trying to choose between pysage candygram dramatis and parley Has anyone tried any of these libraries? Thanks a lot for your responses
[ "I would go for pysage.\nIt has the highest level of abstraction and a lightweight messaging API which will give you lots of flexibility. I would imagine when designing an MMO you will want as much flexibility as possible.\nIt also takes a page from Erlang's Actor model which is really solid.\nThat's great you are trying to build an MMO via python! It has great OpenGL bindings when you want to add graphics which is great!\nHope that helps.\n", "Initially Twisted Python was designed to write MMOs, but it not really easy to use. I don't know if there is an Actor implementation for it, perhaps in the tx project in Launchpad ?\n", "I tried to write an MMO in Python, it was horrible. Now I have switched to Erlang and its lightyears ahead of other languages in terms of developing server software. You can check my project at: http://www.next-gen.cc.\nBtw writing the client graphics in OpenGL is a huge task, you probably want something like Ogre3d (there are python bindings).\n" ]
[ 7, 1, 0 ]
[]
[]
[ "mmo", "python", "python_stackless", "stackless" ]
stackoverflow_0000312096_mmo_python_python_stackless_stackless.txt
Q: Sources of PyS60's standard functions (particularly appuifw.query) I need to give the user the ability to enter a time in the form hh:mm:ss (with appropriate validation of course). The standard function appuifw.query(u'Label', 'time') works almost fine, except that it only allows entering hours and minutes (hh:mm). So I want to look through its source and write my own that enhances it in the stated manner. I've found file epoc32\winscw\c\resource\appuifw.py that comes with PyS60 SDK extension but it only contains constructor implementation (__init__). So the question is where to find the sources of the platform's standard functions (particularly appuifw.query).
Sources of PyS60's standard functions (particularly appuifw.query)
I need to give the user the ability to enter a time in the form hh:mm:ss (with appropriate validation of course). The standard function appuifw.query(u'Label', 'time') works almost fine, except that it only allows entering hours and minutes (hh:mm). So I want to look through its source and write my own that enhances it in the stated manner. I've found file epoc32\winscw\c\resource\appuifw.py that comes with PyS60 SDK extension but it only contains constructor implementation (__init__). So the question is where to find the sources of the platform's standard functions (particularly appuifw.query).
[ "Are the *_src.zip files on garage.maemo.org any useful? (I don't currently have the tools to verify what's in there.)\n" ]
[ 1 ]
[]
[]
[ "mobile", "pys60", "python" ]
stackoverflow_0001196050_mobile_pys60_python.txt
Q: Using Enthought I need to calculate the inverse of the complementary error function (erfc^(-1)) for a problem. I was looking into Python tools for it, and many threads said Enthought has most of the math tools needed, so I downloaded and installed it in my local user account. But I am not very sure how to use it. Any ideas? A: SciPy, which is included in the Enthought Python distribution, contains that special function. In [1]: from scipy.special import erfcinv In [2]: from numpy import linspace In [3]: x = linspace(0, 1, 10) In [4]: y = erfcinv(x) In [5]: y Out[5]: array([ 1.27116101e+308, 1.12657583e+000, 8.63123068e-001, 6.84070350e-001, 5.40731396e-001, 4.16808192e-001, 3.04570194e-001, 1.99556951e-001, 9.87900997e-002, 0.00000000e+000]) A: Here's a quick example of calculations in the Enthought Python Distribution (EPD) with the inverse of the complementary error function (erfcinv), which is included in the SciPy package that comes with the EPD: C:\>c:\Python25\python EPD Py25 (4.1.30101) -- http://www.enthought.com/epd Python 2.5.2 |EPD Py25 4.1.30101| (r252:60911, Dec 19 2008, 13:49:12) [MSC v.131 0 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> from scipy.special import erfcinv >>> erfcinv(0.) 1.271161006153646e+308 >>> erfcinv(1.) 0.0 >>> erfcinv(2.) -1.271161006153646e+308 >>> exit() C:\>
Using Enthought
I need to calculate the inverse of the complementary error function (erfc^(-1)) for a problem. I was looking into Python tools for it, and many threads said Enthought has most of the math tools needed, so I downloaded and installed it in my local user account. But I am not very sure how to use it. Any ideas?
[ "SciPy, which is included in the Enthought Python distribution, contains that special function.\nIn [1]: from scipy.special import erfcinv\nIn [2]: from numpy import linspace\nIn [3]: x = linspace(0, 1, 10)\nIn [4]: y = erfcinv(x)\nIn [5]: y\nOut[5]: \narray([ 1.27116101e+308, 1.12657583e+000, 8.63123068e-001,\n 6.84070350e-001, 5.40731396e-001, 4.16808192e-001,\n 3.04570194e-001, 1.99556951e-001, 9.87900997e-002,\n 0.00000000e+000])\n\n", "Here's a quick example of calculations in the Enthought Python Distribution (EPD) with the inverse of the complementary error function (erfcinv), which is included in the SciPy package that comes with the EPD:\nC:\\>c:\\Python25\\python\nEPD Py25 (4.1.30101) -- http://www.enthought.com/epd\n\nPython 2.5.2 |EPD Py25 4.1.30101| (r252:60911, Dec 19 2008, 13:49:12) [MSC v.131\n0 32 bit (Intel)] on win32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> from scipy.special import erfcinv\n>>> erfcinv(0.)\n1.271161006153646e+308\n>>> erfcinv(1.)\n0.0\n>>> erfcinv(2.)\n-1.271161006153646e+308\n>>> exit()\n\nC:\\>\n\n" ]
[ 8, 1 ]
[]
[]
[ "python", "scipy" ]
stackoverflow_0001202967_python_scipy.txt
Q: Combinatorics Counting Puzzle: Roll 20, 8-sided dice, what is the probability of getting at least 5 dice of the same value Assume a game in which one rolls 20 8-sided dice, for a total number of 8^20 possible outcomes. To calculate the probability of a particular event occurring, we divide the number of ways that event can occur by 8^20. One can calculate the number of ways to get exactly 5 dice of the value 3. (20 choose 5) gives us the number of ways to place the five 3's. 7^15 gives us the number of ways we can not get the value 3 for 15 rolls. number of ways to get exactly 5, 3's = (20 choose 5)*7^15. The answer can also be viewed as how many ways can I rearrange the string 3,3,3,3,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 (20 choose 5) times the total number of values we can give the zero's (assuming 7 legal values) 7^15 (is this correct). Question 1: How can I calculate the number of ways to get exactly 5 dice of the same value (that is, for all die values)? Note: if I just naively use my first answer above and multiply by 8, I get an enormous amount of double counting. I understand that I could solve for each of the cases (5 1's), (5, 2's), (5, 3's), ... (5's, 8) sum them (more simply 8*(5 1's) ). Then subtract the sum of number of overlaps (5 1's) and (5 2's), (5 1's) and (5 3's)... (5 1's) and (5, 2's) and ... and (5, 8's) but this seems exceedingly messy. I would like a generalization of this in a way that scales up to large numbers of samples and large numbers of classes. How can I calculate the number of ways to get at least 5 dice of the same value? So 111110000000000000000 or 11110100000000000002 or 11111100000001110000 or 11011211222222223333, but not 00001111222233334444 or 000511512252363347744. I'm looking for answers which either explain the math or point to a library which supports this (esp python modules). Extra points for detail and examples. A: I suggest that you spend a little bit of time writing up a Monte Carlo simulation and let it run while you work out the math by hand. Hopefully the Monte Carlo simulation will converge before you're finished with the math and you'll be able to check your solution. A slightly faster option might involve creating a SO clone for math questions. A: Double counting can be solved by use of the Inclusion/Exclusion Principle I suspect it comes out to: Choose(8,1)*P(one set of 5 Xs) - Choose(8,2)*P(a set of 5 Xs and a set of 5 Ys) + Choose(8,3)*P(5 Xs, 5 Ys, 5 Zs) - Choose(8,4)*P(5 Xs, 5 Ys, 5 Zs, 5 As) P(set of 5 Xs) = 20 Choose 5 * 7^15 / 8^20 P(5 Xs, 5 Ys) = 20 Choose 5,5 * 6^10 / 8^20 And so on. This doesn't solve the problem directly of 'more than 5 of the same', as if you simply summed the results of this applied to 5,6,7..20; you would over count the cases where you have, say, 10 1's and 5 8's. You could probably apply inclusion exclusion again to come up with that second answer; so, P(of at least 5)=P(one set of 20)+ ... + (P(one set of 15) - 7*P(set of 5 from 5 dice)) + ((P(one set of 14) - 7*P(one set of 5 from 6) - 7*P(one set of 6 from 6)). Coming up with the source code for that is proving itself more difficult. A: The exact probability distribution F_{s,i} of a sum of i s-sided dice can be calculated as the repeated convolution of the single-die probability distribution with itself: F_{s,i}(k) = sum over n of F_{s,1}(n) * F_{s,i-1}(k-n), where F_{s,1}(k) = 1/s for all 1 <= k <= s and 0 otherwise. http://en.wikipedia.org/wiki/Dice A: This problem is really hard if you have to generalize it (get the exact formula). But anyways, let me explain the algorithm. If you want to know the number of ways to get exactly 5 dice of the same value you have to rephrase your previous problem, as calculate the number of ways to get exactly 5 dice of the value 3 AND no other value can be repeated exactly 5 times For simplicity's sake, let's call function F(20,8,5) (5 dice, all values) the first answer, and F(20,8,5,3) (5 dice, value 3) the second. We have that F(20,8,5) = F(20,8,5,3) * 8 + (events when more than one value is repeated 5 times) So if we can get F(20,8,5,3) it should be pretty simple, isn't it? Well...not so much... First, let us define some variables: X1,X2,X3...,Xi , where Xi=number of times we get the dice i Then: F(20,8,5)/8^20 = P(X1=5 or X2=5 or ... or X8=5, with R=20(rolls) and N=8(dice number)) , P(statement) being the standard way to write a probability. we continue: F(20,8,5,3)/8^20 = P(X3=5 and X1<>5 and ... and X8<>5, R=20, N=8) F(20,8,5,3)/8^20 = 1 - P(X1=5 or X2=5 or X4=5 or X5=5 or X6=5 or X7=5 or X8=5, R=15, N=7) F(20,8,5,3)/8^20 = 1 - F(15,7,5)/7^15 recursively: F(15,7,5) = F(15,7,5,1) * 7 P(X1=5 or X2=5 or X4=5 or X5=5 or X6=5 or X7=5 or X8=5, R=15, N=7) = P(X1=5 and X2<>5 and X4<>5 and .. and X8<>5. R=15, N=7) * 7 F(15,7,5,1)/7^15 = 1 - F(10,6,5)/6^10 F(10,6,5) = F(10,6,5,2) * 6 F(10,6,5,2)/6^10 = 1 - F(5,5,5)/5^5 F(5,5,5) = F(5,5,5,4) * 5 Well then... F(5,5,5,4) is the number of ways to get 5 dice of value 4 in 5 rolls, such that no other value repeats 5 times. There is only 1 way, out of a total 5^5. The probability is then 1/5^5. F(5,5,5) is the number of ways to get 5 dice of any value (out of 5 values) in 5 rolls. It's obviously 5. The probability is then 5/5^5 = 1/5^4. F(10,6,5,2) is the number of ways to get 5 dice of value 2 in 10 rolls, such that no other value repeats 5 times. F(10,6,5,2) = (1-F(5,5,5)/5^5) * 6^10 = (1-1/5^4) * 6^10 Well... I think it may be incorrect at some part, but anyway, you get the idea. I hope I could make the algorithm understandable. edit: I did some checks, and I realized you have to add some cases when you get more than one value repeated exactly 5 times. Don't have time to solve that part though... A: Recursive solution: Prob_same_value(n) = Prob_same_value(n-1) * (1 - Prob_noone_rolling_that_value(N-(n-1))) A: Here is what I am thinking... If you just had 5 dice, you would only have eight ways to get what you want. For each of those eight ways, all possible combinations of the other 15 dice work. So - I think the answer is: (8 * 8^15) / 8^20 (The answer for at least 5 the same.) A: I believe you can use the formula of x occurrences in n events as: P = probability^n * (n!/((n - x)!x!)) So the final result is going to be the sum of results from 0 to n. I don't really see any easy way to combine it into one step that would be less messy. With this way you have the formula spelled out in the code as well. You may have to write your own factorial method though. float calculateProbability(int tosses, int atLeastNumber) { float atLeastProbability = 0; float eventProbability = Math.pow( 1.0/8.0, tosses); int nFactorial = factorial(tosses); for ( i = 1; i <= atLeastNumber; i++) { atLeastProbability += eventProbability * (nFactorial / (factorial(tosses - i) * factorial(i) ); } }
Combinatorics Counting Puzzle: Roll 20, 8-sided dice, what is the probability of getting at least 5 dice of the same value
Assume a game in which one rolls 20, 8-sided die, for a total number of 8^20 possible outcomes. To calculate the probability of a particular event occurring, we divide the number of ways that event can occur by 8^20. One can calculate the number of ways to get exactly 5 dice of the value 3. (20 choose 5) gives us the number of orders of 3. 7^15 gives us the number of ways we can not get the value 3 for 15 rolls. number of ways to get exactly 5, 3's = (20 choose 5)*7^15. The answer can also be viewed as how many ways can I rearrange the string 3,3,3,3,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 (20 choose 5) times the total number of values we the zero's (assuming 7 legal values) 7^15 (is this correct). Question 1: How can I calculate the number of ways to get exactly 5 dice of the same value(That is, for all die values). Note: if I just naively use my first answer above and multiply bt 8, I get an enormous amount of double counting? I understand that I could solve for each of the cases (5 1's), (5, 2's), (5, 3's), ... (5's, 8) sum them (more simply 8*(5 1's) ). Then subtract the sum of number of overlaps (5 1's) and (5 2's), (5 1's) and (5 3's)... (5 1's) and (5, 2's) and ... and (5, 8's) but this seems exceedingly messy. I would a generalization of this in a way that scales up to large numbers of samples and large numbers of classes. How can I calculate the number of ways to get at least 5 dice of the same value? So 111110000000000000000 or 11110100000000000002 or 11111100000001110000 or 11011211222222223333, but not 00001111222233334444 or 000511512252363347744. I'm looking for answers which either explain the math or point to a library which supports this (esp python modules). Extra points for detail and examples.
[ "I suggest that you spend a little bit of time writing up a Monte Carlo simulation and let it run while you work out the math by hand. Hopefully the Monte Carlo simulation will converge before you're finished with the math and you'll be able to check your solution.\nA slightly faster option might involve creating a SO clone for math questions.\n", "Double counting can be solved by use of the Inclusion/Exclusion Principle\nI suspect it comes out to: \nChoose(8,1)*P(one set of 5 Xs) \n- Choose(8,2)*P(a set of 5 Xs and a set of 5 Ys) \n+ Choose(8,3)*P(5 Xs, 5 Ys, 5 Zs) \n- Choose(8,4)*P(5 Xs, 5 Ys, 5 Zs, 5 As)\n\nP(set of 5 Xs) = 20 Choose 5 * 7^15 / 8^20\nP(5 Xs, 5 Ys) = 20 Choose 5,5 * 6^10 / 8^20\n\nAnd so on. This doesn't solve the problem directly of 'more then 5 of the same', as if you simply summed the results of this applied to 5,6,7..20; you would over count the cases where you have, say, 10 1's and 5 8's. \nYou could probably apply inclusion exclusion again to come up with that second answer; so, P(of at least 5)=P(one set of 20)+ ... + (P(one set of 15) - 7*P(set of 5 from 5 dice)) + ((P(one set of 14) - 7*P(one set of 5 from 6) - 7*P(one set of 6 from 6)). Coming up with the source code for that is proving itself more difficult.\n", "The exact probability distribution Fs,i of a sum of i s-sided dice can be calculated as the repeated convolution of the single-die probability distribution with itself.\n\nwhere for all and 0 otherwise.\nhttp://en.wikipedia.org/wiki/Dice\n", "This problem is really hard if you have to generalize it (get the exact formula).\nBut anyways, let me explain the algorithm. \nIf you want to know \n\nthe number of ways to get exactly 5\n dice of the same value\n\nyou have to rephrase your previous problem, as\n\ncalculate the number of ways to get\n exactly 5 dice of the value 3 AND no\n other value can be repeated exactly 5\n times\n\nFor simplicity's sake, let's call function F(20,8,5) (5 dice, all values) the first answer, and F(20,8,5,3) (5 dice, value 3) the second. \nWe have that F(20,8,5) = F(20,8,5,3) * 8 + (events when more than one value is repeated 5 times)\nSo if we can get F(20,8,5,3) it should be pretty simple isn't it?\nWell...not so much...\nFirst, let us define some variables:\nX1,X2,X3...,Xi , where Xi=number of times we get the dice i\nThen:\nF(20,8,5)/20^8 = P(X1=5 or X2=5 or ... or X8=5, with R=20(rolls) and N=8(dice number))\n\n, P(statement) being the standard way to write a probability.\nwe continue:\nF(20,8,5,3)/20^8 = P(X3=5 and X1<>5 and ... and X8<>5, R=20, N=8) \nF(20,8,5,3)/20^8 = 1 - P(X1=5 or X2=5 or X4=5 or X5=5 or X6=5 or X7=5 or X8=5, R=15, N=7) \nF(20,8,5,3)/20^8 = 1 - F(15,7,5)/7^15\n\nrecursively:\nF(15,8,5) = F(15,7,5,1) * 7 \nP(X1=5 or X2=5 or X4=5 or X5=5 or X6=5 or X7=5 or X8=5, R=15, N=7) = P(X1=5 and X2<>5 and X4<>5 and .. and X8<>5. R=15, N=7) * 7\n\n\nF(15,7,5,1)/7^15 = 1 - F(10,6,5)/6^10 F(10,6,5) = F(10,6,5,2) * 6\n\n\nF(10,6,5,2)/6^10 = 1 - F(5,5,5)/5^5\nF(5,5,5) = F(5,5,5,4) * 5\n\nWell then... F(5,5,5,4) is the number of ways to get 5 dices of value 4 in 5 rolls, such as no other dice repeats 5 times. There is only 1 way, out of a total 5^5. The probability is then 1/5^5.\nF(5,5,5) is the number of ways to get 5 dices of any value (out of 5 values) in 5 rolls. It's obviously 5. The probability is then 5/5^5 = 1/5^4.\nF(10,6,5,2) is the number of ways to get 5 dices of value 2 in 10 rolls, such as no other dice repeats 5 times. \nF(10,6,5,2) = (1-F(5,5,5)/5^5) * 6^10 = (1-1/5^4) * 6^10\nWell... 
I think it may be incorrect at some part, but anyway, you get the idea. I hope I could make the algorithm understandable.\nedit:\nI did some checks, and I realized you have to add some cases when you get more than one value repeated exactly 5 times. Don't have time to solve that part thou...\n", "Recursive solution:\nProb_same_value(n) = Prob_same_value(n-1) * (1 - Prob_noone_rolling_that_value(N-(n-1)))\n\n", "Here is what I am thinking...\nIf you just had 5 dice, you would only have eight ways to get what you want.\nFor each of those eight ways, all possible combinations of the other 15 dice work.\nSo - I think the answer is: (8 * 815) / 820\n(The answer for at least 5 the same.)\n", "I believe you can use the formula of x occurrences in n events as:\nP = probability^n * (n!/((n - x)!x!))\nSo the final result is going to be the sum of results from 0 to n.\nI don't really see any easy way to combine it into one step that would be less messy. With this way you have the formula spelled out in the code as well. You may have to write your own factorial method though.\n float calculateProbability(int tosses, int atLeastNumber) {\n float atLeastProbability = 0;\n float eventProbability = Math.pow( 1.0/8.0, tosses);\n int nFactorial = factorial(tosses);\n\n for ( i = 1; i <= atLeastNumber; i++) {\n atLeastProbability += eventProbability * (nFactorial / (factorial(tosses - i) * factorial(i) );\n }\n }\n\n" ]
[ 5, 3, 2, 2, 1, 1, 1 ]
[]
[]
[ "combinatorics", "dice", "discrete_mathematics", "puzzle", "python" ]
stackoverflow_0001202343_combinatorics_dice_discrete_mathematics_puzzle_python.txt
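As the first answer suggests, a Monte Carlo run is a cheap cross-check for whichever closed form you derive. A sketch estimating P(at least one face shows 5 or more times in 20 rolls of an 8-sided die):

import random

def trial(rolls=20, sides=8, target=5):
    counts = [0] * sides
    for _ in xrange(rolls):
        counts[random.randrange(sides)] += 1
    return max(counts) >= target

n = 100000
hits = sum(trial() for _ in xrange(n))
print 'estimated P(at least 5 alike): %.4f' % (hits / float(n))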
Q: Java Wrapper to Perl/Python code I have to deploy some Web Services on a server that only supports the Java ones, but some of them will be done using perl or python. I want to know if it is possible to develop a Java wrapper to call a specific code written in perl or python. So, I want to have all the Web Services in Java, but some of them will call some code using other languages. Thanks in advance. Regards, Ukrania A: This depends heavily upon your needs. If Jython is an option for the Python code (it isn't always 100% compatible), then it is probably the best option there. Otherwise, you will need to use Java's Process Builder to call the interpreters directly and return the results on their output stream. This will not be fast (but then again, Jython isn't that fast either, relative to regular Java code), but it is an extremely flexible solution. A: For the Python part of it you can use Jython to run Python code right from your Java virtual machine. It'll integrate fully with your Java code as a bonus. A: For Perl, use Inline::Java. There are several options for integrating the code; you can call a separate process or you can use an embedded interpreter. A: For Python you can use the Java Scripting API. A Perl implementation is sadly still missing. A: There's something I used a while back called Jython which allows you to execute Python code from Java. It was a little quirky, but I got it to do what I needed. http://www.jython.org
Java Wrapper to Perl/Python code
I have to deploy some Web Services on a server that only supports the Java ones, but some of them will be done using perl or python. I want to know if it is possible to develop a Java wrapper to call a specific code written in perl or python. So, I want to have all the Web Services in Java, but some of them will call some code using other languages. Thanks in advance. Regards, Ukrania
[ "This depends heavily upon your needs. If Jython is an option for the Python code (it isn't always 100% compatible), then it is probably the best option there. Otherwise, you will need to use Java's Process Builder to call the interpretters directly and return the results on their output stream. This will not be fast (but then again, Jython isn't that fast either, relative to regular Java code), but it is an extremely flexible solution.\n", "For the Python part of it you can use Jython to run Python code right from your Java virtual machine. It'll integrate fully with your Java code as a bonus.\n", "For Perl, use Inline::Java. There are several options for integrating the code; you can call a separate process or you can use an embedded interpreter.\n", "For Python you can use the Java Scripting API.\nA Perl implementation is sadly still missing.\n", "There's something I used a while back called Jython which allows you to execute Python code from Java. It was a little quirky, but I got it to do what I needed.\nhttp://www.jython.org\n" ]
[ 4, 3, 3, 1, 0 ]
[]
[]
[ "java", "perl", "python", "web_services", "wrapper" ]
stackoverflow_0001201628_java_perl_python_web_services_wrapper.txt
Q: wxPython: Handling events in a widget that is inside a notebook I have a wxPython notebook, in this case a wx.aui.AuiNotebook. (but this problem has happened with other kinds of notebooks as well.) In my notebook I have a widget, in this case a subclass of ScrolledPanel, for which I am trying to do some custom event handling (for wx.EVT_KEY_DOWN). However, the events are not being handled. I checked my code outside of the notebook, and the event handling works, but when I put my widget in the notebook, the event handler doesn't seem to get invoked when the event happens. Does the notebook somehow block the event? How do I solve this? A: I tried reproducing your problem but it worked fine for me. The only thing I can think of is that there is one of your classes that also binds to wx.EVT_KEY_DOWN and doesn't call wx.Event.Skip() in its callback. That would prevent further handling of the event. If your scrolled panel happens to be downstream of such an object in the sequence of event handlers it will never see the event. For reference, here's an example that worked for me (on Windows). Is what you're doing much different than this? import wx import wx.aui, wx.lib.scrolledpanel class AppFrame(wx.Frame): def __init__(self, *args, **kwds): wx.Frame.__init__(self, *args, **kwds) # The notebook self.nb = wx.aui.AuiNotebook(self) # Create a scrolled panel panel = wx.lib.scrolledpanel.ScrolledPanel(self, -1) panel.SetupScrolling() self.add_panel(panel, 'Scrolled Panel') # Create a normal panel panel = wx.Panel(self, -1) self.add_panel(panel, 'Simple Panel') # Set the notebook on the frame self.sizer = wx.BoxSizer() self.sizer.Add(self.nb, 1, wx.EXPAND) self.SetSizer(self.sizer) # Status bar to display the key code of what was typed self.sb = self.CreateStatusBar() def add_panel(self, panel, name): panel.Bind(wx.EVT_KEY_DOWN, self.on_key) self.nb.AddPage(panel, name) def on_key(self, event): self.sb.SetStatusText("key: %d [%d]" % (event.GetKeyCode(), event.GetTimestamp())) event.Skip() class TestApp(wx.App): def OnInit(self): frame = AppFrame(None, -1, 'Click on a panel and hit a key') frame.Show() self.SetTopWindow(frame) return 1 if __name__ == "__main__": app = TestApp(0) app.MainLoop()
wxPython: Handling events in a widget that is inside a notebook
I have a wxPython notebook, in this case a wx.aui.AuiNotebook. (but this problem has happened with other kinds of notebooks as well.) In my notebook I have a widget, in this case a subclass of ScrolledPanel, for which I am trying to do some custom event handling (for wx.EVT_KEY_DOWN). However, the events are not being handled. I checked my code outside of the notebook, and the event handling works, but when I put my widget in the notebook, the event handler doesn't seem to get invoked when the event happens. Does the notebook somehow block the event? How do I solve this?
[ "I tried reproducing your problem but it worked fine for me. The only thing I can think of is that there is one of your classes that also binds to wx.EVT_KEY_DOWN and doesn't call wx.Event.Skip() in its callback. That would prevent further handling of the event. If your scrolled panel happens to be downstream of such an object in the sequence of event handlers it will never see the event.\nFor reference, here's an example that worked for me (on Windows). Is what you're doing much different than this?\nimport wx\nimport wx.aui, wx.lib.scrolledpanel\n\nclass AppFrame(wx.Frame):\n def __init__(self, *args, **kwds):\n wx.Frame.__init__(self, *args, **kwds)\n\n # The notebook\n self.nb = wx.aui.AuiNotebook(self)\n\n # Create a scrolled panel\n panel = wx.lib.scrolledpanel.ScrolledPanel(self, -1)\n panel.SetupScrolling()\n self.add_panel(panel, 'Scrolled Panel')\n\n # Create a normal panel\n panel = wx.Panel(self, -1)\n self.add_panel(panel, 'Simple Panel')\n\n # Set the notebook on the frame\n self.sizer = wx.BoxSizer()\n self.sizer.Add(self.nb, 1, wx.EXPAND)\n self.SetSizer(self.sizer)\n\n # Status bar to display the key code of what was typed\n self.sb = self.CreateStatusBar()\n\n def add_panel(self, panel, name):\n panel.Bind(wx.EVT_KEY_DOWN, self.on_key)\n self.nb.AddPage(panel, name)\n\n def on_key(self, event):\n self.sb.SetStatusText(\"key: %d [%d]\" % (event.GetKeyCode(), event.GetTimestamp()))\n event.Skip()\n\nclass TestApp(wx.App):\n def OnInit(self):\n frame = AppFrame(None, -1, 'Click on a panel and hit a key')\n frame.Show()\n self.SetTopWindow(frame)\n return 1\n\nif __name__ == \"__main__\":\n app = TestApp(0)\n app.MainLoop()\n\n" ]
[ 2 ]
[]
[]
[ "event_handling", "python", "user_interface", "wxpython" ]
stackoverflow_0001201979_event_handling_python_user_interface_wxpython.txt
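A minimal sketch (not from the original thread) of the Skip() pitfall the answer describes: if any handler bound to wx.EVT_KEY_DOWN upstream omits event.Skip(), the event stops there and handlers bound later never see it.

import wx

class KeyAwarePanel(wx.Panel):
    def __init__(self, parent):
        wx.Panel.__init__(self, parent)
        self.Bind(wx.EVT_KEY_DOWN, self.on_key)

    def on_key(self, event):
        print "key:", event.GetKeyCode()
        # Omitting this call swallows the event; calling it lets other
        # EVT_KEY_DOWN bindings (e.g. on the notebook page) run too.
        event.Skip()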
Q: Python: \number Backreference in re.sub I'm trying to use python's re.sub function to replace some text. >>> import re >>> text = "<hi type=\"italic\"> the></hi>" >>> pat_error = re.compile(">(\s*\w*)*>") >>> pat_error.search(text) <_sre.SRE_Match object at 0xb7a3fea0> >>> re.sub(pat_error, ">\1", text) '<hi type="italic">\x01</hi>' Afterwards the value of text should be "<hi type="italic"> the</hi>" A: Two bugs in your code. First, you're not matching (and specifically, capturing) what you think you're matching and capturing -- insert after your call to .search: >>> _.groups() ('',) The unconstrained repetition of repetitions (star after a capturing group with nothing but stars) matches once too many -- with the empty string at the end of what you think you're matching -- and that's what gets captured. Fix by changing at least one of the stars to a plus, e.g., by: >>> pat_error = re.compile(r">(\s*\w+)*>") >>> pat_error.search(text) <_sre.SRE_Match object at 0x83ba0> >>> _.groups() (' the',) Now THIS matches and captures sensibly. Second, you're not using raw string literal syntax where you should, so you don't have a backslash where you think you have one -- you have an escape sequence \1 which is the same as chr(1). Fix by using raw string literal syntax, i.e. after the above snippet >>> pat_error.sub(r">\1", text) '<hi type="italic"> the</hi>' Alternatively you could double up all of your backslashes, to avoid them being taken as the start of escape sequences -- but, raw string literal syntax is much more readable. A: >>> text.replace("><", "<") '<hi type="italic"> the</hi>'
Python: \number Backreference in re.sub
I'm trying to use python's re.sub function to replace some text. >>> import re >>> text = "<hi type=\"italic\"> the></hi>" >>> pat_error = re.compile(">(\s*\w*)*>") >>> pat_error.search(text) <_sre.SRE_Match object at 0xb7a3fea0> >>> re.sub(pat_error, ">\1", text) '<hi type="italic">\x01</hi>' Afterwards the value of text should be "<hi type="italic"> the</hi>"
[ "Two bugs in your code. First, you're not matching (and specifically, capturing) what you think you're matching and capturing -- insert after your call to .search:\n>>> _.groups()\n('',)\n\nThe unconstrained repetition of repetitions (star after a capturing group with nothing but stars) matches once too many -- with the empty string at the end of what you think you're matchin -- and that's what gets captured. Fix by changing at least one of the stars to a plus, e.g., by:\n>>> pat_error = re.compile(r\">(\\s*\\w+)*>\")\n>>> pat_error.search(text)\n<_sre.SRE_Match object at 0x83ba0>\n>>> _.groups()\n(' the',)\n\nNow THIS matches and captures sensibly. Second, youre not using raw string literal syntax where you should, so you don't have a backslash where you think you have one -- you have an escape sequence \\1 which is the same as chr(1). Fix by using raw string literal syntax, i.e. after the above snippet\n>>> pat_error.sub(r\">\\1\", text)\n'<hi type=\"italic\"> the</hi>'\n\nAlternatively you could double up all of your backslashes, to avoid them being taken as the start of escape sequences -- but, raw string literal syntax is much more readable.\n", ">>> text.replace(\"><\", \"<\")\n'<hi type=\"italic\"> the</hi>'\n\n" ]
[ 10, 0 ]
[]
[]
[ "backreference", "python", "regex" ]
stackoverflow_0001204223_backreference_python_regex.txt
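A short sketch restating the accepted answer's two fixes side by side, so the effect of the raw string prefix is visible:

import re

text = '<hi type="italic"> the></hi>'
pat = re.compile(r">(\s*\w+)*>")  # + instead of * so the group captures ' the'

print pat.sub(">\1", text)   # "\1" is chr(1), giving '<hi type="italic">\x01</hi>'
print pat.sub(r">\1", text)  # raw string keeps the backreference: '<hi type="italic"> the</hi>'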
Q: Split a list into parts based on a set of indexes in Python What is the best way to split a list into parts based on an arbitrary number of indexes? E.g. given the code below indexes = [5, 12, 17] list = range(20) return something like this part1 = list[:5] part2 = list[5:12] part3 = list[12:17] part4 = list[17:] If there are no indexes it should return the entire list. A: This is the simplest and most pythonic solution I can think of: def partition(alist, indices): return [alist[i:j] for i, j in zip([0]+indices, indices+[None])] if the inputs are very large, then the iterators solution should be more convenient: from itertools import izip, chain def partition(alist, indices): pairs = izip(chain([0], indices), chain(indices, [None])) return (alist[i:j] for i, j in pairs) and of course, the very, very lazy guy solution (if you don't mind to get arrays instead of lists, but anyway you can always revert them to lists): import numpy partition = numpy.split A: I would be interested in seeing a more Pythonic way of doing this also. But this is a crappy solution. You need to add a checking for an empty index list. Something along the lines of: indexes = [5, 12, 17] list = range(20) output = [] prev = 0 for index in indexes: output.append(list[prev:index]) prev = index output.append(list[indexes[-1]:]) print output produces [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16], [17, 18, 19]] A: My solution is similar to Il-Bhima's. >>> def parts(list_, indices): ... indices = [0]+indices+[len(list_)] ... return [list_[v:indices[k+1]] for k, v in enumerate(indices[:-1])] Alternative approach If you're willing to slightly change the way you input indices, from absolute indices to relative (that is, from [5, 12, 17] to [5, 7, 5], the below will also give you the desired output, while it doesn't create intermediary lists. >>> from itertools import islice >>> def parts(list_, indices): ... i = iter(list_) ... return [list(islice(i, n)) for n in chain(indices, [None])] A: >>> def burst_seq(seq, indices): ... startpos = 0 ... for index in indices: ... yield seq[startpos:index] ... startpos = index ... yield seq[startpos:] ... >>> list(burst_seq(range(20), [5, 12, 17])) [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16], [17, 18, 19]] >>> list(burst_seq(range(20), [])) [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]] >>> list(burst_seq(range(0), [5, 12, 17])) [[], [], [], []] >>> Maxima mea culpa: it uses a for statement, and it's not using whizzbang stuff like itertools, zip(), None as a sentinel, list comprehensions, ... ;-) A: indices = [5, 12, 17] input = range(20) output = [] reduce(lambda x, y: output.append(input[x:y]) or y, indices + [len(input)], 0) print output A: This is all that I could think of def partition(list_, indexes): if indexes[0] != 0: indexes = [0] + indexes if indexes[-1] != len(list_): indexes = indexes + [len(list_)] return [ list_[a:b] for (a,b) in zip(indexes[:-1], indexes[1:])] A: Cide's makes three copies of the array: [0]+indices copies, ([0]+indices)+[] copies again, and indices[:-1] will copy a third time. Il-Bhima makes five copies. (I'm not counting the return value, of course.) Those could be reduced (izip, islice), but here's a zero-copy version: def iterate_pairs(lst, indexes): prev = 0 for i in indexes: yield prev, i prev = i yield prev, len(lst) def partition(lst, indexes): for first, last in iterate_pairs(lst, indexes): yield lst[first:last] indexes = [5, 12, 17] lst = range(20) print [l for l in partition(lst, indexes)] Of course, array copies are fairly cheap (native code) compared to interpreted Python, but this has another advantage: it's easy to reuse, to mutate the data directly: for first, last in iterate_pairs(lst, indexes): for i in range(first, last): lst[i] = first print lst # [0, 0, 0, 0, 0, 5, 5, 5, 5, 5, 5, 5, 12, 12, 12, 12, 12, 17, 17, 17] (That's why I passed indexes to iterate_pairs. If you don't care about that, you can remove that parameter and just have the final line be "yield prev, None", which is all partition() needs.) A: Here's yet another answer. def partition(l, indexes): result, indexes = [], indexes+[len(l)] reduce(lambda x, y: result.append(l[x:y]) or y, indexes, 0) return result It supports negative indexes and such. >>> partition([1,2,3,4,5], [1, -1]) [[1], [2, 3, 4], [5]] >>>
Split a list into parts based on a set of indexes in Python
What is the best way to split a list into parts based on an arbitrary number of indexes? E.g. given the code below indexes = [5, 12, 17] list = range(20) return something like this part1 = list[:5] part2 = list[5:12] part3 = list[12:17] part4 = list[17:] If there are no indexes it should return the entire list.
[ "This is the simplest and most pythonic solution I can think of:\ndef partition(alist, indices):\n return [alist[i:j] for i, j in zip([0]+indices, indices+[None])]\n\nif the inputs are very large, then the iterators solution should be more convenient:\nfrom itertools import izip, chain\ndef partition(alist, indices):\n pairs = izip(chain([0], indices), chain(indices, [None]))\n return (alist[i:j] for i, j in pairs)\n\nand of course, the very, very lazy guy solution (if you don't mind to get arrays instead of lists, but anyway you can always revert them to lists):\nimport numpy\npartition = numpy.split\n\n", "I would be interested in seeing a more Pythonic way of doing this also. But this is a crappy solution. You need to add a checking for an empty index list.\nSomething along the lines of:\nindexes = [5, 12, 17]\nlist = range(20)\n\noutput = []\nprev = 0\n\nfor index in indexes:\n output.append(list[prev:index])\n prev = index\n\noutput.append(list[indexes[-1]:])\n\nprint output\n\nproduces\n[[0, 1, 2, 3, 4], [5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16], [17, 18, 19]]\n\n", "My solution is similar to Il-Bhima's.\n>>> def parts(list_, indices):\n... indices = [0]+indices+[len(list_)]\n... return [list_[v:indices[k+1]] for k, v in enumerate(indices[:-1])]\n\nAlternative approach\nIf you're willing to slightly change the way you input indices, from absolute indices to relative (that is, from [5, 12, 17] to [5, 7, 5], the below will also give you the desired output, while it doesn't create intermediary lists.\n>>> from itertools import islice\n>>> def parts(list_, indices):\n... i = iter(list_)\n... return [list(islice(i, n)) for n in chain(indices, [None])]\n\n", ">>> def burst_seq(seq, indices):\n... startpos = 0\n... for index in indices:\n... yield seq[startpos:index]\n... startpos = index\n... yield seq[startpos:]\n...\n>>> list(burst_seq(range(20), [5, 12, 17]))\n[[0, 1, 2, 3, 4], [5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16], [17, 18, 19]]\n>>> list(burst_seq(range(20), []))\n[[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]]\n>>> list(burst_seq(range(0), [5, 12, 17]))\n[[], [], [], []]\n>>>\n\nMaxima mea culpa: it uses a for statement, and it's not using whizzbang stuff like itertools, zip(), None as a sentinel, list comprehensions, ... \n;-)\n", "indices = [5, 12, 17]\ninput = range(20)\noutput = []\n\nreduce(lambda x, y: output.append(input[x:y]) or y, indices + [len(input)], 0)\nprint output\n\n", "This is all that I could think of\ndef partition(list_, indexes):\n if indexes[0] != 0:\n indexes = [0] + indexes\n if indexes[-1] != len(list_):\n indexes = indexes + [len(list_)]\n return [ list_[a:b] for (a,b) in zip(indexes[:-1], indexes[1:])]\n\n", "Cide's makes three copies of the array: [0]+indices copies, ([0]+indices)+[] copies again, and indices[:-1] will copy a third time. Il-Bhima makes five copies. 
(I'm not counting the return value, of course.)\nThose could be reduced (izip, islice), but here's a zero-copy version:\ndef iterate_pairs(lst, indexes):\n prev = 0\n for i in indexes:\n yield prev, i\n prev = i\n yield prev, len(lst)\n\ndef partition(lst, indexes):\n for first, last in iterate_pairs(lst, indexes):\n yield lst[first:last]\n\nindexes = [5, 12, 17]\nlst = range(20)\n\nprint [l for l in partition(lst, indexes)]\n\nOf course, array copies are fairly cheap (native code) compared to interpreted Python, but this has another advantage: it's easy to reuse, to mutate the data directly:\nfor first, last in iterate_pairs(lst, indexes):\n for i in range(first, last):\n lst[i] = first\nprint lst\n# [0, 0, 0, 0, 0, 5, 5, 5, 5, 5, 5, 5, 12, 12, 12, 12, 12, 17, 17, 17]\n\n(That's why I passed indexes to iterate_pairs. If you don't care about that, you can remove that parameter and just have the final line be \"yield prev, None\", which is all partition() needs.)\n", "Here's yet another answer.\ndef partition(l, indexes):\n result, indexes = [], indexes+[len(l)]\n reduce(lambda x, y: result.append(l[x:y]) or y, indexes, 0)\n return result\n\nIt supports negative indexes and such.\n>>> partition([1,2,3,4,5], [1, -1])\n[[1], [2, 3, 4], [5]]\n>>> \n\n" ]
[ 57, 13, 8, 6, 3, 0, 0, 0 ]
[ "The plural of index is indices. Going for simplicity/readability.\nindices = [5, 12, 17]\ninput = range(20)\noutput = []\n\nfor i in reversed(indices):\n output.append(input[i:])\n input[i:] = []\noutput.append(input)\n\nwhile len(output):\n print output.pop()\n\n" ]
[ -1 ]
[ "list", "python" ]
stackoverflow_0001198512_list_python.txt
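On Python 3.10 and later (well after this thread), itertools.pairwise makes the boundary-pair idea from the top answer explicit; a sketch, not part of the original discussion:

from itertools import chain, pairwise

def partition(alist, indices):
    bounds = pairwise(chain([0], indices, [len(alist)]))
    return [alist[i:j] for i, j in bounds]

print(partition(list(range(20)), [5, 12, 17]))
# [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16], [17, 18, 19]]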
Q: \r\n vs \n in python eval function Why eval function doesn't work with \r\n but with \n. for example eval("for i in range(5):\r\n print 'hello'") doesn't work eval("for i in range(5):\n print 'hello'") works I know there is not a problem cause using replace("\r","") is corrected, but someone knows why happens? --Edit-- Oh! sorry , exactly, I meant exec. Carriage returns appeared because I'm reading from a HTML textarea via POST (I'm on a Linux box). now It is clearer, thanks to everyone. A: You have a strange definition of "work": >>> eval("for i in range(5):\n print 'hello'") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<string>", line 1 for i in range(5): ^ SyntaxError: invalid syntax >>> I'm not sure why you're using eval -- I suspect you mean exec. Expressions and statements are drastically different entities in Python -- eval only deals with expressions (a bare expression is also a statement, so exec can deal with it as well as other statements). Turning to exec, and pondering the situation as a Python core committer, I think it's a minor mis-design: just like (redundant and useless) spaces, tabs and form feeds just before a NEWLINE are accepted and ignored, so should (just as redundant and useless) carriage returns be. I apologize: I think we never ever considered that somebody might want to put carriage returns there -- but then, it doesn't make any more sense to have e.g. form feeds there, and we do accept that... so I see no rationale for rejecting carriage returns (or other Unicode non-ANSI whitespace, either, now that in Python 3 we accept arbitrary Unicode non-ANSI alphanumerics in identifiers). If you care, please open an issue on Python's issue tracker, and (barring unforeseen opposition by other commiters) I think I can get it fixed by Python 3.2 (which should be out in 12 to 18 months -- that's an estimate [informed guess], not a promise;-). A: Do you mean eval or exec? Please post exactly what you ran, plus the full traceback and error message. The problem is probably because the Python grammar says that lines are terminated by newlines ('\n'), not the two-character sequence '\r\n'. In general, it would be safer for you to use replace('\r\n', '\n') in case there's a meaningful '\r' in there somewhere. It would be better if you didn't have the '\r' there in the first place ... how are you obtaining the text -- binary read on a Windows box?? Talking about safety, you should be careful about using eval or exec on any old code obtained from a possible enemy.
\r\n vs \n in python eval function
Why does the eval function not work with \r\n but work with \n? For example eval("for i in range(5):\r\n print 'hello'") doesn't work eval("for i in range(5):\n print 'hello'") works I know it is not really a problem, because using replace("\r","") fixes it, but does anyone know why this happens? --Edit-- Oh! Sorry, exactly, I meant exec. Carriage returns appeared because I'm reading from an HTML textarea via POST (I'm on a Linux box). Now it is clearer, thanks to everyone.
[ "You have a strange definition of \"work\":\n>>> eval(\"for i in range(5):\\n print 'hello'\")\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"<string>\", line 1\n for i in range(5):\n ^\nSyntaxError: invalid syntax\n>>> \n\nI'm not sure why you're using eval -- I suspect you mean exec. Expressions and statements are drastically different entities in Python -- eval only deals with expressions (a bare expression is also a statement, so exec can deal with it as well as other statements).\nTurning to exec, and pondering the situation as a Python core committer, I think it's a minor mis-design: just like (redundant and useless) spaces, tabs and form feeds just before a NEWLINE are accepted and ignored, so should (just as redundant and useless) carriage returns be. I apologize: I think we never ever considered that somebody might want to put carriage returns there -- but then, it doesn't make any more sense to have e.g. form feeds there, and we do accept that... so I see no rationale for rejecting carriage returns (or other Unicode non-ANSI whitespace, either, now that in Python 3 we accept arbitrary Unicode non-ANSI alphanumerics in identifiers).\nIf you care, please open an issue on Python's issue tracker, and (barring unforeseen opposition by other commiters) I think I can get it fixed by Python 3.2 (which should be out in 12 to 18 months -- that's an estimate [informed guess], not a promise;-).\n", "Do you mean eval or exec? Please post exactly what you ran, plus the full traceback and error message.\nThe problem is probably because the Python grammar says that lines are terminated by newlines ('\\n'), not the two-character sequence '\\r\\n'.\nIn general, it would be safer for you to use replace('\\r\\n', '\\n') in case there's a meaningful '\\r' in there somewhere. It would be better if you didn't have the '\\r' there in the first place ... how are you obtaining the text -- binary read on a Windows box??\nTalking about safety, you should be careful about using eval or exec on any old code obtained from a possible enemy.\n" ]
[ 6, 1 ]
[]
[]
[ "eval", "python" ]
stackoverflow_0001204376_eval_python.txt
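A minimal sketch of the workaround implied above for code arriving from an HTML textarea (the posted_code variable is hypothetical); the answers' warning about running untrusted input through exec still applies:

posted_code = "for i in range(5):\r\n    print 'hello'"  # e.g. from request.POST
source = posted_code.replace("\r\n", "\n").replace("\r", "\n")
exec(compile(source, "<textarea>", "exec"))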
Q: Why is wxGridSizer much slower to initialize on a wxDialog then on a wxFrame? It seems that this is specific to windows, here is an example that reproduces the effect: import wx def makegrid(window): grid = wx.GridSizer(24, 10, 1, 1) window.SetSizer(grid) for i in xrange(240): cell = wx.Panel(window) cell.SetBackgroundColour(wx.Color(i, i, i)) grid.Add(cell, flag=wx.EXPAND) class TestFrame(wx.Frame): def __init__(self, parent): wx.Frame.__init__(self, parent) makegrid(self) class TestDialog(wx.Dialog): def __init__(self, parent): wx.Dialog.__init__(self, parent) makegrid(self) class Test(wx.Frame): def __init__(self): wx.Frame.__init__(self, None) btn1 = wx.Button(self, label="Show Frame") btn2 = wx.Button(self, label="Show Dialog") sizer = wx.BoxSizer(wx.VERTICAL) self.SetSizer(sizer) sizer.Add(btn1, flag=wx.EXPAND) sizer.Add(btn2, flag=wx.EXPAND) btn1.Bind(wx.EVT_BUTTON, self.OnShowFrame) btn2.Bind(wx.EVT_BUTTON, self.OnShowDialog) def OnShowFrame(self, event): TestFrame(self).Show() def OnShowDialog(self, event): TestDialog(self).ShowModal() app = wx.PySimpleApp() app.TopWindow = Test() app.TopWindow.Show() app.MainLoop() I have tried this on the following configurations: Windows 7 with Python 2.5.4 and wxPython 2.8.10.1 Windows XP with Python 2.5.2 and wxPython 2.8.7.1 Windows XP with Python 2.6.0 and wxPython 2.8.9.1 Ubuntu 9.04 with Python 2.6.2 and wxPython 2.8.9.1 The wxDialog wasn't slow only on Ubuntu. A: I got a reply on the wxPython-users mailing list, the problem can be fixed by calling Layout explicitly before the dialog is shown. This is really weird... My guess is that this is due to Windows and wxWidgets not dealing very well with overlapping siblings, and so when the sizer is doing the initial layout and moving all the panels from (0,0) to where they need to be that something about the dialog is causing all of them to be refreshed and repainted at each move. If you instead do the initial layout before the dialog is shown then it is just as fast as the frame. You can do this by adding a call to window.Layout() at the end of makegrid. -- Robin Dunn
Why is wxGridSizer much slower to initialize on a wxDialog than on a wxFrame?
It seems that this is specific to windows, here is an example that reproduces the effect: import wx def makegrid(window): grid = wx.GridSizer(24, 10, 1, 1) window.SetSizer(grid) for i in xrange(240): cell = wx.Panel(window) cell.SetBackgroundColour(wx.Color(i, i, i)) grid.Add(cell, flag=wx.EXPAND) class TestFrame(wx.Frame): def __init__(self, parent): wx.Frame.__init__(self, parent) makegrid(self) class TestDialog(wx.Dialog): def __init__(self, parent): wx.Dialog.__init__(self, parent) makegrid(self) class Test(wx.Frame): def __init__(self): wx.Frame.__init__(self, None) btn1 = wx.Button(self, label="Show Frame") btn2 = wx.Button(self, label="Show Dialog") sizer = wx.BoxSizer(wx.VERTICAL) self.SetSizer(sizer) sizer.Add(btn1, flag=wx.EXPAND) sizer.Add(btn2, flag=wx.EXPAND) btn1.Bind(wx.EVT_BUTTON, self.OnShowFrame) btn2.Bind(wx.EVT_BUTTON, self.OnShowDialog) def OnShowFrame(self, event): TestFrame(self).Show() def OnShowDialog(self, event): TestDialog(self).ShowModal() app = wx.PySimpleApp() app.TopWindow = Test() app.TopWindow.Show() app.MainLoop() I have tried this on the following configurations: Windows 7 with Python 2.5.4 and wxPython 2.8.10.1 Windows XP with Python 2.5.2 and wxPython 2.8.7.1 Windows XP with Python 2.6.0 and wxPython 2.8.9.1 Ubuntu 9.04 with Python 2.6.2 and wxPython 2.8.9.1 The wxDialog wasn't slow only on Ubuntu.
[ "I got a reply on the wxPython-users mailing list, the problem can be fixed by calling Layout explicitly before the dialog is shown.\n\nThis is really weird...\nMy guess is that this is due to\n Windows and wxWidgets not dealing very\n well with overlapping siblings, and so\n when the sizer is doing the initial\n layout and moving all the panels from\n (0,0) to where they need to be that\n something about the dialog is causing\n all of them to be refreshed and\n repainted at each move. If you\n instead do the initial layout before\n the dialog is shown then it is just as\n fast as the frame.\nYou can do this by adding a call to window.Layout() at the end of\n makegrid.\n-- Robin Dunn\n\n" ]
[ 2 ]
[]
[]
[ "python", "windows", "wxpython" ]
stackoverflow_0001198067_python_windows_wxpython.txt
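Applying the advice quoted in the answer to the example from the question amounts to one extra line at the end of makegrid:

def makegrid(window):
    grid = wx.GridSizer(24, 10, 1, 1)
    window.SetSizer(grid)
    for i in xrange(240):
        cell = wx.Panel(window)
        cell.SetBackgroundColour(wx.Color(i, i, i))
        grid.Add(cell, flag=wx.EXPAND)
    window.Layout()  # do the initial layout before the dialog is shown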
Q: TypeError: can't multiply sequence by non-int of type 'str' >>> Enter muzzle velocity (m/2): 60 Enter angle (degrees): 45 Traceback (most recent call last): File "F:/Python31/Lib/idlelib/test", line 9, in <module> range() File "F:/Python31/Lib/idlelib/test", line 7, in range Distance = float(decimal((2*(x*x))((decimal(math.zsin(y)))*(decimal(math.acos(y)))))/2) TypeError: can't multiply sequence by non-int of type 'str' I'm only new, so don't be too harsh if this is really obvious, but why am i getting this error? A: You should convert the data you get from console to integers: x = int(x) y = int(y) Distance = float(decimal((2*(x*x))((decimal(math.zsin(y)))*(decimal(math.acos(y)))))/2) A: >>> '60' * '60' Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: can't multiply sequence by non-int of type 'str' You are trying to multiply two strings together. You must convert the string input from the user to a number using int() or float(). Also, I'm not sure what you're doing with decimal; it looks like you're trying to call the module (the type is in the module, decimal.Decimal) but there's not much point in converting to a Decimal after doing some floating point math and then converting back to a float. In the future, post the code that causes the problem (and keep the interaction and traceback). But first try and shrink the code as much as possible while making sure it still causes the error. This is an important step in debugging. A: You are using raw_input() for getting the input. Instead use input(). It will return an Int. Make sure that you input only numbers or input() will raise an error (say in case of a string). Also, it would be nice if you name your variables properly. x and y don't convey much. (velocity and angle would be so much better)
TypeError: can't multiply sequence by non-int of type 'str'
>>> Enter muzzle velocity (m/2): 60 Enter angle (degrees): 45 Traceback (most recent call last): File "F:/Python31/Lib/idlelib/test", line 9, in <module> range() File "F:/Python31/Lib/idlelib/test", line 7, in range Distance = float(decimal((2*(x*x))((decimal(math.zsin(y)))*(decimal(math.acos(y)))))/2) TypeError: can't multiply sequence by non-int of type 'str' I'm only new, so don't be too harsh if this is really obvious, but why am i getting this error?
[ "You should convert the data you get from console to integers:\nx = int(x)\ny = int(y)\nDistance = float(decimal((2*(x*x))((decimal(math.zsin(y)))*(decimal(math.acos(y)))))/2)\n\n", ">>> '60' * '60'\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: can't multiply sequence by non-int of type 'str'\n\nYou are trying to multiply two strings together. You must convert the string input from the user to a number using int() or float().\nAlso, I'm not sure what you're doing with decimal; it looks like you're trying to call the module (the type is in the module, decimal.Decimal) but there's not much point in converting to a Decimal after doing some floating point math and then converting back to a float.\nIn the future, post the code that causes the problem (and keep the interaction and traceback). But first try and shrink the code as much as possible while making sure it still causes the error. This is an important step in debugging.\n", "You are using raw_input() for getting the input.\nInstead use input(). It will return an Int. Make sure that you input only numbers or input() will raise an error (say in case of a string).\nAlso, it would be nice if you name your variables properly. x and y don't convey much. (velocity and angle would be so much better)\n" ]
[ 10, 6, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001204744_python.txt
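One way the snippet might be repaired (a sketch only: it assumes the intent was the standard flat-ground projectile-range formula, and uses Python 2's raw_input as discussed in the answers):

import math

velocity = float(raw_input("Enter muzzle velocity (m/s): "))
angle = math.radians(float(raw_input("Enter angle (degrees): ")))
distance = velocity ** 2 * math.sin(2 * angle) / 9.81
print "Distance: %.2f m" % distance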
Q: importing modules with submodules from deep in a library here at office we have a library named after the company name and inside of it sublibraries, per project more or less, and in each sublibrary there might be more modules or libraries. we are using Django and this makes our hierarchy a couple of steps deeper... I am a bit perplexed about the differences among the following import instructions: 1: import company.productline.specific.models, company.productline.base.models specific, base = company.productline.specific, company.productline.base 2: import company.productline.specific.models, company.productline.base.models from company.productline import specific, base 3: from company.productline import specific, base import company.productline.specific.models, company.productline.base.models the first style imports only the models? what are then the names specific and base made available in the current namespace? what happens in the initialization of modules if one imports first submodules and only afterwards the containing libraries? maybe the neatest style is the last one, where it is clear (at least to me) that I first import the two modules and putting their names directly in the current namespace and that the second import adds the model submodule to both modules just imported. on the other hand, (1) allows me to import only the inner modules and to refer to them in a compact though clear way (specific.models and base.models) not so sure whether this is a question, but I'm curious to read comments. A: The three examples above are all equivalent in practice. All of them are weird, though. There is no reason to do from company.productline import specific and import company.productline.specific.models You can (most of the time) just access models by specific.models after the first import. It seems reasonable in this case to do from company.productline import base from company.productline import specific And then access these like base.models, specific.whatever, etc. If "specific" is a package, you also need to do an import model in __init__.py to be able to access specific.module. A: so I have looked into it a bit deeper, using this further useless package: A:(__init__.py: print 'importing A', B:(__init__.py: print 'importing B', C1:(__init__.py: print 'importing C1', D:(__init__.py: print 'importing D')) C2:(__init__.py: print 'importing C2', D:(__init__.py: print 'importing D')))) notice that C1 and C2 contain two different modules, both named D in their different namespaces. I will need them both, I don't want to use the whole path A.B.C1.D and A.B.C2.D because that looks too clumsy and I can't put them both in the current namespace because one would overwrite the other and -no- I don't like the idea of changing their names. what I want is to have C1 and C2 in the current namespace and to load the two included D modules. oh, yes: and I want the code to be readable. I tried the two forms from A.B import C1 and the much uglier one import A.B.C1 C1 = A.B.C1 and I would conclude that they are equivalent: Python 2.6.2 (release26-maint, Apr 19 2009, 01:56:41) [GCC 4.3.3] on linux2 >>> from A.B import C1 importing A importing B importing C1 >>> C1 <module 'A.B.C1' from 'A/B/C1/__init__.pyc'> >>> Python 2.6.2 (release26-maint, Apr 19 2009, 01:56:41) [GCC 4.3.3] on linux2 >>> import A.B.C1 importing A importing B importing C1 >>> C1=A.B.C1 >>> C1 <module 'A.B.C1' from 'A/B/C1/__init__.pyc'> >>> Importing package C1 or C2 does not import included modules just because they are there. and too bad that the form from A.B import C1.D is not syntactically accepted: that would be a nice compact thing to have. on the other hand, I am offered the opportunity to do so in A.B.C1.__init__.py if I please. so if I append the line import D to the __init__.py in A.B.C1, this is what happens: Python 2.6.2 (release26-maint, Apr 19 2009, 01:56:41) [GCC 4.3.3] on linux2 >>> from A.B import C1 importing A importing B importing C1 importing D >>> importing included modules is probably best done at the end of the package initialization. considering all this and given some specific django behaviour (that makes it difficult/impossible to automatically import models while importing the package), I think I prefer style 3.
importing modules with submodules from deep in a library
here at office we have a library named after the company name and inside of it sublibraries, per project more or less, and in each sublibrary there might be more modules or libraries. we are using Django and this makes our hierarchy a couple of steps deeper... I am a bit perplexed about the differences among the following import instructions: 1: import company.productline.specific.models, company.productline.base.models specific, base = company.productline.specific, company.productline.base 2: import company.productline.specific.models, company.productline.base.models from company.productline import specific, base 3: from company.productline import specific, base import company.productline.specific.models, company.productline.base.models the first style imports only the models? what are then the names specific and base made available in the current namespace? what happens in the initialization of modules if one imports first submodules and only afterwards the containing libraries? maybe the neatest style is the last one, where it is clear (at least to me) that I first import the two modules and putting their names directly in the current namespace and that the second import adds the model submodule to both modules just imported. on the other hand, (1) allows me to import only the inner modules and to refer to them in a compact though clear way (specific.models and base.models) not so sure whether this is a question, but I'm curious to read comments.
[ "The three examples above are all equivalent in practice. All of them are weird, though. There is no reason to do \nfrom company.productline import specific\n\nand\nimport company.productline.specific.models\n\nYou can (most of the time) just access models by specific.models after the first import.\nIt seems reasonable to in this case do\nfrom company.productline import base\nfrom company.productline import specific\n\nAnd then access these like base.models, specific.whatever, etc. If \"specific\" is a you also need to do a import model in __init__.py to be able to access specific.module.\n", "so I have looked into it a bit deeper, using this further useless package:\nA:(__init__.py: print 'importing A',\n B:(__init__.py: print 'importing B',\n C1:(__init__.py: print 'importing C1',\n D:(__init__.py: print 'importing D'))\n C2:(__init__.py: print 'importing C2',\n D:(__init__.py: print 'importing D'))))\n\nnotice that C1 and C2 contain two different modules, both named D in their different namespaces. I will need them both, I don't want to use the whole path A.B.C1.D and A.B.C2.D because that looks too clumsy and I can't put them both in the current namespace because one would overwrite the other and -no- I don't like the idea of changing their names. what I want is to have C1 and C2 in the current namespace and to load the two included D modules. \noh, yes: and I want the code to be readable.\nI tried the two forms\nfrom A.B import C1\n\nand the much uglier one\nimport A.B.C1\nC1 = A.B.C1\n\nand I would conclude that they are equivalent:\n\nPython 2.6.2 (release26-maint, Apr 19 2009, 01:56:41) [GCC 4.3.3] on linux2\n>>> from A.B import C1\nimporting A\nimporting B\nimporting C1\n>>> C1\n<module 'A.B.C1' from 'A/B/C1/__init__.pyc'>\n>>> \n\n\nPython 2.6.2 (release26-maint, Apr 19 2009, 01:56:41) [GCC 4.3.3] on linux2\n>>> import A.B.C1\nimporting A\nimporting B\nimporting C1\n>>> C1=A.B.C1\n>>> C1\n<module 'A.B.C1' from 'A/B/C1/__init__.pyc'>\n>>> \n\nImporting package C1 or C2 does not import included modules just because they are there. and too bad that the form from A.B import C1.D is not syntactically accepted: that would be a nice compact thing to have.\non the other hand, I am offered the opportunity to do so in A.B.C1.__init__.py if I please. so if I append the line import D to the __init__.py in A.B.C1, this is what happens:\nPython 2.6.2 (release26-maint, Apr 19 2009, 01:56:41) [GCC 4.3.3] on linux2\n>>> from A.B import C1\nimporting A\nimporting B\nimporting C1\nimporting D\n>>> \n\nimporting included modules is probably best done at the end of the package initialization.\n\nconsidering all this and in given some specific django behaviour (that makes it difficult/impossible to automatically import models while importing the package), I think I prefer style 3.\n" ]
[ 0, 0 ]
[]
[]
[ "coding_style", "python" ]
stackoverflow_0001199743_coding_style_python.txt
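A sketch of the pattern the second answer settles on: each package's __init__.py imports its own D at the end, so both submodules load under separate namespaces (Python 2 implicit relative import shown; Python 3 would need from . import D):

# A/B/C1/__init__.py (and likewise A/B/C2/__init__.py)
print 'importing C1'
import D  # Python 2 only; on Python 3 use: from . import D

# client code
from A.B import C1, C2
print C1.D  # <module 'A.B.C1.D' ...>
print C2.D  # <module 'A.B.C2.D' ...>, no name clash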
Q: Scheduling issues in python I'm using python to interface a hardware usb sniffer device with the python API provided by the vendor and I'm trying to read (usb packets) from the device in a separate thread in an infinite loop (which works fine). The problem is that my main loop does not seem to ever get scheduled again (my read loop gets all the attention). The code looks much like this: from threading import Thread import time usb_device = 0 def usb_dump(usb_device): while True: #time.sleep(0.001) packet = ReadUSBDevice(usb_device) print "packet pid: %s" % packet.pid class DumpThread(Thread): def run(self): usb_dump() usb_device = OpenUSBDevice() t = DumpThread() t.start() print "Sleep 1" time.sleep(1) print "End" CloseUSBDevice(usb_device) sys.exit(0) (I could paste actual code, but since you need the hardware device I figure it won't help much). I'm expecting this code to start dumping usb packets for about a second before the main thread terminates the entire program. However, all I see is "Sleep 1" and then the usb_dump() procedure runs forever. If I uncomment the "time.sleep(0.001)" statement in the inner loop of the usb_dump() procedure things start working the way I expect, but then the python code becomes unable to keep up with all the packets coming in :-( The vendor tells me that this is a Python scheduler problem and not their api's fault and therefore won't help me: «However, it seems like you are experiencing some nuances when using threading in Python. By putting the time.sleep in the DumpThread thread, you are explicitly signaling to the Python threading system to give up control. Otherwise, it is up the Python interpreter to determine when to switch threads and it usually does that after a certain number of byte code instructions have been executed.» Can somebody confirm that python is the problem here? Is there another way to make the DumpThread release control? Any other ideas? A: Your vendor would be right if yours was pure python code; however, C extensions may release the GIL, and therefore allow for actual multithreading. In particular, time.sleep does release the GIL (you can check it directly from the source code, here - look at floatsleep implementation); so your code should not have any problem. As a further proof, I have also made a simple test, just removing the calls to USB, and it actually works as expected: from threading import Thread import time import sys usb_device = 0 def usb_dump(): for i in range(100): time.sleep(0.001) print "dumping usb" class DumpThread(Thread): def run(self): usb_dump() t = DumpThread() t.start() print "Sleep 1" time.sleep(1) print "End" sys.exit(0) Finally, just a couple of notes on the code you posted: usb_device is not being passed to the thread. You need to pass it as a parameter or (argh!) tell the thread to get it from the global namespace. Instead of forcing sys.exit(), it could be better to just signal the thread to stop, and then close the USB device. I suspect your code could get some multithreading issue, as it is now. If you need just a periodic poll, threading.Timer class may be a better solution for you. [Update] About the latest point: as told in the comment, I think a Timer would better fit the semantic of your function (a periodic poll) and would automatically avoid issues with the GIL not being released by the vendor code. A: I'm assuming you wrote a Python C module that exposes the ReadUSBDevice function, and that it's intended to block until a USB packet is received, then return it. The native ReadUSBDevice implementation needs to release the Python GIL while it's waiting for a USB packet, and then reacquire it when it receives one. This allows other Python threads to run while you're executing native code. http://docs.python.org/c-api/init.html#thread-state-and-the-global-interpreter-lock While you've unlocked the GIL, you can't access Python. Release the GIL, run the blocking function, then when you know you have something to return back to Python, re-acquire it. If you don't do this, then no other Python threads can execute while your native blocking is going on. If this is a vendor-supplied Python module, failing to release the GIL during native blocking activity is a bug. Note that if you're receiving many packets, and actually processing them in Python, then other threads should still run. Multiple threads which are actually running Python code won't run in parallel, but it'll frequently switch between threads, giving them all a chance to run. This doesn't work if native code is blocking without releasing the GIL. edit: I see you mentioned this is a vendor-supplied library. If you don't have source, a quick way to see if they're releasing the GIL: start the ReadUSBDevice thread while no USB activity is happening, so ReadUSBDevice simply sits around waiting for data. If they're releasing the GIL, the other threads should run unimpeded. If they're not, it'll block the whole interpreter. That would be a serious bug. A: I think the vendor is correct. Assuming this is CPython, there is no true parallel threading; only one thread can execute at a time. This is because of the implementation of the global interpreter lock. You may be able to achieve an acceptable solution by using the multiprocessing module, which effectively sidesteps the garbage collector's lock by spawning true sub-processes. Another possibility that may help is to modify the scheduler's switching behaviour.
Scheduling issues in python
I'm using python to interface a hardware usb sniffer device with the python API provided by the vendor and I'm trying to read (usb packets) from the device in a separate thread in an infinite loop (which works fine). The problem is that my main loop does not seem to ever get scheduled again (my read loop gets all the attention). The code looks much like this: from threading import Thread import time usb_device = 0 def usb_dump(usb_device): while True: #time.sleep(0.001) packet = ReadUSBDevice(usb_device) print "packet pid: %s" % packet.pid class DumpThread(Thread): def run(self): usb_dump() usb_device = OpenUSBDevice() t = DumpThread() t.start() print "Sleep 1" time.sleep(1) print "End" CloseUSBDevice(usb_device) sys.exit(0) (I could paste actual code, but since you need the hardware device I figure it won't help much). I'm expecting this code to start dumping usb packets for about a second before the main thread terminates the entire program. However, all I see is "Sleep 1" and then the usb_dump() procedure runs forever. If I uncomment the "time.sleep(0.001)" statement in the inner loop of the usb_dump() procedure things start working the way I expect, but then the python code becomes unable to keep up with all the packets coming in :-( The vendor tells me that this is a Python scheduler problem and not their api's fault and therefore won't help me: «However, it seems like you are experiencing some nuances when using threading in Python. By putting the time.sleep in the DumpThread thread, you are explicitly signaling to the Python threading system to give up control. Otherwise, it is up the Python interpreter to determine when to switch threads and it usually does that after a certain number of byte code instructions have been executed.» Can somebody confirm that python is the problem here? Is there another way to make the DumpThread release control? Any other ideas?
[ "Your vendor would be right if yours was pure python code; however, C extensions may release the GIL, and therefore allows for actual multithreading.\nIn particular, time.sleep does release the GIL (you can check it directly from the source code, here - look at floatsleep implementation); so your code should not have any problem.\nAs a further proof, I have made also a simple test, just removing the calls to USB, and it actually works as expected:\nfrom threading import Thread\nimport time\nimport sys\n\nusb_device = 0\n\ndef usb_dump():\n for i in range(100):\n time.sleep(0.001)\n print \"dumping usb\"\n\nclass DumpThread(Thread):\n def run(self):\n usb_dump()\n\nt = DumpThread()\nt.start()\nprint \"Sleep 1\"\ntime.sleep(1)\nprint \"End\"\nsys.exit(0)\n\nFinally, just a couple of notes on the code you posted:\n\nusb_device is not being passed to the thread. You need to pass it as a parameter or (argh!) tell the thread to get it from the global namespace.\nInstead of forcing sys.exit(), it could be better to just signal the thread to stop, and then closing USB device. I suspect your code could get some multithreading issue, as it is now.\nIf you need just a periodic poll, threading.Timer class may be a better solution for you.\n\n[Update] About the latest point: as told in the comment, I think a Timer would better fit the semantic of your function (a periodic poll) and would automatically avoid issues with the GIL not being released by the vendor code.\n", "I'm assuming you wrote a Python C module that exposes the ReadUSBDevice function, and that it's intended to block until a USB packet is received, then return it.\nThe native ReadUSBDevice implementation needs to release the Python GIL while it's waiting for a USB packet, and then reacquire it when it receives one. This allows other Python threads to run while you're executing native code.\nhttp://docs.python.org/c-api/init.html#thread-state-and-the-global-interpreter-lock\nWhile you've unlocked the GIL, you can't access Python. Release the GIL, run the blocking function, then when you know you have something to return back to Python, re-acquire it.\nIf you don't do this, then no other Python threads can execute while your native blocking is going on. If this is a vendor-supplied Python module, failing to release the GIL during native blocking activity is a bug.\nNote that if you're receiving many packets, and actually processing them in Python, then other threads should still run. Multiple threads which are actually running Python code won't run in parallel, but it'll frequently switch between threads, giving them all a chance to run. This doesn't work if native code is blocking without releasing the GIL.\nedit: I see you mentioned this is a vendor-supplied library. If you don't have source, a quick way to see if they're releasing the GIL: start the ReadUSBDevice thread while no USB activity is happening, so ReadUSBDevice simply sits around waiting for data. If they're releasing the GIL, the other threads should run unimpeded. If they're not, it'll block the whole interpreter. That would be a serious bug.\n", "I think the vendor is correct. Assuming this is CPython, there is no true parallel threading; only one thread can execute at a time. 
This is because of the implementation of the global interpreter lock.\nYou may be able to achieve an acceptable solution by using the multiprocessing module, which effectively sidesteps the garbage collector's lock by spawning true sub-processes.\nAnother possibility that may help is to modify the scheduler's switching behaviour.\n" ]
[ 3, 2, 0 ]
[]
[]
[ "multithreading", "python", "scheduling" ]
stackoverflow_0001205328_multithreading_python_scheduling.txt
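A sketch of the "quick way to see if they're releasing the GIL" suggested in the second answer: start the reader thread while no USB traffic is flowing, so it blocks inside the vendor call, and watch whether the main thread keeps printing (ReadUSBDevice and OpenUSBDevice stand in for the vendor API):

from threading import Thread
import time

class DumpThread(Thread):
    def run(self):
        ReadUSBDevice(usb_device)  # blocks: no packets are arriving

usb_device = OpenUSBDevice()
DumpThread().start()
for i in range(10):
    print "main thread alive: %d" % i  # stops appearing if the call holds the GIL
    time.sleep(0.5)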
Q: Django - Repeating a form field n times in one form I have a Django form with several fields in it one of which needs to be repeated n times (where n is not known at design time) how would I go about coding this (if it is possible at all)? e.g. instead of :- Class PaymentsForm(forms.form): invoice = forms.CharField(widget=ValueHiddenInput()) total = forms.CharField(widget=ValueHiddenInput()) item_name_1 = forms.CharField(widget=ValueHiddenInput()) item_name_2 = forms.CharField(widget=ValueHiddenInput()) . . . item_name_n = forms.CharField(widget=ValueHiddenInput()) I need something like :- Class PaymentsForm(forms.form): invoice = forms.CharField(widget=ValueHiddenInput()) total = forms.CharField(widget=ValueHiddenInput()) item_name[n] = forms.CharField(widget=ValueHiddenInput()) Thanks, Richard. A: You can create the repeated fields in the __init__ method of your form: class PaymentsForm(forms.Form): invoice = forms.CharField(widget=forms.HiddenInput()) total = forms.CharField(widget=forms.HiddenInput()) def __init__(self, *args, **kwargs): super(PaymentsForm, self).__init__(*args, **kwargs) for i in xrange(10): self.fields['item_name_%d' % i] = forms.CharField(widget=forms.HiddenInput()) More about dynamic forms can be found e.g. here edit: to answer the question in your comment: just give the number of repetitions as an argument to the __init__ method, something like this: def __init__(self, repetitions, *args, **kwargs): super(PaymentsForm, self).__init__(*args, **kwargs) for i in xrange(repetitions): self.fields['item_name_%d' % i] = forms.CharField(widget=forms.HiddenInput()) and then in your view (or wherever you create the form): payments_form = PaymentsForm(10) A: Use formsets.
Django - Repeating a form field n times in one form
I have a Django form with several fields in it one of which needs to be repeated n times (where n is not known at design time) how would I go about coding this (if it is possible at all)? e.g. instead of :- Class PaymentsForm(forms.form): invoice = forms.CharField(widget=ValueHiddenInput()) total = forms.CharField(widget=ValueHiddenInput()) item_name_1 = forms.CharField(widget=ValueHiddenInput()) item_name_2 = forms.CharField(widget=ValueHiddenInput()) . . . item_name_n = forms.CharField(widget=ValueHiddenInput()) I need something like :- Class PaymentsForm(forms.form): invoice = forms.CharField(widget=ValueHiddenInput()) total = forms.CharField(widget=ValueHiddenInput()) item_name[n] = forms.CharField(widget=ValueHiddenInput()) Thanks, Richard.
[ "You can create the repeated fields in the __init__ method of your form:\nclass PaymentsForm(forms.Form):\n invoice = forms.CharField(widget=forms.HiddenInput())\n total = forms.CharField(widget=forms.HiddenInput())\n\n def __init__(self, *args, **kwargs):\n super(PaymentsForm, self).__init__(*args, **kwargs)\n for i in xrange(10):\n self.fields['item_name_%d' % i] = forms.CharField(widget=forms.HiddenInput())\n\nMore about dynamic forms can be found e.g. here\nedit: to answer the question in your comment: just give the number of repetitions as an argument to the __init__ method, something like this:\n def __init__(self, repetitions, *args, **kwargs):\n super(PaymentsForm, self).__init__(*args, **kwargs)\n for i in xrange(repetitions):\n self.fields['item_name_%d' % i] = forms.CharField(widget=forms.HiddenInput())\n\nand then in your view (or wherever you create the form):\npayments_form = PaymentsForm(10)\n\n", "Use formsets.\n" ]
[ 10, 4 ]
[]
[]
[ "django", "django_forms", "python" ]
stackoverflow_0001205626_django_django_forms_python.txt
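To flesh out the one-line "Use formsets" answer, a minimal sketch (the import path matches Django of that era, and the field name comes from the question):

from django import forms
from django.forms.formsets import formset_factory

class ItemForm(forms.Form):
    item_name = forms.CharField(widget=forms.HiddenInput())

ItemNameFormSet = formset_factory(ItemForm, extra=10)  # n repeats of the field
formset = ItemNameFormSet()  # iterate over it in the template to render each form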
Q: How to use Python's Easygui module to pick files and insert filenames into code I'm trying to use Python's easygui module to select a file and then insert its name into a program I wrote (see code below). So I want to insert filename 1 and 2 where it says insert filename1, etc.. Any help would be greatly appreciated. Thanks! import easygui import csv msg='none' title='select a 90m distance csv file' filetypes=['*.csv'] default='*' filename1= easygui.fileopenbox() filename2= easygui.fileopenbox() dist90m_GIS_filename=(open('**insert filename1'**,'rb')) datafile_filename=(open(**insert filename2'**,'rb')) GIS_FH=csv.reader(dist90m_GIS_filename) DF_FH=csv.reader(datafile_filename) dist90m=[] for line in GIS_FH: dist90m.append(line[3]) data1=[] data2=[] for line in DF_FH: data1.append(','.join(line[0:57])) data2.append(','.join(line[58:63])) outfile=(open('X:\\herring_schools\\python_tests\\excel_test_out.csv','w')) i=0 for row in data1: row=row+','+dist90m[i]+','+data2[i]+'\n' outfile.write(row) i=i+1 outfile.close() A: I'm going to assume you're new to programming. If I misunderstood your question, I apologize. In your code, after the lines: filename1 = easygui.fileopenbox() filename2 = easygui.fileopenbox() The selected file names are stored in the variables filename1 and filename2. You can use those variables to open file handles like this: dist90m_GIS_filename=(open(filename1,'rb')) datafile_filename=(open(filename2,'rb')) Notice how I simply wrote filename1 where you wrote **insert filename1**. This is the whole point of variables. You use them where you need their value.
How to use Python's Easygui module to pick files and insert filenames into code
I'm trying to use Python's easygui module to select a file and then insert its name into a program I wrote (see code below). So I want to insert filename 1 and 2 where it says insert filename1, etc.. Any help would be greatly appreciated. Thanks! import easygui import csv msg='none' title='select a 90m distance csv file' filetypes=['*.csv'] default='*' filename1= easygui.fileopenbox() filename2= easygui.fileopenbox() dist90m_GIS_filename=(open('**insert filename1'**,'rb')) datafile_filename=(open(**insert filename2'**,'rb')) GIS_FH=csv.reader(dist90m_GIS_filename) DF_FH=csv.reader(datafile_filename) dist90m=[] for line in GIS_FH: dist90m.append(line[3]) data1=[] data2=[] for line in DF_FH: data1.append(','.join(line[0:57])) data2.append(','.join(line[58:63])) outfile=(open('X:\\herring_schools\\python_tests\\excel_test_out.csv','w')) i=0 for row in data1: row=row+','+dist90m[i]+','+data2[i]+'\n' outfile.write(row) i=i+1 outfile.close()
[ "I'm going to assume you're new to programming. If I misunderstood your question, I apologize.\nIn your code, after the lines:\nfilename1 = easygui.fileopenbox()\nfilename2 = easygui.fileopenbox()\n\nThe selected file names are stored in the variables filename1 and filename2. You can use those variables to open file handles like this:\ndist90m_GIS_filename=(open(filename1,'rb'))\ndatafile_filename=(open(filename2,'rb'))\n\nNotice how I simply wrote filename1 where you wrote **insert filename1**. This is the whole point of variables. You use them where you need their value.\n" ]
[ 2 ]
[]
[]
[ "easygui", "file", "python" ]
stackoverflow_0001202902_easygui_file_python.txt
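Worth noting: the question defines msg, title, filetypes and default but never passes them to fileopenbox; a sketch of wiring them in, using easygui's standard signature:

filename1 = easygui.fileopenbox(msg, title, default=default, filetypes=filetypes)
dist90m_GIS_filename = open(filename1, 'rb')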
Q: removing extensions in subdirectories I need to remove the extension ".tex": ./1-aoeeu/1.tex ./2-thst/2.tex ./3-oeu/3.tex ./4-uoueou/4.tex ./5-aaa/5.tex ./6-oeua/6.tex ./7-oue/7.tex Please, do it with some tools below: Sed and find Ruby Python My Poor Try: $find . -maxdepth 2 -name "*.tex" -ok mv `sed '[email protected]@@g' {}` {} + A: A Python script to do the same: import os.path, shutil def remove_ext(arg, dirname, fnames): argfiles = (os.path.join(dirname, f) for f in fnames if f.endswith(arg)) for f in argfiles: shutil.move(f, f[:-len(arg)]) os.path.walk('/some/path', remove_ext, '.tex') A: One way, not necessarily the fastest (but at least the quickest developed): pax> for i in *.c */*.c */*/*.c ; do ...> j=$(echo "$i" | sed 's/\.c$//') ...> echo mv "$i" "$j" ...> done It's equivalent since your maxdepth is 2. The script is just echoing the mv command at the moment (for test purposes) and working on C files (since I had no tex files to test with). Or, you can use find with all its power thus: pax> find . -maxdepth 2 -name '*.tex' | while read line ; do ...> j=$(echo "$line" | sed 's/\.tex$//') ...> mv "$line" "$j" ...> done A: Using bash, find and mv from your base directory. for i in $(find . -type f -maxdepth 2 -name "*.tex"); do mv $i $(echo "$i" | sed 's|.tex$||'); done Variation 2 based on other answers here. find . -type f -maxdepth 2 -name "*.tex" | while read line; do mv "$line" "${line%%.tex}"; done PS: I did not get the part about escaping '.' by pax... A: Using "for i in" may cause "too many parameters" errrors A better approach is to pipe find onto the next process. Example: find . -type f -name "*.tex" | while read file do mv $file ${file%%tex}g done (Note: Wont handle files with spaces) A: There's an excellent Perl rename script that ships with some distributions, and otherwise you can find it on the web. (I'm not sure where it resides officially, but this is it). Check if your rename was written by Larry Wall (AUTHOR section of man rename). It will let you do something like: find . [-maxdepth 2] -name "*.tex" -exec rename 's/\.tex//' '{}' \; Using -exec is simplest here because there's only one action to perform, and it's not too expensive to invoke rename multiple times. If you need to do multiple things, use the "while read" form: find . [-maxdepth 2] -name "*.tex" | while read texfile; do rename 's/\.tex//' $texfile; done If you have something you want to invoke only once: find . [-maxdepth 2] -name "*.tex" | xargs rename 's/\.tex//' That last one makes clear how useful rename is - if everything's already in the same place, you've got a quick regexp renamer.
removing extensions in subdirectories
I need to remove the extension ".tex": ./1-aoeeu/1.tex ./2-thst/2.tex ./3-oeu/3.tex ./4-uoueou/4.tex ./5-aaa/5.tex ./6-oeua/6.tex ./7-oue/7.tex Please, do it with some tools below: Sed and find Ruby Python My Poor Try: $find . -maxdepth 2 -name "*.tex" -ok mv `sed 's@.tex@@g' {}` {} +
[ "A Python script to do the same:\nimport os.path, shutil\n\ndef remove_ext(arg, dirname, fnames):\n argfiles = (os.path.join(dirname, f) for f in fnames if f.endswith(arg))\n for f in argfiles:\n shutil.move(f, f[:-len(arg)])\n\nos.path.walk('/some/path', remove_ext, '.tex')\n\n", "One way, not necessarily the fastest (but at least the quickest developed):\n\n pax> for i in *.c */*.c */*/*.c ; do\n ...> j=$(echo \"$i\" | sed 's/\\.c$//')\n ...> echo mv \"$i\" \"$j\"\n ...> done\n\nIt's equivalent since your maxdepth is 2. The script is just echoing the mv command at the moment (for test purposes) and working on C files (since I had no tex files to test with).\nOr, you can use find with all its power thus:\n\n pax> find . -maxdepth 2 -name '*.tex' | while read line ; do\n ...> j=$(echo \"$line\" | sed 's/\\.tex$//')\n ...> mv \"$line\" \"$j\"\n ...> done\n\n", "Using bash, find and mv from your base directory. \n for i in $(find . -type f -maxdepth 2 -name \"*.tex\"); \n do \n mv $i $(echo \"$i\" | sed 's|.tex$||'); \n done\n\n\nVariation 2 based on other answers here.\nfind . -type f -maxdepth 2 -name \"*.tex\" | while read line; \ndo \n mv \"$line\" \"${line%%.tex}\"; \ndone\n\n\nPS: I did not get the part about escaping '.' by pax...\n", "Using \"for i in\" may cause \"too many parameters\" errrors\nA better approach is to pipe find onto the next process.\nExample:\nfind . -type f -name \"*.tex\" | while read file\ndo\n mv $file ${file%%tex}g\ndone\n\n(Note: Wont handle files with spaces)\n", "There's an excellent Perl rename script that ships with some distributions, and otherwise you can find it on the web. (I'm not sure where it resides officially, but this is it). Check if your rename was written by Larry Wall (AUTHOR section of man rename). It will let you do something like:\nfind . [-maxdepth 2] -name \"*.tex\" -exec rename 's/\\.tex//' '{}' \\;\n\nUsing -exec is simplest here because there's only one action to perform, and it's not too expensive to invoke rename multiple times. If you need to do multiple things, use the \"while read\" form:\nfind . [-maxdepth 2] -name \"*.tex\" | while read texfile; do rename 's/\\.tex//' $texfile; done\n\nIf you have something you want to invoke only once:\nfind . [-maxdepth 2] -name \"*.tex\" | xargs rename 's/\\.tex//'\n\nThat last one makes clear how useful rename is - if everything's already in the same place, you've got a quick regexp renamer.\n" ]
[ 4, 1, 0, 0, 0 ]
[]
[]
[ "find", "python", "ruby", "sed" ]
stackoverflow_0001204617_find_python_ruby_sed.txt
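A minimal pure-Python sketch of the same batch rename (os.walk replaces the long-deprecated os.path.walk from the first answer; the depth limit and '.tex' suffix mirror the question, everything else is illustrative):

    import os

    def strip_ext(root, ext='.tex', maxdepth=2):
        base_depth = root.rstrip(os.sep).count(os.sep)
        for dirpath, dirnames, filenames in os.walk(root):
            # stop descending once we are maxdepth levels below root
            if dirpath.count(os.sep) - base_depth >= maxdepth:
                dirnames[:] = []
                continue
            for fname in filenames:
                if fname.endswith(ext):
                    src = os.path.join(dirpath, fname)
                    os.rename(src, src[:-len(ext)])

    strip_ext('.')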
Q: Is there a Python shortcut for variable checking and assignment? I'm finding myself typing the following a lot (developing for Django, if that's relevant): if testVariable: myVariable = testVariable else: # something else Alternatively, and more commonly (i.e. building up a parameters list) if 'query' in request.POST.keys(): myVariable = request.POST['query'] else: # something else, probably looking at other keys Is there a shortcut I just don't know about that simplifies this? Something with the kind of logic myVariable = assign_if_exists(testVariable)? A: Assuming you want to leave myVariable untouched at its previous value in the "not exist" case, myVariable = testVariable or myVariable deals with the first case, and myVariable = request.POST.get('query', myVariable) deals with the second one. Neither has much to do with "exist", though (which is hardly a Python concept;-): the first one is about true or false, the second one about presence or absence of a key in a collection. A: The first instance is stated oddly... Why set a boolean to another boolean? What you may mean is to set myVariable to testVariable when testVariable is not a zero length string or not None or not something that happens to evaluate to False. If so, I prefer the more explicit formulations myVariable = testVariable if bool(testVariable) else somethingElse myVariable = testVariable if testVariable is not None else somethingElse When indexing into a dictionary, simply use get. myVariable = request.POST.get('query',"No Query")
Is there a Python shortcut for variable checking and assignment?
I'm finding myself typing the following a lot (developing for Django, if that's relevant): if testVariable: myVariable = testVariable else: # something else Alternatively, and more commonly (i.e. building up a parameters list) if 'query' in request.POST.keys(): myVariable = request.POST['query'] else: # something else, probably looking at other keys Is there a shortcut I just don't know about that simplifies this? Something with the kind of logic myVariable = assign_if_exists(testVariable)?
[ "Assuming you want to leave myVariable untouched to its previous value in the \"not exist\" case,\nmyVariable = testVariable or myVariable\n\ndeals with the first case, and\nmyVariable = request.POST.get('query', myVariable)\n\ndeals with the second one. Neither has much to do with \"exist\", though (which is hardly a Python concept;-): the first one is about true or false, the second one about presence or absence of a key in a collection.\n", "The first instance is stated oddly... Why set a boolean to another boolean? \nWhat you may mean is to set myVariable to testVariable when testVariable is not a zero length string or not None or not something that happens to evaluate to False.\nIf so, I prefer the more explicit formulations\nmyVariable = testVariable if bool(testVariable) else somethingElse\n\nmyVariable = testVariable if testVariable is not None else somethingElse\n\nWhen indexing into a dictionary, simply use get.\nmyVariable = request.POST.get('query',\"No Query\")\n\n" ]
[ 25, 7 ]
[]
[]
[ "django", "idioms", "python" ]
stackoverflow_0001207333_django_idioms_python.txt
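A compact illustration of both idioms from the answers (params stands in for request.POST; the keys and values are made up):

    params = {'query': 'django'}

    my_variable = params.get('query', 'default')    # key present -> 'django'
    my_variable = params.get('missing', 'default')  # key absent  -> 'default'

    test_variable = ''
    my_variable = test_variable or 'fallback'       # falsy value -> 'fallback'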
Q: How do I limit the border size on a matplotlib graph? I'm making some pretty big graphs, and the whitespace in the border is taking up a lot of pixels that would be better used by data. It seems that the border grows as the graph grows. Here are the guts of my graphing code: import matplotlib from pylab import figure fig = figure() ax = fig.add_subplot(111) ax.plot_date((dates, dates), (highs, lows), '-', color='black') ax.plot_date(dates, closes, '-', marker='_', color='black') ax.set_title('Title') ax.grid(True) fig.set_figheight(96) fig.set_figwidth(24) Is there a way to reduce the size of the border? Maybe a setting somewhere that would allow me to keep the border at a constant 2 inches or so? A: Since it looks like you're just using a single subplot, you may want to skip add_subplot and go straight to add_axes. This will allow you to give the size of the axes (in figure-relative coordinates), so you can make it as large as you want within the figure. In your case, this would mean your code would look something like import matplotlib.pyplot as plt fig = plt.figure() # add_axes takes [left, bottom, width, height] border_width = 0.05 ax_size = [0+border_width, 0+border_width, 1-2*border_width, 1-2*border_width] ax = fig.add_axes(ax_size) ax.plot_date((dates, dates), (highs, lows), '-', color='black') ax.plot_date(dates, closes, '-', marker='_', color='black') ax.set_title('Title') ax.grid(True) fig.set_figheight(96) fig.set_figwidth(24) If you wanted, you could even put the parameters to set_figheight/set_figwidth directly in the figure() call. A: Try the subplots_adjust API: subplots_adjust(*args, **kwargs) fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None) Update the SubplotParams with kwargs (defaulting to rc where None) and update the subplot locations
How do I limit the border size on a matplotlib graph?
I'm making some pretty big graphs, and the whitespace in the border is taking up a lot of pixels that would be better used by data. It seems that the border grows as the graph grows. Here are the guts of my graphing code: import matplotlib from pylab import figure fig = figure() ax = fig.add_subplot(111) ax.plot_date((dates, dates), (highs, lows), '-', color='black') ax.plot_date(dates, closes, '-', marker='_', color='black') ax.set_title('Title') ax.grid(True) fig.set_figheight(96) fig.set_figwidth(24) Is there a way to reduce the size of the border? Maybe a setting somewhere that would allow me to keep the border at a constant 2 inches or so?
[ "Since it looks like you're just using a single subplot, you may want to skip add_subplot and go straight to add_axes. This will allow you to give the size of the axes (in figure-relative coordinates), so you can make it as large as you want within the figure. In your case, this would mean your code would look something like\n import matplotlib.pyplot as plt\n\n fig = plt.figure()\n\n # add_axes takes [left, bottom, width, height]\n border_width = 0.05\n ax_size = [0+border_width, 0+border_width, \n 1-2*border_width, 1-2*border-width]\n ax = fig.add_axes(ax_size)\n ax.plot_date((dates, dates), (highs, lows), '-', color='black')\n ax.plot_date(dates, closes, '-', marker='_', color='black')\n\n ax.set_title('Title')\n ax.grid(True)\n fig.set_figheight(96)\n fig.set_figwidth(24)\n\nIf you wanted, you could even put the parameters to set_figheight/set_figwidth directly in the figure() call.\n", "Try the subplots_adjust API:\n\nsubplots_adjust(*args, **kwargs)\nfig.subplots_adjust(left=None, bottom=None, right=None, wspace=None, hspace=None)\nUpdate the SubplotParams with kwargs (defaulting to rc where None) and update the subplot locations\n\n" ]
[ 7, 5 ]
[]
[]
[ "graph", "matplotlib", "plot", "python" ]
stackoverflow_0001203639_graph_matplotlib_plot_python.txt
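A minimal sketch combining the two answers (the 0.05 margins are illustrative; subplots_adjust takes figure-relative fractions between 0 and 1):

    import matplotlib.pyplot as plt

    fig = plt.figure(figsize=(24, 96))
    ax = fig.add_subplot(111)
    # pull the axes out to within 5% of each figure edge
    fig.subplots_adjust(left=0.05, right=0.95, top=0.95, bottom=0.05)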
Q: Adding a field to a structured numpy array What is the cleanest way to add a field to a structured numpy array? Can it be done destructively, or is it necessary to create a new array and copy over the existing fields? Are the contents of each field stored contiguously in memory so that such copying can be done efficiently? A: If you're using numpy 1.3, there's also numpy.lib.recfunctions.append_fields(). For many installations, you'll need to import numpy.lib.recfunctions explicitly to access this; import numpy alone will not expose numpy.lib.recfunctions. A: import numpy def add_field(a, descr): """Return a new array that is like "a", but has additional fields. Arguments: a -- a structured numpy array descr -- a numpy type description of the new fields The contents of "a" are copied over to the appropriate fields in the new array, whereas the new fields are uninitialized. The arguments are not modified. >>> sa = numpy.array([(1, 'Foo'), (2, 'Bar')], \ dtype=[('id', int), ('name', 'S3')]) >>> sa.dtype.descr == numpy.dtype([('id', int), ('name', 'S3')]) True >>> sb = add_field(sa, [('score', float)]) >>> sb.dtype.descr == numpy.dtype([('id', int), ('name', 'S3'), \ ('score', float)]) True >>> numpy.all(sa['id'] == sb['id']) True >>> numpy.all(sa['name'] == sb['name']) True """ if a.dtype.fields is None: raise ValueError, "`A' must be a structured numpy array" b = numpy.empty(a.shape, dtype=a.dtype.descr + descr) for name in a.dtype.names: b[name] = a[name] return b
Adding a field to a structured numpy array
What is the cleanest way to add a field to a structured numpy array? Can it be done destructively, or is it necessary to create a new array and copy over the existing fields? Are the contents of each field stored contiguously in memory so that such copying can be done efficiently?
[ "If you're using numpy 1.3, there's also numpy.lib.recfunctions.append_fields(). \nFor many installations, you'll need to import numpy.lib.recfunctions to access this. import numpy will not allow one to see the numpy.lib.recfunctions\n", "import numpy\n\ndef add_field(a, descr):\n \"\"\"Return a new array that is like \"a\", but has additional fields.\n\n Arguments:\n a -- a structured numpy array\n descr -- a numpy type description of the new fields\n\n The contents of \"a\" are copied over to the appropriate fields in\n the new array, whereas the new fields are uninitialized. The\n arguments are not modified.\n\n >>> sa = numpy.array([(1, 'Foo'), (2, 'Bar')], \\\n dtype=[('id', int), ('name', 'S3')])\n >>> sa.dtype.descr == numpy.dtype([('id', int), ('name', 'S3')])\n True\n >>> sb = add_field(sa, [('score', float)])\n >>> sb.dtype.descr == numpy.dtype([('id', int), ('name', 'S3'), \\\n ('score', float)])\n True\n >>> numpy.all(sa['id'] == sb['id'])\n True\n >>> numpy.all(sa['name'] == sb['name'])\n True\n \"\"\"\n if a.dtype.fields is None:\n raise ValueError, \"`A' must be a structured numpy array\"\n b = numpy.empty(a.shape, dtype=a.dtype.descr + descr)\n for name in a.dtype.names:\n b[name] = a[name]\n return b\n\n" ]
[ 20, 8 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0001201817_numpy_python.txt
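A minimal usage sketch of the recfunctions route from the first answer (the field name and values are made up):

    import numpy
    from numpy.lib import recfunctions as rfn

    sa = numpy.array([(1, 'Foo'), (2, 'Bar')], dtype=[('id', int), ('name', 'S3')])
    # copies into a new array; usemask=False returns a plain ndarray
    sb = rfn.append_fields(sa, 'score', [0.5, 0.9], usemask=False)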
Q: How to change a GtkTreeView style in Python? I have an app written in python that presents some of its data in a tree view. By default, the tree view is a floaty white affair with little floaty triangles to expand the nodes. Is it possible to change this style to be more like a Windows explorer tree view? Specifically, I'd like to have vertical lines indicating parentage of the nodes. If this is possible, how would it be done? A: For lines linking the arrows there is a method in gtk.TreeView for that, see http://library.gnome.org/devel/pygtk/stable/class-gtktreeview.html#method-gtktreeview--set-enable-tree-lines A: You need to create a custom CellRenderer for this. The links below might help. http://www.pygtk.org/pygtk2tutorial/ch-TreeViewWidget.html http://www.pygtk.org/pygtk2tutorial/sec-CellRenderers.html
How to change a GtkTreeView style in Python?
I have an app written in python that presents some of its data in a tree view. By default, the tree view is a floaty white affair with little floaty triangles to expand the nodes. Is it possible to change this style to be more like a Windows explorer tree view? Specifically, I'd like to have vertical lines indicating parentage of the nodes. If this is possible, how would it be done?
[ "For lines linking the arrows there is a method in gtk.TreeView for that, see http://library.gnome.org/devel/pygtk/stable/class-gtktreeview.html#method-gtktreeview--set-enable-tree-lines\n", "you need to create a custom CellRenderers for this. the below links might help.\nhttp://www.pygtk.org/pygtk2tutorial/ch-TreeViewWidget.html\nhttp://www.pygtk.org/pygtk2tutorial/sec-CellRenderers.html\n" ]
[ 3, 1 ]
[]
[]
[ "gtk", "gtktreeview", "pygtk", "python" ]
stackoverflow_0001207250_gtk_gtktreeview_pygtk_python.txt
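The set_enable_tree_lines method from the first answer in context (PyGTK, GTK+ 2.10 or later):

    import gtk

    treeview = gtk.TreeView()
    treeview.set_enable_tree_lines(True)  # draw the explorer-style connecting lines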
Q: "Interfaces" in Python: Yea or Nay? So I'm starting a project using Python after spending a significant amount of time in static land. I've seen some projects that make "interfaces" which are really just classes without any implementations. Before, I'd scoff at the idea and ignore that section of those projects. But now, I'm beginning to warm up to the idea. Just so we're clear, an interface in Python would look something like this: class ISomething(object): def some_method(): pass def some_other_method(some_argument): pass Notice that you aren't passing self to any of the methods, thus requiring that the method be overriden to be called. I see this as a nice form of documentation and completeness testing. So what is everyone here's opinion on the idea? Have I been brainwashed by all the C# programming I've done, or is this a good idea? A: I'm not sure what the point of that is. Interfaces (of this form, anyway) are largely to work around the lack of multiple inheritance. But Python has MI, so why not just make an abstract class? class Something(object): def some_method(self): raise NotImplementedError() def some_other_method(self, some_argument): raise NotImplementedError() A: In Python 2.6 and later, you can use abstract base classes instead. These are useful, because you can then test to see if something implements a given ABC by using "isinstance". As usual in Python, the concept is not as strictly enforced as it would be in a strict language, but it's handy. Moreover there are nice idiomatic ways of declaring abstract methods with decorators - see the link above for examples. A: There are some cases where interfaces can be very handy. Twisted makes fairly extensive use of Zope interfaces, and in a project I was working on Zope interfaces worked really well. Enthought's traits packaged recently added interfaces, but I don't have any experience with them. Beware overuse though -- duck typing and protocols are a fundamental aspect of Python, only use interfaces if they're absolutely necessary. A: The pythonic way is to "Ask for forgiveness rather than receive permission". Interfaces are all about receiving permission to perform some operation on an object. Python prefers this: def quacker(duck): try: duck.quack(): except AttributeError: raise ThisAintADuckException A: I don't think interfaces would add anything to the code environment. Method definition enforcing happens without them. If an object expected to be have like Foo and have method bar(), and it does't, it will throw an AttributeError. Simply making sure an interface method gets defined doesn't guarantee its correctness; behavioral unit tests need to be in place anyway. It's just as effective to write a "read this or die" page describing what methods your object needs to have to be compatible with what you're plugging it in as having elaborate docstrings in an interface class, since you're probably going to have tests for it anyway. One of those tests can be standard for all compatible objects that will check the invocation and return type of each base method. A: Seems kind of unnecessary to me - when I'm writing classes like that I usually just make the base class (your ISomething) with no methods, and mention in the actual documentation which methods subclasses are supposed to override. A: You can create an interface in a dynamically typed language, but there's no enforcement of the interface at compile time. A statically typed language's compiler will warn you if you forget to implement (or mistype!) 
a method of an interface. Since you receive no such help in a dynamically typed language, your interface declaration serves only as documentation. (Which isn't necessarily bad, it's just that your interface declaration provides no runtime advantage versus writing comments.) A: I'm about to do something similar with my Python project, the only things I would add are: Extra long, in-depth doc strings for each interface and all the abstract methods. I would add in all the required arguments so there's a definitive list. Raise an exception instead of 'pass'. Prefix all methods so they are obviously part of the interface - interface Foo: def foo_method1() A: I personally use interfaces a lot in conjunction with the Zope Component Architecture (ZCA). The advantage is not so much to have interfaces but to be able to use them with adapters and utilities (singletons). E.g. you could create an adapter which can take a class which implements ISomething but adapts it to the some interface ISomethingElse. Basically it's a wrapper. The original class would be: class MyClass(object): implements(ISomething) def do_something(self): return "foo" Then imagine interface ISomethingElse has a method do_something_else(). An adapter could look like this: class SomethingElseAdapter(object): implements(ISomethingElse) adapts(ISomething) def __init__(self, context): self.context = context def do_something_else(): return self.context.do_something()+"bar" You then would register that adapter with the component registry and you could then use it like this: >>> obj = MyClass() >>> print obj.do_something() "foo" >>> adapter = ISomethingElse(obj) >>> print adapter.do_something_else() "foobar" What that gives you is the ability to extend the original class with functionality which the class does not provide directly. You can do that without changing that class (it might be in a different product/library) and you could simply exchange that adapter by a different implementation without changing the code which uses it. It's all done by registration of components in initialization time. This of course is mainly useful for frameworks/libraries. I think it takes some time to get used to it but I really don't want to live without it anymore. But as said before it's also true that you need to think exactly where it makes sense and where it doesn't. Of course interfaces on it's own can also already be useful as documentation of the API. It's also useful for unit tests where you can test if your class actually implements that interface. And last but not least I like starting by writing the interface and some doctests to get the idea of what I am actually about to code. For more information you can check out my little introduction to it and there is a quite extensive description of it's API. A: Glyph Lefkowitz (of Twisted fame) just recently wrote an article on this topic. Personally I do not feel the need for interfaces, but YMMV. A: Have you looked at PyProtocols? it has a nice interface implementation that you should look at.
"Interfaces" in Python: Yea or Nay?
So I'm starting a project using Python after spending a significant amount of time in static land. I've seen some projects that make "interfaces" which are really just classes without any implementations. Before, I'd scoff at the idea and ignore that section of those projects. But now, I'm beginning to warm up to the idea. Just so we're clear, an interface in Python would look something like this: class ISomething(object): def some_method(): pass def some_other_method(some_argument): pass Notice that you aren't passing self to any of the methods, thus requiring that the method be overridden to be called. I see this as a nice form of documentation and completeness testing. So what is everyone here's opinion on the idea? Have I been brainwashed by all the C# programming I've done, or is this a good idea?
[ "I'm not sure what the point of that is. Interfaces (of this form, anyway) are largely to work around the lack of multiple inheritance. But Python has MI, so why not just make an abstract class?\nclass Something(object):\n def some_method(self):\n raise NotImplementedError()\n def some_other_method(self, some_argument):\n raise NotImplementedError()\n\n", "In Python 2.6 and later, you can use abstract base classes instead. These are useful, because you can then test to see if something implements a given ABC by using \"isinstance\". As usual in Python, the concept is not as strictly enforced as it would be in a strict language, but it's handy. Moreover there are nice idiomatic ways of declaring abstract methods with decorators - see the link above for examples.\n", "There are some cases where interfaces can be very handy. Twisted makes fairly extensive use of Zope interfaces, and in a project I was working on Zope interfaces worked really well. Enthought's traits packaged recently added interfaces, but I don't have any experience with them.\nBeware overuse though -- duck typing and protocols are a fundamental aspect of Python, only use interfaces if they're absolutely necessary.\n", "The pythonic way is to \"Ask for forgiveness rather than receive permission\". Interfaces are all about receiving permission to perform some operation on an object. Python prefers this:\ndef quacker(duck):\n try:\n duck.quack():\n except AttributeError:\n raise ThisAintADuckException\n\n", "I don't think interfaces would add anything to the code environment.\n\nMethod definition enforcing happens without them. If an object expected to be have like Foo and have method bar(), and it does't, it will throw an AttributeError.\nSimply making sure an interface method gets defined doesn't guarantee its correctness; behavioral unit tests need to be in place anyway. \nIt's just as effective to write a \"read this or die\" page describing what methods your object needs to have to be compatible with what you're plugging it in as having elaborate docstrings in an interface class, since you're probably going to have tests for it anyway. One of those tests can be standard for all compatible objects that will check the invocation and return type of each base method.\n\n", "Seems kind of unnecessary to me - when I'm writing classes like that I usually just make the base class (your ISomething) with no methods, and mention in the actual documentation which methods subclasses are supposed to override.\n", "You can create an interface in a dynamically typed language, but there's no enforcement of the interface at compile time. A statically typed language's compiler will warn you if you forget to implement (or mistype!) a method of an interface. Since you receive no such help in a dynamically typed language, your interface declaration serves only as documentation. (Which isn't necessarily bad, it's just that your interface declaration provides no runtime advantage versus writing comments.)\n", "I'm about to do something similar with my Python project, the only things I would add are:\n\nExtra long, in-depth doc strings for each interface and all the abstract methods.\nI would add in all the required arguments so there's a definitive list.\nRaise an exception instead of 'pass'.\nPrefix all methods so they are obviously part of the interface - interface Foo: def foo_method1()\n\n", "I personally use interfaces a lot in conjunction with the Zope Component Architecture (ZCA). 
The advantage is not so much to have interfaces but to be able to use them with adapters and utilities (singletons).\nE.g. you could create an adapter which can take a class which implements ISomething but adapts it to the some interface ISomethingElse. Basically it's a wrapper.\nThe original class would be:\nclass MyClass(object):\n implements(ISomething)\n\n def do_something(self):\n return \"foo\"\n\nThen imagine interface ISomethingElse has a method do_something_else(). An adapter could look like this:\nclass SomethingElseAdapter(object):\n implements(ISomethingElse)\n adapts(ISomething)\n\n def __init__(self, context):\n self.context = context\n\n def do_something_else():\n return self.context.do_something()+\"bar\"\n\nYou then would register that adapter with the component registry and you could then use it like this:\n>>> obj = MyClass()\n>>> print obj.do_something()\n\"foo\"\n>>> adapter = ISomethingElse(obj)\n>>> print adapter.do_something_else()\n\"foobar\"\n\nWhat that gives you is the ability to extend the original class with functionality which the class does not provide directly. You can do that without changing that class (it might be in a different product/library) and you could simply exchange that adapter by a different implementation without changing the code which uses it. It's all done by registration of components in initialization time.\nThis of course is mainly useful for frameworks/libraries.\nI think it takes some time to get used to it but I really don't want to live without it anymore. But as said before it's also true that you need to think exactly where it makes sense and where it doesn't. Of course interfaces on it's own can also already be useful as documentation of the API. It's also useful for unit tests where you can test if your class actually implements that interface. And last but not least I like starting by writing the interface and some doctests to get the idea of what I am actually about to code.\nFor more information you can check out my little introduction to it and there is a quite extensive description of it's API. \n", "Glyph Lefkowitz (of Twisted fame) just recently wrote an article on this topic. Personally I do not feel the need for interfaces, but YMMV.\n", "Have you looked at PyProtocols? it has a nice interface implementation that you should look at.\n" ]
[ 30, 12, 11, 10, 7, 5, 4, 3, 1, 1, 1 ]
[]
[]
[ "coding_style", "documentation", "interface", "python" ]
stackoverflow_0000552058_coding_style_documentation_interface_python.txt
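A minimal sketch of the abstract-base-class route mentioned in the answers (Python 2.6+ spelling, to match the era of the thread):

    from abc import ABCMeta, abstractmethod

    class ISomething(object):
        __metaclass__ = ABCMeta

        @abstractmethod
        def some_method(self):
            pass

    class Concrete(ISomething):
        def some_method(self):
            return 'done'

    # isinstance works against the ABC; instantiating a subclass that
    # does not implement some_method raises TypeError
    print isinstance(Concrete(), ISomething)  # True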
Q: Can pydoc generate subdirectories? Is there any way to get pydoc's writedocs() function to create subdirectories for packages? For instance, let's say I have the following modules to document: foo.py dir/bar.py dir/__init__.py When I run pydoc.writedocs(), I get the following files: foo.html dir.bar.html I would like to get: foo.html dir/bar.html Is there any way to do this? A: pydoc.writedocs just loops calling writedoc, which is documented (and implemented) to "write a file in the current directory". The only way out that I can see is by making a modified version and forcing it (i.e., sigh, monkeypatching it) into the module, or monkeypatching some key aspect of it, namely where 'open' opens for writing the HTML files it's asked to open. Specifically, in your code, you could do something like: import pydoc def monkey_open(name, option): if option == 'w' and name.endswith('.html'): name_pieces = name.split('.') name_pieces[-2:] = ['.'.join(name_pieces[-2:])] name = '/'.join(name_pieces) return open(name, option) pydoc.open = monkey_open Not an elegant or extremely robust solution, but "needs must"... pydoc just isn't designed to allow you to do what you want, so the thing needs to be "shoehorned in" a bit.
Can pydoc generate subdirectories?
Is there any way to get pydoc's writedocs() function to create subdirectories for packages? For instance, let's say I have the following modules to document: foo.py dir/bar.py dir/__init__.py When I run pydoc.writedocs(), I get the following files: foo.html dir.bar.html I would like to get: foo.html dir/bar.html Is there any way to do this?
[ "pydoc.writedocs just loops calling writedoc, which is documented (and implemented) to \"write a file in the current directory\". The only way out that I can see is by making a modified version and forcing it (i.e., sigh, monkeypatching it) into the module, or monkeypatching some key aspect of it, namely where 'open' opens for writing the HTML files it's asked to open. Specifically, in your code, you could do something like:\nimport pydoc\n\ndef monkey_open(name, option):\n if option == 'w' and name.endswith('.html'):\n name_pieces = name.split('.')\n name_pieces[-2:] = '.'.join(name_pieces[-2:])\n name = '/'.join(name_pieces)\n return open(name, option)\n\npydoc.open = monkey_open\n\nNot an elegant or extremely robust solution, but \"needs must\"... pydoc just isn't designed to allow you to do what you want, so the thing needs to be \"shoehorned in\" a bit.\n" ]
[ 1 ]
[]
[]
[ "pydoc", "python" ]
stackoverflow_0001208990_pydoc_python.txt
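An alternative sketch that post-processes pydoc's flat output instead of monkeypatching open (assumes the docs were just written into the current directory; os.renames creates the intermediate folders):

    import os, pydoc

    pydoc.writedocs('.')
    for fname in os.listdir('.'):
        if fname.endswith('.html') and fname.count('.') > 1:
            parts = fname.split('.')
            # 'dir.bar.html' -> 'dir/bar.html'
            target = os.path.join(os.path.join(*parts[:-2]), '.'.join(parts[-2:]))
            os.renames(fname, target)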
Q: Python: Why does `sys.exit(msg)` called from a thread not print `msg` to stderr? Today I ran up against the fact that sys.exit() called from a child-thread does not kill the main process. I did not know this before, and this is okay, but I needed a long time to realize this. It would have saved a lot of time if sys.exit(msg) had printed msg to stderr. But it did not. It turned out that it wasn't a real bug in my application; it deliberately called sys.exit(msg) with a meaningful error -- but I just could not see this. In the docs for sys.exit() it is stated: "[...] any other object is printed to sys.stderr and results in an exit code of 1" This is not true for a call from a child-thread, where sys.exit() obviously behaves as thread.exit(): "Raise the SystemExit exception. When not caught, this will cause the thread to exit silently" I think when a programmer wants sys.exit(msg) to print an error message, then it should just be printed -- independent of where it is called from. Why not? I currently don't see any reason. At least there should be a hint in the docs for sys.exit() that the message is not printed from threads. What do you think? Why are error messages concealed from threads? Does this make sense? Best regards, Jan-Philip Gehrcke A: I agree that the Python docs are incorrect, or maybe more precisely incomplete, regarding sys.exit and SystemExit when called/raised by threads other than the main one; please open a doc issue on the Python online tracker so this can be addressed in a future iteration of the docs (probably a near-future one -- doc fixes are easier and smoother than code fixes;-). The remedy is pretty easy, of course -- just wrap any function you're using as the target of a threading.Thread with a decorator that does a try/except SystemExit, e: around it, and performs the "write to stderr" extra functionality you require (or, maybe better, uses a logging.error call instead) before terminating. But, with the doc issue that you correctly point out, it's hard to think about doing that unless and until one has met with the problem and in fact has had to spend some time in debugging to pin it down, as you've had to do (on the collective behalf of the core python developers -- sorry!). A: Not all threads in python are equal. Calling sys.exit from a thread does not actually exit the system. Thus, calling sys.exit() from a child thread is nonsensical, so it makes sense that it does not behave the way you expect. This page talks more about threading objects, and the differences between threads and the special "main" thread.
Python: Why does `sys.exit(msg)` called from a thread not print `msg` to stderr?
Today I ran up against the fact that sys.exit() called from a child-thread does not kill the main process. I did not know this before, and this is okay, but I needed a long time to realize this. It would have saved a lot of time if sys.exit(msg) had printed msg to stderr. But it did not. It turned out that it wasn't a real bug in my application; it deliberately called sys.exit(msg) with a meaningful error -- but I just could not see this. In the docs for sys.exit() it is stated: "[...] any other object is printed to sys.stderr and results in an exit code of 1" This is not true for a call from a child-thread, where sys.exit() obviously behaves as thread.exit(): "Raise the SystemExit exception. When not caught, this will cause the thread to exit silently" I think when a programmer wants sys.exit(msg) to print an error message, then it should just be printed -- independent of where it is called from. Why not? I currently don't see any reason. At least there should be a hint in the docs for sys.exit() that the message is not printed from threads. What do you think? Why are error messages concealed from threads? Does this make sense? Best regards, Jan-Philip Gehrcke
[ "I agree that the Python docs are incorrect, or maybe more precisely incomplete, regarding sys.exit and SystemExit when called/raised by threads other than the main one; please open a doc issue on the Python online tracker so this can be addressed in a future iteration of the docs (probably a near-future one -- doc fixes are easier and smoother than code fixes;-).\nThe remedy is pretty easy, of course -- just wrap any function you're using as the target of a threading.Thread with a decorator that does a try/except SystemExit, e: around it, and performs the \"write to stderr\" extra functionality you require (or, maybe better, uses a logging.error call instead) before terminating. But, with the doc issue that you correctly point out, it's hard to think about doing that unless and until one has met with the problem and in fact has had to spend some time in debugging to pin it down, as you've had to do (on the collective behalf of the core python developers -- sorry!).\n", "Not all threads in python are equal. Calling sys.exit from a thread, does not actually exit the system. Thus, calling sys.exit() from a child thread is nonsensical, so it makes sense that it does not behave the way you expect.\nThis page talks more about threading objects, and the differences between threads and the special \"main\" thread.\n" ]
[ 7, 0 ]
[]
[]
[ "exit", "multithreading", "python", "sys" ]
stackoverflow_0001209155_exit_multithreading_python_sys.txt
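A sketch of the decorator the first answer describes, in the Python 2 syntax the answer itself uses (what to report, and how, is an assumption):

    import sys, functools, threading

    def report_system_exit(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except SystemExit, e:
                if e.args and e.args[0] not in (None, 0):
                    print >> sys.stderr, e.args[0]  # surface the message
                raise
        return wrapper

    @report_system_exit
    def worker():
        sys.exit('worker gave up')

    threading.Thread(target=worker).start()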
Q: Creating dynamic images with WSGI, no files involved I would like to send dynamically created images to my users, such as charts, graphs etc. These images are "throw-away" images, they will be only sent to one user and then destroyed, hence the "no files involved". I would like to send the image directly to the user, without saving it on the file system first. With PHP this could be achieved by linking an image in your HTML files to a PHP script such as: edit: SO swallowed my image tag: <img src="someScript.php?param1=xyz"> The script then sent the correct headers (filetype=>jpeg etc) to the browser and directly wrote the image back to the client, without temporarily saving it to the file system. How could I do something like this with a WSGI application. Currently I am using Python's internal SimpleWSGI Server. I am aware that this server was mainly meant for demonstration purposes and not for actual use, as it lacks multi threading capabilities, so please don't point this out to me, I am aware of that, and for now it fulfills my requirements :) Is it really as simple as putting the URL into the image tags and handling the request with WSGI, or is there a better practice? Has anyone had any experience with this and could give me a few pointers (no 32Bit ones please) Thanks, Tom A: It is not related to WSGI or php or any other specific web technology. Consider <img src="someScript.php?param1=xyz"> in general, for the url someScript.php?param1=xyz the server should return data of an image type and it will work Consider this example: from wsgiref.simple_server import make_server def serveImage(environ, start_response): status = '200 OK' headers = [('Content-type', 'image/png')] start_response(status, headers) return open("about.png", "rb").read() httpd = make_server('', 8000, serveImage) httpd.serve_forever() here any url pointing to serveImage will return a valid image and you can use it in any img tag or any other place where an image can be used, e.g. css or background images Image data can be generated on the fly using many third party libraries e.g. PIL etc e.g. see examples of generating images dynamically using python imaging library http://lost-theory.org/python/dynamicimg.html A: YES. It is as simple as putting the url in the page. <img src="url_to_my_application"> And your application just has to return it with the correct mimetype, just like on PHP or anything else. Simplest example possible: def application(environ, start_response): data = open('test.jpg', 'rb').read() # simulate entire image on memory start_response('200 OK', [('content-type', 'image/jpeg'), ('content-length', str(len(data)))]) return [data] Of course, if you use a framework/helper library, it might have helper functions that will make it easier for you. I'd like to add as a side comment that multi-threading capabilities are not quintessential on a web server. If correctly done, you don't need threads to have good performance. If you have a well-developed event loop that switches between different requests, and write your request-handling code in a threadless-friendly manner (by returning control to the server as often as possible), you can get even better performance than using threads, since they don't make anything run faster and add overhead. See twisted.web for a good python web server implementation that doesn't use threads. A: For a fancy example that uses this technique, please see the BNF railroad diagram WHIFF mini-demo. You can get the source from the WHIFF wsgi toolkit download. 
A: You should consider using and paying attention to ETag headers. It's a CGI script, not WSGI, but the ideas are translatable: sparklines source -- it happens to always return the same image for the same parameters, so it practices extreme caching.
Creating dynamic images with WSGI, no files involved
I would like to send dynamically created images to my users, such as charts, graphs etc. These images are "throw-away" images, they will be only sent to one user and then destroyed, hence the "no files involved". I would like to send the image directly to the user, without saving it on the file system first. With PHP this could be achieved by linking an image in your HTML files to a PHP script such as: edit: SO swallowed my image tag: <img src="someScript.php?param1=xyz"> The script then sent the correct headers (filetype=>jpeg etc) to the browser and directly wrote the image back to the client, without temporarily saving it to the file system. How could I do something like this with a WSGI application. Currently I am using Python's internal SimpleWSGI Server. I am aware that this server was mainly meant for demonstration purposes and not for actual use, as it lacks multi threading capabilities, so please don't point this out to me, I am aware of that, and for now it fulfills my requirements :) Is it really as simple as putting the URL into the image tags and handling the request with WSGI, or is there a better practice? Has anyone had any experience with this and could give me a few pointers (no 32Bit ones please) Thanks, Tom
[ "It is not related to WSGI or php or any other specific web technology. consider\n<img src=\"someScript.php?param1=xyz\">\n\nin general for url someScript.php?param1=xyz server should return data of image type and it would work\nConsider this example:\nfrom wsgiref.simple_server import make_server\n\ndef serveImage(environ, start_response):\n status = '200 OK'\n headers = [('Content-type', 'image/png')]\n start_response(status, headers)\n\n return open(\"about.png\", \"rb\").read()\n\nhttpd = make_server('', 8000, serveImage)\nhttpd.serve_forever()\n\nhere any url pointing to serveImage will return a valid image and you can use it in any img tag or any other tag place where a image can be used e.g. css or background images\nImage data can be generated on the fly using many third party libraries e.g. PIL etc\ne.g see examples of generating images dynamically using python imaging library\nhttp://lost-theory.org/python/dynamicimg.html\n", "YES. It is as simple as putting the url in the page.\n<img src=\"url_to_my_application\">\n\nAnd your application just have to return it with the correct mimetype, just like on PHP or anything else. Simplest example possible:\ndef application(environ, start_response):\n data = open('test.jpg', 'rb').read() # simulate entire image on memory\n start_response('200 OK', [('content-type': 'image/jpeg'), \n ('content-length', str(len(data)))])\n return [data]\n\nOf course, if you use a framework/helper library, it might have helper functions that will make it easier for you.\nI'd like to add as a side comment that multi-threading capabilities are not quintessential on a web server. If correctly done, you don't need threads to have a good performance.\nIf you have a well-developed event loop that switches between different requests, and write your request-handling code in a threadless-friendly manner (by returning control to the server as often as possible), you can get even better performance than using threads, since they don't make anything run faster and add overhead. \nSee twisted.web for a good python web server implementation that doesn't use threads.\n", "For a fancy example that uses this technique, please see\nthe BNF railroad\ndiagram WHIFF mini-demo. You can get the source from the WHIFF wsgi toolkit download.\n", "You should consider using and paying attention to ETag headers. It's a CGI script, not WSGI, but the ideas are translatable: sparklines source -- it happens to always return the same image for the same parameters, so it practices extreme caching.\n" ]
[ 9, 2, 0, 0 ]
[]
[]
[ "image_processing", "python", "wsgi" ]
stackoverflow_0001001068_image_processing_python_wsgi.txt
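A self-contained sketch tying the answers together -- the image is generated in memory (PIL/Pillow here is an assumption; any renderer that writes to a buffer works) and served without touching disk:

    from io import BytesIO
    from wsgiref.simple_server import make_server
    from PIL import Image

    def application(environ, start_response):
        img = Image.new('RGB', (200, 100), (255, 255, 255))  # stand-in for a real chart
        buf = BytesIO()
        img.save(buf, 'PNG')
        data = buf.getvalue()
        start_response('200 OK', [('Content-Type', 'image/png'),
                                  ('Content-Length', str(len(data)))])
        return [data]

    make_server('', 8000, application).serve_forever()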