content: string (lengths 85 to 101k)
title: string (lengths 0 to 150)
question: string (lengths 15 to 48k)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (lengths 35 to 137)
Q: Python/PySerial and CPU usage I've created a script to monitor the output of a serial port that receives 3-4 lines of data every half hour. The script runs fine and grabs everything that comes off the port, which at the end of the day is what matters. What bugs me, however, is that the CPU usage seems rather high for a program that's just monitoring a single serial port: one core is always at 100% while this script is running. I'm basically running a modified version of the code in this question: pyserial - How to Read Last Line Sent from Serial Device. I've tried polling the inWaiting() function at regular intervals and having it sleep when inWaiting() is 0 - I've tried intervals from 1 second down to 0.001 seconds (basically, as often as I can without driving up the CPU usage) - this succeeds in grabbing the first line but seems to miss the rest of the data. Adjusting the timeout of the serial port doesn't seem to have any effect on CPU usage, nor does putting the listening function into its own thread (not that I really expected a difference, but it was worth trying). Should Python/PySerial be using this much CPU? (This seems like overkill.) Am I wasting my time on this quest, or should I just bite the bullet and schedule the script to sleep for the periods when I know no data will be coming? A: Maybe you could issue a blocking read(1) call, and when it succeeds use read(inWaiting()) to get the right number of remaining bytes. A: Would a system-style solution be better? Create the Python script and have it executed via cron/Scheduled Task? pySerial shouldn't be using that much CPU, but if it's just sitting there polling for an hour I can see how it may happen. Sleeping may be a better option, in conjunction with periodic wakeups and polls.
Python/PySerial and CPU usage
I've created a script to monitor the output of a serial port that receives 3-4 lines of data every half hour. The script runs fine and grabs everything that comes off the port, which at the end of the day is what matters. What bugs me, however, is that the CPU usage seems rather high for a program that's just monitoring a single serial port: one core is always at 100% while this script is running. I'm basically running a modified version of the code in this question: pyserial - How to Read Last Line Sent from Serial Device. I've tried polling the inWaiting() function at regular intervals and having it sleep when inWaiting() is 0 - I've tried intervals from 1 second down to 0.001 seconds (basically, as often as I can without driving up the CPU usage) - this succeeds in grabbing the first line but seems to miss the rest of the data. Adjusting the timeout of the serial port doesn't seem to have any effect on CPU usage, nor does putting the listening function into its own thread (not that I really expected a difference, but it was worth trying). Should Python/PySerial be using this much CPU? (This seems like overkill.) Am I wasting my time on this quest, or should I just bite the bullet and schedule the script to sleep for the periods when I know no data will be coming?
[ "Maybe you could issue a blocking read(1) call, and when it succeeds use read(inWaiting()) to get the right number of remaining bytes.\n", "Would a system style solution be better? Create the python script and have it executed via Cron/Scheduled Task?\npySerial shouldn't be using that much CPU but if its just sitting there polling for an hour I can see how it may happen. Sleeping may be a better option in conjunction with periodic wakeup and polls.\n" ]
[ 16, 0 ]
[]
[]
[ "cpu_usage", "pyserial", "python" ]
stackoverflow_0001328606_cpu_usage_pyserial_python.txt
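For illustration, a minimal sketch of the blocking-read approach from the first answer above. The port name and baud rate are assumptions, and inWaiting() is the pySerial spelling of that era (later releases renamed it in_waiting).

    import serial

    # timeout=None makes read() block inside the OS driver, so the process
    # sleeps at ~0% CPU until at least one byte arrives.
    ser = serial.Serial('/dev/ttyUSB0', 9600, timeout=None)

    while True:
        first = ser.read(1)               # blocks until data appears
        rest = ser.read(ser.inWaiting())  # drain whatever arrived with it
        data = first + rest
        print(repr(data))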
Q: Stable python serialization (e.g. no pickle module relocation issues) I am considering the use of Quantities to define a number together with its unit. This value most likely will have to be stored on the disk. As you are probably aware, pickling has one major issue: if you relocate the module around, unpickling will not be able to resolve the class, and you will not be able to unpickle the information. There are workarounds for this behavior, but they are, indeed, workarounds. A solution I fantasized for this issue would be to create a string encoding uniquely a given unit. Once you obtain this encoding from the disk, you pass it to a factory method in the Quantities module, which decodes it to a proper unit instance. The advantage is that even if you relocate the module around, everything will still work, as long as you pass the magic string token to the factory method. Is this a known concept? A: Looks like an application of Wheeler's First Principle, "all problems in computer science can be solved by another level of indirection" (the Second Principle adds "but that will usually create another problem";-). Essentially what you need to do is an indirection to identify the type -- entity-within-type will be fine with pickling-like approaches (you can study the sources of pickle.py and copy_reg.py for all the fine details of the latter). Specifically, I believe that what you want to do is subclass pickle.Pickler and override the save_inst method. Where the current version says: if self.bin: save(cls) for arg in args: save(arg) write(OBJ) else: for arg in args: save(arg) write(INST + cls.__module__ + '\n' + cls.__name__ + '\n') you want to write something different than just the class's module and name -- some kind of unique identifier (made up of two string) for the class, probably held in your own registry or registries; and similarly for the save_global method. It's even easier for your subclass of Unpickler, because the _instantiate part is already factored out in its own method: you only need to override find_class, which is: def find_class(self, module, name): # Subclasses may override this __import__(module) mod = sys.modules[module] klass = getattr(mod, name) return klass it must take two strings and return a class object; you can do that through your registries, again. Like always when registries are involved, you need to think about how to ensure you register all objects (classes) of interest, etc, etc. One popular strategy here is to leave pickling alone, but ensure that all moves of classes, renames of modules, etc, are recorded somewhere permanent; this way, just the subclassed unpickler can do all the work, and it can most conveniently do it all in the overridden find_class -- bypassing all issues of registration. I gather you consider this a "workaround" but to me it seems just an extremely simple, powerful and convenient implementation of the "one more level of indirection" concept, which avoids the "one more problem" issue;-).
Stable python serialization (e.g. no pickle module relocation issues)
I am considering the use of Quantities to define a number together with its unit. This value most likely will have to be stored on the disk. As you are probably aware, pickling has one major issue: if you relocate the module around, unpickling will not be able to resolve the class, and you will not be able to unpickle the information. There are workarounds for this behavior, but they are, indeed, workarounds. A solution I fantasized for this issue would be to create a string encoding uniquely a given unit. Once you obtain this encoding from the disk, you pass it to a factory method in the Quantities module, which decodes it to a proper unit instance. The advantage is that even if you relocate the module around, everything will still work, as long as you pass the magic string token to the factory method. Is this a known concept?
[ "Looks like an application of Wheeler's First Principle, \"all problems in computer science can be solved by another level of indirection\" (the Second Principle adds \"but that will usually create another problem\";-). Essentially what you need to do is an indirection to identify the type -- entity-within-type will be fine with pickling-like approaches (you can study the sources of pickle.py and copy_reg.py for all the fine details of the latter).\nSpecifically, I believe that what you want to do is subclass pickle.Pickler and override the save_inst method. Where the current version says:\n if self.bin:\n save(cls)\n for arg in args:\n save(arg)\n write(OBJ)\n else:\n for arg in args:\n save(arg)\n write(INST + cls.__module__ + '\\n' + cls.__name__ + '\\n')\n\nyou want to write something different than just the class's module and name -- some kind of unique identifier (made up of two string) for the class, probably held in your own registry or registries; and similarly for the save_global method.\nIt's even easier for your subclass of Unpickler, because the _instantiate part is already factored out in its own method: you only need to override find_class, which is:\ndef find_class(self, module, name):\n # Subclasses may override this\n __import__(module)\n mod = sys.modules[module]\n klass = getattr(mod, name)\n return klass\n\nit must take two strings and return a class object; you can do that through your registries, again.\nLike always when registries are involved, you need to think about how to ensure you register all objects (classes) of interest, etc, etc. One popular strategy here is to leave pickling alone, but ensure that all moves of classes, renames of modules, etc, are recorded somewhere permanent; this way, just the subclassed unpickler can do all the work, and it can most conveniently do it all in the overridden find_class -- bypassing all issues of registration. I gather you consider this a \"workaround\" but to me it seems just an extremely simple, powerful and convenient implementation of the \"one more level of indirection\" concept, which avoids the \"one more problem\" issue;-).\n" ]
[ 1 ]
[]
[]
[ "pickle", "python", "serialization" ]
stackoverflow_0001328581_pickle_python_serialization.txt
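As a concrete illustration of the registry idea in the answer above, a hedged sketch of an Unpickler subclass whose find_class() redirects through a mapping of stable tokens before falling back to the normal lookup. The registry entries and module paths are made-up examples.

    import pickle
    import sys

    # Maps the (module, name) recorded in old pickles to the class's
    # current location; the entries here are hypothetical.
    CLASS_REGISTRY = {
        ('quantities.units', 'Meter'): ('myapp.units.length', 'Meter'),
    }

    class StableUnpickler(pickle.Unpickler):
        def find_class(self, module, name):
            # Redirect through the registry, then do the standard lookup.
            module, name = CLASS_REGISTRY.get((module, name), (module, name))
            __import__(module)
            return getattr(sys.modules[module], name)

    # Usage: obj = StableUnpickler(open('data.pkl', 'rb')).load()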
Q: How do I use colour with Windows command prompt using Python? I'm trying to patch a waf issue, where the Windows command prompt output isn't coloured when it's supposed to be. I'm trying to figure out how to actually implement this patch, but I'm having trouble finding sufficient resources - could someone point me in right direction? Update 1 Please don't suggest anything that requires Cygwin. A: It is possible thanks to ctypes and SetConsoleTextAttribute Here is an example from ctypes import * STD_OUTPUT_HANDLE_ID = c_ulong(0xfffffff5) windll.Kernel32.GetStdHandle.restype = c_ulong std_output_hdl = windll.Kernel32.GetStdHandle(STD_OUTPUT_HANDLE_ID) for color in xrange(16): windll.Kernel32.SetConsoleTextAttribute(std_output_hdl, color) print "hello" A: If you're keen on using normal cmd.exe consoles for the Python interactive interpreter, see this recipe. If you're OK with using special windows simulating a console, for example because you also need more advanced curses functionality anyway, then @TheLobster's suggestion of wcurses is just fine.
How do I use colour with Windows command prompt using Python?
I'm trying to patch a waf issue, where the Windows command prompt output isn't coloured when it's supposed to be. I'm trying to figure out how to actually implement this patch, but I'm having trouble finding sufficient resources - could someone point me in right direction? Update 1 Please don't suggest anything that requires Cygwin.
[ "It is possible thanks to ctypes and SetConsoleTextAttribute\nHere is an example\nfrom ctypes import *\nSTD_OUTPUT_HANDLE_ID = c_ulong(0xfffffff5)\nwindll.Kernel32.GetStdHandle.restype = c_ulong\nstd_output_hdl = windll.Kernel32.GetStdHandle(STD_OUTPUT_HANDLE_ID)\nfor color in xrange(16):\n windll.Kernel32.SetConsoleTextAttribute(std_output_hdl, color)\n print \"hello\"\n\n", "If you're keen on using normal cmd.exe consoles for the Python interactive interpreter, see this recipe. If you're OK with using special windows simulating a console, for example because you also need more advanced curses functionality anyway, then @TheLobster's suggestion of wcurses is just fine.\n" ]
[ 21, 3 ]
[]
[]
[ "command_prompt", "python", "waf", "windows" ]
stackoverflow_0001328643_command_prompt_python_waf_windows.txt
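To make the first answer's example slightly more reusable, a hedged sketch with the standard Win32 character-attribute bit flags given names; like the original, it assumes a Windows console.

    from ctypes import windll, c_ulong

    STD_OUTPUT_HANDLE_ID = c_ulong(0xfffffff5)  # same magic handle id as above
    FOREGROUND_BLUE, FOREGROUND_GREEN = 0x01, 0x02
    FOREGROUND_RED, FOREGROUND_INTENSITY = 0x04, 0x08

    windll.Kernel32.GetStdHandle.restype = c_ulong
    _stdout = windll.Kernel32.GetStdHandle(STD_OUTPUT_HANDLE_ID)

    def set_color(attributes):
        # Colors are bit flags; OR them together, e.g. bright red below.
        windll.Kernel32.SetConsoleTextAttribute(_stdout, attributes)

    set_color(FOREGROUND_RED | FOREGROUND_INTENSITY)
    print("bright red warning")
    set_color(FOREGROUND_RED | FOREGROUND_GREEN | FOREGROUND_BLUE)  # back to grey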
Q: How to freeze/grayish window in pygtk? I want the main window to "gray out, freeze, stop working" when some other window is opened. Is there some default way to do it? Pretty much the same way a gtk.Dialog works. EDIT: Currently I'm just replacing all the contents with a text line, but I guess there should be a better way. A: You really shouldn't try to make a program become unresponsive. If what you want to do is stop the user from using the window, make the dialog modal: gtk.Dialog.set_modal(True)
How to freeze/grayish window in pygtk?
I want the main window to "gray out, freeze, stop working" when some other window is opened. Is there some default way to do it? Pretty much the same way a gtk.Dialog works. EDIT: Currently I'm just replacing all the contents with a text line, but I guess there should be a better way.
[ "You really shouldn't try to make a program become unresponsive.\nIf what you want to do is stop the user from using the window, make the dialog modal: gtk.Dialog.set_modal(True)\n" ]
[ 3 ]
[]
[]
[ "freeze", "pygtk", "python", "window" ]
stackoverflow_0001329076_freeze_pygtk_python_window.txt
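A minimal sketch of the modal-dialog approach from the answer above, using the PyGTK 2.x API; the dialog title and button set are placeholders.

    import gtk

    def open_blocking_dialog(parent):
        dialog = gtk.Dialog("Settings", parent,
                            gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT,
                            (gtk.STOCK_CLOSE, gtk.RESPONSE_CLOSE))
        dialog.set_modal(True)  # redundant with the DIALOG_MODAL flag; explicit for clarity
        dialog.run()            # the parent window is insensitive until this returns
        dialog.destroy()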
Q: Search function with PyGTKsourceview I'm writing a small HTML editor in Python, mostly for personal use, and have integrated a gtksourceview2 object into my Python code. All the major functions seem to work more or less, but I'm having trouble getting a search function to work. Obviously the GUI work is already done, but I can't figure out how to use the built-in methods of the gtksourceview2.Buffer object (http://www.gnome.org/~gianmt/pygtksourceview2/class-gtksourcebuffer2.html) to actually search through the text in it. Does anybody have a suggestion? I find the documentation not very verbose and can't really find a working example on the web. Thanks in advance. A: The reference for the C API can probably be helpful, including this chapter that I found "Searching in a GtkSourceBuffer". As is the reference for the superclass gtk.TextBuffer A: Here is the python doc; I couldn't find any up-to-date documentation so I stuffed it in my dropbox. Here is the link. What you want to look at are the forward_search and backward_search functions on gtk.TextIter.
Search function with PyGTKsourceview
I'm writing a small HTML editor in Python, mostly for personal use, and have integrated a gtksourceview2 object into my Python code. All the major functions seem to work more or less, but I'm having trouble getting a search function to work. Obviously the GUI work is already done, but I can't figure out how to use the built-in methods of the gtksourceview2.Buffer object (http://www.gnome.org/~gianmt/pygtksourceview2/class-gtksourcebuffer2.html) to actually search through the text in it. Does anybody have a suggestion? I find the documentation not very verbose and can't really find a working example on the web. Thanks in advance.
[ "The reference for the C API can probably be helpful, including this chapter that I found \"Searching in a GtkSourceBuffer\".\nAs is the reference for the superclass gtk.TextBuffer\n", "Here is the python doc, I couldn't find any up-to-date documentation so I stuffed it in my dropbox. Here is the link. What you want to look at is at is the gtk.iter_forward_search and gtk.iter_backward_search functions.\n" ]
[ 1, 1 ]
[]
[]
[ "pygtk", "python" ]
stackoverflow_0001327906_pygtk_python.txt
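A hedged sketch of the search approach the answers above point to: gtksourceview2.Buffer inherits from gtk.TextBuffer, so the gtk.TextIter forward_search() API applies. Selecting the match is one possible choice; the function name is an assumption.

    import gtk

    def find_first(buffer, needle):
        # buffer is assumed to be a gtksourceview2.Buffer (a gtk.TextBuffer).
        start = buffer.get_start_iter()
        hit = start.forward_search(needle, gtk.TEXT_SEARCH_TEXT_ONLY)
        if hit is None:
            return False
        match_start, match_end = hit
        buffer.select_range(match_start, match_end)  # highlight the match
        return True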
Q: Django_tagging (v0.3/pre): Configuration issue I am trying to use django-tagging in one of my projects and ran into some errors. I can play with tags in the shell but couldn't assign them from the admin interface. What I want to do is add "tag" functionality to a model and add/remove tags from the admin interface. Why are the "tags" seen by the shell and not by the admin interface? What is going on? Model.py: import tagging class Department(models.Model): tags = TagField() Admin.py: class DepartmentAdmin(admin.ModelAdmin): list_display = ('name', 'tags') --> works .... fields = ['name', 'tags'] --> throws error Error OperationalError at /admin/department/1/ (1054, "Unknown column 'schools_department.tags' in 'field list'") I looked at the docs and couldn't find further information. Useful Tips Overview Txt A: The TagField requires an actual database column on your model; it uses this to cache the tags as entered. If you add a TagField to a model that already has a database table, you will need to add the column to the database table, just as with adding any other type of field. Either use a schema migration tool (like South or django-evolution) or run the appropriate SQL ALTER TABLE command manually.
Django_tagging (v0.3/pre): Configuration issue
I am trying to use django-tagging in one of my projects and ran into some errors. I can play with tags in the shell but couldn't assign them from the admin interface. What I want to do is add "tag" functionality to a model and add/remove tags from the admin interface. Why are the "tags" seen by the shell and not by the admin interface? What is going on? Model.py: import tagging class Department(models.Model): tags = TagField() Admin.py: class DepartmentAdmin(admin.ModelAdmin): list_display = ('name', 'tags') --> works .... fields = ['name', 'tags'] --> throws error Error OperationalError at /admin/department/1/ (1054, "Unknown column 'schools_department.tags' in 'field list'") I looked at the docs and couldn't find further information. Useful Tips Overview Txt
[ "The TagField requires an actual database column on your model; it uses this to cache the tags as entered. If you add a TagField to a model that already has a database table, you will need to add the column to the database table, just as with adding any other type of field. Either use a schema migration tool (like South or django-evolution) or run the appropriate SQL ALTER TABLE command manually.\n" ]
[ 4 ]
[]
[]
[ "django", "django_admin", "python", "tagging" ]
stackoverflow_0001326512_django_django_admin_python_tagging.txt
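Putting the answer together with the question's snippets, a hedged sketch of what the fixed model and manual column addition might look like. The import path matches django-tagging's layout; the extra name field, VARCHAR size, and MySQL syntax are assumptions, and the table name follows the error message.

    # models.py
    from django.db import models
    from tagging.fields import TagField

    class Department(models.Model):
        name = models.CharField(max_length=100)
        tags = TagField()   # needs a real column in schools_department

    # Then add the missing column by hand (MySQL example), e.g.:
    #   ALTER TABLE schools_department
    #       ADD COLUMN tags VARCHAR(255) NOT NULL DEFAULT '';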
Q: Python Scrapy , how to define a pipeline for an item? I am using Scrapy to crawl different sites; for each site I have an Item (different information is extracted). I have a generic pipeline (most of the information is the same), but now I am crawling some Google search responses and the pipeline must be different. For example: GenericItem uses GenericPipeline, but GoogleItem uses GoogleItemPipeline. When the spider is crawling, it tries to use GenericPipeline instead of GoogleItemPipeline... how can I specify which pipeline the Google spider must use? A: Right now there is only one way: check the Item type in the pipeline and either process it or return it "as is". pipelines.py: from grabbers.items import FeedItem class StoreFeedPost(object): def process_item(self, domain, item): if isinstance(item, FeedItem): #process it... return item items.py: from scrapy.item import ScrapedItem class FeedItem(ScrapedItem): pass
Python Scrapy , how to define a pipeline for an item?
I am using Scrapy to crawl different sites; for each site I have an Item (different information is extracted). I have a generic pipeline (most of the information is the same), but now I am crawling some Google search responses and the pipeline must be different. For example: GenericItem uses GenericPipeline, but GoogleItem uses GoogleItemPipeline. When the spider is crawling, it tries to use GenericPipeline instead of GoogleItemPipeline... how can I specify which pipeline the Google spider must use?
[ "Now only one way - check Item type in pipeline and process it or return \"as is\"\npipelines.py:\nfrom grabbers.items import FeedItem\n\nclass StoreFeedPost(object):\n\n def process_item(self, domain, item):\n if isinstance(item, FeedItem):\n #process it...\n\n return item\n\nitems.py:\nfrom scrapy.item import ScrapedItem\n\nclass FeedItem(ScrapedItem):\n pass\n\n" ]
[ 16 ]
[]
[]
[ "python", "scrapy", "screen_scraping" ]
stackoverflow_0001056651_python_scrapy_screen_scraping.txt
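Another hedged option in the same vein is to branch on the crawled domain rather than the item class. This sketch reuses the era-specific process_item(self, domain, item) signature from the answer above; the class name and domain string are assumptions.

    # pipelines.py -- dispatch on the spider's domain instead of the item type
    class GoogleItemPipeline(object):

        def process_item(self, domain, item):
            if domain != 'google.com':
                return item        # not our spider; pass the item through untouched
            # ...Google-specific processing here...
            return item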
Q: How do I efficiently do a bulk insert-or-update with SQLAlchemy? I'm using SQLAlchemy with a Postgres backend to do a bulk insert-or-update. To try to improve performance, I'm attempting to commit only once every thousand rows or so: trans = engine.begin() for i, rec in enumerate(records): if i % 1000 == 0: trans.commit() trans = engine.begin() try: inserter.execute(...) except sa.exceptions.SQLError: my_table.update(...).execute() trans.commit() However, this isn't working. It seems that when the INSERT fails, it leaves things in a weird state that prevents the UPDATE from happening. Is it automatically rolling back the transaction? If so, can this be stopped? I don't want my entire transaction rolled back in the event of a problem, which is why I'm trying to catch the exception in the first place. The error message I'm getting, BTW, is "sqlalchemy.exc.InternalError: (InternalError) current transaction is aborted, commands ignored until end of transaction block", and it happens on the update().execute() call. A: You're hitting some weird Postgresql-specific behavior: if an error happens in a transaction, it forces the whole transaction to be rolled back. I consider this a Postgres design bug; it takes quite a bit of SQL contortionism to work around in some cases. One workaround is to do the UPDATE first. Detect if it actually modified a row by looking at cursor.rowcount; if it didn't modify any rows, it didn't exist, so do the INSERT. (This will be faster if you update more frequently than you insert, of course.) Another workaround is to use savepoints: SAVEPOINT a; INSERT INTO ....; -- on error: ROLLBACK TO SAVEPOINT a; UPDATE ...; -- on success: RELEASE SAVEPOINT a; This has a serious problem for production-quality code: you have to detect the error accurately. Presumably you're expecting to hit a unique constraint check, but you may hit something unexpected, and it may be next to impossible to reliably distinguish the expected error from the unexpected one. If this hits the error condition incorrectly, it'll lead to obscure problems where nothing will be updated or inserted and no error will be seen. Be very careful with this. You can narrow down the error case by looking at Postgresql's error code to make sure it's the error type you're expecting, but the potential problem is still there. Finally, if you really want to do batch-insert-or-update, you actually want to do many of them in a few commands, not one item per command. This requires trickier SQL: SELECT nested inside an INSERT, filtering out the right items to insert and update. A: This error is from PostgreSQL. PostgreSQL doesn't allow you to execute commands in the same transaction if one command creates an error. To fix this you can use nested transactions (implemented using SQL savepoints) via conn.begin_nested(). Here's something that might work. I made the code use explicit connections, factored out the chunking part and made the code use the context manager to manage transactions correctly. from itertools import chain, islice def chunked(seq, chunksize): """Yields items from an iterator in chunks.""" it = iter(seq) while True: yield chain([it.next()], islice(it, chunksize-1)) conn = engine.connect() for chunk in chunked(records, 1000): with conn.begin(): for rec in chunk: try: with conn.begin_nested(): conn.execute(inserter, ...) except sa.exceptions.SQLError: conn.execute(my_table.update(...)) This still won't have stellar performance though due to nested transaction overhead.
If you want better performance try to detect which rows will create errors beforehand with a select query and use executemany support (execute can take a list of dicts if all inserts use the same columns). If you need to handle concurrent updates, you'll still need to do error handling either via retrying or falling back to one by one inserts.
How do I efficiently do a bulk insert-or-update with SQLAlchemy?
I'm using SQLAlchemy with a Postgres backend to do a bulk insert-or-update. To try to improve performance, I'm attempting to commit only once every thousand rows or so: trans = engine.begin() for i, rec in enumerate(records): if i % 1000 == 0: trans.commit() trans = engine.begin() try: inserter.execute(...) except sa.exceptions.SQLError: my_table.update(...).execute() trans.commit() However, this isn't working. It seems that when the INSERT fails, it leaves things in a weird state that prevents the UPDATE from happening. Is it automatically rolling back the transaction? If so, can this be stopped? I don't want my entire transaction rolled back in the event of a problem, which is why I'm trying to catch the exception in the first place. The error message I'm getting, BTW, is "sqlalchemy.exc.InternalError: (InternalError) current transaction is aborted, commands ignored until end of transaction block", and it happens on the update().execute() call.
[ "You're hitting some weird Postgresql-specific behavior: if an error happens in a transaction, it forces the whole transaction to be rolled back. I consider this a Postgres design bug; it takes quite a bit of SQL contortionism to work around in some cases.\nOne workaround is to do the UPDATE first. Detect if it actually modified a row by looking at cursor.rowcount; if it didn't modify any rows, it didn't exist, so do the INSERT. (This will be faster if you update more frequently than you insert, of course.)\nAnother workaround is to use savepoints:\nSAVEPOINT a;\nINSERT INTO ....;\n-- on error:\nROLLBACK TO SAVEPOINT a;\nUPDATE ...;\n-- on success:\nRELEASE SAVEPOINT a;\n\nThis has a serious problem for production-quality code: you have to detect the error accurately. Presumably you're expecting to hit a unique constraint check, but you may hit something unexpected, and it may be next to impossible to reliably distinguish the expected error from the unexpected one. If this hits the error condition incorrectly, it'll lead to obscure problems where nothing will be updated or inserted and no error will be seen. Be very careful with this. You can narrow down the error case by looking at Postgresql's error code to make sure it's the error type you're expecting, but the potential problem is still there.\nFinally, if you really want to do batch-insert-or-update, you actually want to do many of them in a few commands, not one item per command. This requires trickier SQL: SELECT nested inside an INSERT, filtering out the right items to insert and update.\n", "This error is from PostgreSQL. PostgreSQL doesn't allow you to execute commands in the same transaction if one command creates an error. To fix this you can use nested transactions (implemented using SQL savepoints) via conn.begin_nested(). Heres something that might work. I made the code use explicit connections, factored out the chunking part and made the code use the context manager to manage transactions correctly.\nfrom itertools import chain, islice\ndef chunked(seq, chunksize):\n \"\"\"Yields items from an iterator in chunks.\"\"\"\n it = iter(seq)\n while True:\n yield chain([it.next()], islice(it, chunksize-1))\n\nconn = engine.commit()\nfor chunk in chunked(records, 1000):\n with conn.begin():\n for rec in chunk:\n try:\n with conn.begin_nested():\n conn.execute(inserter, ...)\n except sa.exceptions.SQLError:\n conn.execute(my_table.update(...))\n\nThis still won't have stellar performance though due to nested transaction overhead. If you want better performance try to detect which rows will create errors beforehand with a select query and use executemany support (execute can take a list of dicts if all inserts use the same columns). If you need to handle concurrent updates, you'll still need to do error handling either via retrying or falling back to one by one inserts.\n" ]
[ 5, 4 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0001330475_python_sqlalchemy.txt
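For completeness, a hedged sketch of the first answer's update-first idea in SQLAlchemy expression syntax; the function name, table layout, and key column are placeholders, and rowcount support depends on the DB-API driver.

    def upsert_row(conn, my_table, rec):
        # Try the UPDATE first; fall back to INSERT if no row matched.
        result = conn.execute(
            my_table.update()
                    .where(my_table.c.id == rec['id'])
                    .values(**rec))
        if result.rowcount == 0:    # nothing updated -> row doesn't exist yet
            conn.execute(my_table.insert().values(**rec))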
Q: Study Objective-C , Ruby OR Python? I am working on C++ since last 4-5 years . Recently I have bought iphone and macbook and want do do some programming for iphone. So I have started reading one book about Objective-C. I have also learn that we can program with Ruby and Python on MAC. So my question is which one to study? Which language you guys see the FUTURE??? Can we program with these languages on other platforms? Or are these only limited on MAC? I am just a beginner in objective-C.Need some expert thoughts which way to go. AC A: If you want to program for iphone then you should use objective-C. The entire iphone API is based on objective-C, and you have the benefits of using interface builder and IDE support from Xcode. A: I use all the languages C++, Ruby, Python and Objective-C. I like each one in different ways. If you want to get into Mac and iPhone development as others I recommend Objective-C. One of the benefits not mentioned is that Objective-C is a proper superset of C (C++ is almost a superset), that means you can bring over all your C programming knowledge from doing C++ to Objective-C programming. In fact you can also mix in C++ code in Objective-C code. You can't do that in a seamless way in Python and Ruby. The reason why you can do this is that Objective-C is actually a very simple language. Originally it was just C with a custom made preprocessor which took statements like this: [rectangle setX: 10 y: 10 width: 20 height: 20]; and converted it to this before compiling: objc_msgSend(rectangle, "setX:y:width:height:", 10, 10, 20, 20); Apart from that Ruby, Python and Objective-C are very similar in their object model at least compared to C++. In C++ classes are created at compile time. In Objective-C, Ruby and Python classes are things created at runtime. I wrote some stuff on why Obj-C is cool here A: Objective-C is the only way to program an iPhone if you want to produce native programs that can be sold in the App Store. Some of the more advanced concepts in Objective-C are now being added to languages like C# (eg: extension methods in C# v3.0). Learning to think in Objective-C will be useful, the OO model you learn will be applicable to most other languages and environments as an addition to your C++ experience. Ruby's object model is closer to that of Objective-C than is Python so I suggest also learning Ruby but not until you have your Objective-C skills down solidly. Note that you can use Objective-C++ and use C++ for all but your GUI code by having .mm suffixes on your files - this works on both iPhone and Mac. Given your C++ experience, that help you be productive. If you want to program iPhone, don't bother learning the new Objective-C 2.0 memory management but you can still use the Properties model (iPhone effectively has a subset of the Objective-C 2.0 runtime). A: Which language you guys see the FUTURE??? Future of what? iPhone development? Objective-C. Web Services? Python/Ruby in parallel for a while. At least until people start trying to do maintenance on large Ruby applications and get frustrated with it's opacity. Real-time game engine development? Embedded applications? Future of what? "Can we program with these languages on other platforms? Or are these only limited on MAC?" Ruby and Python: Yes. These are designed to run on any platform that supports C. Objective-C: Yes. It's open source, it's in the GCC, it should work almost anywhere. Learning a new language is not a zero-sum game. 
You can learn more than one language; learning Objective-C now does not prevent you from learning Python or Ruby in the future. A: As a Perlite, I'm just going to point out that OS X has Perl as well as Python or Ruby. As far as Perl/Python/Ruby goes, programs are almost completely cross-platform. It is fairly easy to run a Perl/Python/Ruby program on any platform and it works more or less the same. There may be some minor differences, but they're not major. Objective-C, while not strictly confined to OS X, is only really used in OpenStep-based environments, which generally means OS X and the iPhone. The only Objective-C compiler I know of is gcc, and I imagine you can write Objective-C on Linux, but I don't know if Windows support is very good (if it exists). As for which is the language of the "future", all 3 (or 4) languages will be used very widely in the future. No one can really predict this kind of thing, and none of the languages are really going to die off (unless Apple switches to a new language as a "standard" for making Mac programs), so you'll be pretty safe with any of them. My advice: try them all out and see which one you think most suits your style, and learn that one. A: As has been noted by others, if you want to program the iPhone, Objective-C is the way to go. Objective-C is pretty Mac-specific; of course, the Gnu Objective-C compiler is avaialble for other platforms as well, and there is also GnuStep, but I think the main applicability of Objective-C today is for programming Macs and iPhones. Python and Ruby on the other hand are available on a large number of platforms (including both Windows and many Unix-dialects). Personally, I prefer Python, but I would say both languages are very usable and pretty easy to approach. Note also that both Python and Ruby have Objective-C bridges available, which allows you to write quite fancy Cococa applications in any of those languages. A: If you program with Objective-C, your main goal should be writing Cocoa applications on the Mac. Beyond that, it has little use. Ruby and Python are useful scripting languages, and there are also bridges to write Cocoa applications. If you want to write apps on the Mac, I would start with Objective-C. There is more support available. In terms of the future, it seems like a lot of people are jumping on the Ruby bandwagon at the moment. Good luck. A: To program on Mac OS X, you really do need a good foundation in Objective-C. The vast majority of documentation will assume Objective-C. Even if you choose to program some applications in some other language, you will be better off having a good understanding of it. A: Ruby. With Ruby you will be able to do both web development (Rails/Sinatra/etc.) and very soon program on the MAC/Iphone platform with the Macruby project. Why not get the best of both worlds? Tommy A: Just my two cents...As I'm sure you're aware, Apple and others in the respective communities are doing a lot of work with Ruby and Python, for both Mac and iPhone development. Objective-C will pretty much get you into Apple arenas only these days (though maybe that's not a bad thing;) However, if you are only going to learn one language in the foreseeable future, think about where you will be using it, and what for. Ruby and Python will get you a lot further if you are looking beyond solely Mac desktop and iPhone. A: I have written small games, interpreters, and tons of awessome stuff in Ruby. 
I Wouldn't recommend It to write intensive AI programs for instance, but It's fun to learn and powerful for most applications. Even when I do most of my work in C++ Ruby is my favorite language for subjective reasons. Objective C as most people said Is a must in iPhone development, and fun if You're enthusiastic about learning languages. I haven't tried Python, but I hear nothing but good things about It, and PyGames Is quite popular. I would learn the three ( well...I would skip objective C unless You're curious about getting into iPhone development), the most languages you know, the best professional You will be. As a good professor of mine always said..It's not about being the master in just one language, It's about knowing the pros and cons of each one to choose the right one according to the particular problem You want to solve. Cheers ! A: Objective-C is only Mac/iPhone, and I recommend you to learn if you want to develop applications for Mac/iPhone. Python is everything and it's future, but python more preferable for web development. Python is Google :) Python is web, games, science, graphics, desktop, etc. Also it's very good choice if you are C/C++ developer. Not sure if i can recommend you to learn Ruby...
Study Objective-C, Ruby or Python?
I have been working with C++ for the last 4-5 years. Recently I bought an iPhone and a MacBook and want to do some programming for the iPhone, so I have started reading a book about Objective-C. I have also learned that we can program with Ruby and Python on the Mac. So my question is: which one should I study? Which language do you guys see as the FUTURE? Can we program with these languages on other platforms, or are they limited to the Mac? I am just a beginner in Objective-C and need some expert thoughts on which way to go. AC
[ "If you want to program for iphone then you should use objective-C. The entire iphone API is based on objective-C, and you have the benefits of using interface builder and IDE support from Xcode.\n", "I use all the languages C++, Ruby, Python and Objective-C. I like each one in different ways. If you want to get into Mac and iPhone development as others I recommend Objective-C. \nOne of the benefits not mentioned is that Objective-C is a proper superset of C (C++ is almost a superset), that means you can bring over all your C programming knowledge from doing C++ to Objective-C programming. In fact you can also mix in C++ code in Objective-C code. \nYou can't do that in a seamless way in Python and Ruby. The reason why you can do this is that Objective-C is actually a very simple language.\nOriginally it was just C with a custom made preprocessor which took statements like this:\n[rectangle setX: 10 y: 10 width: 20 height: 20];\n\nand converted it to this before compiling:\n objc_msgSend(rectangle, \"setX:y:width:height:\", 10, 10, 20, 20);\n\nApart from that Ruby, Python and Objective-C are very similar in their object model at least compared to C++. In C++ classes are created at compile time. In Objective-C, Ruby and Python classes are things created at runtime. \nI wrote some stuff on why Obj-C is cool here\n", "Objective-C is the only way to program an iPhone if you want to produce native programs that can be sold in the App Store.\nSome of the more advanced concepts in Objective-C are now being added to languages like C# (eg: extension methods in C# v3.0). Learning to think in Objective-C will be useful, the OO model you learn will be applicable to most other languages and environments as an addition to your C++ experience.\nRuby's object model is closer to that of Objective-C than is Python so I suggest also learning Ruby but not until you have your Objective-C skills down solidly. \nNote that you can use Objective-C++ and use C++ for all but your GUI code by having .mm suffixes on your files - this works on both iPhone and Mac. Given your C++ experience, that help you be productive.\nIf you want to program iPhone, don't bother learning the new Objective-C 2.0 memory management but you can still use the Properties model (iPhone effectively has a subset of the Objective-C 2.0 runtime).\n", "Which language you guys see the FUTURE???\nFuture of what? iPhone development? Objective-C. \nWeb Services? Python/Ruby in parallel for a while. At least until people start trying to do maintenance on large Ruby applications and get frustrated with it's opacity.\nReal-time game engine development? Embedded applications? Future of what?\n\"Can we program with these languages on other platforms? Or are these only limited on MAC?\"\nRuby and Python: Yes. These are designed to run on any platform that supports C.\nObjective-C: Yes. It's open source, it's in the GCC, it should work almost anywhere. \nLearning a new language is not a zero-sum game. You can learn more than one language; learning Objective-C now does not prevent you from learning Python or Ruby in the future.\n", "As a Perlite, I'm just going to point out that OS X has Perl as well as Python or Ruby.\nAs far as Perl/Python/Ruby goes, programs are almost completely cross-platform. It is fairly easy to run a Perl/Python/Ruby program on any platform and it works more or less the same. 
There may be some minor differences, but they're not major.\nObjective-C, while not strictly confined to OS X, is only really used in OpenStep-based environments, which generally means OS X and the iPhone. The only Objective-C compiler I know of is gcc, and I imagine you can write Objective-C on Linux, but I don't know if Windows support is very good (if it exists).\nAs for which is the language of the \"future\", all 3 (or 4) languages will be used very widely in the future. No one can really predict this kind of thing, and none of the languages are really going to die off (unless Apple switches to a new language as a \"standard\" for making Mac programs), so you'll be pretty safe with any of them.\nMy advice: try them all out and see which one you think most suits your style, and learn that one.\n", "As has been noted by others, if you want to program the iPhone, Objective-C is the way to go.\nObjective-C is pretty Mac-specific; of course, the Gnu Objective-C compiler is avaialble for other platforms as well, and there is also GnuStep, but I think the main applicability of Objective-C today is for programming Macs and iPhones.\nPython and Ruby on the other hand are available on a large number of platforms (including both Windows and many Unix-dialects). Personally, I prefer Python, but I would say both languages are very usable and pretty easy to approach.\nNote also that both Python and Ruby have Objective-C bridges available, which allows you to write quite fancy Cococa applications in any of those languages.\n", "If you program with Objective-C, your main goal should be writing Cocoa applications on the Mac. Beyond that, it has little use. Ruby and Python are useful scripting languages, and there are also bridges to write Cocoa applications.\nIf you want to write apps on the Mac, I would start with Objective-C. There is more support available.\nIn terms of the future, it seems like a lot of people are jumping on the Ruby bandwagon at the moment. Good luck.\n", "To program on Mac OS X, you really do need a good foundation in Objective-C. The vast majority of documentation will assume Objective-C. Even if you choose to program some applications in some other language, you will be better off having a good understanding of it.\n", "Ruby. With Ruby you will be able to do both web development (Rails/Sinatra/etc.) and very soon program on the MAC/Iphone platform with the Macruby project. Why not get the best of both worlds?\nTommy\n", "Just my two cents...As I'm sure you're aware, Apple and others in the respective communities are doing a lot of work with Ruby and Python, for both Mac and iPhone development. Objective-C will pretty much get you into Apple arenas only these days (though maybe that's not a bad thing;) However, if you are only going to learn one language in the foreseeable future, think about where you will be using it, and what for. Ruby and Python will get you a lot further if you are looking beyond solely Mac desktop and iPhone.\n", "I have written small games, interpreters, and tons of awessome stuff in Ruby. I Wouldn't recommend It to write intensive AI programs for instance, but It's fun to learn and powerful for most applications. 
Even when I do most of my work in C++ Ruby is my favorite language for subjective reasons.\nObjective C as most people said Is a must in iPhone development, and fun if You're enthusiastic about learning languages.\nI haven't tried Python, but I hear nothing but good things about It, and PyGames Is quite popular.\nI would learn the three ( well...I would skip objective C unless You're curious about getting into iPhone development), the most languages you know, the best professional You will be. As a good professor of mine always said..It's not about being the master in just one language, It's about knowing the pros and cons of each one to choose the right one according to the particular problem You want to solve.\nCheers !\n", "Objective-C is only Mac/iPhone, and I recommend you to learn if you want to develop applications for Mac/iPhone.\nPython is everything and it's future, but python more preferable for web development. Python is Google :) Python is web, games, science, graphics, desktop, etc. Also it's very good choice if you are C/C++ developer.\nNot sure if i can recommend you to learn Ruby...\n" ]
[ 10, 8, 7, 7, 4, 3, 2, 2, 2, 2, 2, 1 ]
[]
[]
[ "objective_c", "programming_languages", "python", "ruby" ]
stackoverflow_0000550474_objective_c_programming_languages_python_ruby.txt
Q: How do I link relative to a Pylons application root? In Pylons I have a mako template linking to /static/resource.css. How do I automatically link to /pylons/static/resource.css when I decide to map the application to a subdirectory on my web server? A: If you want your static file links to be relative to your app root, wrap them like this in your templates (assuming Mako and Pylons 0.9.7): ${url('/static/resource.css')} The root path of your app will be prepended. No need to define specific routes for each file. A: What you want are static routes: map.connect('resource', '/static/resource.css', _static=True)
How do I link relative to a Pylons application root?
In Pylons I have a mako template linking to /static/resource.css. How do I automatically link to /pylons/static/resource.css when I decide to map the application to a subdirectory on my web server?
[ "If you want your static file links to be relative to your app root, wrap them like this in your templates (assuming Mako and Pylons 0.9.7):\n${url('/static/resource.css')}\n\nThe root path of your app will be prepended. No need to define specific routes for each file.\n", "What you want are static routes:\nmap.connect('resource', '/static/resource.css', _static=True)\n\n" ]
[ 2, 1 ]
[]
[]
[ "mako", "pylons", "python" ]
stackoverflow_0001201555_mako_pylons_python.txt
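For context, a hedged sketch of where the second answer's line lives in a 0.9.7-era project; the route name is an assumption. With the app mounted at /pylons, the template call ${url('/static/resource.css')} then renders as /pylons/static/resource.css.

    # config/routing.py (Pylons 0.9.7-era sketch)
    from routes import Mapper

    def make_map():
        map = Mapper()
        # _static=True marks a plain static path; url() will prepend the
        # application's mount point (e.g. /pylons) when generating links.
        map.connect('stylesheet', '/static/resource.css', _static=True)
        return map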
Q: How Can I Empty the Used Memory With Python? I have just written a .psf file in Python for executing an optimization algorithm for the Abaqus package, but after some analyses it stops. Could you please help me and write Python code to free the memory? Thanks A: You don't really explicitly free memory in Python. What you do is stop referencing it, and it gets freed automatically. Although del does this, it's very rare that you really need to use it in a well-designed application. So this is really a question of how not to use so much memory in Python. I'd say the main hint there is to try to refactor your program to use generators, so that you don't have to hold all the data in memory at once. A: There are really only two Python options. You can ask the garbage collector to please run. That may or may not do anything useful. Or you can delete a large container. del my_var_name If the memory is not really allocated by Python, you will need to use the interfaces of whatever module you are using to free it up. A: Stop using it when you no longer need it; Python has a garbage collector. Set attributes and variables to None when you are done with them.
How Can I Empty the Used Memory With Python?
I have just written a .psf file in Python for executing an optimization algorithm for the Abaqus package, but after some analyses it stops. Could you please help me and write Python code to free the memory? Thanks
[ "You don't really explicitly free memory in Python. What you do is stop referencing it, and it gets freed automatically. Although del does this, it's very rare that you really need to use it in a well designed application.\nSo this is really a question of how not to use so much memory in Python. I'd say the main hint there is to try to refactor your program to use generators, so that you don't have to hold all the data in memory at once.\n", "There are really only two python options. You can ask the garbage collector to please run. That may or may not do anything useful. Or you can delete a large container. \ndel my_var_name\n\nIf the memory is not really allocated by Python, you will need to use the interfaces of whatever module you are using to free it up.\n", "by stopping using it when you do not need, python has garbage collector. Set the attributes, and variables to None when you are done with them.\n" ]
[ 2, 1, 0 ]
[]
[]
[ "memory", "memory_management", "python" ]
stackoverflow_0001331033_memory_memory_management_python.txt
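A small, hedged illustration of the two levers the answers above describe: dropping the last reference and, optionally, forcing a collection pass. The list here just stands in for whatever large result the analysis produces.

    import gc

    big = [0.0] * 10000000   # stand-in for a large analysis result
    total = sum(big)

    del big        # drop the last reference; the memory becomes reclaimable
    gc.collect()   # optional: ask the garbage collector to run right now

    print(total)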
Q: Python list filtering: remove subsets from list of lists Using Python how do you reduce a list of lists by an ordered subset match [[..],[..],..]? In the context of this question a list L is a subset of list M if M contains all members of L, and in the same order. For example, the list [1,2] is a subset of the list [1,2,3], but not of the list [2,1,3]. Example input: a. [[1, 2, 4, 8], [1, 2, 4, 5, 6], [1, 2, 3], [2, 3, 21], [1, 2, 3, 4], [1, 2, 3, 4, 5, 6, 7]] b. [[2, 16, 17], [1, 2, 3, 4, 5, 6, 7], [1], [1, 2, 3, 4], [1, 2], [17, 18, 19, 22, 41, 48], [2, 3], [1, 2, 3], [50, 69], [1, 2, 3], [2, 3, 21], [1, 2, 3], [1, 2, 4, 8], [1, 2, 4, 5, 6]] Expected result: a. [[1, 2, 4, 8], [2, 3, 21], [1, 2, 3, 4, 5, 6, 7]] b. [[2, 16, 17], [1, 2, 3, 4, 5, 6, 7], [17, 18, 19, 22, 41, 48], [50, 69], [2, 3, 21], [1, 2, 4, 8], [1, 2, 4, 5, 6]] Further Examples: L = [[1, 2, 3, 4, 5, 6, 7], [1, 2, 5, 6]] - No reduce L = [[1, 2, 3, 4, 5, 6, 7], [1, 2, 3], [1, 2, 4, 8]] - Yes reduce L = [[1, 2, 3, 4, 5, 6, 7], [7, 6, 5, 4, 3, 2, 1]] - No reduce (Sorry for causing confusion with the incorrect data set.) A: This could be simplified, but: l = [[1, 2, 4, 8], [1, 2, 4, 5, 6], [1, 2, 3], [2, 3, 21], [1, 2, 3, 4], [1, 2, 3, 4, 5, 6, 7]] l2 = l[:] for m in l: for n in l: if set(m).issubset(set(n)) and m != n: l2.remove(m) break print l2 [[1, 2, 4, 8], [2, 3, 21], [1, 2, 3, 4, 5, 6, 7]] A: This code should be rather memory efficient. Beyond storing your initial list of lists, this code uses negligible extra memory (no temporary sets or copies of lists are created). def is_subset(needle,haystack): """ Check if needle is ordered subset of haystack in O(n) """ if len(haystack) < len(needle): return False index = 0 for element in needle: try: index = haystack.index(element, index) + 1 except ValueError: return False else: return True def filter_subsets(lists): """ Given list of lists, return new list of lists without subsets """ for needle in lists: if not any(is_subset(needle, haystack) for haystack in lists if needle is not haystack): yield needle my_lists = [[1, 2, 4, 8], [1, 2, 4, 5, 6], [1, 2, 3], [2, 3, 21], [1, 2, 3, 4], [1, 2, 3, 4, 5, 6, 7]] print list(filter_subsets(my_lists)) >>> [[1, 2, 4, 8], [2, 3, 21], [1, 2, 3, 4, 5, 6, 7]] And, just for fun, a one-liner: def filter_list(L): return [x for x in L if not any(set(x)<=set(y) for y in L if x is not y)] A: A list is a superlist if it is not a subset of any other list. It's a subset of another list if every element of the list can be found, in order, in another list. Here's my code: def is_sublist_of_any_list(cand, lists): # Compare candidate to a single list def is_sublist_of_list(cand, target): try: i = 0 for c in cand: i = 1 + target.index(c, i) return True except ValueError: return False # See if candidate matches any other list return any(is_sublist_of_list(cand, target) for target in lists if len(cand) <= len(target)) # Compare candidates to all other lists def super_lists(lists): return [cand for i, cand in enumerate(lists) if not is_sublist_of_any_list(cand, lists[:i] + lists[i+1:])] if __name__ == '__main__': lists = [[1, 2, 4, 8], [1, 2, 4, 5, 6], [1, 2, 3], [2, 3, 21], [1, 2, 3, 4], [1, 2, 3, 4, 5, 6, 7]] superlists = super_lists(lists) print superlists Here are the results: [[1, 2, 4, 8], [2, 3, 21], [1, 2, 3, 4, 5, 6, 7]] Edit: Results for your later data set. 
>>> lists = [[2, 16, 17], [1, 2, 3, 4, 5, 6, 7], [1], [1, 2, 3, 4], [1, 2], [17, 18, 19, 22, 41, 48], [2, 3], [1, 2, 3], [50, 69], [1, 2, 3], [2, 3, 21], [1, 2, 3], [1, 2, 4, 8], [1, 2, 4, 5, 6]] >>> superlists = super_lists(lists) >>> expected = [[2, 16, 17], [1, 2, 3, 4, 5, 6, 7], [17, 18, 19, 22, 41, 48], [5 0, 69], [2, 3, 21], [1, 2, 4, 8]] >>> assert(superlists == expected) >>> print superlists [[2, 16, 17], [1, 2, 3, 4, 5, 6, 7], [17, 18, 19, 22, 41, 48], [50, 69], [2, 3, 21], [1, 2, 4, 8]] A: Edit: I really need to improve my reading comprehension. Here's the answer to what was actually asked. It exploits the fact that "A is super of B" implies "len(A) > len(B) or A == B". def advance_to(it, value): """Advances an iterator until it matches the given value. Returns False if not found.""" for item in it: if item == value: return True return False def has_supersequence(seq, super_sequences): """Checks if the given sequence has a supersequence in the list of supersequences.""" candidates = map(iter, super_sequences) for next_item in seq: candidates = [seq for seq in candidates if advance_to(seq, next_item)] return len(candidates) > 0 def find_supersequences(sequences): """Finds the supersequences in the given list of sequences. Sequence A is a supersequence of sequence B if B can be created by removing items from A.""" super_seqs = [] for candidate in sorted(sequences, key=len, reverse=True): if not has_supersequence(candidate, super_seqs): super_seqs.append(candidate) return super_seqs print(find_supersequences([[1, 2, 4, 8], [1, 2, 4, 5, 6], [1, 2, 3], [2, 3, 21], [1, 2, 3, 4], [1, 2, 3, 4, 5, 6, 7]])) #Output: [[1, 2, 3, 4, 5, 6, 7], [1, 2, 4, 8], [2, 3, 21]] If you need to also preserve the original order of the sequences, then the find_supersequences() function needs to keep track of the positions of the sequences and sort the output afterwards. A: list0=[[1, 2, 4, 8], [1, 2, 4, 5, 6], [1, 2, 3], [2, 3, 21], [1, 2, 3, 4], [1, 2, 3, 4, 5, 6, 7]] for list1 in list0[:]: for list2 in list0: if list2!=list1: len1=len(list1) c=0 for n in list2: if n==list1[c]: c+=1 if c==len1: list0.remove(list1) break This filters list0 in place using a copy of it. This is good if the result is expected to be about the same size as the original, there is only a few "subset" to remove. If the result is expected to be small and the original is large, you might prefer this one who is more memory freindly as it doesn't copy the original list. list0=[[1, 2, 4, 8], [1, 2, 4, 5, 6], [1, 2, 3], [2, 3, 21], [1, 2, 3, 4], [1, 2, 3, 4, 5, 6, 7]] result=[] for list1 in list0: subset=False for list2 in list0: if list2!=list1: len1=len(list1) c=0 for n in list2: if n==list1[c]: c+=1 if c==len1: subset=True break if subset: break if not subset: result.append(list1) A: This seems to work: original=[[1, 2, 4, 8], [1, 2, 4, 5, 6], [1, 2, 3], [2, 3, 21], [1, 2, 3, 4], [1, 2, 3, 4, 5, 6, 7]] target=[[1, 2, 4, 8], [2, 3, 21], [1, 2, 3, 4, 5, 6, 7]] class SetAndList: def __init__(self,aList): self.list=aList self.set=set(aList) self.isUnique=True def compare(self,aList): s=set(aList) if self.set.issubset(s): #print self.list,'superceded by',aList self.isUnique=False def listReduce(lists): temp=[] for l in lists: for t in temp: t.compare(l) temp.append( SetAndList(l) ) return [t.list for t in temp if t.isUnique] print listReduce(original) print target This prints the calculated list and the target for visual comparison. Uncomment the print line in the compare method to see how various lists get superceded. 
Tested with python 2.6.2 A: I implemented a different issubseq because yours doesn't say that [1, 2, 4, 5, 6] is a subsequence of [1, 2, 3, 4, 5, 6, 7], for example (besides being painfully slow). The solution I came up with looks like this: def is_subseq(a, b): if len(a) > len(b): return False start = 0 for el in a: while start < len(b): if el == b[start]: break start = start + 1 else: return False return True def filter_partial_matches(sets): return [s for s in sets if all([not(is_subseq(s, ss)) for ss in sets if s != ss])] A simple test case, given your inputs and outputs: >>> test = [[1, 2, 4, 8], [1, 2, 4, 5, 6], [1, 2, 3], [2, 3, 21], [1, 2, 3, 4], [1, 2, 3, 4, 5, 6, 7]] >>> another_test = [[1, 2, 3, 4], [2, 4, 3], [3, 4, 5]] >>> filter_partial_matches(test) [[1, 2, 4, 8], [2, 3, 21], [1, 2, 3, 4, 5, 6, 7]] >>> filter_partial_matches(another_test) [[1, 2, 3, 4], [2, 4, 3], [3, 4, 5]] Hope it helps! A: Refined answer after new test case: original= [[2, 16, 17], [1, 2, 3, 4, 5, 6, 7], [1], [1, 2, 3, 4], [1, 2], [17, 18, 19, 22, 41, 48], [2, 3], [1, 2, 3], [50, 69], [1, 2, 3], [2, 3, 21], [1, 2, 3], [1, 2, 4, 8], [1, 2, 4, 5, 6]] class SetAndList: def __init__(self,aList): self.list=aList self.set=set(aList) self.isUnique=True def compare(self,other): if self.set.issubset(other.set): #print self.list,'superceded by',other.list self.isUnique=False def listReduce(lists): temp=[] for l in lists: s=SetAndList(l) for t in temp: t.compare(s) s.compare(t) temp.append( s ) temp=[t for t in temp if t.isUnique] return [t.list for t in temp if t.isUnique] print listReduce(original) You didn't give the required output, but I'm guessing this is right, as [1,2,3] does not appear in the output. A: Thanks to all who suggested solutions and coping with my sometimes erroneous data sets. Using @hughdbrown solution I modified it to what I wanted: The modification was to use a sliding window over the target to ensure the subset sequence was found. I think I should have used a more appropriate word than 'Set' to describe my problem. 
def is_sublist_of_any_list(cand, lists): # Compare candidate to a single list def is_sublist_of_list(cand, target): try: start = target.index(cand[0]) except ValueError: return False while start <= len(target) - len(cand): if cand == target[start:start + len(cand)]: return True try: start = target.index(cand[0], start + 1) except ValueError: return False return False # See if candidate matches any other list return any(is_sublist_of_list(cand, target) for target in lists if len(cand) <= len(target)) # Compare candidates to all other lists def super_lists(lists): a = [cand for i, cand in enumerate(lists) if not is_sublist_of_any_list(cand, lists[:i] + lists[i+1:])] return a lists = [[2, 16, 17], [1, 2, 3, 4, 5, 6, 7], [1], [1, 2, 3, 4], [1, 2], [17, 18, 19, 22, 41, 48], [2, 3], [1, 2, 3], [50, 69], [1, 2, 3], [2, 3, 21], [1, 2, 3], [1, 2, 4, 8], [1, 2, 4, 5, 6]] expect = [[2, 16, 17], [1, 2, 3, 4, 5, 6, 7], [17, 18, 19, 22, 41, 48], [50, 69], [2, 3, 21], [1, 2, 4, 8], [1, 2, 4, 5, 6]] def test(): out = super_lists(list(lists)) print "In : ", lists print "Out : ", out assert (out == expect) Result: In : [[2, 16, 17], [1, 2, 3, 4, 5, 6, 7], [1], [1, 2, 3, 4], [1, 2], [17, 18, 19, 22, 41, 48], [2, 3], [1, 2, 3], [50, 69], [1, 2, 3], [2, 3, 21], [1, 2, 3], [1, 2, 4, 8], [1, 2, 4, 5, 6]] Out : [[2, 16, 17], [1, 2, 3, 4, 5, 6, 7], [17, 18, 19, 22, 41, 48], [50, 69], [2, 3, 21], [1, 2, 4, 8], [1, 2, 4, 5, 6]] A: So what you really wanted was to know if a list was a substring, so to speak, of another, with all the matching elements consecutive. Here is code that converts the candidate and the target list to comma-separated strings and does a substring comparison to see if the candidate appears within the target list: def is_sublist_of_any_list(cand, lists): def comma_list(l): return "," + ",".join(str(x) for x in l) + "," cand = comma_list(cand) return any(cand in comma_list(target) for target in lists) def super_lists(lists): return [cand for i, cand in enumerate(lists) if not is_sublist_of_any_list(cand, lists[:i] + lists[i+1:])] The function comma_list() puts leading and trailing commas on the list to ensure that integers are fully delimited. Otherwise, [1] would be a subset of [100], for example.
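The hand-rolled index walk in the accepted code above can also be written as a single slice comparison. The following is only an illustrative sketch (the function name is made up here), assuming plain Python lists:

    def is_contiguous_sublist(cand, target):
        # True when cand occurs in target as one unbroken run
        n = len(cand)
        return any(target[i:i + n] == cand for i in range(len(target) - n + 1))

    >>> is_contiguous_sublist([1, 2, 3], [0, 1, 2, 3, 4])
    True
    >>> is_contiguous_sublist([1, 3], [1, 2, 3])
    False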
Python list filtering: remove subsets from list of lists
Using Python how do you reduce a list of lists by an ordered subset match [[..],[..],..]? In the context of this question a list L is a subset of list M if M contains all members of L, and in the same order. For example, the list [1,2] is a subset of the list [1,2,3], but not of the list [2,1,3]. Example input: a. [[1, 2, 4, 8], [1, 2, 4, 5, 6], [1, 2, 3], [2, 3, 21], [1, 2, 3, 4], [1, 2, 3, 4, 5, 6, 7]] b. [[2, 16, 17], [1, 2, 3, 4, 5, 6, 7], [1], [1, 2, 3, 4], [1, 2], [17, 18, 19, 22, 41, 48], [2, 3], [1, 2, 3], [50, 69], [1, 2, 3], [2, 3, 21], [1, 2, 3], [1, 2, 4, 8], [1, 2, 4, 5, 6]] Expected result: a. [[1, 2, 4, 8], [2, 3, 21], [1, 2, 3, 4, 5, 6, 7]] b. [[2, 16, 17], [1, 2, 3, 4, 5, 6, 7], [17, 18, 19, 22, 41, 48], [50, 69], [2, 3, 21], [1, 2, 4, 8], [1, 2, 4, 5, 6]] Further Examples: L = [[1, 2, 3, 4, 5, 6, 7], [1, 2, 5, 6]] - No reduce L = [[1, 2, 3, 4, 5, 6, 7], [1, 2, 3], [1, 2, 4, 8]] - Yes reduce L = [[1, 2, 3, 4, 5, 6, 7], [7, 6, 5, 4, 3, 2, 1]] - No reduce (Sorry for causing confusion with the incorrect data set.)
[ "This could be simplified, but:\nl = [[1, 2, 4, 8], [1, 2, 4, 5, 6], [1, 2, 3], [2, 3, 21], [1, 2, 3, 4], [1, 2, 3, 4, 5, 6, 7]]\nl2 = l[:]\n\nfor m in l:\n for n in l:\n if set(m).issubset(set(n)) and m != n:\n l2.remove(m)\n break\n\nprint l2\n[[1, 2, 4, 8], [2, 3, 21], [1, 2, 3, 4, 5, 6, 7]]\n\n", "This code should be rather memory efficient. Beyond storing your initial list of lists, this code uses negligible extra memory (no temporary sets or copies of lists are created).\ndef is_subset(needle,haystack):\n \"\"\" Check if needle is ordered subset of haystack in O(n) \"\"\"\n\n if len(haystack) < len(needle): return False\n\n index = 0\n for element in needle:\n try:\n index = haystack.index(element, index) + 1\n except ValueError:\n return False\n else:\n return True\n\ndef filter_subsets(lists):\n \"\"\" Given list of lists, return new list of lists without subsets \"\"\"\n\n for needle in lists:\n if not any(is_subset(needle, haystack) for haystack in lists\n if needle is not haystack):\n yield needle\n\nmy_lists = [[1, 2, 4, 8], [1, 2, 4, 5, 6], [1, 2, 3], \n [2, 3, 21], [1, 2, 3, 4], [1, 2, 3, 4, 5, 6, 7]] \nprint list(filter_subsets(my_lists))\n\n>>> [[1, 2, 4, 8], [2, 3, 21], [1, 2, 3, 4, 5, 6, 7]]\n\nAnd, just for fun, a one-liner:\ndef filter_list(L):\n return [x for x in L if not any(set(x)<=set(y) for y in L if x is not y)]\n\n", "A list is a superlist if it is not a subset of any other list. It's a subset of another list if every element of the list can be found, in order, in another list.\nHere's my code:\ndef is_sublist_of_any_list(cand, lists):\n # Compare candidate to a single list\n def is_sublist_of_list(cand, target):\n try:\n i = 0\n for c in cand:\n i = 1 + target.index(c, i)\n return True\n except ValueError:\n return False\n # See if candidate matches any other list\n return any(is_sublist_of_list(cand, target) for target in lists if len(cand) <= len(target))\n\n# Compare candidates to all other lists\ndef super_lists(lists):\n return [cand for i, cand in enumerate(lists) if not is_sublist_of_any_list(cand, lists[:i] + lists[i+1:])]\n\nif __name__ == '__main__':\n lists = [[1, 2, 4, 8], [1, 2, 4, 5, 6], [1, 2, 3], [2, 3, 21], [1, 2, 3, 4], [1, 2, 3, 4, 5, 6, 7]]\n superlists = super_lists(lists)\n print superlists\n\nHere are the results:\n[[1, 2, 4, 8], [2, 3, 21], [1, 2, 3, 4, 5, 6, 7]]\n\nEdit: Results for your later data set.\n>>> lists = [[2, 16, 17], [1, 2, 3, 4, 5, 6, 7], [1], [1, 2, 3, 4], [1, 2], [17,\n 18, 19, 22, 41, 48], [2, 3], [1, 2, 3], [50, 69], [1, 2, 3], [2, 3, 21], [1, 2,\n 3], [1, 2, 4, 8], [1, 2, 4, 5, 6]]\n>>> superlists = super_lists(lists)\n>>> expected = [[2, 16, 17], [1, 2, 3, 4, 5, 6, 7], [17, 18, 19, 22, 41, 48], [5\n0, 69], [2, 3, 21], [1, 2, 4, 8]]\n>>> assert(superlists == expected)\n>>> print superlists\n[[2, 16, 17], [1, 2, 3, 4, 5, 6, 7], [17, 18, 19, 22, 41, 48], [50, 69], [2, 3,\n21], [1, 2, 4, 8]]\n\n", "Edit: I really need to improve my reading comprehension. Here's the answer to what was actually asked. It exploits the fact that \"A is super of B\" implies \"len(A) > len(B) or A == B\". \ndef advance_to(it, value):\n \"\"\"Advances an iterator until it matches the given value. 
Returns False\n if not found.\"\"\"\n for item in it:\n if item == value:\n return True\n return False\n\ndef has_supersequence(seq, super_sequences):\n \"\"\"Checks if the given sequence has a supersequence in the list of\n supersequences.\"\"\" \n candidates = map(iter, super_sequences)\n for next_item in seq:\n candidates = [seq for seq in candidates if advance_to(seq, next_item)]\n return len(candidates) > 0\n\ndef find_supersequences(sequences):\n \"\"\"Finds the supersequences in the given list of sequences.\n\n Sequence A is a supersequence of sequence B if B can be created by removing\n items from A.\"\"\"\n super_seqs = []\n for candidate in sorted(sequences, key=len, reverse=True):\n if not has_supersequence(candidate, super_seqs):\n super_seqs.append(candidate)\n return super_seqs\n\nprint(find_supersequences([[1, 2, 4, 8], [1, 2, 4, 5, 6], [1, 2, 3],\n [2, 3, 21], [1, 2, 3, 4], [1, 2, 3, 4, 5, 6, 7]]))\n#Output: [[1, 2, 3, 4, 5, 6, 7], [1, 2, 4, 8], [2, 3, 21]]\n\nIf you need to also preserve the original order of the sequences, then the find_supersequences() function needs to keep track of the positions of the sequences and sort the output afterwards.\n", "list0=[[1, 2, 4, 8], [1, 2, 4, 5, 6], [1, 2, 3], [2, 3, 21], [1, 2, 3, 4], [1, 2, 3, 4, 5, 6, 7]]\n\nfor list1 in list0[:]:\n for list2 in list0:\n if list2!=list1:\n len1=len(list1)\n c=0\n for n in list2:\n if n==list1[c]:\n c+=1\n if c==len1:\n list0.remove(list1)\n break\n\nThis filters list0 in place using a copy of it. This is good if the result is expected to be about the same size as the original, there is only a few \"subset\" to remove.\nIf the result is expected to be small and the original is large, you might prefer this one who is more memory freindly as it doesn't copy the original list.\nlist0=[[1, 2, 4, 8], [1, 2, 4, 5, 6], [1, 2, 3], [2, 3, 21], [1, 2, 3, 4], [1, 2, 3, 4, 5, 6, 7]]\nresult=[]\n\nfor list1 in list0:\n subset=False\n for list2 in list0:\n if list2!=list1:\n len1=len(list1)\n c=0\n for n in list2:\n if n==list1[c]:\n c+=1\n if c==len1:\n subset=True\n break\n if subset:\n break\n if not subset:\n result.append(list1)\n\n", "This seems to work:\noriginal=[[1, 2, 4, 8], [1, 2, 4, 5, 6], [1, 2, 3], [2, 3, 21], [1, 2, 3, 4], [1, 2, 3, 4, 5, 6, 7]]\n\ntarget=[[1, 2, 4, 8], [2, 3, 21], [1, 2, 3, 4, 5, 6, 7]]\n\nclass SetAndList:\n def __init__(self,aList):\n self.list=aList\n self.set=set(aList)\n self.isUnique=True\n def compare(self,aList):\n s=set(aList)\n if self.set.issubset(s):\n #print self.list,'superceded by',aList\n self.isUnique=False\n\ndef listReduce(lists):\n temp=[]\n for l in lists:\n for t in temp:\n t.compare(l)\n temp.append( SetAndList(l) )\n\n return [t.list for t in temp if t.isUnique]\n\nprint listReduce(original)\nprint target\n\nThis prints the calculated list and the target for visual comparison.\nUncomment the print line in the compare method to see how various lists get superceded.\nTested with python 2.6.2\n", "I implemented a different issubseq because yours doesn't say that [1, 2, 4, 5, 6] is a subsequence of [1, 2, 3, 4, 5, 6, 7], for example (besides being painfully slow). 
The solution I came up with looks like this:\n def is_subseq(a, b):\n if len(a) > len(b): return False\n start = 0\n for el in a:\n while start < len(b):\n if el == b[start]:\n break\n start = start + 1\n else:\n return False\n return True\n\ndef filter_partial_matches(sets):\n return [s for s in sets if all([not(is_subseq(s, ss)) for ss in sets if s != ss])]\n\nA simple test case, given your inputs and outputs:\n>>> test = [[1, 2, 4, 8], [1, 2, 4, 5, 6], [1, 2, 3], [2, 3, 21], [1, 2, 3, 4], [1, 2, 3, 4, 5, 6, 7]]\n>>> another_test = [[1, 2, 3, 4], [2, 4, 3], [3, 4, 5]]\n>>> filter_partial_matches(test)\n[[1, 2, 4, 8], [2, 3, 21], [1, 2, 3, 4, 5, 6, 7]]\n>>> filter_partial_matches(another_test)\n[[1, 2, 3, 4], [2, 4, 3], [3, 4, 5]]\n\nHope it helps!\n", "Refined answer after new test case:\noriginal= [[2, 16, 17], [1, 2, 3, 4, 5, 6, 7], [1], [1, 2, 3, 4], [1, 2], [17, 18, 19, 22, 41, 48], [2, 3], [1, 2, 3], [50, 69], [1, 2, 3], [2, 3, 21], [1, 2, 3], [1, 2, 4, 8], [1, 2, 4, 5, 6]]\n\nclass SetAndList:\n def __init__(self,aList):\n self.list=aList\n self.set=set(aList)\n self.isUnique=True\n def compare(self,other):\n if self.set.issubset(other.set):\n #print self.list,'superceded by',other.list\n self.isUnique=False\n\ndef listReduce(lists):\n temp=[]\n for l in lists:\n s=SetAndList(l)\n for t in temp:\n t.compare(s)\n s.compare(t)\n temp.append( s )\n temp=[t for t in temp if t.isUnique]\n\n return [t.list for t in temp if t.isUnique]\n\nprint listReduce(original)\n\nYou didn't give the required output, but I'm guessing this is right, as [1,2,3] does not appear in the output.\n", "Thanks to all who suggested solutions and coping with my sometimes erroneous data sets. Using @hughdbrown solution I modified it to what I wanted:\nThe modification was to use a sliding window over the target to ensure the subset sequence was found. 
I think I should have used a more appropriate word than 'Set' to describe my problem.\ndef is_sublist_of_any_list(cand, lists):\n # Compare candidate to a single list\n def is_sublist_of_list(cand, target):\n try:\n i = 0 \n try:\n start = target.index(cand[0])\n except:\n return False\n\n while start < (len(target) + len(cand)) - start:\n if cand == target[start:len(cand)]:\n return True\n else:\n start = target.index(cand[0], start + 1)\n except ValueError:\n return False\n\n # See if candidate matches any other list\n return any(is_sublist_of_list(cand, target) for target in lists if len(cand) <= len(target))\n\n# Compare candidates to all other lists\ndef super_lists(lists):\n a = [cand for i, cand in enumerate(lists) if not is_sublist_of_any_list(cand, lists[:i] + lists[i+1:])]\n return a\n\nlists = [[2, 16, 17], [1, 2, 3, 4, 5, 6, 7], [1], [1, 2, 3, 4], [1, 2], [17, 18, 19, 22, 41, 48], [2, 3], [1, 2, 3], [50, 69], [1, 2, 3], [2, 3, 21], [1, 2, 3], [1, 2, 4, 8], [1, 2, 4, 5, 6]]\nexpect = [[2, 16, 17], [1, 2, 3, 4, 5, 6, 7], [17, 18, 19, 22, 41, 48], [50, 69], [2, 3, 21], [1, 2, 4, 8], [1, 2, 4, 5, 6]]\n\ndef test():\n out = super_lists(list(lists))\n\n print \"In : \", lists\n print \"Out : \", out\n\n assert (out == expect)\n\nResult:\nIn : [[2, 16, 17], [1, 2, 3, 4, 5, 6, 7], [1], [1, 2, 3, 4], [1, 2], [17, 18, 19, 22, 41, 48], [2, 3], [1, 2, 3], [50, 69], [1, 2, 3], [2, 3, 21], [1, 2, 3], [1, 2, 4, 8], [1, 2, 4, 5, 6]]\nOut : [[2, 16, 17], [1, 2, 3, 4, 5, 6, 7], [17, 18, 19, 22, 41, 48], [50, 69], [2, 3, 21], [1, 2, 4, 8], [1, 2, 4, 5, 6]]\n\n", "So what you really wanted was to know if a list was a substring, so to speak, of another, with all the matching elements consecutive. Here is code that converts the candidate and the target list to comma-separated strings and does a substring comparison to see if the candidate appears within the target list\ndef is_sublist_of_any_list(cand, lists):\n def comma_list(l):\n return \",\" + \",\".join(str(x) for x in l) + \",\"\n cand = comma_list(cand)\n return any(cand in comma_list(target) for target in lists if len(cand) <= len(target))\n\n\ndef super_lists(lists):\n return [cand for i, cand in enumerate(lists) if not is_sublist_of_any_list(cand, lists[:i] + lists[i+1:])]\n\nThe function comma_list() puts leading and trailing commas on the list to ensure that integers are fully delimited. Otherwise, [1] would be a subset of [100], for example.\n" ]
[ 9, 6, 1, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0001318935_list_python.txt
Q: How to create probability density function graph using csv dictreader, matplotlib and numpy? I'm trying to create a simple probability density function (pdf) graph using data from one column of a csv file using csv dictreader, matplotlib and numpy... Is there an easy way to use CSV DictReader combined with numpy arrays? Below is code that doesn't work. The error message is TypeError: len() of unsized object, which I'm guessing is related to the fact that my data is not in numpy array format? Also my data has negative and positive numbers. Thanks in advance! import easygui import csv import scipy.stats from numpy import* from pylab import* filename= easygui.fileopenbox(msg='Altitude outlier graph', title='select file', filetypes=['*.csv'], default='X:\\') alt_file=open(filename) x=[] for row in csv.DictReader(alt_file): x.append(float(row['Dist_90m(nmi)'])) a=scipy.stats.pdf_moments(x) prob, bins, patches= hist(a, 10,align='left',facecolor='green') ylabel('probability density function') show() A: The line a=scipy.stats.pdf_moments(x) "Return[s] the Gaussian expanded pdf function given the list of central moments (first one is mean)." That is to say, a is a function, and you must take its value somehow. So I modified the line: prob, bins, patches= hist([a(i/100.0) for i in xrange(0,100,1)], 10, align='left', facecolor='green') And produced this graph with my sample data. Now my statistics are pretty rusty, and I am not sure if you normally take a pdf over 0-1, but you can figure it out from there. If you do need to go over a range of floating points, range and xrange do not produce floats, so one easy way around that is to generate large numbers and divide down; hence a(i/100.0) instead of a(i) for i in xrange(0, 1, 0.01). A: Thanks for all the help!! The following code produces a graph of the probability density function: I'm still having some issues formatting it but I think this is a good start. import easygui import csv import scipy.stats import numpy from pylab import* filename= easygui.fileopenbox(msg='Altitude outlier graph', title='select file', filetypes=['*.csv'], default='X:\\herring_schools\\') alt_file=open(filename) a=[] for row in csv.DictReader(alt_file): a.append(row['Dist_90m(nmi)']) y= numpy.array(a, float) pdf, bins, patches=hist(y, bins=6, align='left',range=None, normed=True) ylabel('probability density function') xlabel('Distance from 90m contour line(nm)') ylim([0,1]) show()
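As a variation on the final snippet, the same histogram can be drawn without pylab's star imports. This is only a sketch: the file name is hypothetical, the 'Dist_90m(nmi)' column name is taken from the question, and note that newer matplotlib spells the normalization flag density=True where older releases used normed=True:

    import csv
    import numpy
    import matplotlib.pyplot as plt

    with open('altitudes.csv') as alt_file:  # hypothetical file name
        y = numpy.array([float(row['Dist_90m(nmi)'])
                         for row in csv.DictReader(alt_file)])

    fig, ax = plt.subplots()
    ax.hist(y, bins=6, density=True)  # density=True in newer matplotlib
    ax.set_xlabel('Distance from 90m contour line (nm)')
    ax.set_ylabel('probability density function')
    plt.show()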
How to create probability density function graph using csv dictreader, matplotlib and numpy?
I'm trying to create a simple probability density function (pdf) graph using data from one column of a csv file using csv dictreader, matplotlib and numpy... Is there an easy way to use CSV DictReader combined with numpy arrays? Below is code that doesn't work. The error message is TypeError: len() of unsized object, which I'm guessing is related to the fact that my data is not in numpy array format? Also my data has negative and positive numbers. Thanks in advance! import easygui import csv import scipy.stats from numpy import* from pylab import* filename= easygui.fileopenbox(msg='Altitude outlier graph', title='select file', filetypes=['*.csv'], default='X:\\') alt_file=open(filename) x=[] for row in csv.DictReader(alt_file): x.append(float(row['Dist_90m(nmi)'])) a=scipy.stats.pdf_moments(x) prob, bins, patches= hist(a, 10,align='left',facecolor='green') ylabel('probability density function') show()
[ "The line\na=scipy.stats.pdf_moments(x)\n\n\"Return[s] the Gaussian expanded pdf function given the list of central moments (first one is mean).\"\nThat is to say, a is a function, and you must take its value somehow.\nSo I modified the line:\nprob, bins, patches= hist([a(i/100.0) for i in xrange(0,100,1)], 10, align='left', facecolor='green')\n\nAnd produced this graph with my sample data.\nNow my statistics are pretty rusty, and I am not sure if you normally take a pdf over 0-1, but you can figure it out from there.\nIf you do need to go over a range of floating points, range and xrange do not produce floats, so one easy way around that is to generate large numbers and divide down; hence a(i/100.0) instead of a(i) for i in xrange(0, 1, 0.01).\n\n", "Thanks for all the help!! The following code produces a graph of the probability density function: I'm still having some issues formating it but I think this is a good start.\nimport easygui\nimport csv\nimport scipy.stats\nimport numpy\nfrom pylab import*\n\nfilename= easygui.fileopenbox(msg='Altitude outlier graph', title='select file', filetypes=['*.csv'], default='X:\\\\herring_schools\\\\')\nalt_file=open(filename) \n\na=[]\nfor row in csv.DictReader(alt_file):\n a.append(row['Dist_90m(nmi)'])\ny= numpy.array(a, float) \n\npdf, bins, patches=hist(y, bins=6, align='left',range=None, normed=True)\nylabel('probability density function')\nxlabel('Distance from 90m contour line(nm)')\nylim([0,1])\nshow()\n\n" ]
[ 4, 0 ]
[]
[]
[ "csv", "matplotlib", "numpy", "python", "scipy" ]
stackoverflow_0001329105_csv_matplotlib_numpy_python_scipy.txt
Q: Soaplib functions with default arguments I have to write a soaplib method that has many arguments. The idea is that the user should be able to choose which arguments he wants to provide. Is that even possible? I know it is possible in Python generally, but there is an error when I try to set it up like a normal Python method with default arguments. A: Create a complex type class Parameters(ClassSerializer): class types: param1 = primitive.String param2 = primitive.String param3 = primitive.String ... @soapmethod(Parameters, _returns=primitive.String, _outVariableName='return') def doSomething(self, parameters): if parameters.param1 and parameters.param1 != "": # or something like this # ... elif ...
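To make the answer's pattern concrete, here is a rough sketch of defaulting omitted parameters inside the method body. It reuses only the soaplib constructs shown above; doSomething and the default values are placeholders, not part of soaplib's API:

    @soapmethod(Parameters, _returns=primitive.String, _outVariableName='return')
    def doSomething(self, parameters):
        # fall back to placeholder defaults when the client omits a field
        param1 = parameters.param1 or 'default1'
        param2 = parameters.param2 or 'default2'
        param3 = parameters.param3 or 'default3'
        return '%s %s %s' % (param1, param2, param3)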
Soaplib functions with default arguments
I have to write a soaplib method that has many arguments. The idea is that the user should be able to choose which arguments he wants to provide. Is that even possible? I know it is possible in Python generally, but there is an error when I try to set it up like a normal Python method with default arguments.
[ "Create complex type\nclass Parameters(ClassSerializer):\n class types:\n param1 = primitive.String\n param2 = primitive.String\n param3 = primitive.String\n\n...\n\n@soapmethod(Parameters, _returns=primitive.String, _outVariableName='return')\ndef soSomething(self, parameters):\n if parameters.param1 and parameters.param1 != \"\": # or something like this\n # ...\n elif ...\n\n" ]
[ 0 ]
[]
[]
[ "default_value", "python", "soap" ]
stackoverflow_0001227547_default_value_python_soap.txt
Q: Prevent python imports compiling I have a python file that imports a few frequently changed python files. I have had trouble with the imported files not recompiling when I change them. How do I stop them compiling? A: I don't think that's possible - it's the way Python works. The best you could do, I think, is to have some kind of automated script which first deletes *.pyc files. Or you could have a development module which automatically compiles all imports - try the compile module. I've personally not had this trouble before, but try checking the timestamps on the files. You could try running touch on all the Python files in the directory. (find -name \\*.py -exec touch \\{\\} \\;) A: There are some modules which might help you: The py_compile module (http://effbot.org/librarybook/py-compile.htm) will allow you to explicitly compile modules (without running them like the 'import' statement does). import py_compile py_compile.compile("my_module.py") Also, the compileall module (http://effbot.org/librarybook/compileall.htm) will compile all the modules found in a directory. import compileall compileall.compile_dir(".", force=1) A: In python 2.6, you should be able to supply the -B option. A: You are looking for compileall compileall.compile_dir(dir[, maxlevels[, ddir[, force[, rx[, quiet]]]]]) Recursively descend the directory tree named by dir, compiling all .py files along the way.
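Putting the suggestions above together, you can either force a recompile of a whole directory or stop .pyc files from being written at all; sys.dont_write_bytecode is the in-process counterpart of the -B switch and exists from Python 2.6 on (the directory name below is just an example):

    import compileall
    compileall.compile_dir('mypackage', force=1)  # recompile even if timestamps look current

    # or, from Python 2.6 on, skip writing .pyc files entirely (same effect as python -B):
    import sys
    sys.dont_write_bytecode = True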
Prevent python imports compiling
I have a python file that imports a few frequently changed python files. I have had trouble with the imported files not recompiling when I change them. How do I stop them compiling?
[ "I don't think that's possible - its the way Python works. The best you could do, I think, is to have some kind of automated script which deletes *.pyc files at first. Or you could have a development module which automatically compiles all imports - try the compile module.\nI've personally not had this trouble before, but try checking the timestamps on the files. You could try running touch on all the Python files in the directory. (find -name \\\\*.py -exec touch \\\\{\\\\} \\\\;)\n", "There are some modules which might help you:\nThe py_compile module (http://effbot.org/librarybook/py-compile.htm) will allow you to explicitly compile modules (without running them like the 'import' statement does).\nimport py_compile\npy_compile.compile(\"my_module.py\")\n\nAlso, the compileall module (http://effbot.org/librarybook/compileall.htm) will compile all the modules found in a directory.\nimport compileall\ncompileall.compile_dir(\".\", force=1)\n\n", "In python 2.6, you should be able to supply the -B option.\n", "You are looking for compileall\n\ncompileall.compile_dir(dir[,\n maxlevels[, ddir[, force[, rx[,\n quiet]]]]])\nRecursively descend the directory tree named by dir, compiling all .py\n files along the way.\n\n" ]
[ 3, 1, 1, 0 ]
[]
[]
[ "compilation", "import", "python" ]
stackoverflow_0001331235_compilation_import_python.txt
Q: Why does weakproxy not always preserve equivalence in python? MySQLDb uses weak proxy to prevent circular dependencies between cursors and connections. But you would expect from the documentation on weakref that you could still test for equivalence. Yet: In [36]: interactive.cursor.connection.thread_id() Out[36]: 4267758 In [37]: interactive.web_logic.conns.primary.thread_id() Out[37]: 4267758 In [38]: interactive.cursor.connection == interactive.web_logic.conns.primary Out[38]: False In [39]: interactive.cursor.connection Out[39]: <weakproxy at 0x3881c60 to Connection at 0x94c010> In [40]: interactive.web_logic.conns.primary Out[40]: <_mysql.connection open to 'xendb01' at 94c010> How do I tell if the connections are the same? A: I've long found weakref.proxy's design and implementation to be somewhat shaky. Witness...: >>> import weakref >>> ob=set(range(23)) >>> rob=weakref.proxy(ob) >>> rob==ob False >>> rob.__eq__(ob) True ...DEFINITELY peculiar! In practice what I use from weakref are weak-key or sometimes weak-value dictionaries; but weakref.ref is sounder than the proxy wrapper on top of it: >>> wr=weakref.ref(ob) >>> wr()==ob True The need to "call" the ref to get the object (or None if the object has since disappeared) unfortunately makes it non-transparent (so a DB API module couldn't do it while staying compliant to the API). I don't understand why MySqlDb wants weak cursor->connection referencing at all, but if they do I see why they felt they had to use proxies rather than refs. However, one pays a very high price for that transparency! Btw, the "explicit __eq__" trick (or an equivalent one with __cmp__, depending on the type of the underlying object) may help you, even though it's definitely inelegant! A: Wrap the non-proxy with weakref.proxy and use the identity operator: >>> interactive.cursor.connection is weakref.proxy(interactive.web_logic.conns.primary) True Calling weakref.proxy() twice will return the same proxy object. A: If the object is a standard weakref, you need to call it to get the object itself. import weakref class Test(object): pass a = Test() b = weakref.ref(a) a is b() # True a == b() # True Using weakrefs here seems wrong, though: if I construct a connection, create a cursor from it, and discard the connection object, the cursor should remain valid. There shouldn't be a circular dependency unless the connection is keeping a list of all cursors, in which case that is what should be the weakref.
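A compressed illustration of the two workarounds from the answers, as a sketch; the caching behavior relied on in the second assertion is what the second answer describes for CPython:

    import weakref

    class Connection(object):
        pass

    conn = Connection()
    proxy = weakref.proxy(conn)

    # 1) go through weakref.ref: calling the ref hands back the real object (or None)
    assert weakref.ref(conn)() is conn

    # 2) CPython caches the proxy, so wrapping the object again yields the same proxy
    assert proxy is weakref.proxy(conn)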
Why does weakproxy not always preserve equivalence in python?
MySQLDb uses weak proxy to prevent circular dependencies between cursors and connections. But you would expect from the documentation on weakref that you could still test for equivalence. Yet: In [36]: interactive.cursor.connection.thread_id() Out[36]: 4267758 In [37]: interactive.web_logic.conns.primary.thread_id() Out[37]: 4267758 In [38]: interactive.cursor.connection == interactive.web_logic.conns.primary Out[38]: False In [39]: interactive.cursor.connection Out[39]: <weakproxy at 0x3881c60 to Connection at 0x94c010> In [40]: interactive.web_logic.conns.primary Out[40]: <_mysql.connection open to 'xendb01' at 94c010> How do I tell if the connections are the same?
[ "I've long found weakref.proxy's design and implementation to be somewhat shaky. Witness...:\n>>> import weakref\n>>> ob=set(range(23))\n>>> rob=weakref.proxy(ob)\n>>> rob==ob\nFalse\n>>> rob.__eq__(ob)\nTrue\n\n...DEFINITELY peculiar! In practice what I use from weakref are weak-key or sometimes weak-value dictionaries; but weakref.ref is sounder than the proxy wrapper on top of it:\n>>> wr=weakref.ref(ob)\n>>> wr()==ob\nTrue\n\nThe need to \"call\" the ref to get the object (or None if the object has since disappeared) unfortunately makes it non-transparent (so a DB API module couldn't do it while staying compliant to the API). I don't understand why MySqlDb wants weak cursor->connection referencing at all, but if they do I see why they felt they had to use proxies rather than refs. However, one pays a very high price for that transparency!\nBtw, the \"explicit __eq__\" trick (or an equivalent one with __cmp__, depending on the type of the underlying object) may help you, even though it's definitely inelegant!\n", "Wrap the non proxy with weakref.proxy and use the identity operator:\n>>> interactive.cursor.connection is weakref.proxy(interactive.web_logic.conns.primary)\nTrue\n\nCalling weakref.proxy() twice will return the same proxy object.\n", "If the object is a standard weakref, you need to call it to get the object itself.\nimport weakref\nclass Test(object): pass\na = Test()\nb = weakref.ref(a)\na is b() # True\na == b() # True\n\nUsing weakrefs here seems wrong, though: if I construct a connection, create a cursor from it, and discard the connection object, the cursor should remain valid. There shouldn't be a circular dependency unless the connection is keeping a list of all cursors, in which case that is what should be the weakref.\n" ]
[ 3, 1, 0 ]
[]
[]
[ "python", "weak_references" ]
stackoverflow_0001331800_python_weak_references.txt
Q: QSortFilterProxyModel.mapToSource crashes. No info why I have the following code: proxy_index = self.log_list.filter_proxy_model.createIndex(index, COL_REV) model_index = self.log_list.filter_proxy_model.mapToSource(proxy_index) revno = self.log_list.model.data(model_index,QtCore.Qt.DisplayRole) self.setEditText(revno.toString()) The code crashed on the second line. There is no exception raised. No traceback. No warnings. How do I fix this? A: It may be that you're using the proxy model's createIndex() method incorrectly. Usually, the createIndex() method is called as part of a model's index() method implementation. Have you tried calling the proxy model's index() method to get a proxy index and then mapping that to the source? Perhaps you could show the code in context or explain what you are trying to do. A: I've run into the same problem, but fortunately using the index() method instead of createIndex() as David recommends does the trick. In general it's a bad idea to mess around with the internal pointer of QModelIndex outside the index() method. Even when using your own model, messing around with the internal pointer often leads to unexpected behavior, since Qt's view code is pretty obscure to the user.
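A sketch of the fix the first answer suggests, using the names from the question and assuming row is a plain row number within the proxy model:

    proxy = self.log_list.filter_proxy_model
    proxy_index = proxy.index(row, COL_REV)  # let the model construct a valid index
    if proxy_index.isValid():
        model_index = proxy.mapToSource(proxy_index)
        revno = self.log_list.model.data(model_index, QtCore.Qt.DisplayRole)
        self.setEditText(revno.toString())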
QSortFilterProxyModel.mapToSource crashes. No info why
I have the following code: proxy_index = self.log_list.filter_proxy_model.createIndex(index, COL_REV) model_index = self.log_list.filter_proxy_model.mapToSource(proxy_index) revno = self.log_list.model.data(model_index,QtCore.Qt.DisplayRole) self.setEditText(revno.toString()) The code crashed on the second line. There is no exception raised. No traceback. No warnings. How do I fix this?
[ "It may be that you're using the proxy model's createIndex() method incorrectly. Usually, the createIndex() method is called as part of a model's index() method implementation.\nHave you tried calling the proxy model's index() method to get a proxy index then mapping that to the source?\nPerhaps you could show the code in context or explain what you are trying to do.\n", "I've run into the same problem, but fortunately using the index () method instead of createIndex () as David recommends does the magic.\nIn general it's a bad idea to mess around with the internal pointer of QModelIndex outside the index () method. Even when using your own Model messing around the internal pointer leads often to unexpected bahavior since Qts View code is pretty obscure to the user.\n" ]
[ 2, 0 ]
[]
[]
[ "pyqt", "pyqt4", "python", "qt", "qt4" ]
stackoverflow_0000671340_pyqt_pyqt4_python_qt_qt4.txt
Q: Python multiprocessing for bulk file/conversion operation on Windows I have written a python script which watches a directory for new subdirectories, and then acts on each subdirectory in a loop. We have an external process which creates these subdirectories. Inside each subdirectory is a text file and a number of images. There is one record (line) in the text file for each image. For each subdirectory my script scans the text file, then calls a few external programs, one detects blank images (custom exe), then a call to "mogrify" (part of ImageMagick) which resizes and converts the images and finally a call to 7-zip which packages all of the converted images and text file into a single archive. The script runs fine, but is currently sequential, looping over each subdirectory one at a time. It seems to me that this would be a good chance to do some multi-processing, since this is being run on a dual-CPU machine (8 cores total). The processing of a given subdirectory is independent of all others...they are self-contained. Currently I am just creating a list of sub-directories using a call to os.listdir() and then looping over that list. I figure I could move all of the per-subdirectory code (conversions, etc.) into a separate function, and then somehow create a separate process to handle each subdirectory. Since I am somewhat new to Python, some suggestions on how to approach such multiprocessing would be appreciated. I am on Vista x64 running Python 2.6. A: I agree that the design of this sounds like it could benefit from concurrency. Take a look at the multiprocessing module. You may also want to look at the threading module, and compare speeds. It's difficult to tell exactly how many cores are necessary to gain a benefit from multiprocessing vs. threading and eight cores is well within the range where threading might be faster (yes, despite the GIL). From a design perspective, my biggest recommendation is to avoid interaction between processes entirely if possible. Have one central thread look for the event that triggers process creation (I'm guessing it's a subdirectory creation?) and then spawn a process to handle the subdirectory. From there on out, the spawned process should not interact with any other processes, ever. From your description it seems like this should be possible. Lastly, I'd like to add in a word of encouragement for moving to Python 3.0. There is a lot of talk of staying with 2.x but 3.0 does make some real improvements, and as more and more people start moving to Python 3.0, it's going to be more difficult to get tools and support for 2.x.
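As a starting point for the advice above, a pared-down sketch of driving one worker per subdirectory with multiprocessing.Pool; the worker body and the watch directory are placeholders, and the __main__ guard is required on Windows:

    import os
    from multiprocessing import Pool

    def process_subdir(path):
        # placeholder: scan the text file, run the blank-image check,
        # call mogrify, then 7-zip the subdirectory's results
        return path

    if __name__ == '__main__':
        base = r'C:\incoming'  # hypothetical watch directory
        subdirs = [os.path.join(base, d) for d in os.listdir(base)
                   if os.path.isdir(os.path.join(base, d))]
        pool = Pool()  # defaults to one worker per core
        pool.map(process_subdir, subdirs)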
Python multiprocessing for bulk file/conversion operation on Windows
I have written a python script which watches a directory for new subdirectories, and then acts on each subdirectory in a loop. We have an external process which creates these subdirectories. Inside each subdirectory is a text file and a number of images. There is one record (line) in the text file for each image. For each subdirectory my script scans the text file, then calls a few external programs, one detects blank images (custom exe), then a call to "mogrify" (part of ImageMagick) which resizes and converts the images and finally a call to 7-zip which packages all of the converted images and text file into a single archive. The script runs fine, but is currently sequential, looping over each subdirectory one at a time. It seems to me that this would be a good chance to do some multi-processing, since this is being run on a dual-CPU machine (8 cores total). The processing of a given subdirectory is independent of all others...they are self-contained. Currently I am just creating a list of sub-directories using a call to os.listdir() and then looping over that list. I figure I could move all of the per-subdirectory code (conversions, etc.) into a separate function, and then somehow create a separate process to handle each subdirectory. Since I am somewhat new to Python, some suggestions on how to approach such multiprocessing would be appreciated. I am on Vista x64 running Python 2.6.
[ "I agree that the design of this sounds like it could benefit from concurrency. Take a look at the multiprocessing module. You may also want to look at the threading module, and compare speeds. It's difficult to tell exactly how many cores are necessary to gain a benefit from multiprocessing vs. threading and eight cores is well within the range where threading might be faster (yes, despite the GIL).\nFrom a design perspective, my biggest recommendation is to avoid interaction between processes entirely if possible. Have one central thread look for the event that triggers process creation (I'm guessing it's a subdirectory creation?) and then spawn a process to handle the subdirectory. From there on out, the spawned process should not interact with any other processes, ever. From your description it seems like this should be possible.\nLastly, I'd like to add in a word of encouragement for moving to Python 3.0. There is a lot of talk of staying with 2.x but 3.0 does make some real improvements, and as more and more people start moving to Python 3.0, it's going to be more difficult to get tools and support for 2.x.\n" ]
[ 0 ]
[]
[]
[ "multiprocessing", "python", "windows" ]
stackoverflow_0001332583_multiprocessing_python_windows.txt
Q: Overloading failUnlessEqual in unittest.TestCase I want to overload failUnlessEqual in unittest.TestCase so I created a new TestCase class: import unittest class MyTestCase(unittest.TestCase): def failUnlessEqual(self, first, second, msg=None): if msg: msg += ' Expected: %r - Received %r' % (first, second) unittest.TestCase.failUnlessEqual(self, first, second, msg) And I am using it like: class test_MyTest(MyTestCase): def testi(self): i = 1 self.assertEqual(i, 2, 'Checking value of i') def testx(self): x = 1 self.assertEqual(x, 2, 'Checking value of x') This is what I get when I run the tests: >>> unittest.main() FF ====================================================================== FAIL: testi (__main__.test_MyTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "<stdin>", line 4, in testi AssertionError: Checking value of i ====================================================================== FAIL: testx (__main__.test_MyTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "<stdin>", line 7, in testx AssertionError: Checking value of x ---------------------------------------------------------------------- Ran 2 tests in 0.000s FAILED (failures=2) I was expecting that the message would be 'Checking value of x Expected: 1 - Received 2'. The MyTestCase class is not being used at all. Can you tell me what I am doing wrong? A: You are calling assertEqual, but define failUnlessEqual. So why would you expect that your method is called - you are calling a different method, after all? Perhaps you have looked at the definition of TestCase, and seen the line assertEqual = assertEquals = failUnlessEqual This means that the method assertEqual has the same definition as failUnlessEqual. Unfortunately, that does not mean that overriding failUnlessEqual will also override assertEqual - assertEqual remains an alias for the failUnlessEqual definition in the base class. To make it work correctly, you need to repeat the assignments in your subclass, thereby redefining all three names.
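Spelling out the fix described in the answer, re-binding the aliases in the subclass so that assertEqual picks up the override; a minimal sketch:

    import unittest

    class MyTestCase(unittest.TestCase):
        def failUnlessEqual(self, first, second, msg=None):
            if msg:
                msg += ' Expected: %r - Received %r' % (first, second)
            unittest.TestCase.failUnlessEqual(self, first, second, msg)

        # re-point the aliases at the override, mirroring the base class line
        assertEqual = assertEquals = failUnlessEqual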
Overloading failUnlessEqual in unittest.TestCase
I want to overload failUnlessEqual in unittest.TestCase so I created a new TestCase class: import unittest class MyTestCase(unittest.TestCase): def failUnlessEqual(self, first, second, msg=None): if msg: msg += ' Expected: %r - Received %r' % (first, second) unittest.TestCase.failUnlessEqual(self, first, second, msg) And I am using it like: class test_MyTest(MyTestCase): def testi(self): i = 1 self.assertEqual(i, 2, 'Checking value of i') def testx(self): x = 1 self.assertEqual(x, 2, 'Checking value of x') This is what I get when I run the tests: >>> unittest.main() FF ====================================================================== FAIL: testi (__main__.test_MyTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "<stdin>", line 4, in testi AssertionError: Checking value of i ====================================================================== FAIL: testx (__main__.test_MyTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "<stdin>", line 7, in testx AssertionError: Checking value of x ---------------------------------------------------------------------- Ran 2 tests in 0.000s FAILED (failures=2) I was expecting that the message would be 'Checking value of x Expected: 1 - Received 2'. The MyTestCase class is not being used at all. Can you tell me what I am doing wrong?
[ "You are calling assertEqual, but define failUnlessEqual. So why would you expect that your method is called - you are calling a different method, after all?\nPerhaps you have looked at the definition of TestCase, and seen the line\nassertEqual = assertEquals = failUnlessEqual\n\nThis means that the method assertEqual has the same definition as failUnlessEqual. Unfortunately, that does not mean that overriding failUnlessEqual will also override assertEqual - assertEqual remains an alias for the failUnlessEqual definition in the base class.\nTo make it work correctly, you need to repeat the assignments in your subclass, thereby redefining all three names.\n" ]
[ 2 ]
[]
[]
[ "overloading", "python", "unit_testing" ]
stackoverflow_0001332656_overloading_python_unit_testing.txt
Q: import csv file into mysql database using django web application I try to upload a csv file into my web application and store it into mysql database but failed. Please can anyone help me? my user.py script: def import_contact(request): if request.method == 'POST': form = UploadContactForm(request.POST, request.FILES) if form.is_valid(): csvfile = request.FILES['file'] print csvfile csvfile.read() testReader = csv.reader(csvfile,delimiter=' ', quotechar='|') for row in testReader: print "|".join(row) return HttpResponseRedirect('/admin') else: form = UploadContactForm() vars = RequestContext(request, { 'form': form }) return render_to_response('admin/import_contact.html', vars) my forms.py script: class UploadContactForm(forms.Form): file = forms.FileField(label='File:', error_messages = {'required': 'File required'}) A: Since you haven't provided the code for the getcsv function, I'll have to use my crystal ball here a bit. One reason why the print in the for row in testReader: loop isn't working is that getcsv may already process the file. Use the seek method to reset the object's position in the file to zero again. That way the for loop will process it properly. Another reason why there's nothing stored in the database might be that in the code you've supplied there doesn't seem to be a reference to a model. So how should Django know what it should store and where?
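One concrete culprit in the view above is csvfile.read(), which leaves the upload positioned at end-of-file before csv.reader ever sees it. A sketch of the rewind-and-save approach the answer hints at; Contact and its name field are hypothetical:

    def import_contact(request):
        if request.method == 'POST':
            form = UploadContactForm(request.POST, request.FILES)
            if form.is_valid():
                csvfile = request.FILES['file']
                csvfile.seek(0)  # rewind: any earlier read() leaves the cursor at the end
                for row in csv.reader(csvfile, delimiter=' ', quotechar='|'):
                    if row:
                        Contact.objects.create(name=row[0])  # hypothetical model and field
                return HttpResponseRedirect('/admin')
        else:
            form = UploadContactForm()
        vars = RequestContext(request, {'form': form})
        return render_to_response('admin/import_contact.html', vars)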
import csv file into mysql database using django web application
I try to upload a csv file into my web application and store it into mysql database but failed. Please can anyone help me? my user.py script: def import_contact(request): if request.method == 'POST': form = UploadContactForm(request.POST, request.FILES) if form.is_valid(): csvfile = request.FILES['file'] print csvfile csvfile.read() testReader = csv.reader(csvfile,delimiter=' ', quotechar='|') for row in testReader: print "|".join(row) return HttpResponseRedirect('/admin') else: form = UploadContactForm() vars = RequestContext(request, { 'form': form }) return render_to_response('admin/import_contact.html', vars) my forms.py script: class UploadContactForm(forms.Form): file = forms.FileField(label='File:', error_messages = {'required': 'File required'})
[ "Since you haven't provided the code for the getcsv function, I'll have to use my crystal ball here a bit.\nOne reason why the print in the for row in testReader: loop isn't working is that getcsv may already processes the file. Use the seek method to reset the objects position in the file to zero again. That way the for loop will process it properly.\nAnother reason why there's nothing stored in the database might be that in the code you've supplied there doesn't seem to be a reference to a model. So how should Django know what it should store and where?\n" ]
[ 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001332077_django_python.txt
Q: Display Django form inputs on thanks page I'm attempting to take 4-5 fields from a large django form and display them on the thanks page. I want to display the values with a good degree of control, as I'll be building an iFrame with parameterized querystrings based on the form inputs. Currently I have: forms.py ---- -*- encoding: utf-8 -*- from django import forms from django.forms import extras, ModelForm from django.utils.safestring import mark_safe from siteapp.compare.models import Compare HOWMUCH_CHOICES = ( ('', '--------------------------'), ('20000', '20,000'), ('30000', '30,000'), ... ('2000000', '2,000,000'), ) HOWLONG_CHOICES = ( ('', '--------------------------'), ('1', '1 Year'), ... ('39', '39 Years'), ('40', '40 Years'), ) ...etc. class ComparisonForm(forms.Form): js = mark_safe(u"document.compareForm.how_much.selectedindex = 4;") how_much = forms.ChoiceField(choices=HOWMUCH_CHOICES, widget=forms.Select(attrs={'onMouseOver':'setDefaults()','class':'required validate-selection','title':'Select value of cover required','onLoad': js})) how_long = forms.ChoiceField(choices=HOWLONG_CHOICES, widget=forms.Select(attrs={'class':'required validate-selection','title':'Select length of cover required'})) who_for = forms.ChoiceField(choices=WHOFOR_CHOICES, widget=forms.Select(attrs={'class':'required validate-selection','title':'Select whether you require cover for a partner also'})) ... class Meta: model = Compare models.py ----- class Compare(models.Model): how_much = models.CharField(max_length=28,choices=HOWMUCH_CHOICES#,default='100000' ) how_long = models.CharField(max_length=28,choices=HOWLONG_CHOICES) who_for = models.CharField(max_length=28,choices=WHOFOR_CHOICES) ... partner_date_of_birth = models.DateField(blank=True) def __unicode__(self): return self.name views.py---- def qc_comparison(request): return render_to_response('compare/compare.html', locals(), context_instance=RequestContext(request)) urls.py---- (r'^compare/thanks/$', 'django.views.generic.simple.direct_to_template', {'template': 'compare/thanks.html'}), I'm trying to dig out the best way to do this from the documentation, but it's not clear how to pass the variables correctly to a thanks page. A: The values available to the template are provided by the view. The render_to_response function provides a dictionary of values that are passed to the template. See this. For no good reason, you've provided locals(). Not sure why. You want to provide a dictionary like request.POST -- not locals(). Your locals() will have request. That means your template can use request.POST to access the form. A: I'm not sure if I understand what you are trying to do. But if you want to render form values in plain text you can try django-renderformplain. Just initialize your form with POST (or GET) data as you would in any other form processing view, and pass your form instance in the context.
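One way to hand a few cleaned fields to the thanks page is to stash them in the session on successful validation and read them back in a dedicated thanks view; the thanks URL would then point at this view instead of direct_to_template. A sketch only, assuming Django's session middleware is enabled:

    from django.http import HttpResponseRedirect  # assumed import

    def qc_comparison(request):
        if request.method == 'POST':
            form = ComparisonForm(request.POST)
            if form.is_valid():
                # keep only the fields the thanks page needs
                request.session['compare'] = dict(
                    (k, form.cleaned_data[k])
                    for k in ('how_much', 'how_long', 'who_for'))
                return HttpResponseRedirect('/compare/thanks/')
        else:
            form = ComparisonForm()
        return render_to_response('compare/compare.html', {'form': form},
                                  context_instance=RequestContext(request))

    def thanks(request):
        compare = request.session.get('compare', {})
        return render_to_response('compare/thanks.html', {'compare': compare},
                                  context_instance=RequestContext(request))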
Display Django form inputs on thanks page
I'm attempting to take 4-5 fields from a large django form and display them on the thanks page. I want to display the values with a good degree of control, as I'll be building an iFrame with parameterized querystrings based on the form inputs. Currently I have: forms.py ---- -*- encoding: utf-8 -*- from django import forms from django.forms import extras, ModelForm from django.utils.safestring import mark_safe from siteapp.compare.models import Compare HOWMUCH_CHOICES = ( ('', '--------------------------'), ('20000', '20,000'), ('30000', '30,000'), ... ('2000000', '2,000,000'), ) HOWLONG_CHOICES = ( ('', '--------------------------'), ('1', '1 Year'), ... ('39', '39 Years'), ('40', '40 Years'), ) ...etc. class ComparisonForm(forms.Form): js = mark_safe(u"document.compareForm.how_much.selectedindex = 4;") how_much = forms.ChoiceField(choices=HOWMUCH_CHOICES, widget=forms.Select(attrs={'onMouseOver':'setDefaults()','class':'required validate-selection','title':'Select value of cover required','onLoad': js})) how_long = forms.ChoiceField(choices=HOWLONG_CHOICES, widget=forms.Select(attrs={'class':'required validate-selection','title':'Select length of cover required'})) who_for = forms.ChoiceField(choices=WHOFOR_CHOICES, widget=forms.Select(attrs={'class':'required validate-selection','title':'Select whether you require cover for a partner also'})) ... class Meta: model = Compare models.py ----- class Compare(models.Model): how_much = models.CharField(max_length=28,choices=HOWMUCH_CHOICES#,default='100000' ) how_long = models.CharField(max_length=28,choices=HOWLONG_CHOICES) who_for = models.CharField(max_length=28,choices=WHOFOR_CHOICES) ... partner_date_of_birth = models.DateField(blank=True) def __unicode__(self): return self.name views.py---- def qc_comparison(request): return render_to_response('compare/compare.html', locals(), context_instance=RequestContext(request)) urls.py---- (r'^compare/thanks/$', 'django.views.generic.simple.direct_to_template', {'template': 'compare/thanks.html'}), I'm trying to dig out the best way to do this from the documentation, but it's not clear how to pass the variables correctly to a thanks page.
[ "The values available to the template are provided by the view.\nThe render_to_response function provides a dictionary of values that are passed to the template. See this.\nFor no good reason, you've provided locals(). Not sure why.\nYou want to provide a dictionary like request.POST -- not locals().\n\nYour locals() will have request. That means your template can use request.POST to access the form.\n", "I'm not sure if I understand what you are trying to do.\nBut if you want to render form values in plain text you can try django-renderformplain. Just initalize your form with POST (or GET) data as you would in any other form processing view, and pass your form instance in the context.\n" ]
[ 0, 0 ]
[]
[]
[ "django", "forms", "python" ]
stackoverflow_0001292951_django_forms_python.txt
Q: Renaming a HTML file with Python A bit of background: When I save a web page from e.g. IE8 as "webpage, complete", the images and such that the page contains are placed in a subfolder with the postfix "_files". This convention allows Windows to synchronize the .htm file and the accompanying folder. Now, in order to keep the synchronization intact, when I rename the HTML file from my Python script I want the "_files" folder to be renamed also. Is there an easy way to do this, or will I need to - rename the .htm file - rename the _files folder - parse the .htm file and replace all references to the old _files folder name with the new name? A: There is just one easy way: Have IE save the file again under the new name. But if you want to do it later, you must parse the HTML. In this case, BeautifulSoup is your friend. A: If you rename the folder, I'm not sure how you can get around parsing the .htm file and replacing instances of _files with the new suffix. Perhaps you can use a folder alias (shortcut?) but then that's not a very clean solution. A: You can use a simple string replace on your HTML file without parsing it; it can of course be troublesome if the text being replaced is mentioned in the HTML itself. os.rename("test.html", "test2.html") os.rename("test_files", "test2_files") with open("test2.html", "r") as f: s = f.read().replace("test_files", "test2_files") with open("test2.html", "w") as f: f.write(s)
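If raw string replacement is too blunt, the BeautifulSoup route the first answer mentions could look roughly like this sketch (BeautifulSoup 3 import style; the file and folder names are taken from the snippet above):

    from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3.x

    with open('test2.html') as f:
        soup = BeautifulSoup(f.read())

    # rewrite only actual resource references, not page text
    for tag in soup.findAll(src=True):
        tag['src'] = tag['src'].replace('test_files', 'test2_files')

    with open('test2.html', 'w') as f:
        f.write(str(soup))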
Renaming a HTML file with Python
A bit of background: When I save a web page from e.g. IE8 as "webpage, complete", the images and such that the page contains are placed in a subfolder with the postfix "_files". This convention allows Windows to synchronize the .htm file and the accompanying folder. Now, in order to keep the synchronization intact, when I rename the HTML file from my Python script I want the "_files" folder to be renamed also. Is there an easy way to do this, or will I need to - rename the .htm file - rename the _files folder - parse the .htm file and replace all references to the old _files folder name with the new name?
[ "There is just one easy way: Have IE save the file again under the new name. But if you want to do it later, you must parse the HTML. In this case, BeautifulSoup is your friend.\n", "If you rename the folder, I'm not sure how you can get around parsing the .htm file and replacing instances of _files with the new suffix. Perhaps you can use a folder alias (shortcut?) but then that's not a very clean solution.\n", "you can use simple string replace on your HTML file without parsing it, it can of course be troublesome if the text being replaced is mentioned in the HTML itself.. \nos.rename(\"test.html\", \"test2.html\")\nos.rename(\"test_files\", \"test2_files\")\n\nwith open(\"test2.html\", \"r\") as f:\n s = f.read().replace(\"test_files\", \"test2_files\")\n\nwith open(\"test2.html\", \"w\") as f:\n f.write(s)\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "html", "python" ]
stackoverflow_0001332876_html_python.txt
Q: Problem passing bash output to a python script I'm fairly new to programming and I searched the internet for a way to pass bash output to a Python script. I came up with this in bash. XAS_SVN=`svn info` ssh hudson@test "python/runtests.py $XAS_SVN" And this in python. import sys print sys.argv[1] When I echo $XAS_SVN I get the result. Path: . URL: //svn/rnd-projects/testing/python Repository Root: //svn/rnd-projects Repository UUID: d07d5450-0a36-4e07-90d2-9411ff75afe9 Revision: 140 Node Kind: directory Schedule: normal Last Changed Author: Roy van der Valk Last Changed Rev: 140 Last Changed Date: 2009-06-09 14:13:29 +0200 (Tue, 09 Jun 2009) However the python script just prints the following. Path: Why is this and how can I solve this? Is the $XAS_SVN variable not of string type? I know about the subprocess module and the Popen function, but I don't think this is a solution since the python script runs on another server. A: Since you have spaces in the variable, you need to escape them or read all the arguments in your script: print ' '.join(sys.argv[1:]) But it might be better to use stdin/stdout to communicate, especially if there can be some characters susceptible to being interpreted by the shell in the output (like "`$'). In the python script do: for l in sys.stdin: sys.stdout.write(l) and in shell: svn info | ssh hudson@test python/runtests.py A: Yes, you need to put your input to python/runtests.py in quotes because otherwise argv[1] gets it only up to the first space. Like this: ssh hudson@test "python/runtests.py \"$XAS_SVN\""
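If the argument route is kept, the quoting can also be built on the Python side before invoking ssh; a sketch only, noting that subprocess.check_output needs Python 2.7+ and that subprocess here runs locally, so only ssh touches the other server:

    import pipes       # pipes.quote does shell escaping in Python 2
    import subprocess

    xas_svn = subprocess.check_output(['svn', 'info'])  # Python 2.7+
    remote_cmd = 'python/runtests.py %s' % pipes.quote(xas_svn)
    subprocess.call(['ssh', 'hudson@test', remote_cmd])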
Problem passing bash output to a python script
I'm fairly new to programming and I searched the internet for a way to pass bash output to a Python script. I came up with this in bash. XAS_SVN=`svn info` ssh hudson@test "python/runtests.py $XAS_SVN" And this in python. import sys print sys.argv[1] When I echo $XAS_SVN I get the result. Path: . URL: //svn/rnd-projects/testing/python Repository Root: //svn/rnd-projects Repository UUID: d07d5450-0a36-4e07-90d2-9411ff75afe9 Revision: 140 Node Kind: directory Schedule: normal Last Changed Author: Roy van der Valk Last Changed Rev: 140 Last Changed Date: 2009-06-09 14:13:29 +0200 (Tue, 09 Jun 2009) However the python script just prints the following. Path: Why is this and how can I solve this? Is the $XAS_SVN variable not of string type? I know about the subprocess module and the Popen function, but I don't think this is a solution since the python script runs on another server.
[ "Since you have spaces in the variable, you need to escape them or read all the arguments in your script:\nprint ' '.join(sys.argv[1:])\n\nBut it might be better to use stdin/stdout to communicate, especially if there can be some characters susceptible to be interpreted by the shell in the output (like \"`$').\nIn the python script do:\nfor l in sys.stdin:\n sys.stdout.write(l)\n\nand in shell:\nsvn info | ssh hudson@test python/runtests.py\n\n", "Yes, you need to put in quotes your input to python/runtest.py because otherwise argv[1] gets it only up to the first space. Like this:\nssh hudson@test \"python/runtest.py \\\"$XAS_SVN\\\"\"\n\n" ]
[ 1, 0 ]
[]
[]
[ "bash", "python" ]
stackoverflow_0001333107_bash_python.txt
Q: Python String Cleanup + Manipulation (Accented Characters) I have a database full of names like: John Smith Scott J. Holmes Dr. Kaplan Ray's Dog Levi's Adrian O'Brien Perry Sean Smyre Carie Burchfield-Thompson Björn Árnason There are a few foreign names with accents in them that need to be converted to strings with non-accented characters. I'd like to convert the full names (after stripping characters like " ' " , "-") to user logins like: john.smith scott.j.holmes dr.kaplan rays.dog levis adrian.obrien perry.sean.smyre carie.burchfieldthompson bjorn.arnason So far I have: Fullname.strip() # get rid of leading/trailing white space Fullname.lower() # make everything lower case ... # after bad chars converted/removed Fullname.replace(' ', '.') # replace spaces with periods A: Take a look at this link [redacted] Here is the code from the page def latin1_to_ascii (unicrap): """This replaces UNICODE Latin-1 characters with something equivalent in 7-bit ASCII. All characters in the standard 7-bit ASCII range are preserved. In the 8th bit range all the Latin-1 accented letters are stripped of their accents. Most symbol characters are converted to something meaningful. Anything not converted is deleted. """ xlate = { 0xc0:'A', 0xc1:'A', 0xc2:'A', 0xc3:'A', 0xc4:'A', 0xc5:'A', 0xc6:'Ae', 0xc7:'C', 0xc8:'E', 0xc9:'E', 0xca:'E', 0xcb:'E', 0xcc:'I', 0xcd:'I', 0xce:'I', 0xcf:'I', 0xd0:'Th', 0xd1:'N', 0xd2:'O', 0xd3:'O', 0xd4:'O', 0xd5:'O', 0xd6:'O', 0xd8:'O', 0xd9:'U', 0xda:'U', 0xdb:'U', 0xdc:'U', 0xdd:'Y', 0xde:'th', 0xdf:'ss', 0xe0:'a', 0xe1:'a', 0xe2:'a', 0xe3:'a', 0xe4:'a', 0xe5:'a', 0xe6:'ae', 0xe7:'c', 0xe8:'e', 0xe9:'e', 0xea:'e', 0xeb:'e', 0xec:'i', 0xed:'i', 0xee:'i', 0xef:'i', 0xf0:'th', 0xf1:'n', 0xf2:'o', 0xf3:'o', 0xf4:'o', 0xf5:'o', 0xf6:'o', 0xf8:'o', 0xf9:'u', 0xfa:'u', 0xfb:'u', 0xfc:'u', 0xfd:'y', 0xfe:'th', 0xff:'y', 0xa1:'!', 0xa2:'{cent}', 0xa3:'{pound}', 0xa4:'{currency}', 0xa5:'{yen}', 0xa6:'|', 0xa7:'{section}', 0xa8:'{umlaut}', 0xa9:'{C}', 0xaa:'{^a}', 0xab:'<<', 0xac:'{not}', 0xad:'-', 0xae:'{R}', 0xaf:'_', 0xb0:'{degrees}', 0xb1:'{+/-}', 0xb2:'{^2}', 0xb3:'{^3}', 0xb4:"'", 0xb5:'{micro}', 0xb6:'{paragraph}', 0xb7:'*', 0xb8:'{cedilla}', 0xb9:'{^1}', 0xba:'{^o}', 0xbb:'>>', 0xbc:'{1/4}', 0xbd:'{1/2}', 0xbe:'{3/4}', 0xbf:'?', 0xd7:'*', 0xf7:'/' } r = '' for i in unicrap: if xlate.has_key(ord(i)): r += xlate[ord(i)] elif ord(i) >= 0x80: pass else: r += i return r # This gives an example of how to use latin1_to_ascii(). # This creates a string with all the characters in the latin-1 character set # then it converts the string to plain 7-bit ASCII. if __name__ == '__main__': s = unicode('','latin-1') for c in range(32,256): if c != 0x7f: s = s + unicode(chr(c),'latin-1') print 'INPUT:' print s.encode('latin-1') print print 'OUTPUT:' print latin1_to_ascii(s) A: If you are not afraid to install third-party modules, then have a look at the python port of the Perl module Text::Unidecode (it's also on pypi). The module does nothing more than use a lookup table to transliterate the characters. I glanced over the code and it looks very simple. So I suppose it's working on pretty much any OS and on any Python version (crossing fingers). It's also easy to bundle with your application. With this module you don't have to create your lookup table manually (= reduced risk of it being incomplete). The advantage of this module compared to the unicode normalization technique is this: Unicode normalization does not replace all characters. A good example is a character like "æ". Unicode normalisation will see it as "Letter, lowercase" (Ll). This means using the normalize method will give you neither a replacement character nor a useful hint. Unfortunately, that character is not representable in ASCII. So you'll get errors. The mentioned module does a better job at this. This will actually replace the "æ" with "ae". Which is actually useful and makes sense. The most impressive thing I've seen is that it goes much further. It even replaces Japanese Kana characters mostly properly. For example, it replaces "は" with "ha". Which is perfectly fine. It's not fool-proof though, as the current version replaces "ち" with "ti" instead of "chi". So you'll have to handle it with care for the more exotic characters. Usage of the module is straightforward: from unidecode import unidecode var_utf8 = "æは".decode("utf8") unidecode( var_utf8 ).encode("ascii") >>> "aeha" Note that I have nothing to do with this module directly. It just happens that I find it very useful. Edit: The patch I submitted fixed the bug concerning the Japanese kana. I've only fixed the ones I could spot right away. I may have missed some. A: The following function is generic: import unicodedata def not_combining(char): return unicodedata.category(char) != 'Mn' def strip_accents(text, encoding): unicode_text= unicodedata.normalize('NFD', text.decode(encoding)) return filter(not_combining, unicode_text).encode(encoding) # in a cp1252 environment >>> print strip_accents("déjà", "cp1252") deja # in a cp1253 environment >>> print strip_accents("καλημέρα", "cp1253") καλημερα Obviously, you should know the encoding of your strings. A: I would do something like this # coding=utf-8 def alnum_dot(name, replace={}): import re for k, v in replace.items(): name = name.replace(k, v) return re.sub("[^a-z.]", "", name.strip().lower()) print alnum_dot(u"Frédrik Holmström", { u"ö":"o", " ":"." }) Second argument is a dict of the characters you want replaced, all non a-z and . chars that are not replaced will be stripped A: The translate method allows you to delete characters. You can use that to delete arbitrary characters. Fullname.translate(None,"'-\"") If you want to delete whole classes of characters, you might want to use the re module. re.sub('[^a-z0-9 ]', '', Fullname.strip().lower(),)
Python String Cleanup + Manipulation (Accented Characters)
I have a database full of names like: John Smith Scott J. Holmes Dr. Kaplan Ray's Dog Levi's Adrian O'Brien Perry Sean Smyre Carie Burchfield-Thompson Björn Árnason There are a few foreign names with accents in them that need to be converted to strings with non-accented characters. I'd like to convert the full names (after stripping characters like " ' " , "-") to user logins like: john.smith scott.j.holmes dr.kaplan rays.dog levis adrian.obrien perry.sean.smyre carie.burchfieldthompson bjorn.arnason So far I have: Fullname.strip() # get rid of leading/trailing white space Fullname.lower() # make everything lower case ... # after bad chars converted/removed Fullname.replace(' ', '.') # replace spaces with periods
[ "Take a look at this link [redacted]\nHere is the code from the page\ndef latin1_to_ascii (unicrap):\n \"\"\"This replaces UNICODE Latin-1 characters with\n something equivalent in 7-bit ASCII. All characters in the standard\n 7-bit ASCII range are preserved. In the 8th bit range all the Latin-1\n accented letters are stripped of their accents. Most symbol characters\n are converted to something meaningful. Anything not converted is deleted.\n \"\"\"\n xlate = {\n 0xc0:'A', 0xc1:'A', 0xc2:'A', 0xc3:'A', 0xc4:'A', 0xc5:'A',\n 0xc6:'Ae', 0xc7:'C',\n 0xc8:'E', 0xc9:'E', 0xca:'E', 0xcb:'E',\n 0xcc:'I', 0xcd:'I', 0xce:'I', 0xcf:'I',\n 0xd0:'Th', 0xd1:'N',\n 0xd2:'O', 0xd3:'O', 0xd4:'O', 0xd5:'O', 0xd6:'O', 0xd8:'O',\n 0xd9:'U', 0xda:'U', 0xdb:'U', 0xdc:'U',\n 0xdd:'Y', 0xde:'th', 0xdf:'ss',\n 0xe0:'a', 0xe1:'a', 0xe2:'a', 0xe3:'a', 0xe4:'a', 0xe5:'a',\n 0xe6:'ae', 0xe7:'c',\n 0xe8:'e', 0xe9:'e', 0xea:'e', 0xeb:'e',\n 0xec:'i', 0xed:'i', 0xee:'i', 0xef:'i',\n 0xf0:'th', 0xf1:'n',\n 0xf2:'o', 0xf3:'o', 0xf4:'o', 0xf5:'o', 0xf6:'o', 0xf8:'o',\n 0xf9:'u', 0xfa:'u', 0xfb:'u', 0xfc:'u',\n 0xfd:'y', 0xfe:'th', 0xff:'y',\n 0xa1:'!', 0xa2:'{cent}', 0xa3:'{pound}', 0xa4:'{currency}',\n 0xa5:'{yen}', 0xa6:'|', 0xa7:'{section}', 0xa8:'{umlaut}',\n 0xa9:'{C}', 0xaa:'{^a}', 0xab:'<<', 0xac:'{not}',\n 0xad:'-', 0xae:'{R}', 0xaf:'_', 0xb0:'{degrees}',\n 0xb1:'{+/-}', 0xb2:'{^2}', 0xb3:'{^3}', 0xb4:\"'\",\n 0xb5:'{micro}', 0xb6:'{paragraph}', 0xb7:'*', 0xb8:'{cedilla}',\n 0xb9:'{^1}', 0xba:'{^o}', 0xbb:'>>',\n 0xbc:'{1/4}', 0xbd:'{1/2}', 0xbe:'{3/4}', 0xbf:'?',\n 0xd7:'*', 0xf7:'/'\n }\n\n r = ''\n for i in unicrap:\n if xlate.has_key(ord(i)):\n r += xlate[ord(i)]\n elif ord(i) >= 0x80:\n pass\n else:\n r += i\n return r\n\n# This gives an example of how to use latin1_to_ascii().\n# This creates a string will all the characters in the latin-1 character set\n# then it converts the string to plain 7-bit ASCII.\nif __name__ == '__main__':\ns = unicode('','latin-1')\nfor c in range(32,256):\n if c != 0x7f:\n s = s + unicode(chr(c),'latin-1')\nprint 'INPUT:'\nprint s.encode('latin-1')\nprint\nprint 'OUTPUT:'\nprint latin1_to_ascii(s)\n\n", "If you are not afraid to install third-party modules, then have a look at the python port of the Perl module Text::Unidecode (it's also on pypi).\nThe module does nothing more than use a lookup table to transliterate the characters. I glanced over the code and it looks very simple. So I suppose it's working on pretty much any OS and on any Python version (crossingfingers). It's also easy to bundle with your application.\nWith this module you don't have to create your lookup table manually ( = reduced risk it being incomplete).\nThe advantage of this module compared to the unicode normalization technique is this: Unicode normalization does not replace all characters. A good example is a character like \"æ\". Unicode normalisation will see it as \"Letter, lowercase\" (Ll). This means using the normalize method will give you neither a replacement character nor a useful hint. Unfortunately, that character is not representable in ASCII. So you'll get errors.\nThe mentioned module does a better job at this. This will actually replace the \"æ\" with \"ae\". Which is actually useful and makes sense.\nThe most impressive thing I've seen is that it goes much further. It even replaces Japanese Kana characters mostly properly. For example, it replaces \"は\" with \"ha\". Wich is perfectly fine. It's not fool-proof though as the current version replaces \"ち\" with \"ti\" instead of \"chi\". 
So you'll have to handle it with care for the more exotic characters.\nUsage of the module is straightforward::\nfrom unidecode import unidecode\nvar_utf8 = \"æは\".decode(\"utf8\")\nunidecode( var_utf8 ).encode(\"ascii\")\n>>> \"aeha\"\n\nNote that I have nothing to do with this module directly. It just happens that I find it very useful.\nEdit: The patch I submitted fixed the bug concerning the Japanese kana. I've only fixed the one's I could spot right away. I may have missed some.\n", "The following function is generic:\nimport unicodedata\n\ndef not_combining(char):\n return unicodedata.category(char) != 'Mn'\n\ndef strip_accents(text, encoding):\n unicode_text= unicodedata.normalize('NFD', text.decode(encoding))\n return filter(not_combining, unicode_text).encode(encoding)\n\n# in a cp1252 environment\n>>> print strip_accents(\"déjà\", \"cp1252\")\ndeja\n# in a cp1253 environment\n>>> print strip_accents(\"καλημέρα\", \"cp1253\")\nκαλημερα\n\nObviously, you should know the encoding of your strings.\n", "I would do something like this\n# coding=utf-8\n\ndef alnum_dot(name, replace={}):\n import re\n\n for k, v in replace.items():\n name = name.replace(k, v)\n\n return re.sub(\"[^a-z.]\", \"\", name.strip().lower())\n\nprint alnum_dot(u\"Frédrik Holmström\", {\n u\"ö\":\"o\",\n \" \":\".\"\n})\n\nSecond argument is a dict of the characters you want replaced, all non a-z and . chars that are not replaced will be stripped\n", "The translate method allows you to delete characters. You can use that to delete arbitrary characters.\nFullname.translate(None,\"'-\\\"\")\n\nIf you want to delete whole classes of characters, you might want to use the re module.\nre.sub('[^a-z0-9 ]', '', Fullname.strip().lower(),) \n\n" ]
[ 12, 5, 3, 1, 1 ]
[]
[]
[ "python", "regex", "string", "unicode" ]
stackoverflow_0000930303_python_regex_string_unicode.txt
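A minimal sketch tying the answers above together into one login generator, using only the standard library (unicodedata). Note that NFKD decomposition will not transliterate characters such as "æ"; for those you still need a lookup table or the unidecode module. The function name and the two test calls are illustrative.

import re
import unicodedata

def make_login(fullname):
    # decompose accents into base letter + combining mark, then
    # drop anything that does not survive an ASCII encoding
    name = unicodedata.normalize('NFKD', fullname)
    name = name.encode('ascii', 'ignore').decode('ascii')
    name = name.strip().lower()
    name = re.sub(r"['.\-]", '', name)  # strip ' . - before joining
    return re.sub(r'\s+', '.', name)    # spaces become periods

print(make_login(u"Björn Árnason"))             # bjorn.arnason
print(make_login(u"Carie Burchfield-Thompson")) # carie.burchfieldthompson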
Q: Guitar Tablature and Music sheet oriented plugins for wordpress or Drupal I'm familiar with WordPress and CakePHP; however, I'm building a small community website (hobby) that allows users to post sheet music (PDF/image) or guitar tabs (text files). The sheet music should be organized by artist and song. I've already built my own CMS, but I'm not looking forward to maintaining it, as I'm scared I won't have the time. An example of this would be ultimate-guitar.com. Not the whole site, just the way they display tabs and such. I posted a 1-month-old build for you guys to see what I've done here. The site for the most part works, but as I said, I'm scared that I won't be able to maintain it. This is not the latest build; I haven't gotten around to uploading it. It fixes most of the issues. Are there any music publishing plugins for Drupal and WordPress (I haven't seen any personally)? I'm open to other suggestions and comments as well, so please feel free to mention them. A: It doesn't sound like you have any music-specific needs; you just need to be able to attach text, PDFs or images to an item (node in Drupal) and assign tags to it. You can use taxonomy in Drupal to assign artists to the nodes. I should think what you want is pretty simple to do. I would suggest that you try installing it and seeing what it can do. A: You might want to take a look at the Guitar module, which will allow you to easily add chord fingerings to your tabs.
Guitar Tablature and Music sheet oriented plugins for wordpress or Drupal
I'm familiar with WordPress and CakePHP; however, I'm building a small community website (hobby) that allows users to post sheet music (PDF/image) or guitar tabs (text files). The sheet music should be organized by artist and song. I've already built my own CMS, but I'm not looking forward to maintaining it, as I'm scared I won't have the time. An example of this would be ultimate-guitar.com. Not the whole site, just the way they display tabs and such. I posted a 1-month-old build for you guys to see what I've done here. The site for the most part works, but as I said, I'm scared that I won't be able to maintain it. This is not the latest build; I haven't gotten around to uploading it. It fixes most of the issues. Are there any music publishing plugins for Drupal and WordPress (I haven't seen any personally)? I'm open to other suggestions and comments as well, so please feel free to mention them.
[ "It doesn’t sound like you have any music specific needs, you just need to be able to attach text, pdf or images to an item (node in Drupal) and assign tags to it. \nYou can use taxonomy in Drupal to assign artists to the nodes. I should think what you want is pretty simple to do. I would suggest that you try installing it and seeing what it can do.\n", "You might want to take a look at the Guitar module, which will allow you to easily add chord fingerings to your tabs.\n" ]
[ 3, 1 ]
[]
[]
[ "drupal", "php", "python", "wordpress" ]
stackoverflow_0001328533_drupal_php_python_wordpress.txt
Q: Tix and Python 3.0 Has anyone seen anything in Tix work under python 3.0? I've tried to work through the examples, but when creating anything it states that cnf is unsubscriptable. I also noticed that none of the Dir Select stuff (DirList, DirTree) works under 2.6.1. Why doesn't Python either dump Tix or support it? It's got a lot of good stuff for making programs easily. A: Likely what happened is that no one noticed the bug. (It's very hard to automatically test GUI libraries like Tix and Tkinter.) You should report bugs as you find them to http://bugs.python.org. A: Generally speaking, if you're using third-party modules, you're better off avoiding Python 3.0 for now. If you're working on a third-party module yourself, porting forward to Python 3.0 is a good idea, but for the time being general development in it is just going to be a recipe for pain. A: See this: http://docs.python.org/3.1/library/tkinter.tix.html?highlight=tix#module-tkinter.tix
Tix and Python 3.0
Has anyone seen anything in Tix work under python 3.0? I've tried to work through the examples, but when creating anything it states that cnf is unsubscriptable. I also noticed that none of the Dir Select stuff (DirList, DirTree) works under 2.6.1. Why doesn't Python either dump Tix or support it? It's got a lot of good stuff for making programs easily.
[ "Likely what happened is that no one noticed the bug. (It's very hard to automatically test GUI libraries like Tix and Tkinter.) You should report bugs as you find them to http://bugs.python.org.\n", "Generally speaking, if you're using third-party modules, you're better off avoiding Python 3.0 for now. If you're working on a third-party module yourself, porting forward to Python 3.0 is a good idea, but for the time being general development in it is just going to be a recipe for pain.\n", "See this: http://docs.python.org/3.1/library/tkinter.tix.html?highlight=tix#module-tkinter.tix\n" ]
[ 1, 0, 0 ]
[]
[]
[ "python", "python_3.x", "tix" ]
stackoverflow_0000399326_python_python_3.x_tix.txt
Q: Tick Python instances from Python I am interested in doing a programming game using python, and I would like to do it in the style of GunTactyx (http://apocalyx.sourceforge.net/guntactyx/index.php). Only much simpler, as I am primarily interested in the parallel execution of python scripts from python. Gun Tactyx challenges the player to write a program that controls individual units working together in teams, where each instruction carries a time penalty. Each program is executed in its own protected environment, communicating with the game world through functions that can interact with the game world. I was wondering if there is a Python way of achieving a similar effect. Pseudo code structure of the game engine would be something like: Instantiate units with individual programs while 1 Update game world for unit in units: unit.tick() The simulation would run until a timeout or some goal condition. Kind regards /Tax A: Maybe you should look into a fork of Python: Stackless. It allows concurrently running thousands of micro-threads without much performance penalty - every "thread" (these aren't real OS threads) could be one Unit. Also, it's very easy to implement the Actor model with Stackless: In the actor model, everything is an actor (duh!). Actors are objects (in the generic sense, not necessarily the OO sense) that can: Receive messages from other actors. Process the received messages as they see fit. Send messages to other actors. Create new Actors. Actors do not have any direct access to other actors. All communication is accomplished via message passing. This provides a rich model to simulate real-world objects that are loosely-coupled and have limited knowledge of each others internals. from Introduction to concurrent programming with stackless Alternatively, you could also simulate this behavior by implementing co-routines using python generators, as shown here. But I guess you'll be better off with Stackless, as it's all there already.
Tick Python instances from Python
I am interested in doing a programming game using python, and I would like to do it in the style of GunTactyx (http://apocalyx.sourceforge.net/guntactyx/index.php). Only much simpler, as I am primarily interested in the parallel execution of python scripts from python. Gun Tactyx challenges the player to write a program that controls individual units working together in teams, where each instruction carries a time penalty. Each program is executed in its own protected environment, communicating with the game world through functions that can interact with the game world. I was wondering if there is a Python way of achieving a similar effect. Pseudo code structure of the game engine would be something like: Instantiate units with individual programs while 1 Update game world for unit in units: unit.tick() The simulation would run until a timeout or some goal condition. Kind regards /Tax
[ "Maybe you should look into fork of python: stackless, it allows concurrently running thousands of micro-threads without much performance penalty - every \"thread\" (these aren't real OS threads) could be one Unit.\nAlso it's very easy to implement Actor model with stackless:\n\nIn the actor model, everything is an actor (duh!). Actors are objects (in the generic sense, not necessarily the OO sense) that can:\n Receive messages from other actors.\n Process the received messages as they see fit.\n Send messages to other actors.\n Create new Actors.\nActors do not have any direct access to other actors. All communication is accomplished via message passing. This provides a rich model to simulate real-world objects that are loosely-coupled and have limited knowledge of each others internals. \nfrom Introduction to concurrent programming with stackless\n\nAlternatively, you could also simulate this behavior by implementing co-routines - using python generators, like shown here. But I'll guess you'll be better off with stackless, as it's all there already.\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0001333016_python.txt
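A minimal sketch of the generator-based route mentioned at the end of that answer: each unit program is a coroutine that yields after every "instruction", so the engine can interleave units and charge time penalties without OS threads (Python 3 syntax; all names are illustrative).

def unit_program(name, world):
    while True:
        world[name] = world.get(name, 0) + 1  # one "instruction" per tick
        yield                                 # hand control back to the engine

def run(num_units, max_ticks):
    world = {}
    units = [unit_program('unit%d' % i, world) for i in range(num_units)]
    for tick in range(max_ticks):
        # update the game world here, then step every unit once
        for unit in units:
            next(unit)
    print(world)

run(num_units=3, max_ticks=5)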
Q: How can I access the iphone / ipod clipboard using python? I want to modify a python application written for the ipod/iphone. It should copy a string into the clipboard so that I can use it in another application. Is it possible to access the iphone clipboard using python? Thanks in advance. UPDATE: Thanks for replying. A bit of background: The python program is a vocabulary program running locally on my ipod. Often I want to look up the vocabulary in a dictionary. Then I always have to repeat the following steps: Select and copy the word. Close the vocabulary program. Open the dictionary. Paste the word into the text field. Press search. I want to automate the process, therefore I want the python program to copy the word into the clipboard automatically and start the dictionary. I figured out the part with the starting already, using URL schemes. I was hoping to be able to automate the copying as well. A: Sorry, no. I'm assuming since you mention python that this is a web-based application? If so, there is no way you can put something into/take something out of the user's clipboard automatically. However, if it is web-based, the user will be able to select any text/image and copy to paste elsewhere.
How can I access the iphone / ipod clipboard using python?
I want to modify a python application written for the ipod/iphone. It should copy a string into the clipboard so that I can use it in another application. Is it possible to access the iphone clipboard using python? Thanks in advance. UPDATE: Thanks for replying. A bit of background: The python program is a vocabulary program running locally on my ipod. Often I want to look up the vocabulary in a dictionary. Then I always have to repeat the following steps: Select and copy the word. Close the vocabulary program. Open the dictionary. Paste the word into the text field. Press search. I want to automate the process, therefore I want the python program to copy the word into the clipboard automatically and start the dictionary. I figured out the part with the starting already, using URL schemes. I was hoping to be able to automate the copying as well.
[ "Sorry no, I'm assuming since you mention python that this is a web-based application? If so there is no way you can put something into/take something out of the user's clipboard automatically. However if it is webbased the user will be able to select any text/image and copy to paste elsewhere.\n" ]
[ 0 ]
[]
[]
[ "clipboard", "iphone", "ipod_touch", "python" ]
stackoverflow_0001332846_clipboard_iphone_ipod_touch_python.txt
Q: Delete None values from Python dict Newbie to Python, so this may seem silly. I have two dicts: default = {'a': 'alpha', 'b': 'beta', 'g': 'Gamma'} user = {'a': 'NewAlpha', 'b': None} I need to update my defaults with the values that exist in user. But only for those that have a value not equal to None. So I need to get back a new dict: result = {'a': 'NewAlpha', 'b': 'beta', 'g': 'Gamma'} A: result = default.copy() result.update((k, v) for k, v in user.iteritems() if v is not None) A: With the update() method, and some generator expression: D.update((k, v) for k, v in user.iteritems() if v is not None)
Delete None values from Python dict
Newbie to Python, so this may seem silly. I have two dicts: default = {'a': 'alpha', 'b': 'beta', 'g': 'Gamma'} user = {'a': 'NewAlpha', 'b': None} I need to update my defaults with the values that exist in user. But only for those that have a value not equal to None. So I need to get back a new dict: result = {'a': 'NewAlpha', 'b': 'beta', 'g': 'Gamma'}
[ "result = default.copy()\nresult.update((k, v) for k, v in user.iteritems() if v is not None)\n\n", "With the update() method, and some generator expression:\nD.update((k, v) for k, v in user.iteritems() if v is not None)\n\n" ]
[ 19, 7 ]
[]
[]
[ "python" ]
stackoverflow_0001334020_python.txt
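For reference, the same idea in Python 3 syntax (dict.iteritems() no longer exists there); a dict comprehension keeps it compact, and the one-expression form needs Python 3.5+:

default = {'a': 'alpha', 'b': 'beta', 'g': 'Gamma'}
user = {'a': 'NewAlpha', 'b': None}

result = default.copy()
result.update({k: v for k, v in user.items() if v is not None})

# or, building the merged dict in one expression (Python 3.5+):
result = {**default, **{k: v for k, v in user.items() if v is not None}}
print(result)  # {'a': 'NewAlpha', 'b': 'beta', 'g': 'Gamma'}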
Q: Python design patterns, cross importing I am using Python for automating a complex procedure that has few options. I want to have the following structure in python. - One "flow-class" containing the flow - One helper class that contains a lot of "black boxes" (functions that do not often get changed). 99% of the time, I modify things in the flow-class so I only want code there that is often modified so I do not have to scroll around a lot to find the code I want to modify. This class also contains global variables (configuration settings) that often get changed. The helper class contains global variables that do not often get changed. In the flow-class I have a global variable that I want the user to be forced to input at every run. The line looks like this. print ("Would you like to see debug output (enter = no)? ") debug = getUserInput() The getUserInput() function should be located in the helper class as it is never modified. The getUserInput needs a global variable from the flow class, which indicates whether the user input should be consistent with Linux command line or Eclipse (running on Windows). My question is: How can I structure this in the best way? Currently it looks like the following: The flow-class: import helper_class isLinux = 1 debug = getUserInput() The helper-class: import os, flow_class def getUserInput(): userInput = input () if (flow_class.isLinux == 1): userInput = userInput[:-1] return userInput This currently gives me the following error due to the cross importing: Traceback (most recent call last): File "flow_class.py", line 1, in <module> import helper_class File "helper_class.py", line 1, in <module> import os, flow_class File "flow_class.py", line 5, in <module> debug = getUserInput() NameError: name 'getUserInput' is not defined I know that I could obviously solve this by always passing isLinux as a parameter to getUserInput, but this complicates the usage of this method and makes it less intuitive. A: You need to do helper_class.getUserInput() in your flow_class. It's not about cross-importing. Once it's fixed you'll get an AttributeError that is indeed related to cross-importing. At this stage you'll need to implement the logic of getting getUserInput defined before importing flow_class. And to comment on your last statement: your assumption is not correct. Code would be much clearer if you use explicit local values. A: I know that I could obviously solve this by always passing isLinux as a parameter to getUserInput, but this complicates the usage of this method and makes it less intuitive. Actually using global variables complicates the usage of this program waaaay more than a simple parameter. Try something like: debug = getUserInput(isLinux=True) Here are some other suggestions: You mention there are lots of parameters that you'll change often. Should these be hard coded? Try using a configuration file, or passing a dict() from 'flow' as a parameter. That way you have a central place to change common variables without having to dive in! Your 'flow/helper' class sounds like a Controller/Model paradigm. This is good. But your model shouldn't have to import your controller. These aren't suggestions specific to 'pythonic style'; these are general programming practices. If you're concerned about program design try reading The Pragmatic Programmer, they have great tips for workflow and design. There's also Code Complete which Roberto suggested. A: I would like to question you on your last sentence. Usually, as also outlined in CC2, usage of global variables helps in writing code, but not in reading it. And code is read many more times than it is written; in your case, I understand you modify the same script over and over again. The problem you are facing now is just a consequence of the generic design decision to use global variables. Explicit parameter passing would make it much clearer, and easier to maintain. As said in the Zen of Python, explicit is better than implicit.
Python design patterns, cross importing
I am using Python for automating a complex procedure that has few options. I want to have the following structure in python. - One "flow-class" containing the flow - One helper class that contains a lot of "black boxes" (functions that do not often get changed). 99% of the time, I modify things in the flow-class so I only want code there that is often modified so I do not have to scroll around a lot to find the code I want to modify. This class also contains global variables (configuration settings) that often get changed. The helper class contains global variables that do not often get changed. In the flow-class I have a global variable that I want the user to be forced to input at every run. The line looks like this. print ("Would you like to see debug output (enter = no)? ") debug = getUserInput() The getUserInput() function should be located in the helper class as it is never modified. The getUserInput needs a global variable from the flow class, which indicates whether the user input should be consistent with Linux command line or Eclipse (running on Windows). My question is: How can I structure this in the best way? Currently it looks like the following: The flow-class: import helper_class isLinux = 1 debug = getUserInput() The helper-class: import os, flow_class def getUserInput(): userInput = input () if (flow_class.isLinux == 1): userInput = userInput[:-1] return userInput This currently gives me the following error due to the cross importing: Traceback (most recent call last): File "flow_class.py", line 1, in <module> import helper_class File "helper_class.py", line 1, in <module> import os, flow_class File "flow_class.py", line 5, in <module> debug = getUserInput() NameError: name 'getUserInput' is not defined I know that I could obviously solve this by always passing isLinux as a parameter to getUserInput, but this complicates the usage of this method and makes it less intuitive.
[ "you need to do helper_class.getUserIinput() in your flow_class. It's not about cross-importing. Once it's fixed you'll get AttributeError that is indeed related to cross-importing.\nAt this stage you'll need to implement logic of getting getUserInput defined before importing flow_class.\nAnd to comment on your last statement: your assumption is not correct. Code would be much clearer if you use explicit local values.\n", "\nI know that I could obviously solve this by always passing isLinux as a parameter to getUserInput, but this complicates the usage of this method and makes it less intuitive.\n\nActually using global variables complicates the usage of this program waaaay more than a simple parameter.\ntry something like: \ndebug = getUserInput(isLinux=True)\n\nHere's some other suggestions\n\nYou mention there are lots of parameters that you'll change often. Should these be hard coded? Try using a configuration file, or passing a dict() from 'flow' as a parameter. That way you have a central place to change common variables without having to dive in!\nyour 'flow/helper' class sounds like a Controller/Model paradigm. This is good. But your model shouldn't have to import your controller. \n\nThese aren't suggestions specific to 'pythonic style' these are general programming practices. If you're concerned about program design try reading The Pragmatic Programmer, they have great tips for workflow and design. There's also Code Complete which Roberto suggested.\n", "I would like to question you on your last sentence. \nUsually, as also outlined in CC2, usage of global variables helps in writing code, but not in reading it.\nAnd code is read many more times than it is written; in your case, I understand you modify the same script over and over again.\nThe problem you are facing now is just a consequence of the generic design decision to use global variables.\nExplicit parameter passing would make it much clearer, and easier to maintain.\nAs said in the the Zen of Python, explicit is better than implicit.\n" ]
[ 2, 1, 1 ]
[]
[]
[ "design_patterns", "python", "python_3.x" ]
stackoverflow_0001334134_design_patterns_python_python_3.x.txt
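A minimal sketch of the explicit-parameter structure the answers recommend: the flow module owns the configuration and passes it down, so no cross-import is needed. Module and function names mirror the question and are otherwise illustrative; input() is Python 3's reader (use raw_input() on Python 2).

# helper_class.py -- note: no import of flow_class
def get_user_input(is_linux):
    user_input = input()
    if is_linux:
        user_input = user_input[:-1]
    return user_input

# flow_class.py
# from helper_class import get_user_input
IS_LINUX = True  # configuration lives with the flow

print("Would you like to see debug output (enter = no)?")
debug = get_user_input(IS_LINUX)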
Q: AttributeError: 'NoneType' object has no attribute 'GetDataStore' Hi guys, I'm developing a utility in Python and I have 2 objects: the main class and a database helper to get SQL Server data. database.py import _mssql class sqlserver(object): global _host, _userid, _pwd, _db def __new__ (self, host, userid, pwd, database): _host = host _userid = userid _pwd = pwd _db = database def GetDataStore(self, sql): conn = _mssql.connect(server='(local)\\sqlexpress', user='sa', password='xxx', database='Framework.Data2') conn.execute_non_query('CREATE TABLE persons(id INT, name VARCHAR(100))') conn.execute_non_query("INSERT INTO persons VALUES(1, 'John Doe')") conn.execute_non_query("INSERT INTO persons VALUES(2, 'Jane Doe')") gaemodel.py import os import sys from fwk import system, types, databases class helper(object): pass def usage(app_name): return "Usage: %s <project name>" % (app_name) def main(argv): _io = system.io() project_name = argv[1] project_home = os.path.join(_io.CurrentDir(), project_name) _db = databases.sqlserver('(local)\sqlexpress', 'sa', 'xxx', 'Framework.Data2') _db.GetDataStore("select name from sysobjects where xtype = 'U' and name not like 'Meta%'") str = "from google.appengine.ext import db" #for row in cur: # str += "class %s" % row["name"] print cur if __name__ == "__main__": if len(sys.argv) > 1: main(sys.argv[1:]) else: print usage(sys.argv[0]); My problem is that when I try to run the code it returns this error: Traceback (most recent call last): File "C:\Projectos\FrameworkGAE\src\gaemodel.py", line 28, in <module> main(sys.argv[1:]) File "C:\Projectos\FrameworkGAE\src\gaemodel.py", line 18, in main _db.GetDataStore("select name from sysobjects where xtype = 'U' and name not like 'Meta%'") AttributeError: 'NoneType' object has no attribute 'GetDataStore' What is wrong? A: First of all: The __new__ method should be named __init__. Remove the global _host etc. line Then change the __init__ method: self.host = host self.userid = userid etc. And change GetDataStore: conn = _mssql.connect(server=self.host, user=self.userid, etc.) That should do the trick. I suggest you read a bit on object-oriented Python. A: Have a look at what __new__ is supposed to be doing and what arguments it's supposed to have. I think you wanted to use __init__ and not __new__. The reason for your particular error is that _db is None. A: I think you want to change databases.py like this: import _mssql class sqlserver(object): def __init__ (self, host, userid, pwd, database): self.host = host self.userid = userid self.pwd = pwd self.db = database def GetDataStore(self, sql): conn = _mssql.connect(server=self.host, user=self.userid, password=self.pwd, database=self.db) conn.execute_non_query('CREATE TABLE persons(id INT, name VARCHAR(100))') conn.execute_non_query("INSERT INTO persons VALUES(1, 'John Doe')") conn.execute_non_query("INSERT INTO persons VALUES(2, 'Jane Doe')")
AttributeError: 'NoneType' object has no attribute 'GetDataStore'
Hi guys, I'm developing a utility in Python and I have 2 objects: the main class and a database helper to get SQL Server data. database.py import _mssql class sqlserver(object): global _host, _userid, _pwd, _db def __new__ (self, host, userid, pwd, database): _host = host _userid = userid _pwd = pwd _db = database def GetDataStore(self, sql): conn = _mssql.connect(server='(local)\\sqlexpress', user='sa', password='xxx', database='Framework.Data2') conn.execute_non_query('CREATE TABLE persons(id INT, name VARCHAR(100))') conn.execute_non_query("INSERT INTO persons VALUES(1, 'John Doe')") conn.execute_non_query("INSERT INTO persons VALUES(2, 'Jane Doe')") gaemodel.py import os import sys from fwk import system, types, databases class helper(object): pass def usage(app_name): return "Usage: %s <project name>" % (app_name) def main(argv): _io = system.io() project_name = argv[1] project_home = os.path.join(_io.CurrentDir(), project_name) _db = databases.sqlserver('(local)\sqlexpress', 'sa', 'xxx', 'Framework.Data2') _db.GetDataStore("select name from sysobjects where xtype = 'U' and name not like 'Meta%'") str = "from google.appengine.ext import db" #for row in cur: # str += "class %s" % row["name"] print cur if __name__ == "__main__": if len(sys.argv) > 1: main(sys.argv[1:]) else: print usage(sys.argv[0]); My problem is that when I try to run the code it returns this error: Traceback (most recent call last): File "C:\Projectos\FrameworkGAE\src\gaemodel.py", line 28, in <module> main(sys.argv[1:]) File "C:\Projectos\FrameworkGAE\src\gaemodel.py", line 18, in main _db.GetDataStore("select name from sysobjects where xtype = 'U' and name not like 'Meta%'") AttributeError: 'NoneType' object has no attribute 'GetDataStore' What is wrong?
[ "First of all:\n\nThe __new__ method should be named __init__.\nRemove the global _host etc. line\n\nThen change the __init__ method:\nself.host = host\nself.userid = userid\netc.\n\nAnd change GetDataStore:\nconn = _mssql.connect(server=self.host, user=self.userid, etc.)\n\nThat should do the trick.\nI suggest you read a bit on object-oriented Python.\n", "Have a look at what __new__ is supposed to be doing and what arguments it's supposed to have. I think you wanted to use __init__ and not __new__. The reason for your particular error is that _db is None.\n", "I think you want to change databases.py like this:\nimport _mssql\n\nclass sqlserver(object):\n\n def __init__ (self, host, userid, pwd, database):\n self.host = host\n self.userid = userid\n self.pwd = pwd\n self.db = database\n\n def GetDataStore(self, sql):\n conn = _mssql.connect(server=self.host, user=self.userid, password=self.pwd, database=self.db)\n conn.execute_non_query('CREATE TABLE persons(id INT, name VARCHAR(100))')\n conn.execute_non_query(\"INSERT INTO persons VALUES(1, 'John Doe')\")\n conn.execute_non_query(\"INSERT INTO persons VALUES(2, 'Jane Doe')\")\n\n" ]
[ 3, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001334607_python.txt
Q: Aggregate photos from various services into one Stream Hello all, I'm looking to aggregate photos from various streams into one stream, in a similar manner to FriendFeed. I'd like to be able to watch flickr and picasa and other sites with RSS feeds of my choosing and then create a timeline of top photos. For example, assume that X's below are photos: Event Name -- March 15th X X X X X X X X X more-> Event Name 2 -- March 12th X X X X X X X X X more-> Event Name 3 -- February 15th X X X X X X X X X more-> etc. It would be nice to also be able to filter based on rankings, etc... So, I've been searching for APIs/code libraries for PHP/JavaScript (but could also be Python) that would do such an aggregation, but I have yet to find anything. (My search terms probably weren't the best, as it's hard to find anything specific when "picasa" and "flickr" are in the search request.) Any suggestions on projects that do such a thing? If you've used FriendFeed, you'll know about what I'm looking for. Thanks. A: I suggest using YQL. The Yahoo! Query Language is an expressive SQL-like language that lets you query, filter, and join data across Web services. With it you can do things like the following: select * from query.multi where queries="select enclosure from rss where url='http://picasaweb.google.com/data/feed/base/all?alt=rss&kind=photo&access=public&filter=1&q=Paris&hl=de' LIMIT 5;select * from flickr.photos.search where text='Paris' LIMIT 5" With this query you will get the first 5 images from the Picasa RSS feed and the Flickr search matching "Paris". (For Flickr you will have to create the link to the image yourself.) The output format can be either XML, JSON or JSONP-X A: Have you checked out Gregarius? It's a PHP tool which you install on your own server which allows you to combine/group RSS feeds. A group of RSS feeds has its own RSS feed in Gregarius. You don't need to look at the frontend; you can just use Gregarius as a backend and use the group RSS feed to visualize your project. Not sure how to do the ranking with Gregarius.
Aggregate photos from various services into one Stream
Hello all, I'm looking to aggregate photos from various streams into one stream, in a similar manner to FriendFeed. I'd like to be able to watch flickr and picasa and other sites with RSS feeds of my choosing and then create a timeline of top photos. For example, assume that X's below are photos: Event Name -- March 15th X X X X X X X X X more-> Event Name 2 -- March 12th X X X X X X X X X more-> Event Name 3 -- February 15th X X X X X X X X X more-> etc. It would be nice to also be able to filter based on rankings, etc... So, I've been searching for APIs/code libraries for PHP/JavaScript (but could also be Python) that would do such an aggregation, but I have yet to find anything. (My search terms probably weren't the best, as it's hard to find anything specific when "picasa" and "flickr" are in the search request.) Any suggestions on projects that do such a thing? If you've used FriendFeed, you'll know about what I'm looking for. Thanks.
[ "I suggest using YQL.\n\nThe Yahoo! Query Language is an expressive SQL-like language that lets you query, filter, and join data across Web services.\n\nWith it you can do things like the following:\nselect * from query.multi where queries=\"select enclosure from rss where url='http://picasaweb.google.com/data/feed/base/all?alt=rss&kind=photo&access=public&filter=1&q=Paris&hl=de' LIMIT 5;select * from flickr.photos.search where text='Paris' LIMIT 5\"\n\nWith this query you will get the first 5 images from Picasa RSS-Feed and Flickr-Search matching \"Paris\". (For Flickr you will have to create the link to the image by yourself)\nThe output format can be either XML, JSON or JSONP-X\n", "Have you checked out Gregarius. It's a PHP tool which you install on your own server which allows you to combine/group RSS feeds.\nA group of RSS feeds has its own RSS feed in gregarius. You don't need to look at the frontend you can just use gregarius as backend and use the group RSS feed to visualize your project. \nNot sure how to do the ranking with Gregarius.\n" ]
[ 2, 1 ]
[]
[]
[ "javascript", "php", "python" ]
stackoverflow_0001334477_javascript_php_python.txt
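A sketch of one server-side approach using the feedparser library (pip install feedparser): parse each feed, merge the entries, and sort them newest-first to build the timeline. The feed URLs are placeholders, and whether a publish date is present depends on the feed.

import time
import feedparser

FEEDS = [
    'http://example.com/flickr-feed.rss',  # placeholder URLs
    'http://example.com/picasa-feed.rss',
]

entries = []
for url in FEEDS:
    entries.extend(feedparser.parse(url).entries)

# published_parsed is a time.struct_time when the feed supplies a date
epoch = time.gmtime(0)
entries.sort(key=lambda e: e.get('published_parsed') or epoch, reverse=True)

for e in entries[:20]:
    print(e.get('published', '?'), e.get('title', ''), e.get('link', ''))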
Q: How can I mass-assign SA ORM object attributes? I have an ORM-mapped object that I want to update. I have all attributes validated and secured in a dictionary (keyword arguments). Now I would like to update all object attributes as in the dictionary. for k,v in kw.items(): setattr(myobject, k, v) doesn't work (an AttributeError exception is thrown from SQLAlchemy). myobject.attr1 = kw['attr1'] myobject.attr2 = kw['attr2'] myobject.attr3 = kw['attr3'] etc. is horrible copy-paste code, and I want to avoid that. How can I achieve this? SQLAlchemy already does something similar to what I want to do in their constructors ( myobject = MyClass(**kw) ), but I can't find that in all the meta-programming obfuscated crap in there. error from SA: << if self.trackparent: if value is not None: self.sethasparent(instance_state(value), True) if previous is not value and previous is not None: self.sethasparent(instance_state(previous), False) >> self.sethasparent(instance_state(value), True) AttributeError: 'unicode' object has no attribute '_sa_instance_state'
How can I mass-assign SA ORM object attributes?
I have an ORM-mapped object that I want to update. I have all attributes validated and secured in a dictionary (keyword arguments). Now I would like to update all object attributes as in the dictionary. for k,v in kw.items(): setattr(myobject, k, v) doesn't work (an AttributeError exception is thrown from SQLAlchemy). myobject.attr1 = kw['attr1'] myobject.attr2 = kw['attr2'] myobject.attr3 = kw['attr3'] etc. is horrible copy-paste code, and I want to avoid that. How can I achieve this? SQLAlchemy already does something similar to what I want to do in their constructors ( myobject = MyClass(**kw) ), but I can't find that in all the meta-programming obfuscated crap in there. error from SA: << if self.trackparent: if value is not None: self.sethasparent(instance_state(value), True) if previous is not value and previous is not None: self.sethasparent(instance_state(previous), False) >> self.sethasparent(instance_state(value), True) AttributeError: 'unicode' object has no attribute '_sa_instance_state'
[ "myobject.__dict__.update(**kw)\n\n", "You are trying to assign a unicode string to a relation attribute. Say you have:\n class ClassA(Base):\n ...\n b_id = Column(None, ForeignKey('b.id'))\n b = relation(ClassB)\n\nAnd you are trying to do:\n my_object = ClassA()\n my_object.b = \"foo\"\n\nWhen you should be doing either:\n my_object.b_id = \"foo\"\n # or\n my_object.b = session.query(ClassB).get(\"foo\")\n\n" ]
[ 4, 3 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0001334171_python_sqlalchemy.txt
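A hedged sketch of a mass-assign that only touches mapped column attributes, which sidesteps the problem the second answer describes (assigning a plain string to a relation attribute is what raises the _sa_instance_state error). object_mapper and mapper.columns are real SQLAlchemy APIs of that era; the helper itself is illustrative.

from sqlalchemy.orm import object_mapper

def update_columns(obj, kw):
    # keys of plain mapped columns, e.g. 'attr1', 'b_id'
    column_keys = set(c.key for c in object_mapper(obj).columns)
    for k, v in kw.items():
        if k in column_keys:
            setattr(obj, k, v)
        # relation attributes need a mapped instance, not a string;
        # resolve those separately, e.g. session.query(ClassB).get(v)

# update_columns(myobject, kw)  # with the question's myobject and kw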
Q: How to change baseclass I have a class which is derived from a base class, with many, many lines of code, e.g. class AutoComplete(TextCtrl): ..... What I want to do is change the baseclass so that it works like class AutoComplete(PriceCtrl): ..... I have uses for both types of AutoComplete and maybe would like to add more base classes, so how can I do it dynamically? Composition would have been a solution, but I do not want to modify code a lot. Any simple solutions? A: You could have a factory for your classes: def completefactory(baseclass): class AutoComplete(baseclass): pass return AutoComplete And then use: TextAutoComplete = completefactory(TextCtrl) PriceAutoComplete = completefactory(PriceCtrl) On the other hand depending on what you want to achieve and how your classes look, maybe AutoComplete is meant to be a mixin, so that you would define TextAutoComplete with: class TextAutocomplete(TextCtrl, AutoComplete): pass A: You could use multiple inheritance for this: class AutoCompleteBase(object): # code for your class # remember to call base implementation with super: # super(AutoCompleteBase, self).some_method() class TextAutoComplete(AutoCompleteBase, TextCtrl): pass class PriceAutoComplete(AutoCompleteBase, PriceCtrl): pass Also, there's the option of a metaclass: class BasesToSeparateClassesMeta(type): """Metaclass to create a separate childclass for each base. NB: doesn't create a class but a list of classes.""" def __new__(self, name, bases, dct): classes = [] for base in bases: cls = type.__new__(self, name, (base,), dct) # Need to init explicitly because not returning a class type.__init__(cls, name, (base,), dct) classes.append(cls) return classes class autocompletes(TextCtrl, PriceCtrl): __metaclass__ = BasesToSeparateClassesMeta # Rest of the code TextAutoComplete, PriceAutoComplete = autocompletes But I'd still suggest the class factory approach already suggested, one level of indentation really isn't that big of a deal. A: You could modify the __bases__ tuple. For example you could add another baseclass: AutoComplete.__bases__ += (PriceCtrl,) But in general I would try to avoid such hacks, it quickly creates a terrible mess.
How to change baseclass
I have a class which is derived from a base class, with many, many lines of code, e.g. class AutoComplete(TextCtrl): ..... What I want to do is change the baseclass so that it works like class AutoComplete(PriceCtrl): ..... I have uses for both types of AutoComplete and maybe would like to add more base classes, so how can I do it dynamically? Composition would have been a solution, but I do not want to modify code a lot. Any simple solutions?
[ "You could have a factory for your classes:\ndef completefactory(baseclass):\n class AutoComplete(baseclass):\n pass\n return AutoComplete\n\nAnd then use:\nTextAutoComplete = completefactory(TextCtrl)\nPriceAutoComplete = completefactory(PriceCtrl)\n\nOn the other hand depending on what you want to achieve and how your classes look, maybe AutoComplete is meant to be a mixin, so that you would define TextAutoComplete with:\nclass TextAutocomplete(TextCtrl, AutoComplete):\n pass\n\n", "You could use multiple inheritance for this:\nclass AutoCompleteBase(object):\n # code for your class\n # remember to call base implementation with super:\n # super(AutoCompleteBase, self).some_method()\n\nclass TextAutoComplete(AutoCompleteBase, TextCtrl):\n pass\n\nclass PriceAutoComplete(AutoCompleteBase, PriceCtrl):\n pass\n\nAlso, there's the option of a metaclass:\nclass BasesToSeparateClassesMeta(type):\n \"\"\"Metaclass to create a separate childclass for each base.\n NB: doesn't create a class but a list of classes.\"\"\"\n def __new__(self, name, bases, dct):\n classes = []\n for base in bases:\n cls = type.__new__(self, name, (base,), dct)\n # Need to init explicitly because not returning a class\n type.__init__(cls, name, (base,), dct)\n classes.append(cls)\n return classes\n\nclass autocompletes(TextCtrl, PriceCtrl):\n __metaclass__ = BasesToSeparateClassesMeta\n # Rest of the code\n\nTextAutoComplete, PriceAutoComplete = autocompletes\n\nBut I'd still suggest the class factory approach already suggested, one level of indentation really isn't that big of a deal.\n", "You could modify the __bases__ tuple. For example you could add another baseclass:\nAutoComplete.__bases__ += (PriceCtrl,)\n\nBut in general I would try to avoid such hacks, it quickly creates a terrible mess.\n" ]
[ 7, 2, 1 ]
[]
[]
[ "class", "dynamic", "python" ]
stackoverflow_0001334222_class_dynamic_python.txt
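The factory answer can also be collapsed to the three-argument type() call, which builds a class dynamically; a sketch with stand-in base classes, since the real TextCtrl/PriceCtrl are not shown:

class TextCtrl(object): pass   # stand-ins for the real controls
class PriceCtrl(object): pass

class AutoCompleteBase(object):
    pass                       # shared autocomplete behaviour goes here

def make_autocomplete(base):
    # type(name, bases, namespace) creates the class at runtime
    return type('AutoComplete', (AutoCompleteBase, base), {})

TextAutoComplete = make_autocomplete(TextCtrl)
PriceAutoComplete = make_autocomplete(PriceCtrl)
print(TextAutoComplete.__mro__)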
Q: Python urllib.urlopen() call doesn't work with a URL that a browser accepts If I point Firefox at http://bitbucket.org/tortoisehg/stable/wiki/Home/ReleaseNotes, I get a page of HTML. But if I try this in Python: import urllib site = 'http://bitbucket.org/tortoisehg/stable/wiki/Home/ReleaseNotes' req = urllib.urlopen(site) text = req.read() I get the following: 500 Internal Server Error The server encountered an internal error or misconfiguration and was unable to complete your request. What am I doing wrong? A: You're doing nothing wrong, on the surface, and as the error page says you should contact the site's administrators because they're the ones with the server logs which may explain what's happening. Fortunately, bitbucket's site admins are a friendly bunch! No doubt there is some header or combination of headers that browsers set one way, urllib sets another way, and a bug on the server gets tickled in the latter case. You may want to see exactly what headers are being sent, e.g. with Firebug in Firefox, and reproduce those until you isolate exactly the server bug; most likely it's going to be the user agent or some "accept"-ish header that's tickling that bug. A: You are not doing anything wrong; bitbucket does some user agent detection (to detect mercurial clients, for example). Just changing the user agent fixes it (if it doesn't have urllib as a substring). You should file an issue regarding this: http://bitbucket.org/jespern/bitbucket/issues/new/
Python urllib.urlopen() call doesn't work with a URL that a browser accepts
If I point Firefox at http://bitbucket.org/tortoisehg/stable/wiki/Home/ReleaseNotes, I get a page of HTML. But if I try this in Python: import urllib site = 'http://bitbucket.org/tortoisehg/stable/wiki/Home/ReleaseNotes' req = urllib.urlopen(site) text = req.read() I get the following: 500 Internal Server Error The server encountered an internal error or misconfiguration and was unable to complete your request. What am I doing wrong?
[ "You're doing nothing wrong, on the surface, and as the error page says you should contact the site's administrators because they're the ones with the server logs which may explain what's happening. Fortunately, bitbucket's site admins are a friendly bunch!\nNo doubt there is some header or combination of headers that browsers set one way, urllib sets another way, and a bug on the server gets tickled in the latter case. You may want to see exactly what headers are being sent e.g. with firebug in firefox, and reproduce those until you isolate exactly the server bug; most likely it's going to be the user agent or some \"accept\"-ish header that's tickling that bug.\n", "You are not doing anything wrong, bitbucket does some user agent detection (to detect mercurial clients for example). Just changing the user agent fixes it (if it doesn't have urllib as a substring).\nYou should fill an issue regarding this: http://bitbucket.org/jespern/bitbucket/issues/new/\n" ]
[ 3, 3 ]
[ "I don't think you're doing anything wrong -- it looks like this server was just down? Your script worked fine for me ('text' contained the same data as that displayed in the browser).\n" ]
[ -2 ]
[ "bitbucket", "python", "urllib" ]
stackoverflow_0001335439_bitbucket_python_urllib.txt
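A minimal sketch of the user-agent workaround from the second answer, using urllib2 (Python 2, like the question); any User-Agent string without "urllib" in it should do:

import urllib2

site = 'http://bitbucket.org/tortoisehg/stable/wiki/Home/ReleaseNotes'
req = urllib2.Request(site, headers={'User-Agent': 'Mozilla/5.0'})
text = urllib2.urlopen(req).read()
print(text[:200])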
Q: Is it a good idea to hash a Python class? For example, suppose I do this: >>> class foo(object): ... pass ... >>> class bar(foo): ... pass ... >>> some_dict = { foo : 'foo', ... bar : 'bar'} >>> >>> some_dict[bar] 'bar' >>> some_dict[foo] 'foo' >>> hash(bar) 165007700 >>> id(bar) 165007700 Based on that, it looks like the class is getting hashed as its id number. Therefore, there shouldn't be any danger of worrying about, say, a bar hashing as either a foo or a bar or hash values changing if I mutate the class. Is this behavior reliable, or are there any gotchas here? A: Yes, any object that doesn't implement a __hash__() function will return its id when hashed. From Python Language Reference: Data Model - Basic Customization: User-defined classes have __cmp__() and __hash__() methods by default; with them, all objects compare unequal (except with themselves) and x.__hash__() returns id(x). However, if you're looking to have a unique identifier, use id to be clear about your intent. A hash of an object should be a combination of the hashes of its components. See the above link for more details. A: Classes have default implementations of __eq__ and __hash__ that use id() to make comparisons and compute hash values, respectively. That is, they compare by identity. The primary rule for implementing __hash__ methods is that if two objects compare equal to each other, they must also have the same hash value. Hash values can be seen as just an optimization used by dicts and sets to do find equal objects faster. Consequently, if you change __eq__ to do a different kind of equality testing, you must also change your __hash__ implementation to agree with that choice. Classes that use identity for comparisons can be freely mutated and used in dicts and sets because their identity never changes. Classes that implement __eq__ to compare by value and allow mutation of their values cannot be used in hash collections.
Is it a good idea to hash a Python class?
For example, suppose I do this: >>> class foo(object): ... pass ... >>> class bar(foo): ... pass ... >>> some_dict = { foo : 'foo', ... bar : 'bar'} >>> >>> some_dict[bar] 'bar' >>> some_dict[foo] 'foo' >>> hash(bar) 165007700 >>> id(bar) 165007700 Based on that, it looks like the class is getting hashed as its id number. Therefore, there shouldn't be any danger of worrying about, say, a bar hashing as either a foo or a bar or hash values changing if I mutate the class. Is this behavior reliable, or are there any gotchas here?
[ "Yes, any object that doesn't implement a __hash__() function will return its id when hashed. From Python Language Reference: Data Model - Basic Customization:\n\nUser-defined classes have __cmp__() and __hash__() methods by default; with them, all objects compare unequal (except with themselves) and x.__hash__() returns id(x).\n\nHowever, if you're looking to have a unique identifier, use id to be clear about your intent. A hash of an object should be a combination of the hashes of its components. See the above link for more details.\n", "Classes have default implementations of __eq__ and __hash__ that use id() to make comparisons and compute hash values, respectively. That is, they compare by identity. The primary rule for implementing __hash__ methods is that if two objects compare equal to each other, they must also have the same hash value. Hash values can be seen as just an optimization used by dicts and sets to do find equal objects faster. Consequently, if you change __eq__ to do a different kind of equality testing, you must also change your __hash__ implementation to agree with that choice.\nClasses that use identity for comparisons can be freely mutated and used in dicts and sets because their identity never changes. Classes that implement __eq__ to compare by value and allow mutation of their values cannot be used in hash collections.\n" ]
[ 8, 6 ]
[]
[]
[ "class", "dictionary", "hash", "inheritance", "python" ]
stackoverflow_0001335556_class_dictionary_hash_inheritance_python.txt
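A sketch of the contract described in the second answer: if __eq__ compares by value, __hash__ must agree, and instances should then be left unmutated while they sit in a dict or set. The class is illustrative.

class Point(object):
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)

    def __ne__(self, other):   # needed on Python 2
        return not self == other

    def __hash__(self):
        # hash the same components that __eq__ compares
        return hash((self.x, self.y))

d = {Point(1, 2): 'a'}
print(d[Point(1, 2)])  # 'a': equal value gives equal hash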
Q: Python: ZODB file size growing - not updating? I am using ZODB to store some data that exists in memory for the sake of persistence. If the service with the data in memory ever crashes, restarting will load the data from ZODB rather than querying 100s of thousands of rows in a MySQL db. It seems that every time I save, say, 500K of data to my database file, my .fs file grows by 500K, rather than staying at 500K. As an example: storage = FileStorage.FileStorage(MY_PATH) db = DB(storage) connection = db.open() root = connection.root() if not root.has_key('data_db'): root['data_db'] = OOBTree() mydictionary = {'some dictionary with 500K of data'} root['data_db'] = mydictionary root._p_changed = 1 transaction.commit() transaction.abort() connection.close() db.close() storage.close() I want to continuously overwrite the data in root['data_db'] with the current value of mydictionary. When I print len(root['data_db']) it always prints the right number of items from mydictionary, but every time this code runs (with the same exact data) the file size increases by the data size, in this case 500K. Am I doing something wrong here?
Python: ZODB file size growing - not updating?
I am using ZODB to store some data that exists in memory for the sake of persistence. If the service with the data in memory ever crashes, restarting will load the data from ZODB rather than querying 100s of thousands of rows in a MySQL db. It seems that every time I save, say, 500K of data to my database file, my .fs file grows by 500K, rather than staying at 500K. As an example: storage = FileStorage.FileStorage(MY_PATH) db = DB(storage) connection = db.open() root = connection.root() if not root.has_key('data_db'): root['data_db'] = OOBTree() mydictionary = {'some dictionary with 500K of data'} root['data_db'] = mydictionary root._p_changed = 1 transaction.commit() transaction.abort() connection.close() db.close() storage.close() I want to continuously overwrite the data in root['data_db'] with the current value of mydictionary. When I print len(root['data_db']) it always prints the right number of items from mydictionary, but every time this code runs (with the same exact data) the file size increases by the data size, in this case 500K. Am I doing something wrong here?
[ "When the data in ZODB changes, it's appended to the end of the file. Old data is left there. To reduce the filesize, you need to manually \"pack\" the database.\nGoogle came up with this mailing list post.\n", "Since you asked about another storage system in a comment, you might want to look into SQLite.\nEven though SQLite behaves the same in appending to data first, it offers the vacuum command to recover unused storage space. From the Python API, you'll can either use the vacuum pragma to do it automatically, or you can just execute the vacuum command.\n" ]
[ 2, 1 ]
[]
[]
[ "python", "zodb" ]
stackoverflow_0001335615_python_zodb.txt
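A minimal sketch of the pack step the first answer refers to, assuming the same FileStorage setup as the question (the Data.fs path is illustrative; DB.pack() discards the superseded object revisions that make the file grow):

from ZODB.FileStorage import FileStorage
from ZODB.DB import DB

storage = FileStorage('Data.fs')   # illustrative path
db = DB(storage)
db.pack(days=0)    # keep only current revisions; days=0 packs right up to now
db.close()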
Q: Using python scipy.weave inline with ctype variables? I am trying to pass a ctype variable to inline C code using scipy.weave.inline. One would think this would be simple. Documentation is good when doing it with normal python object types, however, they have a lot more features than I need, and it makes more sense to me to use ctypes when working with C. I am unsure, however, where my error is. from scipy.weave import inline from ctypes import * def test(): y = c_float()*50 x = pointer(y) code = """ #line 120 "laplace.py" (This is only useful for debugging) int i; for (i=0; i < 50; i++) { x[i] = 1; } """ inline(code, [x], compiler = 'gcc') return y output = test() pi = pointer(output) print pi[0]
Using python scipy.weave inline with ctype variables?
I am trying to pass a ctype variable to inline C code using scipy.weave.inline. One would think this would be simple. Documentation is good when doing it with normal python object types, however, they have a lot more features than I need, and it makes more sense to me to use ctypes when working with C. I am unsure, however, where my error is. from scipy.weave import inline from ctypes import * def test(): y = c_float()*50 x = pointer(y) code = """ #line 120 "laplace.py" (This is only useful for debugging) int i; for (i=0; i < 50; i++) { x[i] = 1; } """ inline(code, [x], compiler = 'gcc') return y output = test() pi = pointer(output) print pi[0]
[ "scipy.weave does not know anything about ctypes. Inputs are restricted to most of the basic builtin types, numpy arrays, wxPython objects, VTK objects, and SWIG wrapped objects. You can add your own converter code, though. There is currently not much documentation on this, but you can look at the SWIG implementation as an instructive example.\n" ]
[ 4 ]
[]
[]
[ "inline_code", "python", "scipy" ]
stackoverflow_0001137852_inline_code_python_scipy.txt
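Since weave has no converter for ctypes objects, one workable direction is to hand it a numpy array instead. A hedged rework of the question's function (note that inline() takes variable names as strings, not the objects themselves; with the standard converters a contiguous 1-D numpy array should be exposed to the C code as a typed pointer, so treat this as a sketch rather than a guaranteed recipe):

import numpy as np
from scipy.weave import inline

def test():
    x = np.zeros(50, dtype=np.float32)   # replaces the ctypes float array
    code = """
    int i;
    for (i = 0; i < 50; i++) {
        x[i] = 1.0f;
    }
    """
    inline(code, ['x'], compiler='gcc')  # pass the *name* of the variable
    return x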
Q: How to find the compiled extensions modules in numpy I am compiling numpy myself on Windows. The build and install runs fine; but how do I list the currently enabled modules .. and modules that are not made available (due to maybe compilation failure or missing libraries)? A: numpy does not have optional components. Either the build is successful, or it fails. You can run the test suite to see if the build works. $ python -c "import numpy;numpy.test()" Running unit tests for numpy NumPy version 1.4.0.dev NumPy is installed in /Users/rkern/svn/numpy/numpy Python version 2.5.4 (r254:67916, Apr 23 2009, 14:49:51) [GCC 4.0.1 (Apple Inc. build 5465)] nose version 0.11.0 ..................... ... etc.
How to find the compiled extensions modules in numpy
I am compiling numpy myself on Windows. The build and install runs fine; but how do I list the currently enabled modules .. and modules that are not made available (due to maybe compilation failure or missing libraries)?
[ "numpy does not have optional components. Either the build is successful, or it fails. You can run the test suite to see if the build works.\n$ python -c \"import numpy;numpy.test()\"\nRunning unit tests for numpy\nNumPy version 1.4.0.dev\nNumPy is installed in /Users/rkern/svn/numpy/numpy\nPython version 2.5.4 (r254:67916, Apr 23 2009, 14:49:51) [GCC 4.0.1 (Apple Inc. build 5465)]\nnose version 0.11.0\n.....................\n... etc.\n\n" ]
[ 2 ]
[]
[]
[ "numpy", "python", "windows" ]
stackoverflow_0001262783_numpy_python_windows.txt
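If "modules that are not made available" really means optional external libraries (BLAS, LAPACK, ATLAS and friends), numpy can also report what the build detected; a quick sketch:

import numpy
numpy.show_config()   # prints the blas/lapack/atlas sections found at build time
numpy.test()          # and the test suite confirms the build actually works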
Q: How do I set sys.excepthook to invoke pdb globally in python? From Python docs: sys.excepthook(type, value, traceback) This function prints out a given traceback and exception to sys.stderr. When an exception is raised and uncaught, the interpreter calls sys.excepthook with three arguments, the exception class, exception instance, and a traceback object. In an interactive session this happens just before control is returned to the prompt; in a Python program this happens just before the program exits. The handling of such top-level exceptions can be customized by assigning another three-argument function to sys.excepthook. http://docs.python.org/library/sys.html How do I modify this globally so the default action is to always invoke pdb? Is there a configuration file I can change? I don't want to wrap my code to do this. A: Here's what you need http://ynniv.com/blog/2007/11/debugging-python.html Three ways, the first is simple but crude (Thomas Heller) - add the following to site-packages/sitecustomize.py: import pdb, sys, traceback def info(type, value, tb): traceback.print_exception(type, value, tb) pdb.pm() sys.excepthook = info The second is more sophisticated, and checks for interactive mode (weirdly skipping the debugging in interactive mode), from the cookbook: # code snippet, to be included in 'sitecustomize.py' import sys def info(type, value, tb): if hasattr(sys, 'ps1') or not sys.stderr.isatty(): # we are in interactive mode or we don't have a tty-like # device, so we call the default hook sys.__excepthook__(type, value, tb) else: import traceback, pdb # we are NOT in interactive mode, print the exception... traceback.print_exception(type, value, tb) print # ...then start the debugger in post-mortem mode. pdb.pm() sys.excepthook = info And the third (which always start the debugger unless stdin or stderr are redirected) by ynniv # code snippet, to be included in 'sitecustomize.py' import sys def info(type, value, tb): if (#hasattr(sys, "ps1") or not sys.stderr.isatty() or not sys.stdin.isatty()): # stdin or stderr is redirected, just do the normal thing original_hook(type, value, tb) else: # a terminal is attached and stderr is not redirected, debug import traceback, pdb traceback.print_exception(type, value, tb) print pdb.pm() #traceback.print_stack() original_hook = sys.excepthook if sys.excepthook == sys.__excepthook__: # if someone already patched excepthook, let them win sys.excepthook = info A: Another option is to use ipython, which I consider a must-have tool for any python developer anyway. Instead of running your script from the shell, run it from ipython with %run. When an exception occurs, you can type %debug to debug it. (There's also an option to automatically debug any exception that occurs, but I forget what it is.) A: Try: import pdb import sys def excepthook(type, value, traceback): pdb.post_mortem(traceback) excepthook.old = sys.excepthook sys.excepthook = excepthook def raise_exception(): raise_exception() raise_exception()
How do I set sys.excepthook to invoke pdb globally in python?
From Python docs: sys.excepthook(type, value, traceback) This function prints out a given traceback and exception to sys.stderr. When an exception is raised and uncaught, the interpreter calls sys.excepthook with three arguments, the exception class, exception instance, and a traceback object. In an interactive session this happens just before control is returned to the prompt; in a Python program this happens just before the program exits. The handling of such top-level exceptions can be customized by assigning another three-argument function to sys.excepthook. http://docs.python.org/library/sys.html How do I modify this globally so the default action is to always invoke pdb? Is there a configuration file I can change? I don't want to wrap my code to do this.
[ "Here's what you need\nhttp://ynniv.com/blog/2007/11/debugging-python.html\nThree ways, the first is simple but crude (Thomas Heller) - add the following to site-packages/sitecustomize.py:\nimport pdb, sys, traceback\ndef info(type, value, tb):\n traceback.print_exception(type, value, tb)\n pdb.pm()\nsys.excepthook = info\n\nThe second is more sophisticated, and checks for interactive mode (weirdly skipping the debugging in interactive mode), from the cookbook:\n# code snippet, to be included in 'sitecustomize.py'\nimport sys\n\ndef info(type, value, tb):\n if hasattr(sys, 'ps1') or not sys.stderr.isatty():\n # we are in interactive mode or we don't have a tty-like\n # device, so we call the default hook\n sys.__excepthook__(type, value, tb)\n else:\n import traceback, pdb\n # we are NOT in interactive mode, print the exception...\n traceback.print_exception(type, value, tb)\n print\n # ...then start the debugger in post-mortem mode.\n pdb.pm()\n\nsys.excepthook = info\n\nAnd the third (which always start the debugger unless stdin or stderr are redirected) by ynniv\n# code snippet, to be included in 'sitecustomize.py'\nimport sys\n\ndef info(type, value, tb):\n if (#hasattr(sys, \"ps1\") or\n not sys.stderr.isatty() or \n not sys.stdin.isatty()):\n # stdin or stderr is redirected, just do the normal thing\n original_hook(type, value, tb)\n else:\n # a terminal is attached and stderr is not redirected, debug \n import traceback, pdb\n traceback.print_exception(type, value, tb)\n print\n pdb.pm()\n #traceback.print_stack()\n\noriginal_hook = sys.excepthook\nif sys.excepthook == sys.__excepthook__:\n # if someone already patched excepthook, let them win\n sys.excepthook = info\n\n", "Another option is to use ipython, which I consider a must-have tool for any python developer anyway. Instead of running your script from the shell, run it from ipython with %run. When an exception occurs, you can type %debug to debug it. (There's also an option to automatically debug any exception that occurs, but I forget what it is.)\n", "Try:\nimport pdb\nimport sys\n\ndef excepthook(type, value, traceback):\n pdb.post_mortem(traceback)\n\nexcepthook.old = sys.excepthook\nsys.excepthook = excepthook\n\ndef raise_exception():\n raise_exception()\n\nraise_exception()\n\n" ]
[ 21, 1, 0 ]
[]
[]
[ "configuration", "debugging", "pdb", "python" ]
stackoverflow_0001237379_configuration_debugging_pdb_python.txt
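For the IPython route the second answer half-remembers, the magic is %pdb, which toggles automatic post-mortem debugging on any uncaught exception (myscript.py is a placeholder):

In [1]: %pdb on
Automatic pdb calling has been turned ON

In [2]: %run myscript.py
# any uncaught exception now drops straight into the pdb post-mortem prompt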
Q: Example of subclassing string.Template in Python? I haven't been able to find a good example of subclassing string.Template in Python, even though I've seen multiple references to doing so in documentation. Are there any examples of this on the web? I want to change the $ to be a different character and maybe change the regex for identifiers. A: From python docs: Advanced usage: you can derive subclasses of Template to customize the placeholder syntax, delimiter character, or the entire regular expression used to parse template strings. To do this, you can override these class attributes: delimiter – This is the literal string describing a placeholder introducing delimiter. The default value $. Note that this should not be a regular expression, as the implementation will call re.escape() on this string as needed. idpattern – This is the regular expression describing the pattern for non-braced placeholders (the braces will be added automatically as appropriate). The default value is the regular expression [_a-z][_a-z0-9]*. Example: from string import Template class MyTemplate(Template): delimiter = '#' idpattern = r'[a-z][_a-z0-9]*' >>> s = MyTemplate('#who likes $what') >>> s.substitute(who='tim', what='kung pao') 'tim likes $what' In python 3: New in version 3.2. Alternatively, you can provide the entire regular expression pattern by overriding the class attribute pattern. If you do this, the value must be a regular expression object with four named capturing groups. The capturing groups correspond to the rules given above, along with the invalid placeholder rule: escaped – This group matches the escape sequence, e.g. $$, in the default pattern. named – This group matches the unbraced placeholder name; it should not include the delimiter in capturing group. braced – This group matches the brace enclosed placeholder name; it should not include either the delimiter or braces in the capturing group. invalid – This group matches any other delimiter pattern (usually a single delimiter), and it should appear last in the regular expression. Example: from string import Template import re class TemplateClone(Template): delimiter = '$' pattern = r''' \$(?: (?P<escaped>\$) | # Escape sequence of two delimiters (?P<named>[_a-z][_a-z0-9]*) | # delimiter and a Python identifier {(?P<braced>[_a-z][_a-z0-9]*)} | # delimiter and a braced identifier (?P<invalid>) # Other ill-formed delimiter exprs ) ''' class TemplateAlternative(Template): delimiter = '[-' pattern = r''' \[-(?: (?P<escaped>-) | # Expression [-- will become [- (?P<named>[^\[\]\n-]+)-\] | # -, [, ], and \n can't be used in names \b\B(?P<braced>) | # Braced names disabled (?P<invalid>) # ) ''' >>> t = TemplateClone("$hi sir") >>> t.substitute({"hi": "hello"}) 'hello sir' >>> ta = TemplateAlternative("[-hi-] sir") >>> ta.substitute({"hi": "have a nice day"}) 'have a nice day sir' >>> ta = TemplateAlternative("[--[-hi-]-]") >>> ta.substitute({"hi": "have a nice day"}) '[-have a nice day-]' Apparently it is also possible to just omit any of the regex groups escaped, named, braced or invalid to disable it.
Example of subclassing string.Template in Python?
I haven't been able to find a good example of subclassing string.Template in Python, even though I've seen multiple references to doing so in documentation. Are there any examples of this on the web? I want to change the $ to be a different character and maybe change the regex for identifiers.
[ "From python docs:\n\nAdvanced usage: you can derive\n subclasses of Template to customize\n the placeholder syntax, delimiter\n character, or the entire regular\n expression used to parse template\n strings. To do this, you can override\n these class attributes:\n\ndelimiter – This is the literal string describing a placeholder\n introducing delimiter. The default\n value $. Note that this should not be\n a regular expression, as the\n implementation will call re.escape()\n on this string as needed.\nidpattern – This is the regular expression describing the pattern for\n non-braced placeholders (the braces\n will be added automatically as\n appropriate). The default value is the\n regular expression [_a-z][_a-z0-9]*.\n\n\nExample:\nfrom string import Template\n\nclass MyTemplate(Template):\n delimiter = '#'\n idpattern = r'[a-z][_a-z0-9]*'\n\n>>> s = MyTemplate('#who likes $what')\n>>> s.substitute(who='tim', what='kung pao')\n'tim likes $what'\n\n\nIn python 3:\n\nNew in version 3.2.\nAlternatively, you can provide the entire regular expression pattern\n by overriding the class attribute pattern. If you do this, the value\n must be a regular expression object with four named capturing groups.\n The capturing groups correspond to the rules given above, along with\n the invalid placeholder rule:\n\nescaped – This group matches the escape sequence, e.g. $$, in the default pattern.\nnamed – This group matches the unbraced placeholder name; it should not include the delimiter in capturing group.\nbraced – This group matches the brace enclosed placeholder name; it should not include either the delimiter or braces in the capturing\n group.\ninvalid – This group matches any other delimiter pattern (usually a single delimiter), and it should appear last in the regular\n expression.\n\n\nExample:\nfrom string import Template\nimport re\n\nclass TemplateClone(Template):\n delimiter = '$'\n pattern = r'''\n \\$(?:\n (?P<escaped>\\$) | # Escape sequence of two delimiters\n (?P<named>[_a-z][_a-z0-9]*) | # delimiter and a Python identifier\n {(?P<braced>[_a-z][_a-z0-9]*)} | # delimiter and a braced identifier\n (?P<invalid>) # Other ill-formed delimiter exprs\n )\n '''\n\nclass TemplateAlternative(Template):\n delimiter = '[-'\n pattern = r'''\n \\[-(?:\n (?P<escaped>-) | # Expression [-- will become [-\n (?P<named>[^\\[\\]\\n-]+)-\\] | # -, [, ], and \\n can't be used in names\n \\b\\B(?P<braced>) | # Braced names disabled\n (?P<invalid>) #\n )\n '''\n\n>>> t = TemplateClone(\"$hi sir\")\n>>> t.substitute({\"hi\": \"hello\"})\n'hello sir'\n\n>>> ta = TemplateAlternative(\"[-hi-] sir\")\n>>> ta.substitute({\"hi\": \"have a nice day\"})\n'have a nice day sir'\n>>> ta = TemplateAlternative(\"[--[-hi-]-]\")\n>>> ta.substitute({\"hi\": \"have a nice day\"})\n'[-have a nice day-]'\n\nApparently it is also possible to just omit any of the regex groups escaped, named, braced or invalid to disable it.\n" ]
[ 31 ]
[]
[]
[ "python", "stringtemplate" ]
stackoverflow_0001336786_python_stringtemplate.txt
Q: Python: Alternatives to pickling a module I am working on my program, GarlicSim, in which a user creates a simulation, then he is able to manipulate it as he desires, and then he can save it to a file. I recently tried implementing the saving feature. The natural thing that occurred to me is to pickle the Project object, which contains the entire simulation. Problem is, the Project object also includes a module -- that is, the "simulation package", which is a package/module that contains several critical objects, mostly functions, that define the simulation. I need to save them together with the simulation, but it seems that it is impossible to pickle a module, as I witnessed when I tried to pickle the Project object and an exception was raised. What would be a good way to work around that limitation? (I should also note that the simulation package gets imported dynamically in the program.)
Python: Alternatives to pickling a module
I am working on my program, GarlicSim, in which a user creates a simulation, then he is able to manipulate it as he desires, and then he can save it to a file. I recently tried implementing the saving feature. The natural thing that occurred to me is to pickle the Project object, which contains the entire simulation. Problem is, the Project object also includes a module -- that is, the "simulation package", which is a package/module that contains several critical objects, mostly functions, that define the simulation. I need to save them together with the simulation, but it seems that it is impossible to pickle a module, as I witnessed when I tried to pickle the Project object and an exception was raised. What would be a good way to work around that limitation? (I should also note that the simulation package gets imported dynamically in the program.)
[ "If the project somehow has a reference to a module with stuff you need, it sounds like you might want to refactor the use of that module into a class within the module. This is often better anyway, because the use of a module for stuff smells of a big fat global. In my experience, such an application structure will only lead to trouble.\n(Of course the quick way out is to save the module's dict instead of the module itself.)\n", "If you have the original code for the simulation package modules, which I presume are dynamically generated, then I would suggest serializing that and reconstructing the modules when loaded. You would do this in the Project.__getstate__() and Project.__setstate__() methods.\n" ]
[ 2, 1 ]
[]
[]
[ "module", "pickle", "python" ]
stackoverflow_0001336908_module_pickle_python.txt
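A hedged sketch of the __getstate__/__setstate__ approach from the second answer, assuming the Project keeps the importable name of its simulation package (the class layout is hypothetical; importlib.import_module needs Python 2.7+, older versions would use __import__):

import importlib

class Project(object):
    def __init__(self, simpack_name):
        self.simpack_name = simpack_name
        self.simpack = importlib.import_module(simpack_name)

    def __getstate__(self):
        state = self.__dict__.copy()
        del state['simpack']       # modules themselves cannot be pickled
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        # re-import the simulation package instead of unpickling it
        self.simpack = importlib.import_module(self.simpack_name)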
Q: Python dateutil.rrule is incredibly slow I'm using the python dateutil module for a calendaring application which supports repeating events. I really like the ability to parse ical rrules using the rrulestr() function. Also, using rrule.between() to get dates within a given interval is very fast. However, as soon as I try doing any other operations (ie: list slices, before(), after(),...) everything begins to crawl. It seems like dateutil tries to calculate every date even if all I want is to get the last date with rrule.before(datetime.max). Is there any way of avoiding these unnecessary calculations? A: My guess is probably not. The last date before datetime.max means you have to calculate all the recurrences up until datetime.max, and that will reasonably be a LOT of recurrences. It might be possible to add shortcuts for some of the simpler recurrences. If it is every year on the same date, for example, you don't really need to compute the recurrences in between. But if the rule is every third something, you must compute them, and likewise if there is a maximum number of recurrences, etc. But I guess dateutil doesn't have these shortcuts. It would probably be quite complex to implement reliably. May I ask why you need to find the last recurrence before datetime.max? It is, after all, almost eight thousand years into the future... :-)
Python dateutil.rrule is incredibly slow
I'm using the python dateutil module for a calendaring application which supports repeating events. I really like the ability to parse ical rrules using the rrulestr() function. Also, using rrule.between() to get dates within a given interval is very fast. However, as soon as I try doing any other operations (ie: list slices, before(), after(),...) everything begins to crawl. It seems like dateutil tries to calculate every date even if all I want is to get the last date with rrule.before(datetime.max). Is there any way of avoiding these unnecessary calculations?
[ "My guess is probably not. The last date before datetime.max means you have to calculate all the recurrences up until datetime.max, and that will reasonably be a LOT of recurrences. It might be possible to add shortcuts for some of the simpler recurrences. If it is every year on the same date for example, you don't really need to compute the recurrences inbetween, for example. But if you have every third something you must, for example, and also if you have a maximum recurrences, etc. But I guess dateutil doesn't have these shortcuts. It would probably be quite complex to implement reliably.\nMay I ask why you need to find the last recurrence before datetime.max? It is, after all, almost eight thousand years into the future... :-)\n" ]
[ 4 ]
[]
[]
[ "calendar", "icalendar", "python", "python_dateutil" ]
stackoverflow_0001336824_calendar_icalendar_python_python_dateutil.txt
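To make the cost concrete: rrule.before(datetime.max) on an unbounded rule must enumerate occurrences out to the year 9999. If the rule carries a COUNT or UNTIL bound, full expansion is cheap and the last occurrence is trivial to get; a small sketch:

from datetime import datetime
from dateutil.rrule import rrulestr

rule = rrulestr('FREQ=DAILY;COUNT=100', dtstart=datetime(2009, 1, 1))
last = list(rule)[-1]   # fine: only 100 occurrences to expand
print last              # 2009-04-10 00:00:00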
Q: Performance - Python vs. C#/C++/C reading char-by-char So I have these giant XML files (and by giant, I mean like 1.5GB+) and they don't have CRLFs. I'm trying to run a diff-like program to find the differences between these files. Since I've yet to find a diff program that won't explode due to memory exhaustion, I've decided the best bet was to add CRLFs after closing tags. I wrote a python script to read char-by-char and add new-lines after '>'. The problem is I'm running this on a single core PC circa-1995 or something ridiculous, and it's only processing about 20MB/hour when I have both converting at the same time. Any idea if writing this in C#/C/C++ instead will yield any benefits? If not, does anyone know of a diff program that will go byte-by-byte? Thanks. EDIT: Here's the code for my processing function... def read_and_format(inputfile, outputfile): ''' Open input and output files, then read char-by-char and add new lines after ">" ''' infile = codecs.open(inputfile,"r","utf-8") outfile = codecs.open(outputfile,"w","utf-8") char = infile.read(1) while(1): if char == "": break else: outfile.write(char) if(char == ">"): outfile.write("\n") char = infile.read(1) infile.close() outfile.close() EDIT2: Thanks for the awesome responses. Increasing the read size created an unbelievable speed increase. Problem solved. A: Reading and writing a single character at a time is almost always going to be slow, because disks are block-based devices, rather than character-based devices - it will read a lot more than just the one byte you're after, and the surplus parts need to be discarded. Try reading and writing more at a time, say, 8192 bytes (8KB) and then finding and adding newlines in that string before writing it out - you should save a lot in performance because a lot less I/O is required. As LBushkin points out, your I/O library may be doing buffering, but unless there is some form of documentation that shows this does indeed happen (for reading AND writing), it's a fairly easy thing to try before rewriting in a different language. A: Why don't you just use sed? cat giant.xml | sed 's/>/>\x0a\x0d/g' > giant-with-linebreaks.xml A: Rather than reading byte by byte, which incurs a disk access for each byte read, try reading ~20 MB at a time and doing your search + replace on that :) You can probably do this in Notepad.... Billy3 A: For the type of problem you describe, I suspect the algorithm you employ for comparing the data will have a much more significant effect than the I/O model or language. In fact, string allocation and search may be more expensive here than anything else. Some general suggestions before you write this yourself: Try running on a faster machine if you have one available. That will make a huge difference. Look for an existing tool online for doing XML diffs ... don't write one yourself. If you are going to write this in C# (or Java or C/C++), I would do the following: Read a fairly large block into memory all at once (let's say between 200k and 1M) Allocate an empty block that's twice that size (this assumes a worst case of every character is a '>') Copy from the input block to the output block conditionally appending a CRLF after each '>' character. Write the new block out to disk. Repeat until all the data has been processed. Additionally, you could also write such a program to run on multiple threads, so that while one thread performs CRLF insertions in memory, a separate thread reads blocks in from disk. This type of parallelization is complicated ... 
so I would only do so if you really need maximum performance. Here's a really simple C# program to get you started, if you need it. It accepts an input file path and an output path on the command line, and performs the substitution you are looking for ('>' ==> CRLF). This sample leaves much to be improved (parallel processing, streaming, some validation, etc)... but it should be a decent start. using System; using System.IO; namespace ExpandBrackets { class Program { static void Main(string[] args) { if (args.Length == 2) { using( StreamReader input = new StreamReader( args[0] ) ) using( StreamWriter output = new StreamWriter( args[1] ) ) { int readSize = 0; int blockSize = 100000; char[] inBuffer = new char[blockSize]; char[] outBuffer = new char[blockSize*3]; while( ( readSize = input.ReadBlock( inBuffer, 0, blockSize ) ) > 0 ) { int writeSize = TransformBlock( inBuffer, outBuffer, readSize ); output.Write( outBuffer, 0, writeSize ); } } } else { Console.WriteLine( "Usage: repchar {inputfile} {outputfile}" ); } } private static int TransformBlock( char[] inBuffer, char[] outBuffer, int size ) { int j = 0; for( int i = 0; i < size; i++ ) { outBuffer[j++] = inBuffer[i]; if (inBuffer[i] == '>') // append CR LF { outBuffer[j++] = '\r'; outBuffer[j++] = '\n'; } } return j; } } } A: All of the languages mentioned typically, at some point, revert to the C runtime library for byte by byte file access. Writing this in C will probably be the fastest option. However, I doubt it will provide a huge speed boost. Python is fairly speedy, if you're doing things correctly. The main way to really get a big speed improvement would be to introduce threading. If you read the data in from the file in a large block in one thread, and had a separate thread that did your newline processing + diff processing, you could dramatically improve the speed of this algorithm. This would probably be easier to implement in C++, C#, or IronPython than in C or CPython directly, since they provide very easy, high-level synchronization tools for handling the threading issues (especially when using .NET). A: you could try xmldiff - http://msdn.microsoft.com/en-us/library/aa302294.aspx I haven't used it for such huge data but I think it would be reasonably optimized A: I put this as a comment on another answer, but in case you miss it--you might want to look at The Shootout. It's a highly optimized set of code for various problems in many languages. According to those results, Python tends to be about 50x slower than c (but it is faster than the other interpreted languages). In comparison Java is about 2x slower than c. If you went to one of the faster compiled languages, I don't see why you wouldn't see a similar increase. By the way, the figures attained from the shootout are wonderfully un-assailable, you can't really challenge them, instead if you don't believe the numbers are fair because the code to solve a problem in your favorite language isn't optimized properly, then you can submit better code yourself. The act of many people doing this means most of the code on there is pretty damn optimized for every popular language. If you show them a more optimized compiler or interpreter, they may include the results from it as well. Oh: except C#, that's only represented by MONO so if Microsoft's compiler is more optimized, it's not shown. All the tests seem to run on Linux machines. My guess is Microsoft's C# should run at about the same speed as Java, but the shootout lists mono as a bit slower (about 3x as slow as C).. 
A: As others said, if you do it in C it will be pretty much unbeatable, because C buffers I/O, and getc() is inlined (in my memory). Your real performance issue will be in the diff. Maybe there's a pretty good one out there, but for those size files I doubt it. For fun, I'm a do-it-yourselfer. The strategy I would use is to have a rolling window in each file, several megabytes long. The search strategy for mismatches is diagonal search, which is if you are at lines i and j, compare in this sequence: line(i+0) == line(j+0) line(i+0) == line(j+1) line(i+1) == line(j+0) line(i+0) == line(j+2) line(i+1) == line(j+1) line(i+2) == line(j+0) and so on. No doubt there's a better way, but if I'm going to code it myself and manage the rolling windows, that's what I'd try.
Performance - Python vs. C#/C++/C reading char-by-char
So I have these giant XML files (and by giant, I mean like 1.5GB+) and they don't have CRLFs. I'm trying to run a diff-like program to find the differences between these files. Since I've yet to find a diff program that won't explode due to memory exhaustion, I've decided the best bet was to add CRLFs after closing tags. I wrote a python script to read char-by-char and add new-lines after '>'. The problem is I'm running this on a single core PC circa-1995 or something ridiculous, and it's only processing about 20MB/hour when I have both converting at the same time. Any idea if writing this in C#/C/C++ instead will yield any benefits? If not, does anyone know of a diff program that will go byte-by-byte? Thanks. EDIT: Here's the code for my processing function... def read_and_format(inputfile, outputfile): ''' Open input and output files, then read char-by-char and add new lines after ">" ''' infile = codecs.open(inputfile,"r","utf-8") outfile = codecs.open(outputfile,"w","utf-8") char = infile.read(1) while(1): if char == "": break else: outfile.write(char) if(char == ">"): outfile.write("\n") char = infile.read(1) infile.close() outfile.close() EDIT2: Thanks for the awesome responses. Increasing the read size created an unbelievable speed increase. Problem solved.
[ "Reading and writing a single character at a time is almost always going to be slow, because disks are block-based devices, rather than character-based devices - it will read a lot more than just the one byte you're after, and the surplus parts need to be discarded.\nTry reading and writing more at a time, say, 8192 bytes (8KB) and then finding and adding newlines in that string before writing it out - you should save a lot in performance because a lot less I/O is required.\nAs LBushkin points out, your I/O library may be doing buffering, but unless there is some form of documentation that shows this does indeed happen (for reading AND writing), it's a fairly easy thing to try before rewriting in a different language.\n", "Why don't you just use sed?\ncat giant.xml | sed 's/>/>\\x0a\\x0d/g' > giant-with-linebreaks.xml\n", "Rather than reading byte by byte, which incurs a disk access for each byte read, try reading ~20 MB at a time and doing your search + replace on that :)\nYou can probably do this in Notepad....\nBilly3\n", "For the type of problem you describe, I suspect the algorithm you employ for comparing the data will have a much more significant effect than the I/O model or language. In fact, string allocation and search may be more expensive here than anything else.\nSome general suggestions before you write this yourself:\n\nTry running on a faster machine if you have one available. That will make a huge difference.\nLook for an existing tool online for doing XML diffs ... don't write one yourself.\n\nIf are are going to write this in C# (or Java or C/C++), I would do the following:\n\nRead a fairly large block into memory all at once (let's say between 200k and 1M)\nAllocate an empty block that's twice that size (this assumes a worst case of every character is a '>')\nCopy from the input block to the output block conditionally appending a CRLF after each '>' character.\nWrite the new block out to disk.\nRepeat until all the data has been processed.\n\nAdditionally, you could also write such a program to run on multiple threads, so that while once thread is perform CRLF insertions in memory, a separate thread is read blocks in from disk. This type of parallelization is complicated ... so I would only do so if you really need maximum performance.\nHere's a really simple C# program to get you started, if you need it. It accepts an input file path and an output path on the command line, and performs the substitution you are looking for ('>' ==> CRLF). This sample leaves much to be improved (parallel processing, streaming, some validation, etc)... 
but it should be a decent start.\nusing System;\nusing System.IO;\n\nnamespace ExpandBrackets\n{\n class Program\n {\n static void Main(string[] args)\n {\n if (args.Length == 2)\n {\n using( StreamReader input = new StreamReader( args[0] ) )\n using( StreamWriter output = new StreamWriter( args[1] ) )\n {\n int readSize = 0;\n int blockSize = 100000;\n char[] inBuffer = new char[blockSize];\n char[] outBuffer = new char[blockSize*3];\n while( ( readSize = input.ReadBlock( inBuffer, 0, blockSize ) ) > 0 )\n {\n int writeSize = TransformBlock( inBuffer, outBuffer, readSize );\n output.Write( outBuffer, 0, writeSize );\n }\n }\n }\n else\n {\n Console.WriteLine( \"Usage: repchar {inputfile} {outputfile}\" );\n }\n }\n\n private static int TransformBlock( char[] inBuffer, char[] outBuffer, int size )\n {\n int j = 0;\n for( int i = 0; i < size; i++ )\n {\n outBuffer[j++] = inBuffer[i];\n if (inBuffer[i] == '>') // append CR LF\n {\n outBuffer[j++] = '\\r';\n outBuffer[j++] = '\\n';\n }\n }\n return j;\n }\n }\n}\n\n", "All of the languages mentioned typically, at some point, revert to the C runtime library for byte by byte file access. Writing this in C will probably be the fastest option.\nHowever, I doubt it will provide a huge speed boost. Python is fairly speedy, if you're doing things correctly.\nThe main way to really get a big speed improvement would be to introduce threading. If you read the data in from the file in a large block in one thread, and had a separate thread that did your newline processing + diff processing, you could dramatically improve the speed of this algorithm. This would probably be easier to implement in C++, C#, or IronPython than in C or CPython directly, since they provide very easy, high-level synchronization tools for handling the threading issues (especially when using .NET).\n", "you could try xmldiff - http://msdn.microsoft.com/en-us/library/aa302294.aspx \nI haven't used it for such huge data but I think it would be reasonably optimized\n", "I put this as a comment on another answer, but in case you miss it--you might want to look at The Shootout. It's a highly optimized set of code for various problems in many languages.\nAccording to those results, Python tends to be about 50x slower than c (but it is faster than the other interpreted languages). In comparison Java is about 2x slower than c. If you went to one of the faster compiled languages, I don't see why you wouldn't see a similar increase.\nBy the way, the figures attained from the shootout are wonderfully un-assailable, you can't really challenge them, instead if you don't believe the numbers are fair because the code to solve a problem in your favorite language isn't optimized properly, then you can submit better code yourself. The act of many people doing this means most of the code on there is pretty damn optimized for every popular language. If you show them a more optimized compiler or interpreter, they may include the results from it as well.\nOh: except C#, that's only represented by MONO so if Microsoft's compiler is more optimized, it's not shown. All the tests seem to run on Linux machines. 
My guess is Microsoft's C# should run at about the same speed as Java, but the shootout lists mono as a bit slower (about 3x as slow as C)..\n", "As others said, if you do it in C it will be pretty much unbeatable, because C buffers I/O, and getc() is inlined (in my memory).\nYour real performance issue will be in the diff.\nMaybe there's a pretty good one out there, but for those size files I doubt it. For fun, I'm a do-it-yourselfer. The strategy I would use is to have a rolling window in each file, several megabytes long. The search strategy for mismatches is diagonal search, which is if you are at lines i and j, compare in this sequence:\nline(i+0) == line(j+0)\n\nline(i+0) == line(j+1)\nline(i+1) == line(j+0)\n\nline(i+0) == line(j+2)\nline(i+1) == line(j+1)\nline(i+2) == line(j+0)\n\nand so on. No doubt there's a better way, but if I'm going to code it myself and manage the rolling windows, that's what I'd try.\n" ]
[ 11, 3, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "c#", "character", "performance", "python" ]
stackoverflow_0001336259_c#_character_performance_python.txt
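For reference, a hedged sketch of what the block-read fix from EDIT2 looks like applied to the question's function (8192 bytes is an arbitrary but typical buffer size; because '>' is a single character, chunk boundaries need no special handling):

import codecs

def read_and_format(inputfile, outputfile, blocksize=8192):
    infile = codecs.open(inputfile, 'r', 'utf-8')
    outfile = codecs.open(outputfile, 'w', 'utf-8')
    while True:
        chunk = infile.read(blocksize)
        if not chunk:
            break
        outfile.write(chunk.replace('>', '>\n'))  # one pass per block, no per-char I/O
    infile.close()
    outfile.close()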
Q: Email integration I was wondering if someone could help me out. In some web application, the app will send out emails, say when a new message has been posted. Then instead of signing into the application to post a reply you can just simply reply to the email and it will automatically update the web app with your response. My question is, how is this done and what is it called? Thanks A: Generally: 1) Set up a dedicated email account for the purpose. 2) Have a program monitor the mailbox (let's say fetchmail, since that's what I do). 3) When an email arrives at the account, fetchmail downloads the email, writes it to disk, and calls a script or program you have written with the email file as an argument. 4) Your script or program parses the email and takes an appropriate action. The part that's usually mysterious to people is the fetchmail part (#2). Specifically on Mail Servers (iff you control the mailserver enough to redirect emails to scripts): 1-3) Configure an address to be piped to a script you have written. 4) Same as above. A: You should take a look at Lamson; it'll enable you to do what you've described, and more besides. A: From your tags, I'll assume you're wanting to do this in Django. There's an app out there called jutda-helpdesk that does exactly what you're looking for using poplib, which means you just have to set up a POP3 compatible email address. Take a look at their get_email.py to see how they do it. You just run this script from cron. A: This is an area where the Rails-world is ahead: Rails has built-in support for receiving emails. The mail server configuration though is probably just the same. A: To see a working example on how to receive emails in python and process them using django, check this: http://code.google.com/p/jutda-helpdesk/ A: A common tool used for this purpose is procmail. You need to set up a dedicated email address (which is the "from_email" address in your outgoing email). Then your MTA, such as postfix or qmail, will deliver mail to that address to procmail instead of an actual mailbox. Procmail can then pass the email on to your python script that can do updates in the app. See standalone django scripts by James Bennett on how to code python scripts that can work with your app.
Email integration
I was wondering if someone could help me out. In some web application, the app will send out emails, say when a new message has been posted. Then instead of signing into the application to post a reply you can just simply reply to the email and it will automatically update the web app with your response. My question is, how is this done and what is it called? Thanks
[ "Generally:\n1) Set up a dedicated email account for the purpose.\n2) Have a programm monitor the mailbox (let's say fetchmail, since that's what I do).\n3) When an email arrives at the account, fetchmail downloads the email, writes it to disk, and calls script or program you have written with the email file as an argument.\n4) Your script or program parses the email and takes an appropriate action.\nThe part that's usually mysterious to people is the fetchmail part (#2).\nSpecifically on Mail Servers (iff you control the mailserver enough to redirect emails to scripts):\n1-3) Configure an address to be piped to a script you have written.\n4) Same as above.\n", "You should take a look at Lamson; it'll enable you do what you've described, and more besides.\n", "From your tags, I'll assume you're wanting to do this in Django.\nThere's an app out there called jutda-helpdesk that does exactly what you're looking for using poplib, which means you just have to set up a POP3 compatible email address.\nTake a look at their get_email.py to see how they do it. You just run this script from cron.\n", "This is an area where the Rails-world is ahead: Rails has built-in support for receiving emails. The mail sever configuration though is probably just the same.\n", "To see a working example on how to receive emails in python and process then using django, check this: http://code.google.com/p/jutda-helpdesk/\n", "A common tool used for this purpose is procmail.\nYou need to set up dedicated email address (which is the \"from_email\" address in your outgoing email). Then your MTA, such as postfix or qmail, will deliver mail to that address to procmail instead of an actual mailbox.\nProcmail can then pass the email on to your python script that can do updates in the app. See standalone django scripts by James Bennett on how to code python scripts that can work with your app.\n" ]
[ 7, 5, 4, 3, 2, 1 ]
[]
[]
[ "django", "email", "python" ]
stackoverflow_0000640970_django_email_python.txt
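A hedged sketch of the poplib polling loop the jutda-helpdesk answer alludes to (host and credentials are placeholders, and handle_reply is a hypothetical hook into the web app; run it from cron as suggested):

import poplib
from email import message_from_string

def poll_mailbox(host, user, password):
    server = poplib.POP3(host)
    server.user(user)
    server.pass_(password)
    count, _ = server.stat()            # (message count, mailbox size)
    for i in range(count):
        _, lines, _ = server.retr(i + 1)
        msg = message_from_string('\n'.join(lines))
        handle_reply(msg)               # hypothetical: post the reply into the app
        server.dele(i + 1)
    server.quit()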
Q: Python: Set with only existence check? I have a set of lots of big long strings that I want to do existence lookups for. I don't need the whole string ever to be saved. As far as I can tell, the set() actually stores the strings, which is eating up a lot of my memory. Does such a data structure exist? done = hash_only_set() while len(queue) > 0 : item = queue.pop() if item not in done : process(item) done.add(item) (My queue is constantly being filled by other threads so I have no way of dedupping it at the start). A: It's certainly possible to keep a set of only hashes: done = set() while len(queue) > 0 : item = queue.pop() h = hash(item) if h not in done : process(item) done.add(h) Notice that because of hash collisions, there is a chance that you consider an item done even though it isn't. If you cannot accept this risk, you really need to save the full strings to be able to tell whether you have seen it before. Alternatively: perhaps the processing itself would be able to tell? Yet alternatively: if you cannot accept to keep the strings in memory, keep them in a database, or create files in a directory with the same name as the string. A: You can use a data structure called Bloom Filter specifically for this purpose. A Python implementation can be found here. EDIT: Important notes: False positives are possible in this data structure, i.e. a check for the existence of a string could return a positive result even though it was not stored. False negatives (getting a negative result for a string that was stored) are not possible. That said, the chances of this happening can be brought to a minimum if used properly and so I consider this data structure to be very useful. A: If you use a secure (like SHA-256, found in the hashlib module) hash function to hash the strings, it's very unlikely that you would find duplicates (and if you find some you can probably win a prize as with most cryptographic hash functions). The builtin __hash__() method does not guarantee you won't have duplicates (and since it only uses 32 bits, it's very likely you'll find some). A: You need to know the whole string to have 100% certainty. If you have lots of strings with similar prefixes you could save space by using a trie to store the strings. If your strings are long you could also save space by using a large hash function like SHA-1 to make the possibility of hash collisions so remote as to be irrelevant. If you can make the process() function idempotent - i.e. having it called twice on an item is only a performance issue, then the problem becomes a lot simpler and you can use lossy datastructures, such as bloom filters. A: You would have to think about how to do the lookup, since there are two methods that the set needs, __hash__ and __eq__. The hash is a "loose part" that you can take away, but the __eq__ is not a loose part that you can save; you have to have two strings for the comparison. If you only need negative confirmation (this item is not part of the set), you could fill a Set collection you implemented yourself with your strings, then you "finalize" the set by removing all strings, except those with collisions (those are kept around for eq tests), and you promise not to add more objects to your Set. Now you have an exclusive test available.. you can tell if an object is not in your Set. You can't be certain if "obj in Set == True" is a false positive or not. 
Edit: This is basically a bloom filter that was cleverly linked, but a bloom filter might use more than one hash per element which is really clever. Edit2: This is my 3-minute bloom filter: class BloomFilter (object): """ Let's make a bloom filter http://en.wikipedia.org/wiki/Bloom_filter __contains__ has false positives, but never false negatives """ def __init__(self, hashes=(hash, )): self.hashes = hashes self.data = set() def __contains__(self, obj): return all((h(obj) in self.data) for h in self.hashes) def add(self, obj): self.data.update(h(obj) for h in self.hashes) A: As has been hinted already, if the answers offered here (most of which break down in the face of hash collisions) are not acceptable you would need to use a lossless representation of the strings. Python's zlib module provides built-in string compression capabilities and could be used to pre-process the strings before you put them in your set. Note however that the strings would need to be quite long (which you hint that they are) and have minimal entropy in order to save much memory space. Other compression options might provide better space savings and some Python based implementations can be found here
Python: Set with only existence check?
I have a set of lots of big long strings that I want to do existence lookups for. I don't need the whole string ever to be saved. As far as I can tell, the set() actually stores the strings, which is eating up a lot of my memory. Does such a data structure exist? done = hash_only_set() while len(queue) > 0 : item = queue.pop() if item not in done : process(item) done.add(item) (My queue is constantly being filled by other threads so I have no way of dedupping it at the start).
[ "It's certainly possible to keep a set of only hashes:\ndone = set()\nwhile len(queue) > 0 :\n item = queue.pop()\n h = hash(item)\n if h not in done :\n process(item)\n done.add(h)\n\nNotice that because of hash collisions, there is a chance that you consider an item done even though it isn't. \nIf you cannot accept this risk, you really need to save the full strings to be able to tell whether you have seen it before. Alternatively: perhaps the processing itself would be able to tell?\nYet alternatively: if you cannot accept to keep the strings in memory, keep them in a database, or create files in a directory with the same name as the string.\n", "You can use a data structure called Bloom Filter specifically for this purpose. A Python implementation can be found here.\nEDIT: Important notes:\n\nFalse positives are possible in this data structure, i.e. a check for the existence of a string could return a positive result even though it was not stored.\nFalse negatives (getting a negative result for a string that was stored) are not possible.\n\nThat said, the chances of this happening can be brought to a minimum if used properly and so I consider this data structure to be very useful.\n", "If you use a secure (like SHA-256, found in the hashlib module) hash function to hash the strings, it's very unlikely that you would found duplicate (and if you find some you can probably win a prize as with most cryptographic hash functions).\nThe builtin __hash__() method does not guarantee you won't have duplicates (and since it only uses 32 bits, it's very likely you'll find some).\n", "You need to know the whole string to have 100% certainty. If you have lots of strings with similar prefixes you could save space by using a trie to store the strings. If your strings are long you could also save space by using a large hash function like SHA-1 to make the possibility of hash collisions so remote as to be irrelevant.\nIf you can make the process() function idempotent - i.e. having it called twice on an item is only a performance issue, then the problem becomes a lot simpler and you can use lossy datastructures, such as bloom filters.\n", "You would have to think about how to do the lookup, since there are two methods that the set needs, __hash__ and __eq__.\nThe hash is a \"loose part\" that you can take away, but the __eq__ is not a loose part that you can save; you have to have two strings for the comparison.\nIf you only need negative confirmation (this item is not part of the set), you could fill a Set collection you implemented yourself with your strings, then you \"finalize\" the set by removing all strings, except those with collisions (those are kept around for eq tests), and you promise not to add more objects to your Set. Now you have an exclusive test available.. you can tell if an object is not in your Set. 
You can't be certain if \"obj in Set == True\" is a false positive or not.\nEdit: This is basically a bloom filter that was cleverly linked, but a bloom filter might use more than one hash per element which is really clever.\nEdit2: This is my 3-minute bloom filter:\nclass BloomFilter (object):\n \"\"\" \n Let's make a bloom filter\n http://en.wikipedia.org/wiki/Bloom_filter\n\n __contains__ has false positives, but never false negatives\n \"\"\" \n def __init__(self, hashes=(hash, )): \n self.hashes = hashes\n self.data = set()\n def __contains__(self, obj):\n return all((h(obj) in self.data) for h in self.hashes)\n def add(self, obj):\n self.data.update(h(obj) for h in self.hashes)\n\n", "As has been hinted already, if the answers offered here (most of which break down in the face of hash collisions) are not acceptable you would need to use a lossless representation of the strings. \nPython's zlib module provides built-in string compression capabilities and could be used to pre-process the strings before you put them in your set. Note however that the strings would need to be quite long (which you hint that they are) and have minimal entropy in order to save much memory space. Other compression options might provide better space savings and some Python based implementations can be found here\n" ]
[ 10, 4, 4, 3, 2, 0 ]
[]
[]
[ "data_structures", "hash", "python", "set" ]
stackoverflow_0001333381_data_structures_hash_python_set.txt
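A hedged sketch combining the first and third answers: store fixed-size SHA-256 digests instead of the strings themselves, so each entry costs 32 bytes no matter how long the string is (a collision is astronomically unlikely, though as the answers note, not strictly impossible; queue and process are from the question's snippet):

import hashlib

done = set()
while len(queue) > 0:
    item = queue.pop()
    digest = hashlib.sha256(item).digest()   # 32 bytes per seen string
    if digest not in done:
        process(item)
        done.add(digest)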
Q: What's the fastest way to fixup line-endings for SMTP sending? I'm coding a email application that produces messages for sending via SMTP. That means I need to change all lone \n and \r characters into the canonical \r\n sequence we all know and love. Here's the code I've got now: CRLF = '\r\n' msg = re.sub(r'(?<!\r)\n', CRLF, msg) msg = re.sub(r'\r(?!\n)', CRLF, msg) The problem is it's not very fast. On large messages (around 80k) it takes up nearly 30% of the time to send a message! Can you do better? I eagerly await your Python gymnastics. A: This regex helped: re.sub(r'\r\n|\r|\n', '\r\n', msg) But this code ended up winning: msg.replace('\r\n','\n').replace('\r','\n').replace('\n','\r\n') The original regexes took .6s to convert /usr/share/dict/words from \n to \r\n, the new regex took .3s, and the replace()s took .08s. A: Maybe it is the fact that inserting an extra character in the middle of the string is killing it. When you are substituting the text "hello \r world" it has to actually increase the size of the entire string by one character to "hello \r\n world" . I would suggest looping over the string and looking at characters one by one. If it is not a \r or \n then just append it to the new string. If it is a \r or \n append the new string with the correct values Code in C# (converting to python should be trivial) string FixLineEndings(string input) { if (string.IsNullOrEmpty(input)) return string.Empty; StringBuilder rv = new StringBuilder(input.Length); for(int i = 0; i < input.Length; i++) { char c = input[i]; if (c != '\r' && c != '\n') { rv.Append(c); } else if (c == '\n') { rv.Append("\r\n"); } else if (c == '\r') { if (i == input.Length - 1) { rv.Append("\r\n"); //a \r at the end of the string } else if (input[i + 1] != '\n') { rv.Append("\r\n"); } } } return rv.ToString(); } This was interesting enough to go write up a sample program to test. I used the regex given in the other answer and the code for using the regex was: static readonly Regex _r1 = new Regex(@"(? I tried with a bunch of test cases. The outputs are: ------------------------ Size: 1000 characters All\r String: 00:00:00.0038237 Regex : 00:00:00.0047669 All\r\n String: 00:00:00.0001745 Regex : 00:00:00.0009238 All\n String: 00:00:00.0024014 Regex : 00:00:00.0029281 No \r or \n String: 00:00:00.0000904 Regex : 00:00:00.0000628 \r at every 100th position and \n at every 102th position String: 00:00:00.0002232 Regex : 00:00:00.0001937 ------------------------ Size: 10000 characters All\r String: 00:00:00.0010271 Regex : 00:00:00.0096480 All\r\n String: 00:00:00.0006441 Regex : 00:00:00.0038943 All\n String: 00:00:00.0010618 Regex : 00:00:00.0136604 No \r or \n String: 00:00:00.0006781 Regex : 00:00:00.0001943 \r at every 100th position and \n at every 102th position String: 00:00:00.0006537 Regex : 00:00:00.0005838 which show the string replacing function doing better in cases where the number of \r and \n's are high. For regular use though the original regex approach is much faster (see the last set of test cases - the ones w/o \r\n and with few \r's and \n's) This was of course coded in C# and not python but i'm guessing there would be similarities in the run times across languages A: Replace them on the fly as you're writing the string to wherever it's going. If you use a regex or anything else you'll be making two passes: one to replace the characters and then one to write it. 
Deriving a new Stream class and wrapping it around whatever you're writing to is pretty effective; that's the way we do it with System.Net.Mail and that means I can use the same stream encoder for writing to both files and network streams. I'd have to see some of your code in order to give you a really good way to do this though. Also, keep in mind that the actual replacement won't really be any faster, however the total execution time would be reduced since you're only making one pass instead of two (assuming you actually are writing the output of the email somewhere). A: You could start by pre-compiling the regexes, e.g. FIXCR = re.compile(r'\r(?!\n)') FIXLN = re.compile(r'(?<!\r)\n') Then use FIXCR.sub and FIXLN.sub. Next, you could try to combine the regexes into one, with a | thingy, which should also help.
What's the fastest way to fixup line-endings for SMTP sending?
I'm coding an email application that produces messages for sending via SMTP. That means I need to change all lone \n and \r characters into the canonical \r\n sequence we all know and love. Here's the code I've got now: CRLF = '\r\n' msg = re.sub(r'(?<!\r)\n', CRLF, msg) msg = re.sub(r'\r(?!\n)', CRLF, msg) The problem is it's not very fast. On large messages (around 80k) it takes up nearly 30% of the time to send a message! Can you do better? I eagerly await your Python gymnastics.
[ "This regex helped:\nre.sub(r'\\r\\n|\\r|\\n', '\\r\\n', msg)\nBut this code ended up winning:\nmsg.replace('\\r\\n','\\n').replace('\\r','\\n').replace('\\n','\\r\\n')\nThe original regexes took .6s to convert /usr/share/dict/words from \\n to \\r\\n, the new regex took .3s, and the replace()s took .08s. \n", "Maybe it is the fact that inserting an extra character in the middle of the string is killing it.\nWhen you are substituting the text \"hello \\r world\" it has to actually increase the size of the entire string by one character to \"hello \\r\\n world\" .\nI would suggest looping over the string and looking at characters one by one. If it is not a \\r or \\n then just append it to the new string. If it is a \\r or \\n append the new string with the correct values\nCode in C# (converting to python should be trivial)\n string FixLineEndings(string input)\n {\n if (string.IsNullOrEmpty(input))\n return string.Empty;\n\n StringBuilder rv = new StringBuilder(input.Length);\n\n for(int i = 0; i < input.Length; i++)\n {\n char c = input[i];\n if (c != '\\r' && c != '\\n')\n {\n rv.Append(c);\n }\n else if (c == '\\n')\n {\n rv.Append(\"\\r\\n\");\n }\n else if (c == '\\r')\n {\n if (i == input.Length - 1)\n {\n rv.Append(\"\\r\\n\"); //a \\r at the end of the string\n }\n else if (input[i + 1] != '\\n')\n {\n rv.Append(\"\\r\\n\");\n }\n\n }\n }\n\n return rv.ToString();\n }\n\nThis was interesting enough to go write up a sample program to test. I used the regex given in the other answer and the code for using the regex was:\nstatic readonly Regex _r1 = new Regex(@\"(?\n\nI tried with a bunch of test cases. The outputs are:\n\n------------------------\nSize: 1000 characters\nAll\\r\n String: 00:00:00.0038237\n Regex : 00:00:00.0047669\nAll\\r\\n\n String: 00:00:00.0001745\n Regex : 00:00:00.0009238\nAll\\n\n String: 00:00:00.0024014\n Regex : 00:00:00.0029281\nNo \\r or \\n\n String: 00:00:00.0000904\n Regex : 00:00:00.0000628\n\\r at every 100th position and \\n at every 102th position\n String: 00:00:00.0002232\n Regex : 00:00:00.0001937\n------------------------\nSize: 10000 characters\nAll\\r\n String: 00:00:00.0010271\n Regex : 00:00:00.0096480\nAll\\r\\n\n String: 00:00:00.0006441\n Regex : 00:00:00.0038943\nAll\\n\n String: 00:00:00.0010618\n Regex : 00:00:00.0136604\nNo \\r or \\n\n String: 00:00:00.0006781\n Regex : 00:00:00.0001943\n\\r at every 100th position and \\n at every 102th position\n String: 00:00:00.0006537\n Regex : 00:00:00.0005838\n\nwhich show the string replacing function doing better in cases where the number of \\r and \\n's are high. For regular use though the original regex approach is much faster (see the last set of test cases - the ones w/o \\r\\n and with few \\r's and \\n's)\nThis was of course coded in C# and not python but i'm guessing there would be similarities in the run times across languages\n", "Replace them on the fly as you're writing the string to wherever it's going. If you use a regex or anything else you'll be making two passes: one to replace the characters and then one to write it. Deriving a new Stream class and wrapping it around whatever you're writing to is pretty effective; that's the way we do it with System.Net.Mail and that means I can use the same stream encoder for writing to both files and network streams. I'd have to see some of your code in order to give you a really good way to do this though. 
Also, keep in mind that the actual replacement won't really be any faster, however the total execution time would be reduced since you're only making one pass instead of two (assuming you actually are writing the output of the email somewhere).\n", "You could start by pre-compiling the regexes, e.g.\nFIXCR = re.compile(r'\\r(?!\\n)')\nFIXLN = re.compile(r'(?<!\\r)\\n')\n\nThen use FIXCR.sub and FIXLN.sub. Next, you could try to combine the regexes into one, with a | thingy, which should also help.\n" ]
[ 2, 1, 1, 0 ]
[ "Something like this? Compile your regex.\nCRLF = '\\r\\n'\ncr_or_lf_regex = re.compile(r'(?:(?<!\\r)\\n)|(?:\\r(?!\\n))')\n\nThen, when you want to replace stuff use this:\ncr_or_lf_regex.sub(CRLF, msg)\n\nEDIT: Since the above is actually slower, let me take another stab at it.\nlast_chr = ''\n\ndef fix_crlf(input_chr):\n global last_chr\n if input_chr != '\\r' and input_chr != '\\n' and last_chr != '\\r':\n result = input_chr\n else:\n if last_chr == '\\r' and input_chr == '\\n': result = '\\r\\n'\n elif last_chr != '\\r' and input_chr == '\\n': result = '\\r\\n'\n elif last_chr == '\\r' and input_chr != '\\n': result = '\\r\\n%s' % input_chr\n else: result = ''\n\n last_chr = input_chr\n return result\n\nfixed_msg = ''.join([fix_crlf(c) for c in msg])\n\n" ]
[ -1 ]
[ "performance", "python", "smtp" ]
stackoverflow_0001336524_performance_python_smtp.txt
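For reference, a minimal self-contained benchmark sketch of the two leading approaches from the thread above; the sample message and repetition counts are illustrative assumptions, and timeit.timeit with a callable needs Python 2.6+.
import re
import timeit

CRLF = '\r\n'
PATTERN = re.compile(r'\r\n|\r|\n')   # pre-compiled combined regex

def fix_regex(msg):
    # One pass: \r\n, lone \r, or lone \n all become \r\n.
    return PATTERN.sub(CRLF, msg)

def fix_replace(msg):
    # Normalize every ending to \n first, then expand to \r\n once.
    return msg.replace('\r\n', '\n').replace('\r', '\n').replace('\n', '\r\n')

msg = 'line one\nline two\rline three\r\n' * 2000
assert fix_regex(msg) == fix_replace(msg)

print timeit.timeit(lambda: fix_regex(msg), number=100)
print timeit.timeit(lambda: fix_replace(msg), number=100)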
Q: SQLAlchemy Inheritance I'm a bit confused about inheritance under sqlalchemy, to the point where I'm not even sure what type of inheritance (single table, joined table, concrete) I should be using here. I've got a base class with some information that's shared amongst the subclasses, and some data that are completely separate. Sometimes, I'll want data from all the classes, and sometimes only from the subclasses. Here's an example: class Building: def __init__(self, x, y): self.x = x self.y = y class Commercial(Building): def __init__(self, x, y, business): Building.__init__(self, x, y) self.business = business class Residential(Building): def __init__(self, x, y, numResidents): Building.__init__(self, x, y, layer) self.numResidents = numResidents How would I convert this to SQLAlchemy using declarative? How, then, would I query which buildings are within x>5 and y>3? Or which Residential buildings have only 1 resident? A: Choosing how to represent the inheritance is mostly a database design issue. For performance single table inheritance is usually best. From a good database design point of view, joined table inheritance is better. Joined table inheritance enables you to have foreign keys to subclasses enforced by the database, it's a lot simpler to have non-null constraints for subclass fields. Concrete table inheritance is kind of worst of both worlds. Single table inheritance setup with declarative looks like this: class Building(Base): __tablename__ = 'building' id = Column(Integer, primary_key=True) building_type = Column(String(32), nullable=False) x = Column(Float, nullable=False) y = Column(Float, nullable=False) __mapper_args__ = {'polymorphic_on': building_type} class Commercial(Building): __mapper_args__ = {'polymorphic_identity': 'commercial'} business = Column(String(50)) class Residential(Building): __mapper_args__ = {'polymorphic_identity': 'residential'} num_residents = Column(Integer) To make it joined table inheritance, you'll need to add __tablename__ = 'commercial' id = Column(None, ForeignKey('building.id'), primary_key=True) to the subclasses. Querying is mostly the same with both approaches: # buildings that are within x>5 and y>3 session.query(Building).filter((Building.x > 5) & (Building.y > 3)) # Residential buildings that have only 1 resident session.query(Residential).filter(Residential.num_residents == 1) To control which fields are loaded you can use the query.with_polymorphic() method. The most important thing to think about using inheritance for the datamapping, is whether you actually need inheritance or can do with aggregation. Inheritance will be a pain if you will ever need to change the type of an building, or your buildings can have both commercial and residential aspects. In those cases it's usually better to have the commercial and residential aspects as related objects. A: Ants Aasma's solution is much more elegant, but if you are keeping your Class definitions separate from your table definitions intentionally, you need to map your classes to your tables with the mapper function. 
After you have defined your classes, you need to define your tables: building = Table('building', metadata, Column('id', Integer, primary_key=True), Column('x', Integer), Column('y', Integer), ) commercial = Table('commercial', metadata, Column('building_id', Integer, ForeignKey('building.id'), primary_key=True), Column('business', String(50)), ) residential = Table('residential', metadata, Column('building_id', Integer, ForeignKey('building.id'), primary_key=True), Column('numResidents', Integer), ) Then you can map the tables to the classes: mapper(Building, building) mapper(Commercial, commercial, inherits=Building, polymorphic_identity='commercial') mapper(Residential, residential, inherits=Building, polymorphic_identity='residential') Then interact with the classes the exact same way Ants Aasma described.
SQLAlchemy Inheritance
I'm a bit confused about inheritance under sqlalchemy, to the point where I'm not even sure what type of inheritance (single table, joined table, concrete) I should be using here. I've got a base class with some information that's shared amongst the subclasses, and some data that are completely separate. Sometimes, I'll want data from all the classes, and sometimes only from the subclasses. Here's an example: class Building: def __init__(self, x, y): self.x = x self.y = y class Commercial(Building): def __init__(self, x, y, business): Building.__init__(self, x, y) self.business = business class Residential(Building): def __init__(self, x, y, numResidents): Building.__init__(self, x, y, layer) self.numResidents = numResidents How would I convert this to SQLAlchemy using declarative? How, then, would I query which buildings are within x>5 and y>3? Or which Residential buildings have only 1 resident?
[ "Choosing how to represent the inheritance is mostly a database design issue. For performance single table inheritance is usually best. From a good database design point of view, joined table inheritance is better. Joined table inheritance enables you to have foreign keys to subclasses enforced by the database, it's a lot simpler to have non-null constraints for subclass fields. Concrete table inheritance is kind of worst of both worlds.\nSingle table inheritance setup with declarative looks like this:\nclass Building(Base):\n __tablename__ = 'building'\n id = Column(Integer, primary_key=True)\n building_type = Column(String(32), nullable=False)\n x = Column(Float, nullable=False)\n y = Column(Float, nullable=False)\n __mapper_args__ = {'polymorphic_on': building_type}\n\nclass Commercial(Building):\n __mapper_args__ = {'polymorphic_identity': 'commercial'}\n business = Column(String(50))\n\nclass Residential(Building):\n __mapper_args__ = {'polymorphic_identity': 'residential'}\n num_residents = Column(Integer)\n\nTo make it joined table inheritance, you'll need to add\n__tablename__ = 'commercial'\nid = Column(None, ForeignKey('building.id'), primary_key=True)\n\nto the subclasses.\nQuerying is mostly the same with both approaches:\n# buildings that are within x>5 and y>3\nsession.query(Building).filter((Building.x > 5) & (Building.y > 3))\n# Residential buildings that have only 1 resident\nsession.query(Residential).filter(Residential.num_residents == 1)\n\nTo control which fields are loaded you can use the query.with_polymorphic() method.\nThe most important thing to think about using inheritance for the datamapping, is whether you actually need inheritance or can do with aggregation. Inheritance will be a pain if you will ever need to change the type of an building, or your buildings can have both commercial and residential aspects. In those cases it's usually better to have the commercial and residential aspects as related objects.\n", "Ants Aasma's solution is much more elegant, but if you are keeping your Class definitions separate from your table definitions intentionally, you need to map your classes to your tables with the mapper function. After you have defined your classes, you need to define your tables:\nbuilding = Table('building', metadata,\n Column('id', Integer, primary_key=True),\n Column('x', Integer),\n Column('y', Integer),\n)\ncommercial = Table('commercial', metadata,\n Column('building_id', Integer, ForeignKey('building.id'), primary_key=True),\n Column('business', String(50)),\n)\nresidential = Table('residential', metadata,\n Column('building_id', Integer, ForeignKey('building.id'), primary_key=True),\n Column('numResidents', Integer),\n)\nThen you can map the tables to the classes:\nmapper(Building, building)\nmapper(Commercial, commercial, inherits=Building, polymorphic_identity='commercial')\nmapper(Residential, residential, inherits=Building, polymorphic_identity='residential')\nThen interact with the classes the exact same way Ants Aasma described.\n" ]
[ 109, 19 ]
[]
[]
[ "inheritance", "python", "sqlalchemy" ]
stackoverflow_0001337095_inheritance_python_sqlalchemy.txt
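A sketch of exercising the single-table setup from the accepted answer end to end; the engine URL, session wiring, and sample rows are assumptions for illustration, and Base is the declarative base the answer's models derive from.
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine('sqlite:///:memory:')
Base.metadata.create_all(engine)        # Base from the declarative setup above
session = sessionmaker(bind=engine)()

# The declarative base supplies a keyword constructor; the building_type
# discriminator is filled in automatically from polymorphic_identity.
session.add_all([
    Commercial(x=6.0, y=4.0, business='Bakery'),
    Residential(x=1.0, y=1.0, num_residents=1),
])
session.commit()

in_region = session.query(Building).filter(
    (Building.x > 5) & (Building.y > 3)).all()    # polymorphic results
singles = session.query(Residential).filter(
    Residential.num_residents == 1).all()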
Q: Python object @property I'm trying to create a point class which defines a property called "coordinate". However, it's not behaving like I'd expect and I can't figure out why. class Point: def __init__(self, coord=None): self.x = coord[0] self.y = coord[1] @property def coordinate(self): return (self.x, self.y) @coordinate.setter def coordinate(self, value): self.x = value[0] self.y = value[1] p = Point((0,0)) p.coordinate = (1,2) >>> p.x 0 >>> p.y 0 >>> p.coordinate (1, 2) It seems that p.x and p.y are not getting set for some reason, even though the setter "should" set those values. Anybody know why this is? A: The property method (and by extension, the @property decorator) requires a new-style class i.e. a class that subclasses object. For instance, class Point: should be class Point(object): Also, the setter attribute (along with the others) was added in Python 2.6. A: It will work if you derive Point from object: class Point(object): # ...
Python object @property
I'm trying to create a point class which defines a property called "coordinate". However, it's not behaving like I'd expect and I can't figure out why. class Point: def __init__(self, coord=None): self.x = coord[0] self.y = coord[1] @property def coordinate(self): return (self.x, self.y) @coordinate.setter def coordinate(self, value): self.x = value[0] self.y = value[1] p = Point((0,0)) p.coordinate = (1,2) >>> p.x 0 >>> p.y 0 >>> p.coordinate (1, 2) It seems that p.x and p.y are not getting set for some reason, even though the setter "should" set those values. Anybody know why this is?
[ "The property method (and by extension, the @property decorator) requires a new-style class i.e. a class that subclasses object.\nFor instance,\nclass Point:\n\nshould be\nclass Point(object):\n\nAlso, the setter attribute (along with the others) was added in Python 2.6.\n", "It will work if you derive Point from object:\nclass Point(object):\n # ...\n\n" ]
[ 10, 4 ]
[]
[]
[ "new_style_class", "python" ]
stackoverflow_0001337935_new_style_class_python.txt
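A short sketch of the corrected class; on the original old-style class the assignment merely stored a tuple in the instance dict and bypassed the setter, while deriving from object routes it through the descriptor protocol (the @coordinate.setter form needs Python 2.6+).
class Point(object):                  # new-style class: descriptors work
    def __init__(self, coord):
        self.x, self.y = coord

    @property
    def coordinate(self):
        return (self.x, self.y)

    @coordinate.setter
    def coordinate(self, value):
        self.x, self.y = value        # now actually updates x and y

p = Point((0, 0))
p.coordinate = (1, 2)
print p.x, p.y                        # -> 1 2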
Q: Can I write Python applications using PyObjC that target NON-jailbroken iPhones? Is it currently possible to compile Python and PyObjC for the iPhone such that AppStore applications can be written in Python? If not, is this a purely technical issue or a deliberate policy decision by Apple? A: No: it's Apple's deliberate policy decision (no doubt with some technical underpinnings) to not support interpreters/runtimes on iPhone for most languages -- ObjC (and Javascript within Safari) is what Apple wants you to use, not Python, Java, Ruby, and so forth. A: No, Apple strictly forbids running any kind of interpreter on the iPhone, and it is purely a policy issue.
Can I write Python applications using PyObjC that target NON-jailbroken iPhones?
Is it currently possible to compile Python and PyObjC for the iPhone such that AppStore applications can be written in Python? If not, is this a purely technical issue or a deliberate policy decision by Apple?
[ "No: it's Apple's deliberate policy decision (no doubt with some technical underpinnings) to not support interpreters/runtimes on iPhone for most languages -- ObjC (and Javascript within Safari) is what Apple wants you to use, not Python, Java, Ruby, and so forth.\n", "no, apple strictly forbids running any kind of interpreter on iphone, and it is completely policy issue.\n" ]
[ 1, 0 ]
[]
[]
[ "iphone", "pyobjc", "python" ]
stackoverflow_0001338095_iphone_pyobjc_python.txt
Q: Executing Python Scripts in Android This link says that Android supports Python, Lua and BeanShell scripts, and subsequently Perl too. If so, is it possible for developers to write Python scripts and call them in their standard Java-based Android applications? A: I remember reading about this a while back as well. It's not on the android dev site. It's a separate project, android-scripting. Python API: API Reference SL4A API Help A: I think I have read somewhere that ASE with Python was a huge library (several MB), and so was completely impractical for a public application. But you can still use it for development...
Executing Python Scripts in Android
This link says that Android supports Python, Lua and BeanShell scripts, and subsequently Perl too. If so, is it possible for developers to write Python scripts and call them in their standard Java-based Android applications?
[ "I remember reading about this awhile back as well.\nIt's not on the android dev site.\nIt's a separate project, android-scripting.\nPython API:\nAPI Reference\nSL4A API Help\n", "I think I have read somewhere that ASE with Python was a huge library ( several Mo), and so was completely unpractical for a public application.\nBut you can still use it for development...\n" ]
[ 5, 0 ]
[]
[]
[ "android", "python", "scripting" ]
stackoverflow_0001326169_android_python_scripting.txt
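For a flavor of what the scripting layer looks like, a minimal SL4A (android-scripting) sketch; it assumes the SL4A app and its Python interpreter are installed on the device, and the facade call names follow the project's published API reference.
import android                        # provided by the SL4A environment

droid = android.Android()
droid.makeToast('Hello from Python on Android')
name = droid.dialogGetInput('Name', 'Who are you?').result
if name is not None:
    droid.makeToast('Hi, %s' % name)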
Q: Configure Django project in a subdirectory using mod_python. Admin not working Hi guys. I was trying to configure my Django project in a subdirectory of the root, but didn't get things working (locally it works perfectly). I followed the official Django documentation to deploy a project with mod_python. The real problem is that I am getting "Page not found" errors whenever I try to go to the admin or any view of my apps. Here is my python.conf file located in /etc/httpd/conf.d/ in Fedora 7 LoadModule python_module modules/mod_python.so SetHandler python-program PythonHandler django.core.handlers.modpython SetEnv DJANGO_SETTINGS_MODULE mysite.settings PythonOption django.root /mysite PythonDebug On PythonPath "['/var/www/vhosts/mysite.com/httpdocs','/var/www/vhosts/mysite.com/httpdocs/mysite'] + sys.path" I know /var/www/ is not the best place to put my Django project, but I just want to send a demo of my work in progress to my customer; later I will change the location. For example, if I go to www.domain.com/mysite/ I get the index view I configured in mysite.urls. But I cannot access my app.urls (www.domain.com/mysite/app/) or any of the admin.urls (www.domain.com/mysite/admin/). Here is mysite.urls: urlpatterns = patterns('', url(r'^admin/password_reset/$', 'django.contrib.auth.views.password_reset', name='password_reset'), (r'^password_reset/done/$', 'django.contrib.auth.views.password_reset_done'), (r'^reset/(?P<uidb36>[0-9A-Za-z]+)-(?P<token>.+)/$', 'django.contrib.auth.views.password_reset_confirm'), (r'^reset/done/$', 'django.contrib.auth.views.password_reset_complete'), (r'^$', 'app.views.index'), (r'^admin/', include(admin.site.urls)), (r'^app/', include('mysite.app.urls')), (r'^photologue/', include('photologue.urls')), ) I also tried replacing admin.site.urls with 'django.contrib.admin.urls', but it didn't work. I googled a lot to solve this problem and read how other developers configure their Django projects, but didn't find much information on deploying Django in a subdirectory. I have the admin enabled in INSTALLED_APPS and the settings.py is OK. Please, if you have any guidance on what I am doing wrong, it will be much appreciated. Thanks. A: I'm using mod_wsgi, so I'm not sure if it's all the same. But in my urls.py, I have: (r'^admin/(.*)', admin.site.root), In my Apache config, I have this: Alias /admin/media/ /usr/lib/python2.5/site-packages/django/contrib/admin/media Your path may vary. A: If your settings.py is correct and has your correct INSTALLED_APPS and it works in the development server, then I'd say it's your Apache configuration file. Try running my Python app to create Apache configuration files for mod_python + Django. The source is here at github.com. Once you have a working configuration file, you can modify it. Run like this: C:\Users\hughdbrown\Documents\django\Apache-conf>python http_conf_gen.py --flavor=mod_python --source_dir=.
--server_name=foo.com --project_name=foo Writing 'foo.vhost.python.conf' Result looks like this: # apache_template.txt NameVirtualHost *:80 <VirtualHost *:80> ServerAdmin [email protected] ServerName foo.com DocumentRoot "./foo/" <Location "/"> # without this, you'll get 403 permission errors # Apache - "Client denied by server configuration" allow from all SetHandler python-program PythonHandler django.core.handlers.modpython PythonOption django.root /foo PythonDebug On PythonPath "[os.path.normpath(s) for s in (r'.', r'C:\Python26\lib\site-packages\django') ] + sys.path" SetEnv DJANGO_SETTINGS_MODULE foo.settings PythonAutoReload Off </Location> <Location "/media" > SetHandler None allow from all </Location> <Location "/site-media" > SetHandler None allow from all </Location> <LocationMatch "\.(jpg|gif|png)$"> SetHandler None allow from all </LocationMatch> </VirtualHost>
Configure Django project in a subdirectory using mod_python. Admin not working
Hi guys. I was trying to configure my Django project in a subdirectory of the root, but didn't get things working (locally it works perfectly). I followed the official Django documentation to deploy a project with mod_python. The real problem is that I am getting "Page not found" errors whenever I try to go to the admin or any view of my apps. Here is my python.conf file located in /etc/httpd/conf.d/ in Fedora 7 LoadModule python_module modules/mod_python.so SetHandler python-program PythonHandler django.core.handlers.modpython SetEnv DJANGO_SETTINGS_MODULE mysite.settings PythonOption django.root /mysite PythonDebug On PythonPath "['/var/www/vhosts/mysite.com/httpdocs','/var/www/vhosts/mysite.com/httpdocs/mysite'] + sys.path" I know /var/www/ is not the best place to put my Django project, but I just want to send a demo of my work in progress to my customer; later I will change the location. For example, if I go to www.domain.com/mysite/ I get the index view I configured in mysite.urls. But I cannot access my app.urls (www.domain.com/mysite/app/) or any of the admin.urls (www.domain.com/mysite/admin/). Here is mysite.urls: urlpatterns = patterns('', url(r'^admin/password_reset/$', 'django.contrib.auth.views.password_reset', name='password_reset'), (r'^password_reset/done/$', 'django.contrib.auth.views.password_reset_done'), (r'^reset/(?P<uidb36>[0-9A-Za-z]+)-(?P<token>.+)/$', 'django.contrib.auth.views.password_reset_confirm'), (r'^reset/done/$', 'django.contrib.auth.views.password_reset_complete'), (r'^$', 'app.views.index'), (r'^admin/', include(admin.site.urls)), (r'^app/', include('mysite.app.urls')), (r'^photologue/', include('photologue.urls')), ) I also tried replacing admin.site.urls with 'django.contrib.admin.urls', but it didn't work. I googled a lot to solve this problem and read how other developers configure their Django projects, but didn't find much information on deploying Django in a subdirectory. I have the admin enabled in INSTALLED_APPS and the settings.py is OK. Please, if you have any guidance on what I am doing wrong, it will be much appreciated. Thanks.
[ "I'm using mod_wsgi, so I'm not sure if it's all the same. But in my urls.py, I have:\n(r'^admin/(.*)', admin.site.root),\n\nIn my Apache config, I have this:\nAlias /admin/media/ /usr/lib/python2.5/site-packages/django/contrib/admin/media\n\nYour path may vary.\n", "If your settings.py is correct and has your correct INSTALLED_APPS and it works in the development server, then I'd say it's you Apache configuration file.\nTry running my python app to create Apache configuration files for mod_python + Django. The source is here at github.com. Once you have a working configuration file, you can modify it.\nRun like this:\nC:\\Users\\hughdbrown\\Documents\\django\\Apache-conf>python http_conf_gen.py --flavor=mod_python --source_dir=. --server_name=foo.com --project_name=foo\nWriting 'foo.vhost.python.conf'\n\nResult looks like this:\n# apache_template.txt\nNameVirtualHost *:80\n\n<VirtualHost *:80>\n ServerAdmin [email protected]\n ServerName foo.com\n\n DocumentRoot \"./foo/\"\n\n <Location \"/\">\n # without this, you'll get 403 permission errors\n # Apache - \"Client denied by server configuration\" \n allow from all\n\n SetHandler python-program\n PythonHandler django.core.handlers.modpython\n PythonOption django.root /foo\n\n PythonDebug On\n PythonPath \"[os.path.normpath(s) for s in (r'.', r'C:\\Python26\\lib\\site-packages\\django') ] + sys.path\"\n SetEnv DJANGO_SETTINGS_MODULE foo.settings\n PythonAutoReload Off\n </Location>\n\n <Location \"/media\" >\n SetHandler None\n allow from all\n </Location>\n\n <Location \"/site-media\" >\n SetHandler None\n allow from all\n </Location>\n\n <LocationMatch \"\\.(jpg|gif|png)$\">\n SetHandler None\n allow from all\n </LocationMatch>\n</VirtualHost>\n\n" ]
[ 0, 0 ]
[]
[]
[ "deployment", "django", "mod_python", "python" ]
stackoverflow_0001338101_deployment_django_mod_python_python.txt
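A sketch of an urls.py matching the first answer above; whether the old (r'^admin/(.*)', admin.site.root) hook-up or include(admin.site.urls) is right depends on the Django version in use (the former is the pre-1.1 style), so treat the exact admin line as an assumption.
from django.conf.urls.defaults import patterns, include
from django.contrib import admin

admin.autodiscover()

# With PythonOption django.root /mysite set, these patterns are matched
# against the path *after* the /mysite prefix has been stripped.
urlpatterns = patterns('',
    (r'^admin/(.*)', admin.site.root),      # Django <= 1.0 admin style
    (r'^app/', include('mysite.app.urls')),
    (r'^$', 'app.views.index'),
)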
Q: How to sort digits in a number? I'm trying to make an easy script in Python which takes a number and saves it in a variable, sorting the digits in ascending and descending order and saving both in separate variables, implementing Kaprekar's constant. It's probably a pretty noobish question, but I'm new to this and I couldn't find anything on Google that could help me. A site I found tried to explain a way using lists, but it didn't work out very well. A: Sort the digits in ascending and descending orders: ascending = "".join(sorted(str(number))) descending = "".join(sorted(str(number), reverse=True)) Like this: >>> number = 5896 >>> ascending = "".join(sorted(str(number))) >>> >>> descending = "".join(sorted(str(number), reverse=True)) >>> ascending '5689' >>> descending '9865' And if you need them to be numbers again (not just strings), call int() on them: >>> int(ascending) 5689 >>> int(descending) 9865 2020-01-30 >>> def kaprekar(number): ... diff = None ... while diff != 0: ... ascending = "".join(sorted(str(number))) ... descending = "".join(sorted(str(number), reverse=True)) ... print(ascending, descending) ... next_number = int(descending) - int(ascending) ... diff = number - next_number ... number = next_number ... >>> kaprekar(2777) 2777 7772 4599 9954 3555 5553 1899 9981 0288 8820 2358 8532 1467 7641 A: >>> x = [4,5,81,5,28958,28] # first list >>> print sorted(x) [4, 5, 5, 28, 81, 28958] >>> x [4, 5, 81, 5, 28958, 28] >>> x.sort() # sort the list in place >>> x [4, 5, 5, 28, 81, 28958] >>> x.append(1) # add to the list >>> x [4, 5, 5, 28, 81, 28958, 1] >>> sorted(x) [1, 4, 5, 5, 28, 81, 28958] As many others have pointed out, you can sort a number forwards like: >>> int(''.join(sorted(str(2314)))) 1234 That's pretty much the most standard way. Reverse a number? Doesn't work well in a number with trailing zeros. >>> y = int(''.join(sorted(str(2314)))) >>> y 1234 >>> int(str(y)[::-1]) 4321 The [::-1] notation indicates that the iterable is to be traversed in reverse order. A: As Mark Rushakoff already mentioned (but didn't solve) in his answer, str(n) doesn't handle numeric n with leading zeros, which you need for Kaprekar's operation. hughdbrown's answer similarly doesn't work with leading zeros. One way to make sure you have a four-character string is to use the zfill string method. For example: >>> n = 2 >>> str(n) '2' >>> str(n).zfill(4) '0002' You should also be aware that in versions of Python prior to 3, a leading zero in a numeric literal indicated octal: >>> str(0043) '35' >>> str(0378) File "<stdin>", line 1 str(0378) ^ SyntaxError: invalid token In Python 3, 0043 is not a valid numeric literal at all. A: I don't know the python syntax, but thinking generically, I would convert the input string into a character array, then do a sort on the character array, and lastly pipe it out.
How to sort digits in a number?
I'm trying to make an easy script in Python which takes a number and saves it in a variable, sorting the digits in ascending and descending order and saving both in separate variables, implementing Kaprekar's constant. It's probably a pretty noobish question, but I'm new to this and I couldn't find anything on Google that could help me. A site I found tried to explain a way using lists, but it didn't work out very well.
[ "Sort the digits in ascending and descending orders:\nascending = \"\".join(sorted(str(number)))\n\ndescending = \"\".join(sorted(str(number), reverse=True))\n\nLike this:\n>>> number = 5896\n>>> ascending = \"\".join(sorted(str(number)))\n>>>\n>>> descending = \"\".join(sorted(str(number), reverse=True))\n>>> ascending\n'5689'\n>>> descending\n'9865'\n\nAnd if you need them to be numbers again (not just strings), call int() on them:\n>>> int(ascending)\n5689\n>>> int(descending)\n9865\n\n\n2020-01-30\n>>> def kaprekar(number):\n... diff = None\n... while diff != 0:\n... ascending = \"\".join(sorted(str(number)))\n... descending = \"\".join(sorted(str(number), reverse=True))\n... print(ascending, descending)\n... next_number = int(descending) - int(ascending)\n... diff = number - next_number\n... number = next_number\n...\n>>> kaprekar(2777)\n2777 7772\n4599 9954\n3555 5553\n1899 9981\n0288 8820\n2358 8532\n1467 7641\n\n", ">>> x = [4,5,81,5,28958,28] # first list\n>>> print sorted(x)\n[4, 5, 5, 28, 81, 28958]\n>>> x\n[4, 5, 81, 5, 28958, 28]\n>>> x.sort() # sort the list in place\n>>> x\n[4, 5, 5, 28, 81, 28958]\n>>> x.append(1) # add to the list\n>>> x\n[4, 5, 5, 28, 81, 28958, 1]\n>>> sorted(x)\n[1, 4, 5, 5, 28, 81, 28958]\n\nAs many others have pointed out, you can sort a number forwards like:\n>>> int(''.join(sorted(str(2314))))\n1234\n\nThat's pretty much the most standard way.\nReverse a number? Doesn't work well in a number with trailing zeros.\n>>> y = int(''.join(sorted(str(2314))))\n>>> y\n1234\n>>> int(str(y)[::-1])\n4321\n\nThe [::-1] notation indicates that the iterable is to be traversed in reverse order.\n", "As Mark Rushakoff already mentioned (but didn't solve) in his answer, str(n) doesn't handle numeric n with leading zeros, which you need for Kaprekar's operation. hughdbrown's answer similarly doesn't work with leading zeros.\nOne way to make sure you have a four-character string is to use the zfill string method. For example:\n>>> n = 2\n>>> str(n)\n'2'\n>>> str(n).zfill(4)\n'0002'\n\nYou should also be aware that in versions of Python prior to 3, a leading zero in a numeric literal indicated octal:\n>>> str(0043)\n'35'\n>>> str(0378)\n File \"<stdin>\", line 1\n str(0378)\n ^\nSyntaxError: invalid token\n\nIn Python 3, 0043 is not a valid numeric literal at all.\n", "I don't know the python syntax, but thinking the generically, I would convert the input string into a character array, they do a sort on the character array, and lastly pipe it out.\n" ]
[ 19, 4, 4, 1 ]
[ "Here's an answer to the title question in Perl, with a bias toward sorting 4-digit numbers for the Kaprekar algorithm. In the example, replace 'shift' with the number to sort. It sorts digits in a 4-digit number with leading 0's ($asc is sorted in ascending order, $dec is descending), and outputs a number with leading 0's:\nmy $num = sprintf(\"%04d\", shift);\nmy $asc = sprintf(\"%04d\", join('', sort {$a <=> $b} split('', $num)));\nmy $dec = sprintf(\"%04d\", join('', sort {$b <=> $a} split('', $num)));\n\n" ]
[ -1 ]
[ "numbers", "python" ]
stackoverflow_0001301156_numbers_python.txt
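A sketch combining the answers above into one routine: zfill(4) keeps the leading zeros that a plain str() drops, which matters for Kaprekar's process. It assumes the input has at least two distinct digits, since repdigits such as 1111 collapse to 0 and never reach 6174.
def kaprekar_step(n):
    digits = str(n).zfill(4)                  # keep leading zeros, e.g. 0288
    ascending = int(''.join(sorted(digits)))
    descending = int(''.join(sorted(digits, reverse=True)))
    return descending - ascending

n = 2777
while n != 6174:                              # Kaprekar's constant
    n = kaprekar_step(n)
    print '%04d' % n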
Q: BDB Python Interface Error when Reading BDB bsddb.db.DBInvalidArgError: (22, 'Invalid argument -- /dbs/supermodels.db: unexpected file type or format') Is this error a result of incompatible BDB versions (1.85 or 3+)? If so, how do I check the versions, troubleshoot and solve this error? A: Yes, this certainly could be due to older versions of the db file, but it would help if you posted the code that generated this exception and the full traceback. In the absence of this, are you sure that the database file that you're opening is of the correct type? For example, attempting to open a btree file as if it is a hash raises the exception that you are seeing: >>> import bsddb >>> bt = bsddb.btopen('bt') >>> bt.close() >>> bsddb.hashopen('bt') Traceback (most recent call last): File "<stdin>", line 1, in ? File "/usr/lib/python2.4/bsddb/__init__.py", line 298, in hashopen d.open(file, db.DB_HASH, flags, mode) bsddb.db.DBInvalidArgError: (22, 'Invalid argument -- ./bt: unexpected file type or format') In *nix you can usually determine the type of db by using the file command, e.g. $ file /etc/aliases.db cert8.db /etc/aliases.db: Berkeley DB (Hash, version 8, native byte-order) cert8.db: Berkeley DB 1.85 (Hash, version 2, native byte-order) Opening a 1.85 version file fails with the same exception: >>> db = bsddb.hashopen('/etc/aliases.db') # works, but... >>> db = bsddb.hashopen('cert8.db') Traceback (most recent call last): File "<stdin>", line 1, in ? File "/usr/lib/python2.4/bsddb/__init__.py", line 298, in hashopen d.open(file, db.DB_HASH, flags, mode) bsddb.db.DBInvalidArgError: (22, 'Invalid argument -- ./cert8.db: unexpected file type or format') If you need to migrate the database files, you should look at the db_dump, db_dump185 and db_load utilities that come with the bdb distribution.
BDB Python Interface Error when Reading BDB
bsddb.db.DBInvalidArgError: (22, 'Invalid argument -- /dbs/supermodels.db: unexpected file type or format') Is this error a result of incompatible BDB versions (1.85 or 3+)? If so, how do I check the versions, troubleshoot and solve this error?
[ "Yes, this certainly could be due to older versions of the db file, but it would help if you posted the code that generated this exception and the full traceback.\nIn the absence of this, are you sure that the database file that you're opening is of the correct type? For example, attempting to open a btree file as if it is a hash raises the exception that you are seeing:\n>>> import bsddb\n>>> bt = bsddb.btopen('bt')\n>>> bt.close()\n>>> bsddb.hashopen('bt')\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in ?\n File \"/usr/lib/python2.4/bsddb/__init__.py\", line 298, in hashopen\n d.open(file, db.DB_HASH, flags, mode)\nbsddb.db.DBInvalidArgError: (22, 'Invalid argument -- ./bt: unexpected file type or format')\n\nIn *nix you can usually determine the type of db by using the file command, e.g.\n$ file /etc/aliases.db cert8.db \n/etc/aliases.db: Berkeley DB (Hash, version 8, native byte-order)\ncert8.db: Berkeley DB 1.85 (Hash, version 2, native byte-order)\n\nOpening a 1.85 version file fails with the same exception:\n>>> db = bsddb.hashopen('/etc/aliases.db') # works, but...\n>>> db = bsddb.hashopen('cert8.db')\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in ?\n File \"/usr/lib/python2.4/bsddb/__init__.py\", line 298, in hashopen\n d.open(file, db.DB_HASH, flags, mode)\nbsddb.db.DBInvalidArgError: (22, 'Invalid argument -- ./cert8.db: unexpected file type or format')\n\nIf you need to migrate the database files, you should look at the db_dump, db_dump185 and db_load utilities that come with the bdb distribuition.\n" ]
[ 1 ]
[]
[]
[ "berkeley_db", "python" ]
stackoverflow_0001336617_berkeley_db_python.txt
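A sketch of identifying the file's flavor from Python before opening it, mirroring the *nix file check in the answer; whichdb ships with the Python 2 standard library, while the bsddb185 module is a separate install on most modern systems, so treat that branch as an assumption.
import whichdb

kind = whichdb.whichdb('/dbs/supermodels.db')
print kind                          # e.g. 'dbhash', 'bsddb185', or '' if unknown
if kind == 'bsddb185':
    # Old 1.85-format file: dump and reload with db_dump185/db_load, or
    # open it directly with the separate bsddb185 module if installed.
    import bsddb185
    db = bsddb185.hashopen('/dbs/supermodels.db')
else:
    import bsddb
    db = bsddb.hashopen('/dbs/supermodels.db')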
Q: Parameter binding using GQL in Google App Engine Okay, so I have this model: class Posts(db.Model): rand1 = db.FloatProperty() #other models here and this controller: class Random(webapp.RequestHandler): def get(self): rand2 = random.random() posts_query = db.GqlQuery("SELECT * FROM Posts WHERE rand1 > :rand2 ORDER BY rand LIMIT 1") #Assigning values for Django templating template_values = { 'posts_query': posts_query, #test purposes 'rand2': rand2, } path = os.path.join(os.path.dirname(__file__), 'templates/random.html') self.response.out.write(template.render(path, template_values)) So when an entity is added a random float is generated (0-1) and then when I need to grab a random entity I want to be able to just use a simple SELECT query. It errors with: BadArgumentError('Missing named arguments for bind, requires argument rand2',) Now this works if I go: posts_query = db.GqlQuery("SELECT * FROM Posts WHERE rand1 > 1 ORDER BY rand LIMIT 1") So clearly my query is wrong; how does one use a variable in a WHERE statement? :S A: Substitute: "...WHERE rand1 > :rand2 ORDER BY rand LIMIT 1") with: "...WHERE rand1 > :rand2 ORDER BY rand LIMIT 1", rand2=rand2) Or "...WHERE rand1 > :1 ORDER BY rand LIMIT 1", rand2) See for more information: "The Gql query class" The funny thing is that I have just learned this about 2 hrs ago :P
Parameter binding using GQL in Google App Engine
Okay, so I have this model: class Posts(db.Model): rand1 = db.FloatProperty() #other models here and this controller: class Random(webapp.RequestHandler): def get(self): rand2 = random.random() posts_query = db.GqlQuery("SELECT * FROM Posts WHERE rand1 > :rand2 ORDER BY rand LIMIT 1") #Assigning values for Django templating template_values = { 'posts_query': posts_query, #test purposes 'rand2': rand2, } path = os.path.join(os.path.dirname(__file__), 'templates/random.html') self.response.out.write(template.render(path, template_values)) So when an entity is added a random float is generated (0-1) and then when I need to grab a random entity I want to be able to just use a simple SELECT query. It errors with: BadArgumentError('Missing named arguments for bind, requires argument rand2',) Now this works if I go: posts_query = db.GqlQuery("SELECT * FROM Posts WHERE rand1 > 1 ORDER BY rand LIMIT 1") So clearly my query is wrong; how does one use a variable in a WHERE statement? :S
[ "Substitute:\n \"...WHERE rand1 > :rand2 ORDER BY rand LIMIT 1\")\n\nwith:\n \"...WHERE rand1 > :rand2 ORDER BY rand LIMIT 1\", rand2=rand2)\n\nOr\n \"...WHERE rand1 > :1 ORDER BY rand LIMIT 1\", rand2)\n\nSee for more information: \"The Gql query class\"\n The funny thing is that I have just learned this about 2 hrs ago :P \n" ]
[ 3 ]
[]
[]
[ "binding", "django", "google_app_engine", "gql", "python" ]
stackoverflow_0001338704_binding_django_google_app_engine_gql_python.txt
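Both binding styles from the answer, side by side; the ORDER BY column is written rand1 here on the assumption that the thread's ORDER BY rand refers to the same property, and .get() pulls back a single entity.
import random
from google.appengine.ext import db

rand2 = random.random()

# Named binding: the :rand2 placeholder is supplied as a keyword argument.
by_name = db.GqlQuery(
    "SELECT * FROM Posts WHERE rand1 > :rand2 ORDER BY rand1 LIMIT 1",
    rand2=rand2)

# Positional binding: :1 is filled from the first positional argument.
by_position = db.GqlQuery(
    "SELECT * FROM Posts WHERE rand1 > :1 ORDER BY rand1 LIMIT 1",
    rand2)

post = by_name.get()                # a single entity, or None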
Q: python variable scope I have started to learn about Python and am currently reading through a script written by someone else. I noticed that globals are scattered throughout the script (and I don't like it). Besides that, I also noticed that when I have code like this def some_function(): foo.some_method() # some other code if __name__ == '__main__' : foo = Some_Object() some_function() even though I don't pass foo into some_function(), some_function is still able to manipulate foo (??!). I don't quite like this although it is somewhat similar to Javascript closure (?). I would like to know whether it is possible to stop some_function() from accessing foo if foo is not passed in as a function argument? Or is this the preferred way in Python??! (I'm using python 2.5 under ubuntu hardy at the moment) A: That script has really serious issues with style and organization -- for example, if somebody imports it they have to somehow divine the fact that they have to set thescript.foo to an instance of Some_Object before calling some_function... yeurgh!-) It's unfortunate that you're having to learn Python from a badly written script, but I'm not sure I understand your question. Variable scope in Python is locals (including arguments), nonlocals (i.e., locals of surrounding functions, for nested functions), globals, builtins. Is what you want to stop access to globals? some_function.func_globals is read-only, but you could make a new function with empty globals: import new f=new.function(some_function.func_code, {}) now calling f() will give an exception NameError: global name 'foo' is not defined. You could set this back in the module with the name some_function, or even do it systematically via a decorator, e.g.: def noglobal(f): return new.function(f.func_code, {}) ... @noglobal def some_function(): ... this will guarantee the exception happens whenever some_function is called. I'm not clear on what benefit you expect to derive from that, though. Maybe you can clarify...? A: As far as I know, the only way to stop some_function from accessing foo is to eliminate the foo variable from some_function's scope, possibly like: tmp = foo del foo some_function() foo = tmp Of course, this will crash your (current) code since foo doesn't exist in the scope of some_function anymore. In Python, variables are searched locally, then up in scope until globally, and finally built-ins are searched. Another option could be: with some_object as foo: some_function() But then, you'll have to at least declare some_object.__exit__, maybe some_object.__enter__ as well. The end result is that you control which foo is in the scope of some_function. More explanation on the "with" statement here. A: In python, something like foo is not a value, it's a name. When you try to access it, python tries to find the value associated with it (like dereferencing a pointer). It does this by first looking in the local scope (the function), then working its way outwards until it reaches the module scope (i.e. global), and finally builtins, until it finds something matching the name. That makes something like this work: def foo(): bar() def bar(): pass Despite the fact that bar doesn't exist when you defined foo, the function will work because you later defined bar in a scope that encloses foo. Exactly the same thing is going on in the code you post, it's just that foo is the output of Some_Object(), not a function definition. As Alex said, the fact that you can write code like that does not mean that you should.
python variable scope
I have started to learn about Python and am currently reading through a script written by someone else. I noticed that globals are scattered throughout the script (and I don't like it). Besides that, I also noticed that when I have code like this def some_function(): foo.some_method() # some other code if __name__ == '__main__' : foo = Some_Object() some_function() even though I don't pass foo into some_function(), some_function is still able to manipulate foo (??!). I don't quite like this although it is somewhat similar to Javascript closure (?). I would like to know whether it is possible to stop some_function() from accessing foo if foo is not passed in as a function argument? Or is this the preferred way in Python??! (I'm using python 2.5 under ubuntu hardy at the moment)
[ "That script has really serious issues with style and organization -- for example, if somebody imports it they have to somehow divine the fact that they have to set thescript.foo to an instance of Some_Object before calling some_function... yeurgh!-)\nIt's unfortunate that you're having to learn Python from a badly written script, but I'm not sure I understand your question. Variable scope in Python is locals (including arguments), nonlocals (i.e., locals of surrounding functions, for nested functions), globals, builtins.\nIs what you want to stop access to globals? some_function.func_globals is read-only, but you could make a new function with empty globals:\nimport new\nf=new.function(some_function.func_code, {})\n\nnow calling f() will given an exception NameError: global name 'foo' is not defined. You could set this back in the module with the name some_function, or even do it systematically via a decorator, e.g.:\ndef noglobal(f):\n return new.function(f.func_code, {})\n...\n@noglobal\ndef some_function(): ...\n\nthis will guarantee the exception happens whenever some_function is called. I'm not clear on what benefit you expect to derive from that, though. Maybe you can clarify...?\n", "As far as I know, the only way to stop some_function from accessing foo is to eliminate the foo variable from some_function's scope, possibly like:\ntmp = foo\ndel foo\nsome_function()\nfoo = tmp\n\nOf course, this will crash your (current) code since foo doesn't exist in the scope of some_function anymore.\nIn Python, variables are searched locally, then up in scope until globally, and finally built-ins are searched.\nAnother option could be:\nwith some_object as foo:\n some_function()\n\nBut then, you'll have to at least declare some_object.__exit__, maybe some_object.__enter__ as well. The end result is that you control which foo is in the scope of some_function. \nMore explanation on the \"with\" statement here.\n", "In python, something like foo is not a value, it's a name. When you try to access it, python tries to find the value associated with it (like dereferencing a pointer). It does this by first looking in the local scope (the function), then working its way outwards until it reaches the module scope (i.e. global), and finally builtins, until it finds something matching the name.\nThat makes something like this work:\ndef foo():\n bar()\n\ndef bar():\n pass\n\nDespite the fact that bar doesn't exist when you defined foo, the function will work because you later defined bar in a scope that encloses foo.\nExactly the same thing is going on in the code you post, it's just that foo is the output of Some_Object(), not a function definition.\nAs Alex said, the fact that you can write code like that does not mean that you should.\n" ]
[ 4, 2, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001338590_python.txt
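A runnable sketch of the noglobal decorator from the first answer; the new module is Python 2 only (types.FunctionType is the longer-lived spelling of the same constructor).
import new

def noglobal(f):
    # Rebuild the function with an empty globals dict.
    return new.function(f.func_code, {})

@noglobal
def some_function():
    foo.some_method()               # 'foo' is no longer resolvable anywhere

try:
    some_function()
except NameError, e:
    print e                         # global name 'foo' is not defined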
Q: Is it crazy to not rely on a caching system like memcached nowadays ( for dynamic sites )? I was just reviewing one of my client's applications which uses some old outdated php framework that doesn't rely on caching at all and is pretty much completely database dependent. I figure I'll just rewrite it from scratch because it's really outdated and in this rewrite I want to implement a caching system. It'd be nice if I could get a few pointers if anyone has done this prior. Rewrite will be done in either PHP or Python Would be nice if I could profile before and after this implementation I have my own server so I'm not restricted by shared hosting A: Caching, when it works right (==high hit rate), is one of the few general-purpose techniques that can really help with latency -- the harder part of problems generically described as "performance". You can enhance QPS (queries per second) measures of performance just by throwing more hardware at the problem -- but latency doesn't work that way (i.e., it doesn't take just one month to make a baby if you set nine mothers to work on it;-). However, the main resource used by caching is typically memory (RAM or disk as it may be). As you mention in a comment that the only performance problem you observe is memory usage, caching wouldn't help: it would just earmark some portion of memory to use for caching purposes, leaving even less available as a "general fund". As a resident of California I'm witnessing first-hand what happens when too many resources are earmarked, and I couldn't recommend such a course of action with a clear conscience!-) A: If your site performance is fine then there's no reason to add caching. Lots of sites can get by without any cache at all, or by moving to a file-system based cache. It's only the super high traffic sites that need memcached. What's "crazy" is code architecture (or a lack of architecture) that makes adding caching later difficult. A: Since Python is one of your choices, I would go with Django. Built-in caching mechanism, and I've been using this debug_toolbar to help me while developing/profiling. By the way, memcached does not work the way you've described. It maps unique keys to values in memory, it has nothing to do with .csh files or database queries. What you store in a value is what's going to be cached. Oh, and caching is only worth it if there are (or will be) performance problems. There's nothing wrong with "not relying" on caches if you don't need them. Premature optimization is 99% evil! A: Depending on the specific nature of the codebase and traffic patterns, you might not even need to re-write the whole site. Horribly inefficient code is not such a big deal if it can be bypassed via cache for 99.9% of page requests. When choosing PHP or Python, make sure you figure out where you're going to host the site (or if you even get to make that call). Many of my clients are already set up on a webserver and Python is not an option. You should also make sure any databases/external programs you want to interface with are well-supported in PHP or Python.
Is it crazy to not rely on a caching system like memcached nowadays ( for dynamic sites )?
I was just reviewing one of my client's applications which uses some old outdated php framework that doesn't rely on caching at all and is pretty much completely database dependent. I figure I'll just rewrite it from scratch because it's really outdated and in this rewrite I want to implement a caching system. It'd be nice if I could get a few pointers if anyone has done this prior. Rewrite will be done in either PHP or Python Would be nice if I could profile before and after this implementation I have my own server so I'm not restricted by shared hosting
[ "Caching, when it works right (==high hit rate), is one of the few general-purpose techniques that can really help with latency -- the harder part of problems generically describes as \"performance\". You can enhance QPS (queries per second) measures of performance just by throwing more hardware at the problem -- but latency doesn't work that way (i.e., it doesn't take just one month to make a babies if you set nine mothers to work on it;-).\nHowever, the main resource used by caching is typically memory (RAM or disk as it may be). As you mention in a comment that the only performance problem you observe is memory usage, caching wouldn't help: it would just earmark some portion of memory to use for caching purposes, leaving even less available as a \"general fund\". As a resident of California I'm witnessing first-hand what happens when too many resources are earmarked, and I couldn't recommend such a course of action with a clear conscience!-)\n", "If your site performance is fine then there's no reason to add caching. Lots of sites can get by without any cache at all, or by moving to a file-system based cache. It's only the super high traffic sites that need memcached.\nWhat's \"crazy\" is code architecture (or a lack of architecture) that makes adding caching in latter difficult. \n", "Since Python is one of your choices, I would go with Django. Built-in caching mechanism, and I've been using this debug_toolbar to help me while developing/profiling.\nBy the way, memcached does not work the way you've described. It maps unique keys to values in memory, it has nothing to do with .csh files or database queries. What you store in a value is what's going to be cached.\nOh, and caching is only worth if there are (or will be) performance problems. There's nothing wrong with \"not relying\" with caches if you don't need it. Premature optimization is 99% evil!\n", "Depending on the specific nature of the codebase and traffic patterns, you might not even need to re-write the whole site. Horribly inefficient code is not such a big deal if it can be bypassed via cache for 99.9% of page requests.\nWhen choosing PHP or Python, make sure you figure out where you're going to host the site (or if you even get to make that call). Many of my clients are already set up on a webserver and Python is not an option. You should also make sure any databases/external programs you want to interface with are well-supported in PHP or Python.\n" ]
[ 10, 6, 3, 0 ]
[]
[]
[ "memcached", "php", "python", "scalability" ]
stackoverflow_0001338777_memcached_php_python_scalability.txt
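For concreteness, a minimal cache-aside sketch with the python-memcached client; the key scheme, the five-minute TTL, and expensive_query are illustrative assumptions rather than anything from the thread.
import memcache

mc = memcache.Client(['127.0.0.1:11211'])

def get_page(page_id):
    key = 'page:%s' % page_id
    html = mc.get(key)
    if html is None:                      # miss: fall back to the database
        html = expensive_query(page_id)   # hypothetical slow DB call
        mc.set(key, html, time=300)       # cache for five minutes
    return html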
Q: What is wrong when SimpleXMLRPC and DBusGMainLoop work at the same time In Python I am trying to create a service that relays call events between SFLphone (a D-Bus service) and an external app. When I start the SimpleXMLRPCServer, my service no longer responds to any call events; for example, the on_call_state_changed function is never called. When I comment out thread.start_new_thread(start_server(s,)) everything works well. I don't know how to make these two things work together. Can anyone help? Thanks. import dbus from dbus.mainloop.glib import DBusGMainLoop import gobject from gobject import GObject from SimpleXMLRPCServer import SimpleXMLRPCServer import thread from os import path class SlfPhoneConnector : def __init__(self) : self.activeCalls = {} account = { "username" : "1111", "Account.type" : "SIP", "hostname" : "192.168.1.109", "Account.alias" : "1111", "password":"1111", "Account.enable" : "TRUE" } session = dbus.SessionBus() conf_obj = session.get_object("org.sflphone.SFLphone", "/org/sflphone/SFLphone/ConfigurationManager") self.conf_mgr = dbus.Interface(conf_obj ,"org.sflphone.SFLphone.ConfigurationManager") call_obj = session.get_object("org.sflphone.SFLphone", "/org/sflphone/SFLphone/CallManager") self.call_mgr = dbus.Interface(call_obj ,"org.sflphone.SFLphone.CallManager") self.call_mgr.connect_to_signal('incomingCall', self.on_incoming_call) self.call_mgr.connect_to_signal('callStateChanged', self.on_call_state_changed) self.account_id = self.conf_mgr.addAccount(account) self.conf_mgr.sendRegister(self.account_id, 1) #self.call_mgr.placeCall(self.account_id, self.account_id, "2222" ) def on_incoming_call(self, account, callid, to): print "Incoming call: " + account + ", " + callid + ", " + to self.activeCalls[callid] = {'Account': account, 'To': to, 'State': '' } self.call_mgr.accept(callid) # On call state changed event, set the values for new calls, # or delete the call from the list of active calls def on_call_state_changed(self, callid, state): print "Call state changed: " + callid + ", " + state if state == "HUNGUP": try: del self.activeCalls[callid] except KeyError: print "Call " + callid + " didn't exist. Cannot delete." elif state in [ "RINGING", "CURRENT", "INCOMING", "HOLD" ]: try: self.activeCalls[callid]['State'] = state except KeyError, e: print "This call didn't exist!: " + callid + ". Adding it to the list." callDetails = self.getCallDetails(callid) self.activeCalls[callid] = {'Account': callDetails['ACCOUNTID'], 'To': callDetails['PEER_NUMBER'], 'State': state } elif state in [ "BUSY", "FAILURE" ]: try: del self.activeCalls[callid] except KeyError, e: print "This call didn't exist!: " + callid def getCallDetails(self, callid): """Return informations on this call if exists""" return self.call_mgr.getCallDetails(callid) def place_call(self, callid): self.call_mgr.placeCall(self.account_id, self.account_id, callid) def hangup(self) : call0 = self.activeCalls.keys()[0] self.call_mgr.hangUp(call0) def start_server(obj): server = SimpleXMLRPCServer( ("localhost", 9988), allow_none= True) server.register_instance(obj) print "server start @localhost 9988 forever ..." server.serve_forever() if __name__ == "__main__" : DBusGMainLoop(set_as_default=True) s = SlfPhoneConnector() thread.start_new_thread(start_server(s,)) ... {{ another code here }} #loop = gobject.MainLoop() #loop.run() A: Try adding this after the ifmain trick: gobject.threads_init() dbus.glib.init_threads()
What is wrong when SimpleXMLRPC and DBusGMainLoop run at the same time
In Python I am trying to create a service that relays call events between SFLphone (a D-Bus service) and an external app. When I start the SimpleXMLRPCServer, my service no longer responds to any call event; for example, the on_call_state_changed function is not called. When I comment out thread.start_new_thread(start_server(s,)), everything works well. I don't know how to make these two things work together. Can anyone help? Thanks. import dbus from dbus.mainloop.glib import DBusGMainLoop import gobject from gobject import GObject from SimpleXMLRPCServer import SimpleXMLRPCServer import thread from os import path class SlfPhoneConnector : def __init__(self) : self.activeCalls = {} account = { "username" : "1111", "Account.type" : "SIP", "hostname" : "192.168.1.109", "Account.alias" : "1111", "password":"1111", "Account.enable" : "TRUE" } session = dbus.SessionBus() conf_obj = session.get_object("org.sflphone.SFLphone", "/org/sflphone/SFLphone/ConfigurationManager") self.conf_mgr = dbus.Interface(conf_obj ,"org.sflphone.SFLphone.ConfigurationManager") call_obj = session.get_object("org.sflphone.SFLphone", "/org/sflphone/SFLphone/CallManager") self.call_mgr = dbus.Interface(call_obj ,"org.sflphone.SFLphone.CallManager") self.call_mgr.connect_to_signal('incomingCall', self.on_incoming_call) self.call_mgr.connect_to_signal('callStateChanged', self.on_call_state_changed) self.account_id = self.conf_mgr.addAccount(account) self.conf_mgr.sendRegister(self.account_id, 1) #self.call_mgr.placeCall(self.account_id, self.account_id, "2222" ) def on_incoming_call(self, account, callid, to): print "Incoming call: " + account + ", " + callid + ", " + to self.activeCalls[callid] = {'Account': account, 'To': to, 'State': '' } self.call_mgr.accept(callid) # On call state changed event, set the values for new calls, # or delete the call from the list of active calls def on_call_state_changed(self, callid, state): print "Call state changed: " + callid + ", " + state if state == "HUNGUP": try: del self.activeCalls[callid] except KeyError: print "Call " + callid + " didn't exist. Cannot delete." elif state in [ "RINGING", "CURRENT", "INCOMING", "HOLD" ]: try: self.activeCalls[callid]['State'] = state except KeyError, e: print "This call didn't exist!: " + callid + ". Adding it to the list." callDetails = self.getCallDetails(callid) self.activeCalls[callid] = {'Account': callDetails['ACCOUNTID'], 'To': callDetails['PEER_NUMBER'], 'State': state } elif state in [ "BUSY", "FAILURE" ]: try: del self.activeCalls[callid] except KeyError, e: print "This call didn't exist!: " + callid def getCallDetails(self, callid): """Return information on this call if it exists""" return self.call_mgr.getCallDetails(callid) def place_call(self, callid): self.call_mgr.placeCall(self.account_id, self.account_id, callid) def hangup(self) : call0 = self.activeCalls.keys()[0] self.call_mgr.hangUp(call0) def start_server(obj): server = SimpleXMLRPCServer( ("localhost", 9988), allow_none= True) server.register_instance(obj) print "server start @localhost 9988 forever ..." server.serve_forever() if __name__ == "__main__" : DBusGMainLoop(set_as_default=True) s = SlfPhoneConnector() thread.start_new_thread(start_server(s,)) ... {{ another code here }} #loop = gobject.MainLoop() #loop.run()
[ "Try adding after ifmain trick:\ngobject.threads_init()\ndbus.glib.init_threads()\n\n" ]
[ 0 ]
[]
[]
[ "dbus", "python", "sip", "voip", "xml_rpc" ]
stackoverflow_0001339003_dbus_python_sip_voip_xml_rpc.txt
Q: How to fetch rows from the table below using a Google App Engine GQL query (Python)? List_name Email ========== ================== andrew [email protected] adam [email protected] smith [email protected] john [email protected] andrew [email protected] adam [email protected] smith [email protected] john [email protected] andrew [email protected] adam [email protected] smith [email protected] john [email protected] andrew [email protected] adam [email protected] smith [email protected] john [email protected] In the above table the email addresses are repeated; I want to display all emails in the table without repetition. There are only 4 distinct emails here, so I want to retrieve only 4 rows from the table. Is this possible using a GQL query in Google App Engine? One more thing: I want to use paging to display the emails (10 emails per page), i.e., if there are 15 emails in the table, I need to display 10 emails on the 1st page and 5 emails on the 2nd page. Paging is very important! A: If you're accustomed to working with a relational database, Google App Engine can seem unusual. The query syntax is very limited. Instead of putting everything into a simple table and writing complicated queries, you have to put everything into complicated data structures and then write simple queries. You should get used to creating several different Entities, usually one for each primary key for a possible query. In this case, you should have an Entity UniqueEmailAddress or something like that. Every time you add a record, get the UniqueEmailAddress with that name and update it, or create it if it doesn't exist. Then you can just query on UniqueEmailAddress directly. A: You can use the set function in Python to remove all the duplicates. I don't think there is a way of doing it in GQL. >>> basket = ['apple', 'orange', 'apple', 'pear', 'orange', 'banana'] >>> fruit = set(basket) # create a set without duplicates >>> fruit set(['orange', 'pear', 'apple', 'banana'])
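For illustration, a hedged sketch of the UniqueEmailAddress pattern from the first answer, using the old google.appengine.ext.db API; the model, the helper names, and the simple offset-based paging are assumptions, not part of the original answer:

from google.appengine.ext import db

class UniqueEmailAddress(db.Model):
    email = db.StringProperty(required=True)

def record_email(email):
    # keying the entity on the address itself collapses duplicates into one row
    UniqueEmailAddress.get_or_insert(key_name=email, email=email)

def emails_page(page, per_page=10):
    # naive offset paging: 10 emails per page, as the question asks
    query = UniqueEmailAddress.all().order('email')
    return query.fetch(per_page, offset=(page - 1) * per_page)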
How to fetch rows from the table below using a Google App Engine GQL query (Python)?
List_name Email ========== ================== andrew [email protected] adam [email protected] smith [email protected] john [email protected] andrew [email protected] adam [email protected] smith [email protected] john [email protected] andrew [email protected] adam [email protected] smith [email protected] john [email protected] andrew [email protected] adam [email protected] smith [email protected] john [email protected] In the above table the email addresses are repeated; I want to display all emails in the table without repetition. There are only 4 distinct emails here, so I want to retrieve only 4 rows from the table. Is this possible using a GQL query in Google App Engine? One more thing: I want to use paging to display the emails (10 emails per page), i.e., if there are 15 emails in the table, I need to display 10 emails on the 1st page and 5 emails on the 2nd page. Paging is very important!
[ "If you're accustomed to working with a relational database, Google App Engine can seem unusual. The query syntax is very limited. Instead of putting everything into a simple table and writing complicated queries, you have to put everything into complicated data structures and then write simple queries.\nYou should get used to creating several different Entities, usually one for each primary key for a possible query. In this case, you should have an Entity UniqueEmailAddress or something like that. Every time you add a record, get the UniqueEmailAddress with that name and update it, or create it if it doesn't exist. Then you can just query on UniqueEmailAddress directly.\n", "You can use the set function in python you will be able to remove all the duplication. I don't think there is a way of doing it GQL.\n>>> basket = ['apple', 'orange', 'apple', 'pear', 'orange', 'banana']\n>>> fruit = set(basket) # create a set without duplicates\n>>> fruit\nset(['orange', 'pear', 'apple', 'banana'])\n\n" ]
[ 4, 1 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0001339346_google_app_engine_python.txt
Q: accessing base class primitive type in python I am trying to derive a class from a python primitive, the float, for the purpose of printing a different repr string when it's printed out. How do I access the underlying data from the derived class when I do this? Here's a simplified example of what I am trying to do: class efloat(float): def __repr__(self): return "here's my number: %s" % str(WHAT CAN I PUT HERE???) Ok, thanks folks! I think I get it now. Here's the finished class for anyone who's curious: import math class efloat(float): """efloat(x) -> floating point number with engineering representation when printed Convert a string or a number to a floating point number, if possible. When asked to render itself for printing (via str() or print) it is normalized to engineering style notation at powers of 10 in multiples of 3 (for micro, milli, kilo, mega, giga, etc.) """ def _exponent(self): if self == 0.0: ret = 0 else: ret = math.floor(math.log10(abs(self))) return ret def _mantissa(self): return self/math.pow(10, self._exponent()) def _asEng(self): shift = self._exponent() % 3 retval = "%3.12ge%+d" % (self._mantissa()*math.pow(10, shift), self._exponent() - shift) return retval def __str__(self): return self._asEng() def __repr__(self): return str(self) def __add__(self, x): return efloat(float.__add__(self, float(x))) def __radd__(self, x): return efloat(float.__add__(self, float(x))) def __mul__(self, x): return efloat(float.__mul__(self, float(x))) def __rmul__(self, x): return efloat(float.__mul__(self, float(x))) def __sub__(self, x): return efloat(float.__sub__(self, float(x))) def __rsub__(self, x): return efloat(float.__rsub__(self, float(x))) def __div__(self, x): return efloat(float.__div__(self, float(x))) def __rdiv__(self, x): return efloat(float.__rdiv__(self, float(x))) def __truediv__(self, x): return efloat(float.__truediv__(self, float(x))) def __rtruediv__(self, x): return efloat(float.__rtruediv__(self, float(x))) def __pow__(self, x): return efloat(float.__pow__(self, float(x))) def __rpow__(self, x): return efloat(float.__rpow__(self, float(x))) def __divmod__(self, x): return efloat(float.__divmod__(self, float(x))) def __neg__(self): return efloat(float.__neg__(self)) def __floordiv__(self, x): return efloat(float.__floordiv__(self, float(x))) A: If you don't override __str__, that will still access the underlying method, so: class efloat(float): def __repr__(self): return "here's my number: %s" % self will work. More generally, you could use self+0, self*1, or any other identity manipulation that you did not explicitly override; if you overrode them all, worst case, float.__add__(self, 0) or the like. A: You can call the base class methods, by accessing them off the base class to get an unbound method and call them with self: class myfloat(float): def __str__(self): return "My float is " + float.__str__(self) print(myfloat(4.5))
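A quick standalone check of the two techniques shown in the answers of this record; the trimmed demo classes are illustrative, not the full efloat above (Python 2 syntax, matching the thread):

class repr_demo(float):
    def __repr__(self):
        return "here's my number: %s" % self  # "%s" % self uses the inherited __str__

class str_demo(float):
    def __str__(self):
        return "My float is " + float.__str__(self)  # explicit base-class call

print repr(repr_demo(4.5))  # here's my number: 4.5
print str_demo(4.5)         # My float is 4.5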
accessing base class primitive type in python
I am trying to derive a class from a python primitive, the float, for the purpose of printing a different repr string when it's printed out. How do I access the underlying data from the derived class when I do this? Here's a simplified example of what I am trying to do: class efloat(float): def __repr__(self): return "here's my number: %s" % str(WHAT CAN I PUT HERE???) Ok, thanks folks! I think I get it now. Here's the finished class for anyone who's curious: import math class efloat(float): """efloat(x) -> floating point number with engineering representation when printed Convert a string or a number to a floating point number, if possible. When asked to render itself for printing (via str() or print) it is normalized to engineering style notation at powers of 10 in multiples of 3 (for micro, milli, kilo, mega, giga, etc.) """ def _exponent(self): if self == 0.0: ret = 0 else: ret = math.floor(math.log10(abs(self))) return ret def _mantissa(self): return self/math.pow(10, self._exponent()) def _asEng(self): shift = self._exponent() % 3 retval = "%3.12ge%+d" % (self._mantissa()*math.pow(10, shift), self._exponent() - shift) return retval def __str__(self): return self._asEng() def __repr__(self): return str(self) def __add__(self, x): return efloat(float.__add__(self, float(x))) def __radd__(self, x): return efloat(float.__add__(self, float(x))) def __mul__(self, x): return efloat(float.__mul__(self, float(x))) def __rmul__(self, x): return efloat(float.__mul__(self, float(x))) def __sub__(self, x): return efloat(float.__sub__(self, float(x))) def __rsub__(self, x): return efloat(float.__rsub__(self, float(x))) def __div__(self, x): return efloat(float.__div__(self, float(x))) def __rdiv__(self, x): return efloat(float.__rdiv__(self, float(x))) def __truediv__(self, x): return efloat(float.__truediv__(self, float(x))) def __rtruediv__(self, x): return efloat(float.__rtruediv__(self, float(x))) def __pow__(self, x): return efloat(float.__pow__(self, float(x))) def __rpow__(self, x): return efloat(float.__rpow__(self, float(x))) def __divmod__(self, x): return efloat(float.__divmod__(self, float(x))) def __neg__(self): return efloat(float.__neg__(self)) def __floordiv__(self, x): return efloat(float.__floordiv__(self, float(x)))
[ "If you don't override __str__, that will still access the underlying method, so:\nclass efloat(float):\n def __repr__(self):\n return \"here's my number: %s\" % self\n\nwill work. More generally, you could use self+0, self*1, or any other identity manipulation that you did not explicitly override; if you overrode them all, worst case, float.__add__(self, 0) or the like.\n", "You can call the base class methods, by accessing them off the base class to get an unbound method and call them with self:\nclass myfloat(float):\n def __str__(self):\n return \"My float is \" + float.__str__(self)\n\nprint(myfloat(4.5))\n\n" ]
[ 4, 2 ]
[]
[]
[ "class", "floating_point", "python" ]
stackoverflow_0001338858_class_floating_point_python.txt
Q: Accessing dictionary with class attribute I am working with Python now, and I have a question about dicts. Suppose I have a dict: config = {'account_receivable': '4', 'account_payable': '5', 'account_cogs': '8', 'account_retained_earning': '9', 'account_income': '6', 'account_expense': '31', 'duration': 2, 'financial_year_month': 9, 'financial_year_day': 15, 'account_cash': '3', 'account_inventory': '2', 'account_accumulated_depriciation': '34', 'account_depriciation_expense': '35', 'account_salary_expense': '30', 'account_payroll_payable': '68', 'account_discount': '36', 'financial_year_close': '2008-08-08'} If I print config['account_receivable'], it will return its corresponding value, 4, but I want to access it this way: config.account_receivable, and have it return the corresponding value. How can I implement this? A: For that purpose, lo these many years ago, I invented the simple Bunch idiom; one simple way to implement Bunch is: class Bunch(object): def __init__(self, adict): self.__dict__.update(adict) If config is a dict, you can't use config.account_receivable -- that's absolutely impossible, because a dict doesn't have that attribute, period. However, you can wrap config into a Bunch: cb = Bunch(config) and then access cb.account_receivable to your heart's content! Edit: if you want attribute assignment on the Bunch to also affect the original dict (config in this case), so that e.g. cb.foo = 23 will do config['foo'] = 23, you need a slightly different implementation of Bunch: class RwBunch(object): def __init__(self, adict): self.__dict__ = adict Normally, the plain Bunch is preferred, exactly because, after instantiation, the Bunch instance and the dict it was "primed" from are entirely decoupled -- changes to either of them do not affect the other; and such decoupling, most often, is what's desired. When you do want "coupling" effects, then RwBunch is the way to get them: with it, every attribute setting or deletion on the instance will intrinsically set or delete the item from the dict, and, vice versa, setting or deleting items from the dict will intrinsically set or delete attributes from the instance. A: You can do this with collections.namedtuple: from collections import namedtuple config_object = namedtuple('ConfigClass', config.keys())(*config.values()) print config_object.account_receivable You can learn more about namedtuple here: http://docs.python.org/dev/library/collections.html A: Have a look at Convert Python dict to object?. A: You need to use one of Python's special methods. class config(object): def __init__(self, data): self.data = data def __getattr__(self, name): return self.data[name] c = config(data_dict) print c.account_discount -> 36 A: Well, you could do it with a bunch of objects. class Config(object): pass config = Config() config.account_receivable = 4 print config.account_receivable Obviously you can extend this class to do more for you. e.g. define __init__ so you can create it with arguments, and perhaps defaults. You could possibly also use a namedtuple (python 2.4/2.5 link). This is a data structure specifically designed to hold structured records. from collections import namedtuple Config = namedtuple('Config', 'account_receivable account_payable') # etc -- list all the fields c = Config(account_receivable='4', account_payable='5') print c.account_receivable With namedtuples, you cannot change values once they have been set.
A: You can subclass dict to return items from itself for undefined attributes: class AttrAccessibleDict(dict): def __getattr__(self, key): try: return self[key] except KeyError: raise AttributeError(key) config = AttrAccessibleDict(config) print(config.account_receivable) You might also want to override some other methods as well, such as __setattr__, __delattr__, __str__, __repr__ and copy.
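As a quick hedged usage demo of the Bunch idiom from the first answer, applied to the question's config dict (Bunch as defined above; the printed values follow from that dict):

cb = Bunch(config)
print cb.account_receivable    # -> '4'
print cb.financial_year_close  # -> '2008-08-08'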
Accessing dictionary with class attribute
I am working with Python now, and I have a question about dicts. Suppose I have a dict: config = {'account_receivable': '4', 'account_payable': '5', 'account_cogs': '8', 'account_retained_earning': '9', 'account_income': '6', 'account_expense': '31', 'duration': 2, 'financial_year_month': 9, 'financial_year_day': 15, 'account_cash': '3', 'account_inventory': '2', 'account_accumulated_depriciation': '34', 'account_depriciation_expense': '35', 'account_salary_expense': '30', 'account_payroll_payable': '68', 'account_discount': '36', 'financial_year_close': '2008-08-08'} If I print config['account_receivable'], it will return its corresponding value, 4, but I want to access it this way: config.account_receivable, and have it return the corresponding value. How can I implement this?
[ "For that purpose, lo that many years ago, I invented the simple Bunch idiom; one simple way to implement Bunch is:\nclass Bunch(object):\n def __init__(self, adict):\n self.__dict__.update(adict)\n\nIf config is a dict, you can't use config.account_receivable -- that's absolutely impossible, because a dict doesn't have that attribute, period. However, you can wrap config into a Bunch:\ncb = Bunch(config)\n\nand then access cb.config_account to your heart's content!\nEdit: if you want attribute assignment on the Bunch to also affect the original dict (config in this case), so that e.g. cb.foo = 23 will do config['foo'] = 23, you need a slighly different implementation of Bunch:\nclass RwBunch(object):\n def __init__(self, adict):\n self.__dict__ = adict\n\nNormally, the plain Bunch is preferred, exactly because, after instantiation, the Bunch instance and the dict it was \"primed\" from are entirely decoupled -- changes to either of them do not affect the other; and such decoupling, most often, is what's desired.\nWhen you do want \"coupling\" effects, then RwBunch is the way to get them: with it, every attribute setting or deletion on the instance will intrinsically set or delete the item from the dict, and, vice versa, setting or deleting items from the dict will intrinsically set or delete attributes from the instance.\n", "You can do this with collections.namedtuple:\nfrom collections import namedtuple\nconfig_object = namedtuple('ConfigClass', config.keys())(*config.values())\nprint config_object.account_receivable\n\nYou can learn more about namedtuple here:\nhttp://docs.python.org/dev/library/collections.html\n", "Have a look at Convert Python dict to object?.\n", "You need to use one of Python's special methods.\nclass config(object):\n def __init__(self, data):\n self.data = data\n def __getattr__(self, name):\n return self.data[name]\n\n\nc = config(data_dict)\nprint c.account_discount\n-> 36\n\n", "Well, you could do it with a bunch of objects.\nclass Config(object):\n pass\n\nconfig = Config()\nconfig.account_receivable = 4\nprint config.account_receivable\n\nObviously you can extend this class to do more for you. e.g. define __init__ so you can create it with arguments, and perhaps defaults.\nYou could possibly also use a namedtuple (python 2.4/2.5 link). This is a data structure specifically designed to hold structured records.\nfrom collections import namedtuple\nConfig = namedtuple('Config', 'account_receivable account_payable') # etc -- list all the fields\nc = Config(account_receivable='4', account_payable='5')\nprint c.account_receivable\n\nWith namedtuples, you cannot change values once they have been set.\n", "You can subclass dict to return items from itself for undefined attributes:\nclass AttrAccessibleDict(dict):\n def __getattr__(self, key):\n try:\n return self[key]\n except KeyError: \n return AttributeError(key)\n\nconfig = AttrAccessibleDict(config)\nprint(config.account_receivable)\n\nYou might also want to override some other methods as well, such as __setattr__, __delattr__, __str__, __repr__ and copy.\n" ]
[ 13, 7, 3, 2, 0, 0 ]
[]
[]
[ "dictionary", "python" ]
stackoverflow_0001338714_dictionary_python.txt
Q: Python: Persistent shell variables in subprocess I'm trying to execute a series of commands using Python's subprocess module; however, I need to set shell variables with export before running them. Of course the shell doesn't seem to be persistent, so when I run a command later those shell variables are lost. Is there any way to go about this? I could create a /bin/sh process, but how would I get the exit codes of the commands run under that? A: subprocess.Popen takes an optional named argument env that's a dictionary to use as the subprocess's environment (what you're describing as "shell variables"). Prepare a dict as you need it (you may start with a copy of os.environ and alter that as you need) and pass it to all the subprocess.Popen calls you perform. A: Alex is absolutely correct. To give an example: import os current_env = os.environ.copy() current_env["XXX"] = "SOMETHING" # if you want to change some env variable subprocess.Popen("command_n_args", env=current_env)
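A hedged, self-contained sketch of the env-dict approach that also covers the exit-code half of the question via subprocess.call; the MY_VAR name and the commands are illustrative only:

import os
import subprocess

env = os.environ.copy()
env["MY_VAR"] = "value"                 # plays the role of `export MY_VAR=value`

for cmd in (["/bin/true"], ["/bin/false"]):
    rc = subprocess.call(cmd, env=env)  # call() blocks and returns the exit code
    print cmd[0], "exited with", rc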
Python: Persistent shell variables in subprocess
I'm trying to execute a series of commands using Python's subprocess module; however, I need to set shell variables with export before running them. Of course the shell doesn't seem to be persistent, so when I run a command later those shell variables are lost. Is there any way to go about this? I could create a /bin/sh process, but how would I get the exit codes of the commands run under that?
[ "subprocess.Popen takes an optional named argument env that's a dictionary to use as the subprocess's environment (what you're describing as \"shell variables\"). Prepare a dict as you need it (you may start with a copy of os.environ and alter that as you need) and pass it to all the subprocess.Popen calls you perform.\n", "Alex is absolutely correct. To give an example\ncurrent_env=environ.copy()\ncurrent_env[\"XXX\"] = \"SOMETHING\" #If you want to change some env variable\nsubProcess.Popen(\"command_n_args\", env=current_env)\n\n" ]
[ 13, 5 ]
[]
[]
[ "persistent", "python", "shell", "subprocess", "variables" ]
stackoverflow_0001126116_persistent_python_shell_subprocess_variables.txt
Q: Why do I get 'service unavailable' with multiple chat sends when using XMPP? I have made a simple IM client in both Python and C#, using a few different XMPP libraries for each. They work very well as simple autoresponders, or trivial bots, but when I turn them into chat rooms (i.e., a message gets reflected to many other JIDs), I suddenly start getting 503 service-unavailable responses from the Google talk server. Where should I start looking to resolve this issue? Given that I have used several languages and libraries, I don't think this is a problem with my particular setup. I am using the various examples provided with the libraries. A: Do you have all the people you try to send messages to in your roster? Otherwise GTalk won't allow the message to be sent and instead returns error 503. There was a Pidgin bug tracker ticket describing a similar problem: Pidgin #4236 If you're sure you have all the JIDs in your roster, you should also check how many messages are sent in parallel. Google will limit the count of messages a single JID is allowed to send in a specified period of time. A: If you're looking to create actual chat rooms, why not rather get a jabber server to host those (following http://xmpp.org/extensions/xep-0045.html - ejabberd has these as default and there are plugins for most jabber servers to implement them), and then have your bot join that room (most clients support this - Google Talk doesn't unfortunately)?
Why do I get 'service unavailable' with multiple chat sends when using XMPP?
I have made a simple IM client in both Python and C#, using a few different XMPP libraries for each. They work very well as simple autoresponders, or trivial bots, but when I turn them into chat rooms (i.e., a message gets reflected to many other JIDs), I suddenly start getting 503 service-unavailable responses from the Google talk server. Where should I start looking to resolve this issue? Given that I have used several languages and libraries, I don't think this is a problem with my particular setup. I am using the various examples provided with the libraries.
[ "Do you have all people you try to send messages to in your rooster?\nOtherwise GTalk won't allow the message to be sent and instead return Error 503.\nThere was a pidgin bug tracker describing a similar problem:\nPidgin #4236 \nIf you're sure you have all the JIDs in your rooster you should also check how manny messages are send in parallel. Google will limit the count of messages a single\nJID is allowed to send in a specified period of time.\n", "If you're looking to create actual chat rooms, why not rather get a jabber server to host those (following http://xmpp.org/extensions/xep-0045.html - ejabberd has these as default and there are plugins for most jabber servers to implement them), and then have your bot join that room (most clients support this - Google Talk doesn't unfortunately)?\n" ]
[ 2, 1 ]
[]
[]
[ "c#", "python", "xmpp" ]
stackoverflow_0001323693_c#_python_xmpp.txt
Q: Attribute Error in Python I'm trying to add a unittest attribute to an object in Python import unittest class Boy: def run(self, args): print("Hello") class BoyTest(unittest.TestCase): def test(self): self.assertEqual('2', '2') def self_test(): suite = unittest.TestSuite() loader = unittest.TestLoader() suite.addTest(loader.loadTestsFromTestCase(Boy.BoyTest)) return suite However, I keep getting "AttributeError: class Boy has no attribute 'BoyTest'" whenever I call self_test(). Why? A: As the argument of loadTestsFromTestCase, you're trying to access Boy.BoyTest, i.e., the BoyTest attribute of class object Boy, which just doesn't exist, as the error msg is telling you. Why don't you just use BoyTest there instead?
Attribute Error in Python
I'm trying to add a unittest attribute to an object in Python import unittest class Boy: def run(self, args): print("Hello") class BoyTest(unittest.TestCase): def test(self): self.assertEqual('2', '2') def self_test(): suite = unittest.TestSuite() loader = unittest.TestLoader() suite.addTest(loader.loadTestsFromTestCase(Boy.BoyTest)) return suite However, I keep getting "AttributeError: class Boy has no attribute 'BoyTest'" whenever I call self_test(). Why?
[ "As the argument of loadTestsFromTestCase, you're trying to access Boy.BoyTest, i.e., the BoyTest attribute of class object Boy, which just doesn't exist, as the error msg is telling you. Why don't you just use BoyTest there instead?\n" ]
[ 3 ]
[ "As Alex has stated you are trying to use BoyTest as an attibute of Boy:\nclass Boy:\n\n def run(self, args):\n print(\"Hello\")\n\nclass BoyTest(unittest.TestCase)\n\n def test(self)\n self.assertEqual('2' , '2')\n\ndef self_test():\n suite = unittest.TestSuite()\n loader = unittest.TestLoader()\n suite.addTest(loader.loadTestsFromTestCase(BoyTest))\n return suite\n\nNote the change:\nsuite.addTest(loader.loadTestsFromTestCase(Boy.BoyTest))\n\nto:\nsuite.addTest(loader.loadTestsFromTestCase(BoyTest))\n\nDoes this solve your problem?\n" ]
[ -1 ]
[ "attributeerror", "python" ]
stackoverflow_0001338847_attributeerror_python.txt
Q: Regexp to literally interpret \t as \t and not tab I'm trying to match a sequence of text with backslashes in it, like a Windows path. Now, when I match with a regexp in Python, it gets the match, but the module interprets all backslashes followed by a valid escape char (e.g. t) as an escape sequence, which is not what I want. How do I get it not to do that? Thanks /m EDIT: well, I missed that the regexp that matches the text containing the backslash is a (.*). I've tried the raw notation (exemplified in the answers), but it does not help in my situation. Or I'm doing it wrong. EDIT2: I did it wrong. Thanks guys/girls! A: Use double backslashes with r like this >>> re.match(r"\\t", r"\t") <_sre.SRE_Match object at 0xb7ce5d78> From python docs: When one wants to match a literal backslash, it must be escaped in the regular expression. With raw string notation, this means r"\\". Without raw string notation, one must use "\\\\", making the following lines of code functionally identical: >>> re.match(r"\\", r"\\") <_sre.SRE_Match object at ...> >>> re.match("\\\\", r"\\") <_sre.SRE_Match object at ...> A: Always use the r prefix when defining your regex. This will tell Python to treat the string as raw, so it doesn't do any of the standard processing. regex = r'\t'
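For reference, a small hedged demo of both points above; the path value is made up:

import re

path = r"C:\temp\file.txt"        # contains a literal backslash followed by 't'
assert re.search(r"\\t", path)    # pattern for the two characters '\' and 't'
assert not re.search("\t", path)  # a pattern containing a real TAB does not match
print "both checks passed"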
Regexp to literally interpret \t as \t and not tab
I'm trying to match a sequence of text with backslashes in it, like a Windows path. Now, when I match with a regexp in Python, it gets the match, but the module interprets all backslashes followed by a valid escape char (e.g. t) as an escape sequence, which is not what I want. How do I get it not to do that? Thanks /m EDIT: well, I missed that the regexp that matches the text containing the backslash is a (.*). I've tried the raw notation (exemplified in the answers), but it does not help in my situation. Or I'm doing it wrong. EDIT2: I did it wrong. Thanks guys/girls!
[ "Use double backslashes with r like this\n>>> re.match(r\"\\\\t\", r\"\\t\")\n<_sre.SRE_Match object at 0xb7ce5d78>\n\nFrom python docs:\n\nWhen one wants to match a literal\n backslash, it must be escaped in the\n regular expression. With raw string\n notation, this means r\"\\\". Without\n raw string notation, one must use\n \"\\\\\", making the following lines of\n code functionally identical:\n\n>>> re.match(r\"\\\\\", r\"\\\\\")\n<_sre.SRE_Match object at ...>\n>>> re.match(\"\\\\\\\\\", r\"\\\\\")\n<_sre.SRE_Match object at ...>\n\n", "Always use the r prefix when defining your regex. This will tell Python to treat the string as raw, so it doesn't do any of the standard processing.\n regex = r'\\t'\n\n" ]
[ 11, 1 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0001340162_python_regex.txt
Q: back-to-back histograms in matplotlib There is a nice function that draws back-to-back histograms in MATLAB. I need to create a similar graph in matplotlib. Can anyone show a working code example? A: Thanks to the link pointed out by Mark Rushakoff, the following is what I finally did import numpy as np from matplotlib import pylab as pl dataOne = get_data_one() dataTwo = get_data_two() hN = pl.hist(dataTwo, orientation='horizontal', normed=0, rwidth=0.8, label='ONE') hS = pl.hist(dataOne, bins=hN[1], orientation='horizontal', normed=0, rwidth=0.8, label='TWO') for p in hS[2]: p.set_width( - p.get_width()) xmin = min([ min(w.get_width() for w in hS[2]), min([w.get_width() for w in hN[2]]) ]) xmin = np.floor(xmin) xmax = max([ max(w.get_width() for w in hS[2]), max([w.get_width() for w in hN[2]]) ]) xmax = np.ceil(xmax) range = xmax - xmin delta = 0.0 * range pl.xlim([xmin - delta, xmax + delta]) xt = pl.xticks() n = xt[0] s = ['%.1f'%abs(i) for i in n] pl.xticks(n, s) pl.legend(loc='best') pl.axvline(0.0) pl.show() A: This matplotlib-users mailing list post has some sample code for a bihistogram that goes up and down instead of left and right. Here's the example output he linked to. If up-down absolutely won't work for you, it should only take a few minutes to swap the operations on the y-axis with the x-axis operations. Also, your link isn't a MATLAB function, it's an actual script that someone wrote in about 40 lines. You could actually look at the script source and try porting it, since MATLAB and matplotlib have fairly close syntax.
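As a hedged alternative sketch (variable names and data are illustrative): numpy's histogram plus signed horizontal bars gives the same left/right effect without negating patch widths after the fact:

import numpy as np
import matplotlib.pyplot as plt

left = np.random.randn(1000)
right = np.random.randn(1000)
bins = np.linspace(-4, 4, 30)
l_counts, _ = np.histogram(left, bins=bins)
r_counts, _ = np.histogram(right, bins=bins)
centers = (bins[:-1] + bins[1:]) / 2.0
bar_height = bins[1] - bins[0]
plt.barh(centers, -l_counts, height=bar_height, label='left')   # negative widths point left
plt.barh(centers, r_counts, height=bar_height, label='right')
plt.axvline(0.0)
plt.legend(loc='best')
plt.show()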
back-to-back histograms in matplotlib
There is a nice function that draws back-to-back histograms in MATLAB. I need to create a similar graph in matplotlib. Can anyone show a working code example?
[ "Thanks to the link pointed by Mark Rushakoff, following is what I finally did\nimport numpy as np\nfrom matplotlib import pylab as pl\n\ndataOne = get_data_one()\ndataTwo = get_data_two()\n\nhN = pl.hist(dataTwo, orientation='horizontal', normed=0, rwidth=0.8, label='ONE')\nhS = pl.hist(dataOne, bins=hN[1], orientation='horizontal', normed=0, \n rwidth=0.8, label='TWO')\n\nfor p in hS[2]:\n p.set_width( - p.get_width())\n\nxmin = min([ min(w.get_width() for w in hS[2]), \n min([w.get_width() for w in hN[2]]) ])\nxmin = np.floor(xmin)\nxmax = max([ max(w.get_width() for w in hS[2]), \n max([w.get_width() for w in hN[2]]) ])\nxmax = np.ceil(xmax)\nrange = xmax - xmin\ndelta = 0.0 * range\npl.xlim([xmin - delta, xmax + delta])\nxt = pl.xticks()\nn = xt[0]\ns = ['%.1f'%abs(i) for i in n]\npl.xticks(n, s)\npl.legend(loc='best')\npl.axvline(0.0)\npl.show()\n\n", "This matplotlib users mailing post has some sample code for a bihistogram that goes up and down instead of left and right. Here's the example output he linked to.\nIf up-down absolutely won't work for you, it should only take a few minutes to swap the operations on the y-axis with the x-axis operations.\nAlso, your link isn't a MATLAB function, it's an actual script that someone wrote in about 40 lines. You could actually look at the script source and try porting it, since MATLAB and matplotlib have fairly close syntax.\n" ]
[ 6, 2 ]
[]
[]
[ "histogram", "matplotlib", "python" ]
stackoverflow_0001340338_histogram_matplotlib_python.txt
Q: Python equivalent to "php -s" As you may or may not know, you can generate a color syntax-higlighted HTML file from a PHP source file using php -s. I know about the syntaxhighlighter that Stackoverflow uses and that's not really what I'm looking for. I'm looking for something will generate HTML output without Javascript. Does anyone know of something equivalent to php -s for Python? A: $ pygmentize -O full -O style=native -o test.html test.py To install Pygments: $ easy_install Pygments You can use it as a library. from pygments import highlight from pygments.lexers import guess_lexer from pygments.formatters import HtmlFormatter code = '#!/usr/bin/python\nprint "Hello World!"' lexer = guess_lexer(code) # or just pygments.lexers.PythonLexer() formatter = HtmlFormatter(noclasses=True, nowrap=True, lineseparator="<br>\n") print highlight(code, lexer, formatter) Output: <span style="color: #408080; font-style: italic">#!/usr/bin/python</span><br> <span style="color: #008000; font-weight: bold">print</span> <span style="color: #BA2121">&quot;Hello World!&quot;</span><br> (added whitespace for clarity) As html: #!/usr/bin/python print "Hello World!" A: I found Highlight at http://www.andre-simon.de to be an extremely good tool for doing this. It is Open-source (GPL'ed though!) A: If you have access to kwrite from KDE, you can export a file as HTML which will have the same colorization that you use for editing. This works for all languages. A: if you need only a few files to convert to html pages and are on windows you can use Notepad++. It comes (as of last versions) with NppExport plugin, that let's one to convert source code to highlighted HTML and RTF (according to your colouring scheme). It works not only with python of course, but with any language you can use in Notepad++.
Python equivalent to "php -s"
As you may or may not know, you can generate a color syntax-highlighted HTML file from a PHP source file using php -s. I know about the syntaxhighlighter that Stack Overflow uses, and that's not really what I'm looking for. I'm looking for something that will generate HTML output without Javascript. Does anyone know of something equivalent to php -s for Python?
[ "$ pygmentize -O full -O style=native -o test.html test.py\n\nTo install Pygments:\n$ easy_install Pygments\n\nYou can use it as a library.\nfrom pygments import highlight\nfrom pygments.lexers import guess_lexer\nfrom pygments.formatters import HtmlFormatter\n\ncode = '#!/usr/bin/python\\nprint \"Hello World!\"'\nlexer = guess_lexer(code) # or just pygments.lexers.PythonLexer()\nformatter = HtmlFormatter(noclasses=True, nowrap=True, lineseparator=\"<br>\\n\")\nprint highlight(code, lexer, formatter)\n\nOutput:\n<span style=\"color: #408080; font-style: italic\">#!/usr/bin/python</span><br>\n<span style=\"color: #008000; font-weight: bold\">print</span> \n<span style=\"color: #BA2121\">&quot;Hello World!&quot;</span><br>\n\n(added whitespace for clarity)\nAs html:\n#!/usr/bin/python\nprint \n\"Hello World!\"\n", "I found Highlight at http://www.andre-simon.de to be an extremely good tool for doing this. It is Open-source (GPL'ed though!)\n", "If you have access to kwrite from KDE, you can export a file as HTML which will have the same colorization that you use for editing. This works for all languages.\n", "if you need only a few files to convert to html pages and are on windows you can use Notepad++. It comes (as of last versions) with NppExport plugin, that let's one to convert source code to highlighted HTML and RTF (according to your colouring scheme). It works not only with python of course, but with any language you can use in Notepad++.\n" ]
[ 12, 1, 0, 0 ]
[]
[]
[ "php", "python", "syntax_highlighting" ]
stackoverflow_0000658939_php_python_syntax_highlighting.txt
Q: Should Python unittests be in a separate module? Is there a consensus about the best place to put Python unittests? Should the unittests be included within the same module as the functionality being tested (executed when the module is run on its own (if __name__ == '__main__', etc.)), or is it better to include the unittests within different modules? Perhaps a combination of both approaches is best, including module level tests within each module and adding higher level tests which test functionality included in more than one module as separate modules (perhaps in a /test subdirectory?). I assume that test discovery is more straightforward if the tests are included in separate modules, but there's an additional burden on the developer if he/she has to remember to update the additional test module if the module under test is modified. I'd be interested to know peoples' thoughts on the best way of organizing unittests. A: YES, do use a separate module. It does not really make sense to use the __main__ trick. Just assume that you have several files in your module, and it does not work anymore, because you don't want to run each source file separately when testing your module. Also, when installing a module, most of the time you don't want to install the tests. Your end-user does not care about tests, only the developers should care. No, really. Put your tests in tests/, your doc in doc, and have a Makefile ready around for a make test. Any other approaches are just intermediate solutions, only valid for specific tiny modules. A: Where you have to if using a library specifying where unittests should live, in the modules themselves for small projects, or in a tests/ subdirectory in your package for larger projects. It's a matter of what works best for the project you're creating. Sometimes the libraries you're using determine where tests should go, as is the case with Django (where you put your tests in models.py, tests.py or a tests/ subdirectory in your apps). If there are no existing constraints, it's a matter of personal preference. For a small set of modules, it may be more convenient to put the unittests in the files you're creating. For anything more than a few modules I create the tests separately in a tests/ directory in the package. Having testing code mixed with the implementation adds unnecessary noise for anyone reading the code. A: Personally, I create a tests/ folder in my source directory and try to, more or less, mirror my main source code hierarchy with unit test equivalents (having 1 module = 1 unit test module as a rule of thumb). Note that I'm using nose and its philosophy is a bit different than unittest's. A: I generally keep test code in a separate module, and ship the module/package and tests in a single distribution. If the user installs using setup.py they can run the tests from the test directory to ensure that everything works in their environment, but only the module's code ends up under Lib/site-packages. A: There might be reasons other than testing to use the if __name__ == '__main__' check. Keeping the tests in other modules leaves that option open to you. Also - if you refactor the implementation of a module and your tests are in another module that was not edited - you KNOW the tests have not been changed when you run them against the refactored code. A: I usually have them in a separate folder called most often test/. Personally I am not using the if __name__ == '__main__' check, because I use nosetests and it handles the test detection by itself. 
A: if __name__ == '__main__', etc. is great for small tests.
Should Python unittests be in a separate module?
Is there a consensus about the best place to put Python unittests? Should the unittests be included within the same module as the functionality being tested (executed when the module is run on its own (if __name__ == '__main__', etc.)), or is it better to include the unittests within different modules? Perhaps a combination of both approaches is best, including module level tests within each module and adding higher level tests which test functionality included in more than one module as separate modules (perhaps in a /test subdirectory?). I assume that test discovery is more straightforward if the tests are included in separate modules, but there's an additional burden on the developer if he/she has to remember to update the additional test module if the module under test is modified. I'd be interested to know peoples' thoughts on the best way of organizing unittests.
[ "YES, do use a separate module.\nIt does not really make sense to use the __main__ trick. Just assume that you have several files in your module, and it does not work anymore, because you don't want to run each source file separately when testing your module.\nAlso, when installing a module, most of the time you don't want to install the tests. Your end-user does not care about tests, only the developers should care.\nNo, really. Put your tests in tests/, your doc in doc, and have a Makefile ready around for a make test. Any other approaches are just intermediate solutions, only valid for specific tiny modules.\n", "\nWhere you have to if using a library specifying where unittests should live,\nin the modules themselves for small projects, or\nin a tests/ subdirectory in your package for larger projects.\n\nIt's a matter of what works best for the project you're creating.\nSometimes the libraries you're using determine where tests should go, as is the case with Django (where you put your tests in models.py, tests.py or a tests/ subdirectory in your apps).\nIf there are no existing constraints, it's a matter of personal preference. For a small set of modules, it may be more convenient to put the unittests in the files you're creating.\nFor anything more than a few modules I create the tests separately in a tests/ directory in the package. Having testing code mixed with the implementation adds unnecessary noise for anyone reading the code.\n", "Personally, I create a tests/ folder in my source directory and try to, more or less, mirror my main source code hierarchy with unit test equivalents (having 1 module = 1 unit test module as a rule of thumb).\nNote that I'm using nose and its philosophy is a bit different than unittest's.\n", "I generally keep test code in a separate module, and ship the module/package and tests in a single distribution. If the user installs using setup.py they can run the tests from the test directory to ensure that everything works in their environment, but only the module's code ends up under Lib/site-packages.\n", "There might be reasons other than testing to use the if __name__ == '__main__' check. Keeping the tests in other modules leaves that option open to you. Also - if you refactor the implementation of a module and your tests are in another module that was not edited - you KNOW the tests have not been changed when you run them against the refactored code.\n", "I usually have them in a separate folder called most often test/. Personally I am not using the if __name__ == '__main__' check, because I use nosetests and it handles the test detection by itself.\n", "if __name__ == '__main__', etc. is great for small tests.\n" ]
[ 15, 13, 10, 4, 3, 1, 0 ]
[]
[]
[ "python", "testing", "unit_testing" ]
stackoverflow_0001340892_python_testing_unit_testing.txt
Q: Django equivalent for count and group by I have a model that looks like this: class Category(models.Model): name = models.CharField(max_length=60) class Item(models.Model): name = models.CharField(max_length=60) category = models.ForeignKey(Category) I want select count (just the count) of items for each category, so in SQL it would be as simple as this: select category_id, count(id) from item group by category_id Is there an equivalent of doing this "the Django way"? Or is plain SQL the only option? I am familiar with the count( ) method in Django, however I don't see how group by would fit there. A: Here, as I just discovered, is how to do this with the Django 1.1 aggregation API: from django.db.models import Count theanswer = Item.objects.values('category').annotate(Count('category')) A: (Update: Full ORM aggregation support is now included in Django 1.1. True to the below warning about using private APIs, the method documented here no longer works in post-1.1 versions of Django. I haven't dug in to figure out why; if you're on 1.1 or later you should use the real aggregation API anyway.) The core aggregation support was already there in 1.0; it's just undocumented, unsupported, and doesn't have a friendly API on top of it yet. But here's how you can use it anyway until 1.1 arrives (at your own risk, and in full knowledge that the query.group_by attribute is not part of a public API and could change): query_set = Item.objects.extra(select={'count': 'count(1)'}, order_by=['-count']).values('count', 'category') query_set.query.group_by = ['category_id'] If you then iterate over query_set, each returned value will be a dictionary with a "category" key and a "count" key. You don't have to order by -count here, that's just included to demonstrate how it's done (it has to be done in the .extra() call, not elsewhere in the queryset construction chain). Also, you could just as well say count(id) instead of count(1), but the latter may be more efficient. Note also that when setting .query.group_by, the values must be actual DB column names ('category_id') not Django field names ('category'). This is because you're tweaking the query internals at a level where everything's in DB terms, not Django terms. A: Since I was a little confused about how grouping in Django 1.1 works I thought I'd elaborate here on how exactly you go about using it. First, to repeat what Michael said: Here, as I just discovered, is how to do this with the Django 1.1 aggregation API: from django.db.models import Count theanswer = Item.objects.values('category').annotate(Count('category')) Note also that you need to from django.db.models import Count! This will select only the categories and then add an annotation called category__count. Depending on the default ordering this may be all you need, but if the default ordering uses a field other than category this will not work. The reason for this is that the fields required for ordering are also selected and make each row unique, so you won't get stuff grouped how you want it. One quick way to fix this is to reset the ordering: Item.objects.values('category').annotate(Count('category')).order_by() This should produce exactly the results you want. To set the name of the annotation you can use: ...annotate(mycount = Count('category'))... Then you will have an annotation called mycount in the results. Everything else about grouping was very straightforward to me. Be sure to check out the Django aggregation API for more detailed info. A: How's this? (Other than slow.) 
counts = [(c, Item.objects.filter(category=c.id).count()) for c in Category.objects.all()] It has the advantage of being short, even if it does fetch a lot of rows. Edit. The one-query version. BTW, this is often faster than SELECT COUNT(*) in the database. Try it to see. from collections import defaultdict counts = defaultdict(int) for i in Item.objects.all(): counts[i.category] += 1
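As a hedged illustration of the result shape the aggregation answers describe (Django 1.1+; the printed values are made up):

from django.db.models import Count

rows = Item.objects.values('category').annotate(Count('category')).order_by()
for row in rows:
    print row['category'], row['category__count']  # e.g. 3 17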
Django equivalent for count and group by
I have a model that looks like this: class Category(models.Model): name = models.CharField(max_length=60) class Item(models.Model): name = models.CharField(max_length=60) category = models.ForeignKey(Category) I want select count (just the count) of items for each category, so in SQL it would be as simple as this: select category_id, count(id) from item group by category_id Is there an equivalent of doing this "the Django way"? Or is plain SQL the only option? I am familiar with the count( ) method in Django, however I don't see how group by would fit there.
[ "Here, as I just discovered, is how to do this with the Django 1.1 aggregation API:\nfrom django.db.models import Count\ntheanswer = Item.objects.values('category').annotate(Count('category'))\n\n", "(Update: Full ORM aggregation support is now included in Django 1.1. True to the below warning about using private APIs, the method documented here no longer works in post-1.1 versions of Django. I haven't dug in to figure out why; if you're on 1.1 or later you should use the real aggregation API anyway.)\nThe core aggregation support was already there in 1.0; it's just undocumented, unsupported, and doesn't have a friendly API on top of it yet. But here's how you can use it anyway until 1.1 arrives (at your own risk, and in full knowledge that the query.group_by attribute is not part of a public API and could change):\nquery_set = Item.objects.extra(select={'count': 'count(1)'}, \n order_by=['-count']).values('count', 'category')\nquery_set.query.group_by = ['category_id']\n\nIf you then iterate over query_set, each returned value will be a dictionary with a \"category\" key and a \"count\" key.\nYou don't have to order by -count here, that's just included to demonstrate how it's done (it has to be done in the .extra() call, not elsewhere in the queryset construction chain). Also, you could just as well say count(id) instead of count(1), but the latter may be more efficient.\nNote also that when setting .query.group_by, the values must be actual DB column names ('category_id') not Django field names ('category'). This is because you're tweaking the query internals at a level where everything's in DB terms, not Django terms.\n", "Since I was a little confused about how grouping in Django 1.1 works I thought I'd elaborate here on how exactly you go about using it. First, to repeat what Michael said:\n\nHere, as I just discovered, is how to do this with the Django 1.1 aggregation API:\nfrom django.db.models import Count\ntheanswer = Item.objects.values('category').annotate(Count('category'))\n\n\nNote also that you need to from django.db.models import Count!\nThis will select only the categories and then add an annotation called category__count. Depending on the default ordering this may be all you need, but if the default ordering uses a field other than category this will not work. The reason for this is that the fields required for ordering are also selected and make each row unique, so you won't get stuff grouped how you want it. One quick way to fix this is to reset the ordering:\nItem.objects.values('category').annotate(Count('category')).order_by()\n\nThis should produce exactly the results you want. To set the name of the annotation you can use:\n...annotate(mycount = Count('category'))...\n\nThen you will have an annotation called mycount in the results.\nEverything else about grouping was very straightforward to me. Be sure to check out the Django aggregation API for more detailed info.\n", "How's this? (Other than slow.)\ncounts= [ (c, Item.filter( category=c.id ).count()) for c in Category.objects.all() ]\n\nIt has the advantage of being short, even if it does fetch a lot of rows.\n\nEdit.\nThe one query version. BTW, this is often faster than SELECT COUNT(*) in the database. Try it to see.\ncounts = defaultdict(int)\nfor i in Item.objects.all():\n counts[i.category] += 1\n\n" ]
[ 136, 58, 58, 2 ]
[]
[]
[ "django", "python" ]
stackoverflow_0000327807_django_python.txt
Q: python: dictionaries of lists are somehow coupled I wrote a small python program to iterate over a data file (input_file) and perform calculations. If the calculation result reaches certain states (stateA or stateB), information (hits) is extracted from the results. The hits to extract depend on parameters from three parameter sets. I used a dictionary of dictionaries to store my parameter sets (param_sets) and a dictionary of lists to store the hits (hits). The dictionaries param_sets and hits have the same keys. The problem is that the lists within the hits dictionary are somehow coupled. When one list changes (by calling the extract_hits function), the others change, too. Here is the (shortened) code: import os, sys, csv, pdb from operator import itemgetter # define three parameter sets param_sets = { 'A' : {'MIN_LEN' : 8, 'MAX_X' : 0, 'MAX_Z' : 0}, 'B' : {'MIN_LEN' : 8, 'MAX_X' : 1, 'MAX_Z' : 5}, 'C' : {'MIN_LEN' : 9, 'MAX_X' : 1, 'MAX_Z' : 5}} # to store hits corresponding to each parameter set hits = dict.fromkeys(param_sets, []) # calculations result = [] for input_values in input_file: # do some calculations result = do_some_calculations(result, input_values) if result == stateA: for key in param_sets.keys(): hits[key] = extract_hits(key, result, hits[key], param_sets[key]['MIN_LEN'], param_sets[key]['MAX_X'], param_sets[key]['MAX_Z']) result = [] # discard results, start empty result list elif result == stateB: for key in param_sets.keys(): local_heli[key] = extract_hits(key, result, hits[key], param_sets[key]['MIN_LEN'], param_sets[key]['MAX_X'], param_sets[key]['MAX_Z']) result = [] # discard results result = some_calculation(input_values) # start new result list else: result = some_other_calculation(result) # append result list def extract_hits(k, seq, hits, min_len, max_au, max_gu): max_len = len(seq) for sub_seq_size in reversed(range(min_len, max_len+1)): for start_pos in range(0,(max_len-sub_seq_size+1)): from_inc = start_pos to_exc = start_pos + sub_seq_size sub_seq = seq[from_inc:to_exc] # complete information about helical fragment sub_seq helical_fragment = get_helix_data(sub_seq, max_au, max_gu) if helical_fragment: hits.append(helical_fragment) # search seq regions left and right from sub_seq for further hits left_seq = seq[0:from_inc] right_seq = seq[to_exc:max_len] if len(left_seq) >= min_len: hits = sub_check_helical(left_seq, hits, min_len, max_au, max_gu) if len(right_seq) >= min_len: hits = sub_check_helical(right_seq, hits, min_len, max_au, max_gu) print 'key', k # just for testing purpose print 'new', hits # just for testing purpose print 'frag', helical_fragment # just for testing purpose pdb.set_trace() # just for testing purpose return hits # appended return hits # unchanged Here is some output from the Python debugger: key A new ['x', 'x', 'x', {'y': 'GGCCGGGCUUGGU'}] frag {'y': 'GGCCGGGCUUGGU'} > -> return hits (Pdb) c key B new [{'y': 'GGCCGGGCUUGGU'}, {'y': 'CCGGCCCGAGCCG'}] frag {'y': 'CCGGCCCGAGCCG'} > extract_hits() -> return hits (Pdb) c key C new [{'y': 'GGCCGGGCUUGGU'}, {'y': 'CCGGCCCGAGCCG'}, {'y': 'CCGGCCCG'}] frag {'y': 'CCGGCCCG'} > extract_hits() -> return hits the elements from key A should not be present in key B and elements from key A and key B should not be present in key C. A: Your line: hits = dict.fromkeys(param_sets, []) is equivalent to: hits = dict() onelist = [] for k in param_sets: hits[k] = onelist That is, every entry in hits has as its value the SAME list object, initially empty, no matter what key it has.
Remember that assignment does NOT perform implicit copies: rather, it assigns "one more reference to the RHS object". What you want is: hits = dict() for k in param_sets: hits[k] = [] that is, a NEW AND SEPARATE list object as each entry's value. Equivalently, hits = dict((k, []) for k in param_sets) BTW, when you do need to make a (shallow) copy of a container, the most general approach is generally to call the container's type, with the old container as the argument, as in: newdict = dict(olddict) newlist = list(oldlist) newset = set(oldset) and so forth; this also works to transform containers among types (newlist = list(oldset) makes a list out of a set, and so on). A: Dictionaries and lists are passed around by reference by default. For a dictionary, instead of: hits_old = hits # just for testing purpose it would be: hits_old = hits.copy() # just for testing purpose This will copy the dictionary's key/value pairings, resulting in an equivalent dictionary that will not contain future changes to the hits dictionary. Of course, hits_old in the second function is actually a list, not a dictionary, so you would want to do something akin to the following to copy it: hits_old = hits[:] I haven't a clue why lists don't also have the copy() function, in case you're wondering.
python: dictionaries of lists are somehow coupled
I wrote a small python program to iterate over data file (input_file) and perform calculations. If calculation result reaches certain states (stateA or stateB), information (hits) are extracted from the results. The hits to extract depend on parameters from three parameter sets. I used a dictionary of dictionaries to store my parameter sets (param_sets) and a dictionary of lists to store the hits (hits). The dictionaries param_sets and hits have the same keys. The problem is, that the lists within the hits dictionary are somehow coupled. When one list changes (by calling extract_hits function), the others change, too. Here, the (shortened) code: import os, sys, csv, pdb from operator import itemgetter # define three parameter sets param_sets = { 'A' : {'MIN_LEN' : 8, 'MAX_X' : 0, 'MAX_Z' : 0}, 'B' : {'MIN_LEN' : 8, 'MAX_X' : 1, 'MAX_Z' : 5}, 'C' : {'MIN_LEN' : 9, 'MAX_X' : 1, 'MAX_Z' : 5}} # to store hits corresponding to each parameter set hits = dict.fromkeys(param_sets, []) # calculations result = [] for input_values in input_file: # do some calculations result = do_some_calculations(result, input_values) if result == stateA: for key in param_sets.keys(): hits[key] = extract_hits(key, result, hits[key], param_sets[key]['MIN_LEN'], param_sets[key]['MAX_X'], param_sets[key]['MAX_Z']) result = [] # discard results, start empty result list elif result == stateB: for key in param_sets.keys(): local_heli[key] = extract_hits(key, result, hits[key], param_sets[key]['MIN_LEN'], param_sets[key]['MAX_X'], param_sets[key]['MAX_Z']) result = [] # discard results result = some_calculation(input_values) # start new result list else: result = some_other_calculation(result) # append result list def extract_hits(k, seq, hits, min_len, max_au, max_gu): max_len = len(seq) for sub_seq_size in reversed(range(min_len, max_len+1)): for start_pos in range(0,(max_len-sub_seq_size+1)): from_inc = start_pos to_exc = start_pos + sub_seq_size sub_seq = seq[from_inc:to_exc] # complete information about helical fragment sub_seq helical_fragment = get_helix_data(sub_seq, max_au, max_gu) if helical_fragment: hits.append(helical_fragment) # search seq regions left and right from sub_seq for further hits left_seq = seq[0:from_inc] right_seq = seq[to_exc:max_len] if len(left_seq) >= min_len: hits = sub_check_helical(left_seq, hits, min_len, max_au, max_gu) if len(right_seq) >= min_len: hits = sub_check_helical(right_seq, hits, min_len, max_au, max_gu) print 'key', k # just for testing purpose print 'new', hits # just for testing purpose print 'frag', helical_fragment # just for testing purpose pdb.set_trace() # just for testing purpose return hits # appended return hits # unchanged here, some output from the python debugger: key A new ['x', 'x', 'x', {'y': 'GGCCGGGCUUGGU'}] frag {'y': 'GGCCGGGCUUGGU'} > -> return hits (Pdb) c key B new [{'y': 'GGCCGGGCUUGGU'}, {'y': 'CCGGCCCGAGCCG'}] frag {'y': 'CCGGCCCGAGCCG'} > extract_hits() -> return hits (Pdb) c key C new [{'y': 'GGCCGGGCUUGGU'}, {'y': 'CCGGCCCGAGCCG'}, {'y': 'CCGGCCCG'}] frag {'y': 'CCGGCCCG'} > extract_hits() -> return hits the elements from key A should not be present in key B and elements from key A and key B should not be present in key C.
[ "Your line:\nhits = dict.fromkeys(param_sets, [])\n\nis equivalent to:\nhits = dict()\nonelist = []\nfor k in param_sets:\n hits[k] = onelist\n\nThat is, every entry in hits has as its value the SAME list object, initially empty, no matter what key it has. Remember that assignment does NOT perform implicit copies: rather, it assigns \"one more reference to the RHS object\".\nWhat you want is:\nhits = dict()\nfor k in param_sets:\n hits[k] = []\n\nthat is, a NEW AND SEPARATE list object as each entry's value. Equivalently,\nhits = dict((k, []) for k in param_sets)\n\nBTW, when you do need to make a (shallow) copy of a container, the most general approach is generally to call the container's type, with the old container as the argument, as in:\nnewdict = dict(olddict)\nnewlist = list(oldlist)\nnewset = set(oldset)\n\nand so forth; this also work to transform containers among types (newlist = list(oldset) makes a list out of a set, and so on).\n", "Dictionaries and lists are passed around by reference by default. For a dictionary, instead of:\nhits_old = hits # just for testing purpose\n\nit would be:\nhits_old = hits.copy() # just for testing purpose\n\nThis will copy the dictionary's key/value pairings, resulting in an equivalent dictionary, that would not contain future changes to the hits dictionary.\nOf course, hits_old in the second function is actually a list, not a dictionary, so you would want to do something akin to the following to copy it:\nhits_old = hits[:]\n\nI haven't a clue why lists don't also have the copy() function, in case you're wondering.\n" ]
[ 8, 4 ]
[]
[]
[ "dictionary", "python", "variables" ]
stackoverflow_0001341208_dictionary_python_variables.txt
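The aliasing both answers describe is easy to see in isolation: dict.fromkeys evaluates its second argument once and hands the very same list object to every key, whereas building the dict in a loop or generator expression creates a fresh list per key. A minimal standalone demonstration (plain Python, nothing from the question's code is needed):

shared = dict.fromkeys(['A', 'B', 'C'], [])
shared['A'].append('hit')
print(shared['B'])                     # ['hit'] -- key 'B' sees key 'A's append
print(shared['A'] is shared['B'])      # True: one list object behind every key

separate = dict((k, []) for k in ['A', 'B', 'C'])
separate['A'].append('hit')
print(separate['B'])                   # [] -- unaffected
print(separate['A'] is separate['B'])  # False: independent lists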
Q: Locally Hosted Google App Engine (WebApp Framework / BigTable) I have been playing with Google App Engine a lot lately, from home on personal projects, and I have been really enjoying it. I've converted a few of my coworkers over and we are interested in using GAE for a few of our projects at work. Our work has to be hosted locally on our own servers. I've done some searching around and I really can't find any information on using the WebApp framework and BigTable locally. Any information you could provide on setting up a GAE-ish environment on a local Windows server would be much appreciated. I know GAE is much more than just the framework and BigTable - the scalability, propagation of your application/data across many servers are all features we don't need. We just want to get the webapp framework and BigTable up and running through mod_wsgi on Apache. A: Webapp is a fine choice for a simple web framework but there are plenty of other simple python web frameworks that have instructions for setting them up in your use case (cherrypy, web.py, etc). Since google developed webapp for gae I don't believe they published instructions for setting it up behind apache. BigTable is proprietary to Google so you will not be able to run it locally. If you are looking for something with similar performance characteristics I'd look into the schemaless 'document-oriented' databases.
Locally Hosted Google App Engine (WebApp Framework / BigTable)
I have been playing with Google App Engine a lot lately, from home on personal projects, and I have been really enjoying it. I've converted a few of my coworkers over and we are interested in using GAE for a few of our projects at work. Our work has to be hosted locally on our own servers. I've done some searching around and I really can't find any information on using the WebApp framework and BigTable locally. Any information you could provide on setting up a GAE-ish environment on a local Windows server would be much appreciated. I know GAE is much more than just the framework and BigTable - the scalability, propagation of your application/data across many servers are all features we don't need. We just want to get the webapp framework and BigTable up and running through mod_wsgi on Apache.
[ "Webapp is a fine choice for a simple web framework but there are plenty of other simple python web frameworks that have instructions for setting them up in your use case (cherrypy, web.py, etc). Since google developed webapp for gae I don't believe they published instructions for setting it up behind apache.\nBigTable is proprietary to Google so you will not be able to run it locally. If you are looking for something with similar performance characteristics I'd look into the schemaless 'document-oriented' databases.\n" ]
[ 4 ]
[]
[]
[ "google_app_engine", "mod_wsgi", "python" ]
stackoverflow_0001340887_google_app_engine_mod_wsgi_python.txt
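As a point of reference for the mod_wsgi part of the question: whatever framework is chosen, the minimum mod_wsgi needs is a script exposing a module-level WSGI callable. A sketch, with the file path and mount point as placeholder assumptions rather than values from the question:

# myapp.wsgi -- mod_wsgi imports this file and looks for a module-level
# callable named 'application' (that name is fixed by mod_wsgi).
def application(environ, start_response):
    body = 'handler logic would go here'
    start_response('200 OK', [('Content-Type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]   # under Python 3 the body would need to be bytes

# Apache side (illustrative mount point, an assumption):
#   WSGIScriptAlias / /var/www/myapp.wsgi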
Q: Problem with python and __import__ Sorry for the generic title, will change it once I understand the source of my problem I have the following structure: foo/ foo/__init__.py foo/bar/ foo/bar/__init__.py foo/bar/some_module.py When I try to import some_module by doing so: from foo.bar import some_module it works like a charm. But this is no good for me, since I only know the name of the module to import in runtime. so if I try: from foo.bar import * mod=__import__('some_module') I get an error. Am I doing something wrong? Is there a better way to do this? and why is this happening? Why is that? I am not quite sure I completely understand the concept behind python packages. I thought they were equivalent to java's packages and thus A: I believe the proper way to do this is: mod = __import__('foo.bar', fromlist = ['some_module']) This way even the 'foo.bar' part can be changed at runtime. As a result some_modulewill be available as mod.some_module; use getattr if you want it in a separate variable: the_module = getattr(mod, 'some_module') A: from foo.bar import * is a bad practice since it imports some_module into the global scope. You should be able to access your module through: import foo.bar mod = getattr(foo.bar, 'some_module') It can be easily demonstrated that this approach works: >>> import os.path >>> getattr(os.path, 'basename') <function basename at 0x00BBA468> >>> getattr(os.path, 'basename\n') Traceback (most recent call last): File "<pyshell#31>", line 1, in <module> getattr(os.path, 'basename\n') AttributeError: 'module' object has no attribute 'basename ' P.S. If you're still interested in using your kind of import statement. You need an eval: from foo.bar import * eval('some_module') To clarify: not only it's bad practice to use *-import it's even worse in combination with eval. So just use getattr, it's designed exactly for situations like yours. A: From the docs: Direct use of __import__() is rare, except in cases where you want to import a module whose name is only known at runtime. However, the dotted notation should work: mod = __import__('foo.bar.some_module')
Problem with python and __import__
Sorry for the generic title, will change it once I understand the source of my problem I have the following structure: foo/ foo/__init__.py foo/bar/ foo/bar/__init__.py foo/bar/some_module.py When I try to import some_module by doing so: from foo.bar import some_module it works like a charm. But this is no good for me, since I only know the name of the module to import at runtime. So if I try: from foo.bar import * mod=__import__('some_module') I get an error. Am I doing something wrong? Is there a better way to do this? And why is this happening? Why is that? I am not quite sure I completely understand the concept behind python packages. I thought they were equivalent to java's packages and thus
[ "I believe the proper way to do this is:\nmod = __import__('foo.bar', fromlist = ['some_module'])\n\nThis way even the 'foo.bar' part can be changed at runtime. \nAs a result some_modulewill be available as mod.some_module; use getattr if you want it in a separate variable:\nthe_module = getattr(mod, 'some_module')\n\n", "from foo.bar import *\n\nis a bad practice since it imports some_module into the global scope. \nYou should be able to access your module through:\nimport foo.bar\nmod = getattr(foo.bar, 'some_module')\n\nIt can be easily demonstrated that this approach works:\n>>> import os.path\n>>> getattr(os.path, 'basename')\n<function basename at 0x00BBA468>\n>>> getattr(os.path, 'basename\\n')\nTraceback (most recent call last):\n File \"<pyshell#31>\", line 1, in <module>\n getattr(os.path, 'basename\\n')\nAttributeError: 'module' object has no attribute 'basename\n'\n\nP.S. If you're still interested in using your kind of import statement. You need an eval:\nfrom foo.bar import *\neval('some_module')\n\nTo clarify: not only it's bad practice to use *-import it's even worse in combination with eval. So just use getattr, it's designed exactly for situations like yours.\n", "From the docs:\n\nDirect use of __import__() is rare, except in cases where you want to import a module whose name is only known at runtime.\n\nHowever, the dotted notation should work:\nmod = __import__('foo.bar.some_module')\n\n" ]
[ 7, 1, 0 ]
[]
[]
[ "import", "python" ]
stackoverflow_0001342128_import_python.txt
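A runnable illustration of the accepted answer's fromlist pattern, using a standard-library package so there is nothing to set up -- with an empty fromlist __import__ returns the top-level package, while a non-empty fromlist makes it import and expose the named submodule:

top = __import__('xml.dom.minidom')
print(top.__name__)                    # 'xml' -- only the top-level package

pkg = __import__('xml.dom', fromlist=['minidom'])
print(pkg.__name__)                    # 'xml.dom'

minidom = getattr(pkg, 'minidom')      # the two-step form from the answer
print(minidom.__name__)                # 'xml.dom.minidom'
print(minidom.parseString('<a/>').documentElement.tagName)   # 'a'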
Q: Specifying constraints for fmin_cobyla in scipy I use Python 2.5. I am passing bounds to the cobyla optimisation: import numpy from numpy import asarray Initial = numpy.asarray [2, 4, 5, 3] # Initial values to start with #bounding limits (lower,upper) - for visualizing #bounds = [(1, 5000), (1, 6000), (2, 100000), (1, 50000)] # actual passed bounds b1 = lambda x: 5000 - x[0] # lambda x: bounds[0][1] - Initial[0] b2 = lambda x: x[0] - 2.0 # lambda x: Initial[0] - bounds[0][0] b3 = lambda x: 6000 - x[1] # same as above b4 = lambda x: x[1] - 4.0 b5 = lambda x: 100000 - x[2] b6 = lambda x: x[2] - 5.0 b7 = lambda x: 50000 - x[3] b8 = lambda x: x[3] - 3.0 b9 = lambda x: x[2] > x[3] # very important condition for my problem! opt= optimize.fmin_cobyla(func,Initial,cons=[b1,b2,b3,b4,b5,b6,b7,b8,b9,b10],maxfun=1500000) Based on the initial values Initial and as per/within the bounds b1 to b10 the values are passed to opt(). But the values are deviating, especially with b9. This is a very important bounding condition for my problem! "The value of x[2] passed to my function opt() at every iteration must be always greater than x[3]" -- How is it possible to achieve this? Is there anything wrong in my bounds (b1 to b9) definition? Or is there a better way of defining my bounds? Please help me. A: fmin_cobyla() is not an interior point method. That is, it will pass points that are outside of the bounds ("infeasible points") to the function during the course of the optimization run. One thing that you will need to fix is that b9 and b10 are not in the form that fmin_cobyla() expects. The bound functions need to return a positive number if they are within the bound, 0.0 if they are exactly on the bound, and a negative number if they are out of bounds. Ideally, these functions should be smooth. fmin_cobyla() will try to take numerical derivatives of these functions in order to let it know how to get back to the feasible region. b9 = lambda x: x[2] - x[3] I'm not sure how to implement b10 in a way that fmin_cobyla() will be able to use, though. A: for b10, a possible option could be: b10 = lambda x: min(abs(i-j)-d for i,j in itertools.combinations(x,2)) where d is a delta greater than the minimum difference you want between your variables (e.g. 0.001)
Specifying constraints for fmin_cobyla in scipy
I use Python 2.5. I am passing bounds to the cobyla optimisation: import numpy from numpy import asarray Initial = numpy.asarray [2, 4, 5, 3] # Initial values to start with #bounding limits (lower,upper) - for visualizing #bounds = [(1, 5000), (1, 6000), (2, 100000), (1, 50000)] # actual passed bounds b1 = lambda x: 5000 - x[0] # lambda x: bounds[0][1] - Initial[0] b2 = lambda x: x[0] - 2.0 # lambda x: Initial[0] - bounds[0][0] b3 = lambda x: 6000 - x[1] # same as above b4 = lambda x: x[1] - 4.0 b5 = lambda x: 100000 - x[2] b6 = lambda x: x[2] - 5.0 b7 = lambda x: 50000 - x[3] b8 = lambda x: x[3] - 3.0 b9 = lambda x: x[2] > x[3] # very important condition for my problem! opt= optimize.fmin_cobyla(func,Initial,cons=[b1,b2,b3,b4,b5,b6,b7,b8,b9,b10],maxfun=1500000) Based on the initial values Initial and as per/within the bounds b1 to b10 the values are passed to opt(). But the values are deviating, especially with b9. This is a very important bounding condition for my problem! "The value of x[2] passed to my function opt() at every iteration must be always greater than x[3]" -- How is it possible to achieve this? Is there anything wrong in my bounds (b1 to b9) definition ? Or is there a better way of defining of my bounds? Please help me.
[ "fmin_cobyla() is not an interior point method. That is, it will pass points that are outside of the bounds (\"infeasible points\") to the function during the course of the optimization run.\nOne thing that you will need to fix is that b9 and b10 are not in the form that fmin_cobyla() expects. The bound functions need to return a positive number if they are within the bound, 0.0 if they are exactly on the bound, and a negative number if they are out of bounds. Ideally, these functions should be smooth. fmin_cobyla() will try to take numerical derivatives of these functions in order to let it know how to get back to the feasible region.\nb9 = lambda x: x[2] - x[3]\n\nI'm not sure how to implement b10 in a way that fmin_cobyla() will be able to use, though.\n", "for b10, a possible option could be:\nb10 = lambda x: min(abs(i-j)-d for i,j in itertools.combinations(x,2))\n\nwhere d is a delta greater than the minimum difference you want between your variables (e.g. 0.001)\n" ]
[ 3, 2 ]
[]
[]
[ "function", "lambda", "python", "scipy", "specifications" ]
stackoverflow_0001336777_function_lambda_python_scipy_specifications.txt
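To make the sign convention from the accepted answer concrete, here is a small self-contained run; the objective function and all the numbers are invented for illustration (only the b9-style ordering constraint mirrors the question):

from scipy.optimize import fmin_cobyla

def objective(x):
    # Toy objective invented for this sketch: distance to (2, 1).
    return (x[0] - 2.0)**2 + (x[1] - 1.0)**2

# Constraint functions, signed as the answer describes: positive when
# satisfied, zero on the boundary, negative when violated.
c_order = lambda x: x[0] - x[1]   # the b9-style condition: x[0] >= x[1]
c_floor = lambda x: x[1] - 0.5    # x[1] >= 0.5

xopt = fmin_cobyla(objective, [3.0, 0.6], cons=[c_order, c_floor],
                   rhoend=1e-7)
print(xopt)                       # close to [2.0, 1.0]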
Q: Slow regex in Python? I'm trying to match these kinds of strings {@csm.foo.bar} without matching any of these {@[email protected]} {@csm.foo.bar-42} The regex I use is r"\{@csm.((?:[a-zA-Z0-9_]+\.?)+)\}" It gets dog slow if the string contains multiple matches. Why? It runs very fast if I take away the brace matching, like this r"@csm.((?:[a-zA-Z0-9_]+\.?)+)" but that's not what I want. Any ideas? Here is sample input: <dockLayout id="popup" y="0" x="0" width="{@csm.screenWidth}" height="{@csm.screenHeight}"> <dataNumber id="selopacity_Volt" name="selopacity_Volt" value="0" /> <dataNumber id="selopacity_Amp" name="selopacity_Amp" value="0" /> <animate trigger="{@m_ds_ML.VIMPBM_BatteryVoltage.valstr}" triggerOn="*" targetNode="selopacity_Volt" targetAttr="value" to="1" dur="0ms" ease="in" /> <animate trigger="{@m_ds_ML.VIMPBM_BatteryVoltage.valstr}" triggerOn="65024" targetNode="selopacity_Volt" targetAttr="value" to="0" dur="0ms" ease="in" /> <animate trigger="{@m_ds_ML.VIMPBM_BatteryCurrent.valstr}" triggerOn="*" targetNode="selopacity_Amp" targetAttr="value" to="1" dur="0ms" ease="in" /> <animate trigger="{@m_ds_ML.VIMPBM_BatteryCurrent.valstr}" triggerOn="65024" targetNode="selopacity_Amp" targetAttr="value" to="0" dur="0ms" ease="in" /> <dockLayout id="item" width="{@csm.screenWidth}" height="{@csm.screenHeight}" depth="-1" clip="false" xmlns="http://www.tat.se/kastor/kml" > <dockLayout id="list_item_title" x="0" width="{@csm.screenWidth}" height="{@[email protected]_y}"> <text id="volt_amp_text" x="0" ellipsize="false" font="{@csm.listUnselFont}" color="{@csm.itemUnselColor}" dockLayout.halign="left" dockLayout.valign="bottom" string="{ItemTitle}" /> </dockLayout> <dockLayout id="gear_layout" y="0" x="0" width="{@csm.screenWidth}" height="{@[email protected]_y}"> <image id="battery_image" x="0" dockLayout.halign="left" dockLayout.valign="bottom" opacity="1" src="{@m_MenuModel.Gauges.VoltAmpereMeter.image}"/> </dockLayout> <!--DockLayout for Voltage Value--> <dockLayout id="volt_value" x="0" width="{@[email protected]_x}" height="{@[email protected]_y}"> <text id="volt_value_text" x="0" opacity="{selopacity_Volt*selopacity_Amp}" ellipsize="false" font="{@csm.listUnselFont}" color="{@csm.itemSelColor}" dockLayout.halign="right" dockLayout.valign="bottom" string="{@m_ds_ML.VIMPBM_BatteryVoltage.valstr}" > </text> </dockLayout> <!--DockLayout for Voltage Unit--> <dockLayout id="volt_unit" x="{@[email protected]_x}" width="{@csm.screenWidth}" height="{@[email protected]_y}"> <text id="volt_unit_text" x="0" opacity="{selopacity_Volt*selopacity_Amp}" ellipsize="false" font="{@csm.listUnselFont}" color="{@csm.itemSelColor}" dockLayout.halign="left" dockLayout.valign="bottom" string="V" > </text> </dockLayout> <!--DockLayout for Ampere Value--> <dockLayout id="ampere_value" x="0" width="{@[email protected]_x}" height="{@[email protected]_y}"> <text id="ampere_value_text" x="0" opacity="{selopacity_Amp*selopacity_Volt}" ellipsize="false" font="{@csm.listUnselFont}" color="{@csm.itemSelColor}" dockLayout.halign="right" dockLayout.valign="bottom" string="{@m_ds_ML.VIMPBM_BatteryCurrent.valstr}" > </text> </dockLayout> <!--DockLayout for Ampere Unit--> <dockLayout id="ampere_unit" x="{@[email protected]_x}" width="{@csm.screenWidth}" height="{@[email protected]_y}"> <text id="ampere_unit_text" x="0" opacity="{selopacity_Amp*selopacity_Volt}" ellipsize="false" font="{@csm.listUnselFont}" color="{@csm.itemSelColor}" dockLayout.halign="left" dockLayout.valign="bottom" string="A" > </text> 
</dockLayout> <!--DockLayout for containing Data Not Available text--> <dockLayout id="no_data_textline" x="{@[email protected]_x}" width="{@csm.screenWidth}" height="{@[email protected]_y}"> <text id="no_data_text" x="0" opacity="{1-(selopacity_Amp*selopacity_Volt)}" ellipsize="false" font="{@csm.listSelFont}" color="{@csm.itemSelColor}" dockLayout.halign="left" dockLayout.valign="bottom" string="{text1}" > </text> </dockLayout> <!--<rect id="test_rect1" x="{151-28}" y="0" width="1" height="240" opacity="1" fill="#00ff00" /> <rect id="test_rect1" x="{237-28}" y="0" width="1" height="240" opacity="1" fill="#00ff00" /> <rect id="test_rect1" x="{160-28}" y="0" width="1" height="240" opacity="1" fill="#00ff00" /> <rect id="test_rect1" x="{246-28}" y="0" width="1" height="240" opacity="1" fill="#00ff00" /> <rect id="test_rect8" x="0" y="{161-40}" width="320" height="1" opacity="1" fill="#00ff00" /> <rect id="test_rect1" x="{109-28}" y="0" width="1" height="240" opacity="1" fill="#00ff00" />--> </dockLayout> </dockLayout> A: Can you supply a test case of a string for which the first match is "dog slow"? BTW, though I don't know if that matters to performance, there's an imprecision in the RE -- it matches any single character after the {@csm start, not just a dot; maybe a better expression (possibly faster as it doesn't make any dots "optional") could be: r'\{@csm((?:\.\w+)+)\}' A: I'm not exactly a regex expert, but it might be due to the brace at the end of the match. You might try to match r"\{@csm.((?:[a-zA-Z0-9_]+\.?)+)" and just check manually whether a closing brace occurs at the end or not. A: You probably need to give a better example of exactly what's slow. For a reasonably long string containing stuff that does and doesn't match: x="".join(['{@csm.foo.bar-%d}\n{@csm.foo.%dx.baz}\n' % (a,a) for a in xrange(10000)]) mymatch=r"\{@csm.((?:[a-zA-Z0-9_]+\.?)+)\}" for y in re.finditer(mymatch,x): print y.group(0) works fine, but if you've got a long enough string and you're searching it poorly you could have problems.
Slow regex in Python?
I'm trying to match these kinds of strings {@csm.foo.bar} without matching any of these {@[email protected]} {@csm.foo.bar-42} The regex I use is r"\{@csm.((?:[a-zA-Z0-9_]+\.?)+)\}" It gets dog slow if the string contains multiple matches. Why? It runs very fast if I take away the brace matching, like this r"@csm.((?:[a-zA-Z0-9_]+\.?)+)" but that's not what I want. Any ideas? Here is sample input: <dockLayout id="popup" y="0" x="0" width="{@csm.screenWidth}" height="{@csm.screenHeight}"> <dataNumber id="selopacity_Volt" name="selopacity_Volt" value="0" /> <dataNumber id="selopacity_Amp" name="selopacity_Amp" value="0" /> <animate trigger="{@m_ds_ML.VIMPBM_BatteryVoltage.valstr}" triggerOn="*" targetNode="selopacity_Volt" targetAttr="value" to="1" dur="0ms" ease="in" /> <animate trigger="{@m_ds_ML.VIMPBM_BatteryVoltage.valstr}" triggerOn="65024" targetNode="selopacity_Volt" targetAttr="value" to="0" dur="0ms" ease="in" /> <animate trigger="{@m_ds_ML.VIMPBM_BatteryCurrent.valstr}" triggerOn="*" targetNode="selopacity_Amp" targetAttr="value" to="1" dur="0ms" ease="in" /> <animate trigger="{@m_ds_ML.VIMPBM_BatteryCurrent.valstr}" triggerOn="65024" targetNode="selopacity_Amp" targetAttr="value" to="0" dur="0ms" ease="in" /> <dockLayout id="item" width="{@csm.screenWidth}" height="{@csm.screenHeight}" depth="-1" clip="false" xmlns="http://www.tat.se/kastor/kml" > <dockLayout id="list_item_title" x="0" width="{@csm.screenWidth}" height="{@[email protected]_y}"> <text id="volt_amp_text" x="0" ellipsize="false" font="{@csm.listUnselFont}" color="{@csm.itemUnselColor}" dockLayout.halign="left" dockLayout.valign="bottom" string="{ItemTitle}" /> </dockLayout> <dockLayout id="gear_layout" y="0" x="0" width="{@csm.screenWidth}" height="{@[email protected]_y}"> <image id="battery_image" x="0" dockLayout.halign="left" dockLayout.valign="bottom" opacity="1" src="{@m_MenuModel.Gauges.VoltAmpereMeter.image}"/> </dockLayout> <!--DockLayout for Voltage Value--> <dockLayout id="volt_value" x="0" width="{@[email protected]_x}" height="{@[email protected]_y}"> <text id="volt_value_text" x="0" opacity="{selopacity_Volt*selopacity_Amp}" ellipsize="false" font="{@csm.listUnselFont}" color="{@csm.itemSelColor}" dockLayout.halign="right" dockLayout.valign="bottom" string="{@m_ds_ML.VIMPBM_BatteryVoltage.valstr}" > </text> </dockLayout> <!--DockLayout for Voltage Unit--> <dockLayout id="volt_unit" x="{@[email protected]_x}" width="{@csm.screenWidth}" height="{@[email protected]_y}"> <text id="volt_unit_text" x="0" opacity="{selopacity_Volt*selopacity_Amp}" ellipsize="false" font="{@csm.listUnselFont}" color="{@csm.itemSelColor}" dockLayout.halign="left" dockLayout.valign="bottom" string="V" > </text> </dockLayout> <!--DockLayout for Ampere Value--> <dockLayout id="ampere_value" x="0" width="{@[email protected]_x}" height="{@[email protected]_y}"> <text id="ampere_value_text" x="0" opacity="{selopacity_Amp*selopacity_Volt}" ellipsize="false" font="{@csm.listUnselFont}" color="{@csm.itemSelColor}" dockLayout.halign="right" dockLayout.valign="bottom" string="{@m_ds_ML.VIMPBM_BatteryCurrent.valstr}" > </text> </dockLayout> <!--DockLayout for Ampere Unit--> <dockLayout id="ampere_unit" x="{@[email protected]_x}" width="{@csm.screenWidth}" height="{@[email protected]_y}"> <text id="ampere_unit_text" x="0" opacity="{selopacity_Amp*selopacity_Volt}" ellipsize="false" font="{@csm.listUnselFont}" color="{@csm.itemSelColor}" dockLayout.halign="left" dockLayout.valign="bottom" string="A" > </text> </dockLayout> <!--DockLayout for 
containing Data Not Available text--> <dockLayout id="no_data_textline" x="{@[email protected]_x}" width="{@csm.screenWidth}" height="{@[email protected]_y}"> <text id="no_data_text" x="0" opacity="{1-(selopacity_Amp*selopacity_Volt)}" ellipsize="false" font="{@csm.listSelFont}" color="{@csm.itemSelColor}" dockLayout.halign="left" dockLayout.valign="bottom" string="{text1}" > </text> </dockLayout> <!--<rect id="test_rect1" x="{151-28}" y="0" width="1" height="240" opacity="1" fill="#00ff00" /> <rect id="test_rect1" x="{237-28}" y="0" width="1" height="240" opacity="1" fill="#00ff00" /> <rect id="test_rect1" x="{160-28}" y="0" width="1" height="240" opacity="1" fill="#00ff00" /> <rect id="test_rect1" x="{246-28}" y="0" width="1" height="240" opacity="1" fill="#00ff00" /> <rect id="test_rect8" x="0" y="{161-40}" width="320" height="1" opacity="1" fill="#00ff00" /> <rect id="test_rect1" x="{109-28}" y="0" width="1" height="240" opacity="1" fill="#00ff00" />--> </dockLayout> </dockLayout>
[ "Can you supply a test case of a string for which the first match is \"dog slow\"? BTW, though I don't know if that matters to performance, there's an imprecision in the RE -- it matches any single character after the {@csm start, not just a dot; maybe a better expression (possibly faster as it doesn't make any dots \"optional\") could be:\nr'\\{@csm((?:\\.\\w+)+)\\}'\n\n", "I'm not exactly a regex expert, but it might be due to the brace at the end of the match. You might try to match r\"\\{@csm.((?:[a-zA-Z0-9_]+\\.?)+)\" and just check manually whether a closing brace occurs at the end or not.\n", "You probably need to give a better example of exactly what's slow. For a reasonably long string containing stuff that does and doesn't match:\nx=\"\".join(['{@csm.foo.bar-%d}\\n{@csm.foo.%dx.baz}\\n' % (a,a)\n for a in xrange(10000)])\nmymatch=r\"\\{@csm.((?:[a-zA-Z0-9_]+\\.?)+)\\}\"\n\nfor y in re.finditer(mymatch,x):\n print y.group(0)\n\nworks fine, but if you've got a long enough string and you're searching it poorly you could have problems.\n" ]
[ 4, 0, 0 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0001342589_python_regex.txt
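The slowdown is classic catastrophic backtracking: when the trailing \} forces a failure, the repeated group (?:[a-zA-Z0-9_]+\.?)+ -- a quantified group containing an optional piece -- can split a long run of word characters in exponentially many ways before giving up; the tightened pattern from the first answer removes that ambiguity. A quick harness contrasting the two, on made-up sample strings rather than the question's input:

import re

original = re.compile(r"\{@csm.((?:[a-zA-Z0-9_]+\.?)+)\}")
tightened = re.compile(r"\{@csm((?:\.\w+)+)\}")

samples = ['{@csm.foo.bar}',       # should match
           '{@csm.foo.bar-42}',    # should not match
           '{@csm.screenWidth}']   # should match

for s in samples:
    print(s, bool(original.search(s)), bool(tightened.search(s)))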
Q: How can I force subtraction to be signed in Python? You can skip to the bottom line if you don't care about the background: I have the following code in Python: ratio = (point.threshold - self.points[0].value) / (self.points[1].value - self.points[0].value) Which is giving me wrong values. For instance, for: threshold: 25.0 self.points[0].value: 46 self.points[1].value: 21 I got: ratio: -0.000320556853048 Which is wrong. Looking into it, I realized that self.points[0].value and self.points[1].value are of the type numpy.uint16, so I got: 21 - 46 = 65511 While I never defined a type for point.threshold. I just assigned it. I imagine it got a plain vanilla int. The Bottom Line How can I force the subtraction of two uints to be signed? A: Well, the obvious solution would probably be to cast to floats: ratio = (float(point.threshold) - float(self.points[0].value)) / (float(self.points[1].value) - float(self.points[0].value)) Or I suppose you could cast to one of the numpy signed types. A: Almost anything but uints will work here, so just cast these to something else before you do the subtraction. Since threshold = 25.0 (note the decimal point), it's a float, so the subtraction and division will all work as long as you're not using uints. A: Do those values actually NEED to be uint16 instead of int16? Unless they have to be able to take values of 2**15 or above (but still below 2**16) you could simply keep them as int16 and be done with it -- unsigned ints, as you discovered, can be tricky (and not just in numpy;-). If you DO need the uint16, then casts as David suggests will work, but if you can simply use int16 it will be faster and more readable. BTW, it looks like point.threshold is a float, not an int (good thing too, otherwise that division the way you code it would be a truncating one, unless you're importing true division from the future, as has been supported in many 2.* releases of Python -- and is finally THE way division works in 3.*). The .0 in 25.0 "gives it away" and shows it's a float, not an int.
How can I force subtraction to be signed in Python?
You can skip to the bottom line if you don't care about the background: I have the following code in Python: ratio = (point.threshold - self.points[0].value) / (self.points[1].value - self.points[0].value) Which is giving me wrong values. For instance, for: threshold: 25.0 self.points[0].value: 46 self.points[1].value: 21 I got: ratio: -0.000320556853048 Which is wrong. Looking into it, I realized that self.points[0].value and self.points[1].value are of the type numpy.uint16, so I got: 21 - 46 = 65511 While I never defined a type for point.threshold. I just assigned it. I imagine it got a plain vanilla int. The Bottom Line How can I force the subtraction of two uints to be signed?
[ "Well, the obvious solution would probably be to cast to floats:\nratio = (float(point.threshold) - float(self.points[0].value)) / (float(self.points[1].value) - float(self.points[0].value))\n\nOr I suppose you could cast to one of the numpy signed types.\n", "Almost anything but uints will work here, so just cast these to something else before you do the subtraction. \nSince threshold = 25.0 (note the decimal point), it's a float, so the subtraction and division will all work as long as you're not using uints.\n", "Do those values actually NEED to be uint16 instead of int16? Unless they have to be able to take values of 2**15 or above (but still below 2**16) you could simply keep them as int16 and be done with it -- unsigned ints, as you discovered, can be tricky (and not just in numpy;-). If you DO need the uint16, then casts as David suggests will work, but if you can simply use int16 it will be faster and more readable.\nBTW, it looks like that point.threshold is a float, not an int (good thing too, otherwise that division the way you code it would be a truncating one, unless you're importing true division from the future, as has been supported in many 2.* releases of Python -- and is finally THE way division works in 3.*). The .0 in 25.0 \"gives it away\" and shows it's a float, not an int.\n" ]
[ 2, 2, 0 ]
[]
[]
[ "python", "sign", "subtraction", "uint" ]
stackoverflow_0001342782_python_sign_subtraction_uint.txt
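A short reproduction of the wrap-around together with the two fixes discussed (cast to float, or cast to a wider signed type); depending on the numpy version, the first subtraction may additionally emit an overflow RuntimeWarning:

import numpy as np

a = np.uint16(21)
b = np.uint16(46)

print(a - b)                      # 65511 -- unsigned arithmetic wraps around

print(float(a) - float(b))        # -25.0 -- cast to float, as in the first answer
print(np.int32(a) - np.int32(b))  # -25   -- or cast to a wider signed type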
Q: Python Packages? Ok, I think whatever I'm doing wrong, it's probably blindingly obvious, but I can't figure it out. I've read and re-read the tutorial section on packages and the only thing I can figure is that this won't work because I'm executing it directly. Here's the directory setup: eulerproject/ __init__.py euler1.py euler2.py ... eulern.py tests/ __init__.py testeulern.py Here are the contents of testeuler12.py (the first test module I've written): import unittest from .. import euler12 class Euler12UnitTests(unittest.TestCase): def testtriangle(self): """ Ensure that the triangle number generator returns the first 10 triangle numbers. """ self.seq = [1,3,6,10,15,21,28,36,45,55] self.generator = euler12.trianglegenerator() self.results = [] while len(self.results) != 10: self.results.append(self.generator.next()) self.assertEqual(self.seq, self.results) def testdivisors(self): """ Ensure that the divisors function can properly factor the number 28. """ self.number = 28 self.answer = [1,2,4,7,14,28] self.assertEqual(self.answer, euler12.divisors(self.number)) if __name__ == '__main__': unittest.main() Now, when I execute this from IDLE and from the command line while in the directory, I get the following error: Traceback (most recent call last): File "C:\Documents and Settings\jbennet\My Documents\Python\eulerproject\tests\testeuler12.py", line 2, in <module> from .. import euler12 ValueError: Attempted relative import in non-package I think the problem is that since I'm running it directly, I can't do relative imports (because __name__ changes, and my vague understanding of the packages description is that __name__ is part of how it tells what package it's in), but in that case what do you guys suggest for how to import the 'production' code stored 1 level up from the test code? A: I had the same problem. I now use nose to run my tests, and relative imports are correctly handled. Yeah, this whole relative import thing is confusing. A: Generally you would have a directory, the name of which is your package name, somewhere on your PYTHONPATH. For example: eulerproject/ euler/ __init__.py euler1.py ... tests/ ... setup.py Then, you can either install this systemwide, or make sure to set PYTHONPATH=/path/to/eulerproject/:$PYTHONPATH when invoking your script. An absolute import like this will then work: from euler import euler1 Edit: According to the Python docs, "modules intended for use as the main module of a Python application should always use absolute imports." (Cite) So a test harness like nose, mentioned by the other answer, works because it imports packages rather than running them from the command line. If you want to do things by hand, your runnable script needs to be outside the package hierarchy, like this: eulerproject/ runtests.py euler/ __init__.py euler1.py ... tests/ __init__.py testeulern.py Now, runtests.py can do from euler.tests.testeulern import TestCase and testeulern.py can do from .. import euler1
Python Packages?
Ok, I think whatever I'm doing wrong, it's probably blindingly obvious, but I can't figure it out. I've read and re-read the tutorial section on packages and the only thing I can figure is that this won't work because I'm executing it directly. Here's the directory setup: eulerproject/ __init__.py euler1.py euler2.py ... eulern.py tests/ __init__.py testeulern.py Here are the contents of testeuler12.py (the first test module I've written): import unittest from .. import euler12 class Euler12UnitTests(unittest.TestCase): def testtriangle(self): """ Ensure that the triangle number generator returns the first 10 triangle numbers. """ self.seq = [1,3,6,10,15,21,28,36,45,55] self.generator = euler12.trianglegenerator() self.results = [] while len(self.results) != 10: self.results.append(self.generator.next()) self.assertEqual(self.seq, self.results) def testdivisors(self): """ Ensure that the divisors function can properly factor the number 28. """ self.number = 28 self.answer = [1,2,4,7,14,28] self.assertEqual(self.answer, euler12.divisors(self.number)) if __name__ == '__main__': unittest.main() Now, when I execute this from IDLE and from the command line while in the directory, I get the following error: Traceback (most recent call last): File "C:\Documents and Settings\jbennet\My Documents\Python\eulerproject\tests\testeuler12.py", line 2, in <module> from .. import euler12 ValueError: Attempted relative import in non-package I think the problem is that since I'm running it directly, I can't do relative imports (because __name__ changes, and my vague understanding of the packages description is that __name__ is part of how it tells what package it's in), but in that case what do you guys suggest for how to import the 'production' code stored 1 level up from the test code?
[ "I had the same problem. I now use nose to run my tests, and relative imports are correctly handled.\nYeah, this whole relative import thing is confusing.\n", "Generally you would have a directory, the name of which is your package name, somewhere on your PYTHONPATH. For example:\neulerproject/\n euler/\n __init__.py\n euler1.py\n ...\n tests/\n ...\n setup.py\n\nThen, you can either install this systemwide, or make sure to set PYTHONPATH=/path/to/eulerproject/:$PYTHONPATH when invoking your script.\nAn absolute import like this will then work:\nfrom euler import euler1\n\nEdit:\nAccording to the Python docs, \"modules intended for use as the main module of a Python application should always use absolute imports.\" (Cite)\nSo a test harness like nose, mentioned by the other answer, works because it imports packages rather than running them from the command line.\nIf you want to do things by hand, your runnable script needs to be outside the package hierarchy, like this:\neulerproject/\n runtests.py\n euler/\n __init__.py\n euler1.py\n ...\n tests/\n __init__.py\n testeulern.py\n\nNow, runtests.py can do from euler.tests.testeulern import TestCase and testeulern.py can do from .. import euler1\n" ]
[ 10, 8 ]
[]
[]
[ "package", "python", "unit_testing" ]
stackoverflow_0001342975_package_python_unit_testing.txt
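To make the second answer's layout runnable end to end, the launcher can look like this; the module and class names follow the question, and the file itself is an assumed addition:

# eulerproject/runtests.py -- sits OUTSIDE the 'euler' package, so it can
# use absolute imports; the tests' 'from .. import euler12' then resolves
# because testeuler12 is imported as euler.tests.testeuler12 rather than
# executed directly as __main__.
import unittest

from euler.tests.testeuler12 import Euler12UnitTests

if __name__ == '__main__':
    unittest.main()   # picks up Euler12UnitTests from this module's namespace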
Q: Can you really use the Visual Studio 2008 IDE to code in Python? I have a friend who I am trying to teach how to program. He comes from a very basic PHP background, and for some reason is ANTI C#, I guess because some of his PHP circles condemn anything that comes from Microsoft. Anyways - I've told him its possible to use either Ruby or Python with the VS2008 IDE, because I've read somewhere that this is possible. But I was wondering. Is it really that practical, can you do EVERYTHING with Python in VS2008 that you can do with C# or VB.net. I guess without starting a debate... I want to know if you're a developer using VS IDE with a language other than VB.net or C#, then please leave an answer with your experience. If you are (like me) either a VB.net or C# developer, please don't post speculative or subjective answers. This is a serious question, and I don't want it being closed as subjective. ... Thank you very much. update So far we've established that IronPython is the right tool for the job. Now how practical is it really? Mono for example runs C# code in Linux, but... ever tried to use it? Not practical at all, lots of code refactoring needs to take place, no support for .net v3.5, etc... A: If you want to use Python together with the .NET Common Language Runtime, then you want one of: Python.NET (extension to vanilla Python that adds .NET support) IronPython (re-implementation of Python as a .NET language) Boo (Python-like language that compiles down to C#-equivalent MSIL code) Using Python in Visual Studio without using the CLR seems like a bit of a waste to me. Eclipse with PyDev would be a much better choice. A: I find it odd that your friend is against C# but is ok with Visual Studio. There is, after all, an open source development environment for .NET called SharpDevelop. The C# language is a standard. .NET is free (as in beer) and there is an open source implementation of that platform called Mono. The only "un-free" thing is Visual Studio (though there are "Express" versions which are free as in beer). A: I don't know why you would want to - perhaps something like IronPython Studio would be a happy medium. But as I said I don't know why you would want to use Visual Studio for Python development when there are much better options available. Always choose the right tool for the right job - just because you can drive a nail with the butt-end of your cordless drill doesn't mean that you should. Visual Studio was not designed for Python development and as such will not be a perfect environment for developing in it. Please use the list I have linked to choose a more appropriate editor from that list. As a side note, I am wondering why your PHP friend refuses to use C# (a free, industry standardized language) but is okay using Visual Studio (an expensive, closed-source integrated development environment). A: This has been discussed before in this thread. I personally prefer eclipse and pyDev. A: Firstly, there seems to be a question as to whether python (or various implementations) are as 'powerful' as C#. I'm not quite sure what to take powerful to mean, but in my experience of both languages it will be somewhat easier and faster to write a given piece of code in python than in C#. C# is faster than cpython (although if speed is desired, the psyco python module is well worth a look). Also I would object to your dismissal of Mono. Mono is great on Linux if you write an application for it from scratch. 
It is not really meant to be a compatibility layer between Windows and Linux (see Wine!), and if you treat it as such you will only be disappointed. It just seems to me that you are taking the wrong approach. If you want to convince him that not everything Microsoft is evil, and he is adamant about not learning C#, get him to learn Python (or Ruby, or LUA or whatever) until he is competent, and then introduce him to C# and get him to make his own judgement - I'm fairly in favour of open source, and am far from a rabid Microsoft supporter, but I tried C#, and found I quite liked it. I think that getting him to use python and visual studio in a suboptimal way will turn him against both of them - far from your desired goal! A: Go here for a discussion on the Visual Studio IronPython IDEs.
Can you really use the Visual Studio 2008 IDE to code in Python?
I have a friend who I am trying to teach how to program. He comes from a very basic PHP background, and for some reason is ANTI C#, I guess because some of his PHP circles condemn anything that comes from Microsoft. Anyways - I've told him it's possible to use either Ruby or Python with the VS2008 IDE, because I've read somewhere that this is possible. But I was wondering. Is it really that practical, can you do EVERYTHING with Python in VS2008 that you can do with C# or VB.net? I guess without starting a debate... I want to know if you're a developer using VS IDE with a language other than VB.net or C#, then please leave an answer with your experience. If you are (like me) either a VB.net or C# developer, please don't post speculative or subjective answers. This is a serious question, and I don't want it being closed as subjective. ... Thank you very much. update So far we've established that IronPython is the right tool for the job. Now how practical is it really? Mono for example runs C# code in Linux, but... ever tried to use it? Not practical at all, lots of code refactoring needs to take place, no support for .net v3.5, etc...
[ "If you want to use Python together with the .NET Common Language Runtime, then you want one of:\n\nPython.NET (extension to vanilla Python that adds .NET support)\nIronPython (re-implementation of Python as a .NET language)\nBoo (Python-like language that compiles down to C#-equivalent MSIL code)\n\nUsing Python in Visual Studio without using the CLR seems like a bit of a waste to me. Eclipse with PyDev would be a much better choice.\n", "I find it odd that your friend is against C# but is ok with Visual Studio. There is, after all, an open source development environment for .NET called SharpDevelop. The C# language is a standard. .NET is free (as in beer) and there is an open source implementation of that platform called Mono. The only \"un-free\" thing is Visual Studio (though there are \"Express\" versions which are free as in beer).\n", "I don't know why you would want to - perhaps something like IronPython Studio would be a happy medium. But as I said I don't know why you would want to use Visual Studio for Python development when there are much better options available.\nAlways choose the right tool for the right job - just because you can drive a nail with the butt-end of your cordless drill doesn't mean that you should. Visual Studio was not designed for Python development and as such will not be a perfect environment for developing in it. Please use the list I have linked to choose a more appropriate editor from that list.\nAs a side note, I am wondering why your PHP friend refuses to use C# (a free, industry standardized language) but is okay using Visual Studio (an expensive, closed-source integrated development environment).\n", "This has been discussed before in this thread. I personally prefer eclipse and pyDev.\n", "Firstly, there seems to be a question as to whether python (or various implementations) are as 'powerful' as C#. I'm not quite sure what to take powerful to mean, but in my experience of both languages it will be somewhat easier and faster to write a given piece of code in python than in C#. C# is faster than cpython (although if speed is desired, the psyco python module is well worth a look).\nAlso I would object to your dismissal of Mono. Mono is great on Linux if you write an application for it from scratch. It is not really meant to be a compatibility layer between Windows and Linux (see Wine!), and if you treat it as such you will only be disappointed.\nIt just seems to me that you are taking the wrong approach. If you want to convince him that not everything Microsoft is evil, and he is adamant about not learning C#, get him to learn Python (or Ruby, or LUA or whatever) until he is competent, and then introduce him to C# and get him to make his own judgement - I'm fairly in favour of open source, and am far from a rabid Microsoft supporter, but I tried C#, and found I quite liked it.\nI think that getting him to use python and visual studio in a suboptimal way will turn him against both of them - far from your desired goal!\n", "Go here for a discussion on the Visual Studio IronPython IDEs.\n" ]
[ 8, 2, 1, 1, 1, 0 ]
[]
[]
[ "ironpython", "python", "visual_studio" ]
stackoverflow_0001342377_ironpython_python_visual_studio.txt
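For scale, this is what the IronPython option mentioned in the answers looks like in practice -- ordinary Python syntax with direct access to .NET assemblies. This snippet runs under IronPython (ipy.exe), not CPython:

import clr
clr.AddReference('System.Windows.Forms')

from System.Windows.Forms import MessageBox

MessageBox.Show('Hello from IronPython on .NET')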
Q: In Django, how to control which DB connection and cursor a queryset will use I'm trying to get a queryset to issue its query over a different DB connection, using a different cursor class. Does anyone know if that's possible and if so how it might be done? In pseudo-code: # setup a new db connection: db = db_connect(cursorclass=AlternateCursor) # setup a generic queryset qset = blah.objects.all() # tell qset to use the new connection: qset.use_db(db) # and then apply some filters qset = qset.filter(...) # and execute the query: for object in qset: ... Thanks! A: This is possible from Django 1.0 on - the trick is to use a custom manager for your model and replace the manager's connection object. See the code at Eric Florenzano's post at http://www.eflorenzano.com/blog/post/easy-multi-database-support-django/
In Django, how to control which DB connection and cursor a queryset will use
I'm trying to get a queryset to issue its query over a different DB connection, using a different cursor class. Does anyone know if that's possible and if so how it might be done? In pseudo-code: # setup a new db connection: db = db_connect(cursorclass=AlternateCursor) # setup a generic queryset qset = blah.objects.all() # tell qset to use the new connection: qset.use_db(db) # and then apply some filters qset = qset.filter(...) # and execute the query: for object in qset: ... Thanks!
[ "This is possible from Django 1.0 on - the trick is to use a custom manager for your model and replace the manager's connection object. See the code at Eric Florenzano's post at http://www.eflorenzano.com/blog/post/easy-multi-database-support-django/\n" ]
[ 3 ]
[]
[]
[ "database", "django", "python" ]
stackoverflow_0001342594_database_django_python.txt
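A sketch of the custom-manager approach the answer describes, modeled on the pattern in the linked post; it assumes the Django 1.0/1.1 internals of that era (sql.Query accepting a connection argument) and is an untested illustration, not a verified recipe:

from django.db import models
from django.db.models import sql
from django.db.models.query import QuerySet

class AlternateConnectionManager(models.Manager):
    """Manager whose querysets execute over a caller-supplied connection."""

    def __init__(self, connection):
        super(AlternateConnectionManager, self).__init__()
        self._connection = connection

    def get_query_set(self):
        # Build the low-level Query against the alternate connection
        # instead of the default django.db.connection.
        return QuerySet(self.model, sql.Query(self.model, self._connection))

# Hypothetical usage -- 'alt_db' is assumed to be a Django connection
# object (e.g. one wrapping the AlternateCursor) created elsewhere:
#
# class Blah(models.Model):
#     objects = models.Manager()
#     on_alt = AlternateConnectionManager(alt_db)
#
# Blah.on_alt.filter(...) then runs its SQL over alt_db.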
Q: How to make this method non-recursive? Hey. This example is pretty specific but I think it could apply to a broad range of functions. It's taken from some online programming contest. There is a game with a simple winning condition. Draw is not possible. Game cannot go on forever because every move takes you closer to the terminating condition. The function should, given a state, determine if the player who is to move now has a winning strategy. In the example, the state is an integer. A player chooses a non-zero digit and subtracts it from the number: the new state is the new integer. The winner is the player who reaches zero. I coded this: from Memoize import Memoize @Memoize def Game(x): if x == 0: return True for digit in str(x): if digit != '0' and not Game(x-int(digit)): return True return False I think it's clear how it works. I also realize that for this specific game there's probably a much smarter solution but my question is general. However this makes python go crazy even for relatively small inputs. Is there any way to make this code work with a loop? Thanks This is what I mean by translating into a loop: def fac(x): if x <= 1: return x else: return x*fac(x-1) def fac_loop(x): result = 1 for i in xrange(1,x+1): result *= i return result ## dont try: fac(10000) print fac_loop(10000) % 100 ## works A: In general, it is only possible to convert recursive functions into loops when they are primitive-recursive; this basically means that they call themselves only once in the body. Your function calls itself multiple times. Such a function really needs a stack. It is possible to make the stack explicit, e.g. with lists. One reformulation of your algorithm using an explicit stack is def Game(x): # x, str(x), position stack = [(x,str(x),0)] # return value res = None while stack: if res is not None: # we have a return value if not res: stack.pop() res = True continue # res is True, continue to search res = None x, s, pos = stack.pop() if x == 0: res = True continue if pos == len(s): # end of loop, return False res = False continue stack.append((x,s,pos+1)) digit = s[pos] if digit == '0': continue x -= int(digit) # recurse, starting with position 0 stack.append((x,str(x),0)) return res Basically, you need to make each local variable an element of a stack frame; the local variables here are x, str(x), and the iteration counter of the loop. Doing return values is a bit tricky - I chose to set res to not-None if a function has just returned. A: By "go crazy" I assume you mean: >>> Game(10000) # stuff skipped RuntimeError: maximum recursion depth exceeded in cmp You could start at the bottom instead -- a crude change would be: # after defining Game() for i in range(10000): Game(i) # Now this will work: print Game(10000) This is because, if you start with a high number, you have to recurse a long way before you reach the bottom (0), so your memoization decorator doesn't help the way it should. By starting from the bottom, you ensure that every recursive call hits the dictionary of results immediately. You probably use extra space, but you don't recurse far. You can turn any recursive function into an iterative function by using a loop and a stack -- essentially running the call stack by hand. See this question or this quesstion, for example, for some discussion. There may be a more elegant loop-based solution here, but it doesn't leap out to me. A: Well, recursion mostly is about being able to execute some code without losing previous contexts and their order. 
In particular, function frames put and saved onto call stack during recursion, therefore giving constraint on recursion depth because stack size is limited. You can 'increase' your recursion depth by manually managing/saving required information on each recursive call by creating a state stack on the heap memory. Usually, amount of available heap memory is larger than stack's one. Think: good quick sort implementations eliminate recursion to the larger side by creating an outer loop with ever-changing state variables (lower/upper array boundaries and pivot in QS example). While I was typing this, Martin v. Löwis posted good answer about converting recursive functions into loops. A: You could modify your recursive version a bit: def Game(x): if x == 0: return True s = set(digit for digit in str(x) if digit != '0') return any(not Game(x-int(digit)) for digit in s) This way, you don't examine digits multiple times. For example, if you are doing 111, you don't have to look at 110 three times. I'm not sure if this counts as an iterative version of the original algorithm you presented, but here is a memoized iterative version: import Queue def Game2(x): memo = {} memo[0] = True calc_dep = {} must_calc = Queue.Queue() must_calc.put(x) while not must_calc.empty(): n = must_calc.get() if n and n not in calc_dep: s = set(int(c) for c in str(n) if c != '0') elems = [n - digit for digit in s] calc_dep[n] = elems for new_elem in elems: if new_elem not in calc_dep: must_calc.put(new_elem) for k in sorted(calc_dep.keys()): v = calc_dep[k] #print k, v memo[k] = any(not memo[i] for i in v) return memo[x] It first calculates the set of numbers that x, the input, depends on. Then it calculates those numbers, starting at the bottom and going towards x. The code is so fast because of the test for calc_dep. It avoids calculating multiple dependencies. As a result, it can do Game(10000) in under 400 milliseconds whereas the original takes -- I don't know how long. A long time. Here are performance measurements: Elapsed: 1000 0:00:00.029000 Elapsed: 2000 0:00:00.044000 Elapsed: 4000 0:00:00.086000 Elapsed: 8000 0:00:00.197000 Elapsed: 16000 0:00:00.461000 Elapsed: 32000 0:00:00.969000 Elapsed: 64000 0:00:01.774000 Elapsed: 128000 0:00:03.708000 Elapsed: 256000 0:00:07.951000 Elapsed: 512000 0:00:19.148000 Elapsed: 1024000 0:00:34.960000 Elapsed: 2048000 0:01:17.960000 Elapsed: 4096000 0:02:55.013000 It's reasonably zippy.
How to make this method non-recursive?
Hey. This example is pretty specific but I think it could apply to a broad range of functions. It's taken from some online programming contest. There is a game with a simple winning condition. Draw is not possible. Game cannot go on forever because every move takes you closer to the terminating condition. The function should, given a state, determine if the player who is to move now has a winning strategy. In the example, the state is an integer. A player chooses a non-zero digit and subtracts it from the number: the new state is the new integer. The winner is the player who reaches zero. I coded this: from Memoize import Memoize @Memoize def Game(x): if x == 0: return True for digit in str(x): if digit != '0' and not Game(x-int(digit)): return True return False I think it's clear how it works. I also realize that for this specific game there's probably a much smarter solution but my question is general. However this makes python go crazy even for relatively small inputs. Is there any way to make this code work with a loop? Thanks This is what I mean by translating into a loop: def fac(x): if x <= 1: return x else: return x*fac(x-1) def fac_loop(x): result = 1 for i in xrange(1,x+1): result *= i return result ## dont try: fac(10000) print fac_loop(10000) % 100 ## works
[ "In general, it is only possible to convert recursive functions into loops when they are primitive-recursive; this basically means that they call themselves only once in the body. Your function calls itself multiple times. Such a function really needs a stack. It is possible to make the stack explicit, e.g. with lists. One reformulation of your algorithm using an explicit stack is\ndef Game(x):\n # x, str(x), position\n stack = [(x,str(x),0)]\n # return value\n res = None\n\n while stack:\n if res is not None:\n # we have a return value\n if not res:\n stack.pop()\n res = True\n continue\n # res is True, continue to search\n res = None\n x, s, pos = stack.pop()\n if x == 0:\n res = True\n continue\n if pos == len(s):\n # end of loop, return False\n res = False\n continue\n stack.append((x,s,pos+1))\n digit = s[pos]\n if digit == '0':\n continue\n x -= int(digit)\n # recurse, starting with position 0\n stack.append((x,str(x),0))\n\n return res\n\nBasically, you need to make each local variable an element of a stack frame; the local variables here are x, str(x), and the iteration counter of the loop. Doing return values is a bit tricky - I chose to set res to not-None if a function has just returned.\n", "By \"go crazy\" I assume you mean:\n>>> Game(10000)\n# stuff skipped\nRuntimeError: maximum recursion depth exceeded in cmp\n\nYou could start at the bottom instead -- a crude change would be:\n# after defining Game()\nfor i in range(10000):\n Game(i)\n\n# Now this will work:\nprint Game(10000)\n\nThis is because, if you start with a high number, you have to recurse a long way before you reach the bottom (0), so your memoization decorator doesn't help the way it should.\nBy starting from the bottom, you ensure that every recursive call hits the dictionary of results immediately. You probably use extra space, but you don't recurse far.\nYou can turn any recursive function into an iterative function by using a loop and a stack -- essentially running the call stack by hand. See this question or this quesstion, for example, for some discussion. There may be a more elegant loop-based solution here, but it doesn't leap out to me.\n", "Well, recursion mostly is about being able to execute some code without losing previous contexts and their order. In particular, function frames put and saved onto call stack during recursion, therefore giving constraint on recursion depth because stack size is limited. You can 'increase' your recursion depth by manually managing/saving required information on each recursive call by creating a state stack on the heap memory. Usually, amount of available heap memory is larger than stack's one. Think: good quick sort implementations eliminate recursion to the larger side by creating an outer loop with ever-changing state variables (lower/upper array boundaries and pivot in QS example). \nWhile I was typing this, Martin v. Löwis posted good answer about converting recursive functions into loops.\n", "You could modify your recursive version a bit:\ndef Game(x):\n if x == 0: return True\n s = set(digit for digit in str(x) if digit != '0')\n return any(not Game(x-int(digit)) for digit in s)\n\nThis way, you don't examine digits multiple times. 
For example, if you are doing 111, you don't have to look at 110 three times.\nI'm not sure if this counts as an iterative version of the original algorithm you presented, but here is a memoized iterative version:\nimport Queue\ndef Game2(x):\n memo = {}\n memo[0] = True\n calc_dep = {}\n must_calc = Queue.Queue()\n must_calc.put(x)\n while not must_calc.empty():\n n = must_calc.get()\n if n and n not in calc_dep:\n s = set(int(c) for c in str(n) if c != '0')\n elems = [n - digit for digit in s]\n calc_dep[n] = elems\n for new_elem in elems:\n if new_elem not in calc_dep:\n must_calc.put(new_elem)\n for k in sorted(calc_dep.keys()):\n v = calc_dep[k]\n #print k, v\n memo[k] = any(not memo[i] for i in v)\n return memo[x]\n\nIt first calculates the set of numbers that x, the input, depends on. Then it calculates those numbers, starting at the bottom and going towards x. \nThe code is so fast because of the test for calc_dep. It avoids calculating multiple dependencies. As a result, it can do Game(10000) in under 400 milliseconds whereas the original takes -- I don't know how long. A long time.\nHere are performance measurements:\nElapsed: 1000 0:00:00.029000\nElapsed: 2000 0:00:00.044000\nElapsed: 4000 0:00:00.086000\nElapsed: 8000 0:00:00.197000\nElapsed: 16000 0:00:00.461000\nElapsed: 32000 0:00:00.969000\nElapsed: 64000 0:00:01.774000\nElapsed: 128000 0:00:03.708000\nElapsed: 256000 0:00:07.951000\nElapsed: 512000 0:00:19.148000\nElapsed: 1024000 0:00:34.960000\nElapsed: 2048000 0:01:17.960000\nElapsed: 4096000 0:02:55.013000\n\nIt's reasonably zippy.\n" ]
[ 5, 3, 0, 0 ]
[]
[]
[ "python", "recursion", "refactoring" ]
stackoverflow_0001339215_python_recursion_refactoring.txt
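The memoized recursion in the answers above can also be flattened into a plain bottom-up loop, which sidesteps the recursion limit entirely: compute the win/lose table from 0 upward instead of recursing down from x. A minimal sketch (the function and list names here are illustrative choices, not from the original answers):

def game_loop(x):
    win = [False] * (x + 1)
    win[0] = True
    for n in xrange(1, x + 1):
        digits = set(int(c) for c in str(n) if c != '0')
        # the player to move wins if some digit puts the opponent
        # in a losing position
        win[n] = any(not win[n - d] for d in digits)
    return win[x]

print game_loop(10000)  # no recursion, so no recursion-depth error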
Q: Basic tree in Python with a Django QuerySet Here's where I'm exposed as the fraud of a programmer I am. I've never created a data tree. Basically, I have a table with four fields: A, B, C, and D. I need to create a tree of unordered lists based on these fields. Ultimately, it would look something like this: A1 B1 C1 D1 D2 C2 D3 D4 B2 C2 D5 D6 C3 D7 D8 A2 B2 C1 D9 D10 C4 D11 D12 B3 C3 D13 D14 C4 D15 D16 It's a pretty basic, 4-level tree...I think. It is much larger and more complex than I'm representing here, but that's enough to get the point across. I'm hoping to keep it to one database call, but I'm not sure if that's possible. I'll be fetching the data with a Django QuerySet. I have Python at my disposal. I don't even know where to start with the programming logic to keep it from becoming unwieldy. I'd appreciate any pointers, guidance, links...just about anything! A: I am not sure of your question, is there something specific you are asking? Here are a few reusable applications for storing hierarchical data: django-mptt django-treebeard What's your reasoning behind using the 4 separate fields?
Basic tree in Python with a Django QuerySet
Here's where I'm exposed as the fraud of a programmer I am. I've never created a data tree. Basically, I have a table with four fields: A, B, C, and D. I need to create a tree of unordered lists based on these fields. Ultimately, it would look something like this: A1 B1 C1 D1 D2 C2 D3 D4 B2 C2 D5 D6 C3 D7 D8 A2 B2 C1 D9 D10 C4 D11 D12 B3 C3 D13 D14 C4 D15 D16 It's a pretty basic, 4-level tree...I think. It is much larger and more complex than I'm representing here, but that's enough to get the point across. I'm hoping to keep it to one database call, but I'm not sure if that's possible. I'll be fetching the data with a Django QuerySet. I have Python at my disposal. I don't even know where to start with the programming logic to keep it from becoming unwieldy. I'd appreciate any pointers, guidance, links...just about anything!
[ "I am not sure of your question, is there something specific you are asking?\nHere are a few reusable applications for storing hierarchical data:\n\ndjango-mptt\ndjango-treebeard\n\nWhat's your reasoning behind using the 4 separate fields?\n" ]
[ 4 ]
[]
[]
[ "django", "django_queryset", "iteration", "python", "tree" ]
stackoverflow_0001343845_django_django_queryset_iteration_python_tree.txt
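For the tree question above, one pass over a single queryset is enough if the flat rows are folded into nested dictionaries. A sketch, assuming a model named MyModel with fields a, b, c and d (the names are placeholders; values_list is a standard Django QuerySet method):

tree = {}
for a, b, c, d in MyModel.objects.values_list('a', 'b', 'c', 'd'):
    tree.setdefault(a, {}).setdefault(b, {}).setdefault(c, []).append(d)

# walk the nesting to render it; swap the prints for <ul>/<li> markup
for a, bs in sorted(tree.items()):
    print a
    for b, cs in sorted(bs.items()):
        print ' ', b
        for c, ds in sorted(cs.items()):
            print '  ', c
            for d in ds:
                print '   ', d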
Q: Condensing code in Python with Mappings I seem to be using this block of code a lot in Python. if Y is not None: obj[X][0]=Y How do I establish a mapping from X=>Y and then iterate through this entire mapping while calling that block of code on X and Y A: mapping = {X1: Y1, X2: Y2, X3: Y3} mapping[X4] = Y4 mapping[X5] = Y5 for X,Y in mapping.items(): if Y is not None: obj[X][0] = Y A: If Y is None, you can do something like: default_value = 0 obj[X][0] = Y if Y is not None else default_value
Condensing code in Python with Mappings
I seem to be using this block of code a lot in Python. if Y is not None: obj[X][0]=Y How do I establish a mapping from X=>Y and then iterate through this entire mapping while calling that block of code on X and Y
[ "mapping = {X1: Y1, X2: Y2, X3: Y3}\nmapping[X4] = Y4\nmapping[X5] = Y5\n\nfor X,Y in mapping.items():\n if Y is not None:\n obj[X][0] = Y\n\n", "If Y is None, you can do something like:\ndefault_value = 0\nobj[X][0] = Y if not None else default_value\n\n" ]
[ 5, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001344208_python.txt
Q: How can you read keystrokes when the python program isn't in the foreground? I'm trying to analyze my keystrokes over the next month and would like to throw together a simple program to do so. I don't want to exactly log the commands but simply generate general statistics on my key presses. I am the most comfortable coding this in python, but am open to other suggestions. Is this possible, and if so what python modules should I look at? Has this already been done? I'm on OSX but would also be interested in doing this on an Ubuntu box and Windows XP. A: It looks like you need http://patorjk.com/keyboard-layout-analyzer/ This handy program will analyze a block of text and tell you how far your fingers had to travel to type it, then recommend your optimal layout. To answer your original question, on Linux you can read from /dev/event* for local keyboard, mouse and joystick events. I believe you could for example simply cat /dev/event0 > keylogger. The events are instances of struct input_event. See also http://www.linuxjournal.com/article/6429. Python's struct module is a convenient way to parse binary data. For OSX, take a look at the source code to logkext. http://code.google.com/p/logkext/ A: Unless you are planning on writing the interfaces yourself, you are going to require some library, since as other posters have pointed out, you need to access low-level key press events managed by the desktop environment. On Windows, the PyHook library would give you the functionality you need. On Linux, you can use the Python X Library (assuming you are running a graphical desktop). Both of these are used to good effect by pykeylogger. You'd be best off downloading the source (see e.g. pyxhook.py) to see specific examples of how key press events are captured. It should be trivial to modify this to sum the distribution of keys rather than recording the ordering. A: As the current X server's Record extension seems to be broken, using pykeylogger for Linux doesn't really help. Take a look at evdev and its demo function, instead. The solution is nastier, but it does at least work. It comes down to setting up a hook to the device import evdev keyboard_location = '/dev/input/event1' # get the correct one from HAL or so keyboard_device = evdev.Device(keyboard_location) Then, regularly poll the device to get the status of keys and other information: keyboard_device.poll() A: Depending on what statistics you want to collect, maybe you do not have to write this yourself; the program Workrave is a program to remind you to take small breaks and does so by monitoring keyboard and mouse activity. It keeps statistics of this activity which you probably could use (unless you want very detailed/more specific statistics). In worst case you could look at the source (C++) to find how it is done.
How can you read keystrokes when the python program isn't in the foreground?
I'm trying to analyze my keystrokes over the next month and would like to throw together a simple program to do so. I don't want to exactly log the commands but simply generate general statistics on my key presses. I am the most comfortable coding this in python, but am open to other suggestions. Is this possible, and if so what python modules should I look at? Has this already been done? I'm on OSX but would also be interested in doing this on an Ubuntu box and Windows XP.
[ "It looks like you need http://patorjk.com/keyboard-layout-analyzer/\nThis handy program will analyze a block of text and tell you how far your fingers had to travel to type it, then recommend your optimal layout.\nTo answer your original question, on Linux you can read from /dev/event* for local keyboard, mouse and joystick events. I believe you could for example simply cat /dev/event0 > keylogger. The events are instances of struct input_event. See also http://www.linuxjournal.com/article/6429.\nPython's struct module is a convenient way to parse binary data.\nFor OSX, take a look at the source code to logkext. http://code.google.com/p/logkext/\n", "Unless you are planning on writing the interfaces yourself, you are going to require some library, since as other posters have pointed out, you need to access low-level key press events managed by the desktop environment.\nOn Windows, the PyHook library would give you the functionality you need.\nOn Linux, you can use the Python X Library (assuming you are running a graphical desktop).\nBoth of these are used to good effect by pykeylogger. You'd be best off downloading the source (see e.g. pyxhook.py) to see specific examples of how key press events are captured. It should be trivial to modify this to sum the distribution of keys rather than recording the ordering.\n", "As the current X server's Record extension seems to be broken, using pykeylogger for Linux doesn't really help. Take a look at evdev and its demo function, instead. The solution is nastier, but it does at least work.\nIt comes down to setting up a hook to the device\nimport evdev\nkeyboard_location = '/dev/input/event1' # get the correct one from HAL or so\nkeyboard_device = evdev.Device(keyboard_location)\n\nThen, regularly poll the device to get the status of keys and other information: \nkeyboard_device.poll()\n\n", "Depending on what statistics you want to collect, maybe you do not have to write this yourself; the program Workrave is a program to remind you to take small breaks and does so by monitoring keyboard and mouse activity. It keeps statistics of this activity which you probably could use (unless you want very detailed/more specific statistics). In worst case you could look at the source (C++) to find how it is done.\n" ]
[ 4, 2, 2, 0 ]
[]
[]
[ "background", "keyboard", "keylogger", "python" ]
stackoverflow_0001054380_background_keyboard_keylogger_python.txt
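To make the /dev/input suggestion above concrete, here is a sketch that counts key presses by unpacking raw struct input_event records with the struct module. It assumes the 'llHHi' layout (a timeval of two native longs, then type, code and value); the device path is only an example, and reading the device normally requires root:

import struct
from collections import defaultdict

EV_KEY = 0x01
fmt = 'llHHi'  # struct input_event: timeval, type, code, value
size = struct.calcsize(fmt)
counts = defaultdict(int)

dev = open('/dev/input/event0', 'rb')
try:
    while True:
        data = dev.read(size)
        if len(data) < size:
            break
        tv_sec, tv_usec, ev_type, code, value = struct.unpack(fmt, data)
        if ev_type == EV_KEY and value == 1:  # 1 means key press
            counts[code] += 1
finally:
    dev.close()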
Q: nose tests of Pylons app with models in init_model? I have a stock Pylons app created using paster create -t pylons with one controller and matched functional test, added using paster controller, and a SQLAlchemy table and mapped ORM class. The SQLAlchemy stuff is defined in the init_model() function rather than in module scope (and needs to be there). Running python setup.py test raises an exception because nose is somehow causing init_model() to be called twice within the same process, so it's trying to create a model that already exists. I can hackishly fix this by setting and checking a global variable inside init_model(), but (a) I'd rather not, and (b) third-party libraries such as AuthKit that dynamically define models break the tests as well, and can't be so easily changed. Is there a way to fix nose tests for Pylons, or should I write my own test script and just use unittest, loadapp, and webtest directly? Any working examples of this? A: I would try debugging your nosetest run. Why not put: import pdb;pdb.set_trace() in the init_model() function and see how it is getting invoked more than once. With PDB running you can see the stack trace using the where command: w(here) Print a stack trace, with the most recent frame at the bottom. An arrow indicates the "current frame", which determines the context of most commands. 'bt' is an alias for this command.
nose tests of Pylons app with models in init_model?
I have a stock Pylons app created using paster create -t pylons with one controller and matched functional test, added using paster controller, and a SQLAlchemy table and mapped ORM class. The SQLAlchemy stuff is defined in the init_model() function rather than in module scope (and needs to be there). Running python setup.py test raises an exception because nose is somehow causing init_model() to be called twice within the same process, so it's trying to create a model that already exists. I can hackishly fix this by setting and checking a global variable inside init_model(), but (a) I'd rather not, and (b) third-party libraries such as AuthKit that dynamically define models break the tests as well, and can't be so easily changed. Is there a way to fix nose tests for Pylons, or should I write my own test script and just use unittest, loadapp, and webtest directly? Any working examples of this?
[ "I would try debugging your nosetest run. Why not put:\nimport pdb;pdb.set_trace()\n\nin the init_model() function and see how it is getting invoked more than once.\nWith PDB running you can see the stack trace using the where command:\nw(here)\nPrint a stack trace, with the most recent frame at the bottom.\nAn arrow indicates the \"current frame\", which determines the\ncontext of most commands. 'bt' is an alias for this command.\n\n" ]
[ 3 ]
[]
[]
[ "nose", "nosetests", "pylons", "python", "sqlalchemy" ]
stackoverflow_0001342232_nose_nosetests_pylons_python_sqlalchemy.txt
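For reference, the guard the question above calls hackish is only a few lines; making init_model() idempotent keeps a second loadapp during the test run from redefining the tables (it does nothing for third-party code such as AuthKit, though):

_initialized = False

def init_model(engine):
    global _initialized
    if _initialized:
        return
    # ... bind the engine and define tables/mappers here as before ...
    _initialized = True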
Q: How to find Title case phrases from a passage or bunch of paragraphs How do I parse sentence case phrases from a passage. For example from this passage Conan Doyle said that the character of Holmes was inspired by Dr. Joseph Bell, for whom Doyle had worked as a clerk at the Edinburgh Royal Infirmary. Like Holmes, Bell was noted for drawing large conclusions from the smallest observations.[1] Michael Harrison argued in a 1971 article in Ellery Queen's Mystery Magazine that the character was inspired by Wendell Scherer, a "consulting detective" in a murder case that allegedly received a great deal of newspaper attention in England in 1882. We need to generate stuff like Conan Doyle, Holmes, Dr Joseph Bell, Wendell Scherr etc. I would prefer a Pythonic Solution if possible A: This kind of processing can be very tricky. This simple code does almost the right thing: for s in re.finditer(r"([A-Z][a-z]+[. ]+)+([A-Z][a-z]+)?", text): print s.group(0) produces: Conan Doyle Holmes Dr. Joseph Bell Doyle Edinburgh Royal Infirmary. Like Holmes Bell Michael Harrison Ellery Queen Mystery Magazine Wendell Scherer England To include "Dr. Joseph Bell", you need to be ok with the period in the string, which allows in "Edinburgh Royal Infirmary. Like Holmes". I had a similar problem: Separating Sentences. A: The "re" approach runs out of steam very quickly. Named entity recognition is a very complicated topic, way beyond the scope of an SO answer. If you think you have a good approach to this problem, please point it at Flann O'Brien a.k.a. Myles na cGopaleen, Sukarno, Harry S. Truman, J. Edgar Hoover, J. K. Rowling, the mathematician L'Hopital, Joe di Maggio, Algernon Douglas-Montagu-Scott, and Hugo Max Graf von und zu Lerchenfeld auf Köfering und Schönberg. Update Following is an "re"-based approach that finds a lot more valid cases. I still don't think that this is a good approach, though. N.B. I've asciified the Bavarian count's name in my text sample. If anyone really wants to use something like this, they should work in Unicode, and normalise whitespace at some stage (either on input or on output). import re text1 = """Conan Doyle said that the character of Holmes was inspired by Dr. Joseph Bell, for whom Doyle had worked as a clerk at the Edinburgh Royal Infirmary. Like Holmes, Bell was noted for drawing large conclusions from the smallest observations.[1] Michael Harrison argued in a 1971 article in Ellery Queen's Mystery Magazine that the character was inspired by Wendell Scherer, a "consulting detective" in a murder case that allegedly received a great deal of newspaper attention in England in 1882.""" text2 = """Flann O'Brien a.k.a. Myles na cGopaleen, I Zingari, Sukarno and Suharto, Harry S. Truman, J. Edgar Hoover, J. K. Rowling, the mathematician L'Hopital, Joe di Maggio, Algernon Douglas-Montagu-Scott, and Hugo Max Graf von und zu Lerchenfeld auf Koefering und Schoenberg.""" pattern1 = r"(?:[A-Z][a-z]+[. ]+)+(?:[A-Z][a-z]+)?" joiners = r"' - de la du von und zu auf van der na di il el bin binte abu etcetera".split() pattern2 = r"""(?x) (?: (?:[ .]|\b%s\b)* (?:\b[a-z]*[A-Z][a-z]*\b)? )+ """ % r'\b|\b'.join(joiners) def get_names(pattern, text): for m in re.finditer(pattern, text): s = m.group(0).strip(" .'-") if s: yield s for t in (text1, text2): print "*** text: ", t[:20], "..." 
print "=== Ned B" for s in re.finditer(pattern1): print repr(s.group(0)) print "=== John M ==" for name in get_names(pattern2, t): print repr(name) Output: C:\junk\so>\python26\python extract_names.py *** text: Conan Doyle said tha ... === Ned B 'Conan Doyle ' 'Holmes ' 'Dr. Joseph Bell' 'Doyle ' 'Edinburgh Royal Infirmary. Like Holmes' 'Bell ' 'Michael Harrison ' 'Ellery Queen' 'Mystery Magazine ' 'Wendell Scherer' 'England ' === John M == 'Conan Doyle' 'Holmes' 'Dr. Joseph Bell' 'Doyle' 'Edinburgh Royal Infirmary. Like Holmes' 'Bell' 'Michael Harrison' 'Ellery Queen' 'Mystery Magazine' 'Wendell Scherer' 'England' *** text: Flann O'Brien a.k.a. ... === Ned B 'Flann ' 'Brien ' 'Myles ' 'Sukarno ' 'Harry ' 'Edgar Hoover' 'Joe ' 'Algernon Douglas' 'Hugo Max Graf ' 'Lerchenfeld ' 'Koefering ' 'Schoenberg.' === John M == "Flann O'Brien" 'Myles na cGopaleen' 'I Zingari' 'Sukarno' 'Suharto' 'Harry S. Truman' 'J. Edgar Hoover' 'J. K. Rowling' "L'Hopital" 'Joe di Maggio' 'Algernon Douglas-Montagu-Scott' 'Hugo Max Graf von und zu Lerchenfeld auf Koefering und Schoenberg'
How to find Title case phrases from a passage or bunch of paragraphs
How do I parse sentence case phrases from a passage. For example from this passage Conan Doyle said that the character of Holmes was inspired by Dr. Joseph Bell, for whom Doyle had worked as a clerk at the Edinburgh Royal Infirmary. Like Holmes, Bell was noted for drawing large conclusions from the smallest observations.[1] Michael Harrison argued in a 1971 article in Ellery Queen's Mystery Magazine that the character was inspired by Wendell Scherer, a "consulting detective" in a murder case that allegedly received a great deal of newspaper attention in England in 1882. We need to generate stuff like Conan Doyle, Holmes, Dr Joseph Bell, Wendell Scherr etc. I would prefer a Pythonic Solution if possible
[ "This kind of processing can be very tricky. This simple code does almost the right thing:\nfor s in re.finditer(r\"([A-Z][a-z]+[. ]+)+([A-Z][a-z]+)?\", text):\n print s.group(0)\n\nproduces:\nConan Doyle\nHolmes\nDr. Joseph Bell\nDoyle\nEdinburgh Royal Infirmary. Like Holmes\nBell\nMichael Harrison\nEllery Queen\nMystery Magazine\nWendell Scherer\nEngland\n\nTo include \"Dr. Joseph Bell\", you need to be ok with the period in the string, which allows in \"Edinburgh Royal Infirmary. Like Holmes\".\nI had a similar problem: Separating Sentences.\n", "The \"re\" approach runs out of steam very quickly. Named entity recognition is a very complicated topic, way beyond the scope of an SO answer. If you think you have a good approach to this problem, please point it at Flann O'Brien a.k.a. Myles na cGopaleen, Sukarno, Harry S. Truman, J. Edgar Hoover, J. K. Rowling, the mathematician L'Hopital, Joe di Maggio, Algernon Douglas-Montagu-Scott, and Hugo Max Graf von und zu Lerchenfeld auf Köfering und Schönberg.\nUpdate Following is an \"re\"-based approach that finds a lot more valid cases. I still don't think that this is a good approach, though. N.B. I've asciified the Bavarian count's name in my text sample. If anyone really wants to use something like this, they should work in Unicode, and normalise whitespace at some stage (either on input or on output).\nimport re\n\ntext1 = \"\"\"Conan Doyle said that the character of Holmes was inspired by Dr. Joseph Bell, for whom Doyle had worked as a clerk at the Edinburgh Royal Infirmary. Like Holmes, Bell was noted for drawing large conclusions from the smallest observations.[1] Michael Harrison argued in a 1971 article in Ellery Queen's Mystery Magazine that the character was inspired by Wendell Scherer, a \"consulting detective\" in a murder case that allegedly received a great deal of newspaper attention in England in 1882.\"\"\"\n\ntext2 = \"\"\"Flann O'Brien a.k.a. Myles na cGopaleen, I Zingari, Sukarno and Suharto, Harry S. Truman, J. Edgar Hoover, J. K. Rowling, the mathematician L'Hopital, Joe di Maggio, Algernon Douglas-Montagu-Scott, and Hugo Max Graf von und zu Lerchenfeld auf Koefering und Schoenberg.\"\"\"\n\npattern1 = r\"(?:[A-Z][a-z]+[. ]+)+(?:[A-Z][a-z]+)?\"\n\njoiners = r\"' - de la du von und zu auf van der na di il el bin binte abu etcetera\".split()\n\npattern2 = r\"\"\"(?x)\n (?:\n (?:[ .]|\\b%s\\b)*\n (?:\\b[a-z]*[A-Z][a-z]*\\b)?\n )+\n \"\"\" % r'\\b|\\b'.join(joiners)\n\ndef get_names(pattern, text):\n for m in re.finditer(pattern, text):\n s = m.group(0).strip(\" .'-\")\n if s:\n yield s\n\nfor t in (text1, text2):\n print \"*** text: \", t[:20], \"...\"\n print \"=== Ned B\"\n for s in re.finditer(pattern1):\n print repr(s.group(0))\n print \"=== John M ==\"\n for name in get_names(pattern2, t):\n print repr(name)\n\nOutput:\nC:\\junk\\so>\\python26\\python extract_names.py\n*** text: Conan Doyle said tha ...\n=== Ned B\n'Conan Doyle '\n'Holmes '\n'Dr. Joseph Bell'\n'Doyle '\n'Edinburgh Royal Infirmary. Like Holmes'\n'Bell '\n'Michael Harrison '\n'Ellery Queen'\n'Mystery Magazine '\n'Wendell Scherer'\n'England '\n=== John M ==\n'Conan Doyle'\n'Holmes'\n'Dr. Joseph Bell'\n'Doyle'\n'Edinburgh Royal Infirmary. Like Holmes'\n'Bell'\n'Michael Harrison'\n'Ellery Queen'\n'Mystery Magazine'\n'Wendell Scherer'\n'England'\n*** text: Flann O'Brien a.k.a. 
...\n=== Ned B\n'Flann '\n'Brien '\n'Myles '\n'Sukarno '\n'Harry '\n'Edgar Hoover'\n'Joe '\n'Algernon Douglas'\n'Hugo Max Graf '\n'Lerchenfeld '\n'Koefering '\n'Schoenberg.'\n=== John M ==\n\"Flann O'Brien\"\n'Myles na cGopaleen'\n'I Zingari'\n'Sukarno'\n'Suharto'\n'Harry S. Truman'\n'J. Edgar Hoover'\n'J. K. Rowling'\n\"L'Hopital\"\n'Joe di Maggio'\n'Algernon Douglas-Montagu-Scott'\n'Hugo Max Graf von und zu Lerchenfeld auf Koefering und Schoenberg'\n\n" ]
[ 5, 2 ]
[]
[]
[ "nlp", "parsing", "python", "text_parsing" ]
stackoverflow_0001343479_nlp_parsing_python_text_parsing.txt
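As the second answer says, real named-entity recognition is beyond a regex. For comparison, a sketch of the same extraction with NLTK's chunker; this assumes the NLTK tokenizer, tagger and chunker data is installed, and note that the subtree attribute is called node in older NLTK releases (newer ones use label() instead):

import nltk

def person_names(text):
    for sent in nltk.sent_tokenize(text):
        tree = nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(sent)))
        for subtree in tree:
            # leaves are plain (word, tag) tuples; subtrees carry a label
            if hasattr(subtree, 'node') and subtree.node == 'PERSON':
                yield ' '.join(word for word, tag in subtree.leaves())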
Q: Replacing leading and trailing hyphens with spaces? What is the best way to replace each occurrence of a leading or trailing hyphen with a space? For example, I want ---ab---c-def-- to become 000ab---c-def00 (where the zeros are spaces) I'm trying to do this in Python, but I can't seem to come up with a regex that will do the substitution. I'm wondering if there is another, better way to do this? A: re.sub(r'^-+|-+$', lambda m: ' '*len(m.group()), '---ab---c-def--') Explanation: the pattern matches 1 or more leading or trailing dashes; the substitution is best performed by a callable, which receives each match object -- so m.group() is the matched substring -- and returns the string that must replace it (as many spaces as there were characters in said substring, in this case). A: Use a callable as the substitution target: s = re.sub("^(-+)", lambda m: " " * (m.end() - m.start()), s) s = re.sub("(-+)$", lambda m: " " * (m.end() - m.start()), s) A: Whenever you want to match at the end of a string, always consider carefully whether you need $ or \Z. Examples, using '0' instead of ' ' for clarity: >>> re.sub(r"^-+|-+\Z", lambda m: '0'*len(m.group()), "--ab--c-def--") '00ab--c-def00' >>> re.sub(r"^-+|-+\Z", lambda m: '0'*len(m.group()), "--ab--c-def--\n") '00ab--c-def--\n' >>> re.sub(r"^-+|-+$", lambda m: '0'*len(m.group()), "--ab--c-def--\n") '00ab--c-def00\n' >>>
Replacing leading and trailing hyphens with spaces?
What is the best way to replace each occurrence of a leading or trailing hyphen with a space? For example, I want ---ab---c-def-- to become 000ab---c-def00 (where the zeros are spaces) I'm trying to do this in Python, but I can't seem to come up with a regex that will do the substitution. I'm wondering if there is another, better way to do this?
[ "re.sub(r'^-+|-+$', lambda m: ' '*len(m.group()), '---ab---c-def--')\n\nExplanation: the pattern matches 1 or more leading or trailing dashes; the substitution is best performed by a callable, which receives each match object -- so m.group() is the matched substring -- and returns the string that must replace it (as many spaces as there were characters in said substring, in this case).\n", "Use a callable as the substitution target:\ns = re.sub(\"^(-+)\", lambda m: \" \" * (m.end() - m.start()), s)\ns = re.sub(\"(-+)$\", lambda m: \" \" * (m.end() - m.start()), s)\n\n", "Whenever you want to match at the end of a string, always consider carefully whether you need $ or \\Z. Examples, using '0' instead of ' ' for clarity:\n>>> re.sub(r\"^-+|-+\\Z\", lambda m: '0'*len(m.group()), \"--ab--c-def--\")\n'00ab--c-def00'\n>>> re.sub(r\"^-+|-+\\Z\", lambda m: '0'*len(m.group()), \"--ab--c-def--\\n\")\n'00ab--c-def--\\n'\n>>> re.sub(r\"^-+|-+$\", lambda m: '0'*len(m.group()), \"--ab--c-def--\\n\")\n'00ab--c-def00\\n'\n>>>\n\n" ]
[ 5, 3, 0 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0001345025_python_regex.txt
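The same replacement also works without a regex, by measuring how many characters strip() removes from each end and padding with that many spaces:

def pad_hyphens(s):
    core = s.strip('-')
    if not core:  # the string is all hyphens
        return ' ' * len(s)
    left = len(s) - len(s.lstrip('-'))
    right = len(s) - len(s.rstrip('-'))
    return ' ' * left + core + ' ' * right

print repr(pad_hyphens('---ab---c-def--'))  # '   ab---c-def  '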
Q: How to efficiently determine if webpage comes from a website I have some unknown webpages and I want to determine which websites they come from. I have example webpages from each website and I assume each website has a distinctive template. I do not need complete certainty, and don't want to use too many resources matching each webpage. So crawling each website for the webpage is out of the question. I imagine the best way is to compare the tree structure of each webpage's DOM. Are there any libraries that will do this? Ideally I am after a Python based solution, but if there is an algorithm I can understand and implement then I would be interested in that too. Thanks A: You could do this via Bayes classification. Feed a few pages from each site into the classifier first, then future pages can be tested against them to see how closely they match. Bayes classifier library available here: reverend (LGPL) Simplified example: # initialisation from reverend.thomas import Bayes guesser = Bayes() guesser.train('site one', site_one_page_one_data) guesser.train('site one', site_one_page_two_data) # ...etc... guesser.train('site two', site_two_page_one_data) guesser.train('site two', site_two_page_two_data) # ...etc... guesser.save() # run time guesser.load() results = guesser.guess(page_I_want_to_classify) For better results, tokenise the HTML first. But that might not be necessary. A: A quick and dirty way you can try is to split html source in html tags, then compare the resultant collections of strings. You should end up with collection of tags and content, say: item[n] ="<p>" item[n+1] ="This is some content" item[n+2] ="</p>" I think a regex can do this in just about every language. Some content, other than tags, would be the same (menus and so on). I think a numeric comparison of occurrences should be enough. You can improve by giving kinda "points" when you have same tag/content in the same position. Probably a "combo" of a decent number of collection items can give you certainty.
How to efficiently determine if webpage comes from a website
I have some unknown webpages and I want to determine which websites they come from. I have example webpages from each website and I assume each website has a distinctive template. I do not need complete certainty, and don't want to use too many resources matching each webpage. So crawling each website for the webpage is out of the question. I imagine the best way is to compare the tree structure of each webpage's DOM. Are there any libraries that will do this? Ideally I am after a Python based solution, but if there is an algorithm I can understand and implement then I would be interested in that too. Thanks
[ "You could do this via Bayes classification. Feed a few pages from each site into the classifier first, then future pages can be tested against them to see how closely they match.\nBayes classifier library available here: reverend (LGPL)\nSimplified example:\n# initialisation\nfrom reverend.thomas import Bayes\nguesser = Bayes()\nguesser.train('site one', site_one_page_one_data)\nguesser.train('site one', site_one_page_two_data)\n# ...etc...\nguesser.train('site two', site_two_page_one_data)\nguesser.train('site two', site_two_page_two_data)\n# ...etc...\nguesser.save()\n\n# run time\nguesser.load()\nresults = guesser.guess(page_I_want_to_classify)\n\nFor better results, tokenise the HTML first. But that might not be necessary.\n", "A quick and dirty way you can try is to split html source in html tags, then compare the resultant collections of strings. You should end up with collection of tags and content, say:\nitem[n] =\"<p>\"\nitem[n+2] =\"This is some content\"\nitem[n+2] =\"</p>\"\n\nI think a regex can do this in about every language.\nSome content, other than tags, would be the same (menus and so on). I think a numeric comparison of occurrences should be enough. You can improve by giving kinda \"points\" when you have same tag/content in the same position. Probably a \"combo\" of a decent number of collection items can give you certainty.\n" ]
[ 4, 0 ]
[]
[]
[ "dom", "python", "web", "webpage" ]
stackoverflow_0001345341_dom_python_web_webpage.txt
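The tag-splitting idea in the second answer can be made concrete with the standard library alone: collect the sequence of tag names with HTMLParser and compare sequences with difflib. A sketch (HTMLParser raises on badly broken markup, so messy real-world pages may need a more tolerant parser):

import difflib
from HTMLParser import HTMLParser

class TagCollector(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.tags = []
    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

def tag_sequence(html):
    collector = TagCollector()
    collector.feed(html)
    return collector.tags

def template_similarity(page_a, page_b):
    matcher = difflib.SequenceMatcher(None, tag_sequence(page_a),
                                      tag_sequence(page_b))
    return matcher.ratio()  # 0.0 to 1.0, higher means more alike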
Q: What's the most Pythonic way of determining endianness? I'm trying to find the best way of working out whether the machine my code is running on is big-endian or little-endian. I have a solution that works (although I haven't tested it on a big-endian machine) but it seems a bit clunky: import struct little_endian = (struct.pack('@h', 1) == struct.pack('<h', 1)) This is just comparing a 'native' two-byte pack to a little-endian pack. Is there a prettier way? A: The answer is in the sys module: >>> import sys >>> sys.byteorder 'little' Of course depending on your machine it may return 'big'. Your method should certainly work too though.
What's the most Pythonic way of determining endianness?
I'm trying to find the best way of working out whether the machine my code is running on is big-endian or little-endian. I have a solution that works (although I haven't tested it on a big-endian machine) but it seems a bit clunky: import struct little_endian = (struct.pack('@h', 1) == struct.pack('<h', 1)) This is just comparing a 'native' two-byte pack to a little-endian pack. Is there a prettier way?
[ "The answer is in the sys module:\n>>> import sys\n>>> sys.byteorder\n'little'\n\nOf course depending on your machine it may return 'big'. Your method should certainly work too though.\n" ]
[ 106 ]
[]
[]
[ "endianness", "python" ]
stackoverflow_0001346034_endianness_python.txt
Q: How does python decide whether a parameter is a reference or a value? In C++, void somefunction(int) passes a value, while void somefunction(int&) passes a reference. In Java, primitives are passed by value, while objects are passed by reference. How does python make this decision? Edit: Since everything is passed by reference, why does this: def foo(num): num *= 2 a = 4 foo(a) print(a) print '4' instead of '8'? A: It passes everything by reference. Even when you specify a numeric value, it is a reference against a table containing that value. This is the difference between static and dynamic languages. The type stays with the value, not with the container, and variables are just references towards a "value space" where all values live. You can assume this value space containing all the possible immutable objects (integers, floats, strings) plus all the mutable ones you create (lists, dicts, objects). Of course, their existence is made concrete only when you involve them (that means, if you never use the number 42 in your program, no allocated space exist for the value 42 in the "value space") It does that because the number it is referring to is a immutable object. 4 is 4 no matter what. def foo(num): # here, num is referring to the immutable entity 4 num *= 2 # num now refers to the immutable entity 8 a = 4 # a now is pointing to the immutable entity 4 foo(a) # a is still referring to the same entity 4 print(a) # prints what a refers to, still 4 However, if you do this def foo(l): # here, l refers to the list it receives l.append(5) # the list is appended with the number 5 a = [] # a now is pointing to a specific mutable list foo(a) # a is still referring to the same specific mutable list print(a) # prints what a refers to, the specific mutable list which now contains [5] A: There is disagreement on terminology here. In the Java community, they say that everything is passed by value: primitives are passed by value; references are passed by value. (Just search this site for Java and pass by reference if you don't believe this.) Note that "objects" are not values in the language; only references to objects are. The distinction that they use is that, in Java, when you pass a reference, the original reference variable in the caller's scope can never be changed (i.e. made to point to a different object) by the callee, which should be possible in pass by reference. Only the object pointed to by the reference may be mutated, but that is irrelevant. Python values work the exact same way as references in Java. If we use the same definition, then we would say that everything in Python is a reference, and everything is passed by value. Of course, some in the Python community use a different definition. The disagreement on terminology is the source of most of the confusion. Since you mention C++, the Python code you have would be equivalent to something like this in C++: void foo(const int *num) { num = new int(*num * 2); } const int *a = new int(4); foo(a); print(a); Note that the argument is a pointer, which is most similar to references in Java and Python. A: In response to your edit, it is because integers are immutable in Python. So a is not changed for the same reason it is not changed when running this code: a = 4 num = a num *= 2 print(a) You aren't changing num (and therefore a) in place, you are creating a new number and assigning it to num. A: Arguments are actually passed by value. The function is passed the object the variable refers to, not the variable itself. 
A function cannot rebind a caller's variables. A function cannot change an immutable object, but can change (request changes to) a mutable one. A: Everything is passed by reference. Everything is an object, too. A: This is not really about the function call semantics but the assignment semantics. In Python assignment is done by rebinding the reference, not by overwriting the original object. This is why the example code prints 4 instead of 8 - it has nothing to do with mutability of objects as such, more that the *= operator is not a mutator but a multiplication followed by an assignment. Here the num *= 2 is essentially rebinding the 'num' name in that function to a new object of value 'num * 2'. The original value you passed in is left unaltered throughout.
How does python decide whether a parameter is a reference or a value?
In C++, void somefunction(int) passes a value, while void somefunction(int&) passes a reference. In Java, primitives are passed by value, while objects are passed by reference. How does python make this decision? Edit: Since everything is passed by reference, why does this: def foo(num): num *= 2 a = 4 foo(a) print(a) print '4' instead of '8'?
[ "It passes everything by reference. Even when you specify a numeric value, it is a reference against a table containing that value. This is the difference between static and dynamic languages. The type stays with the value, not with the container, and variables are just references towards a \"value space\" where all values live. You can assume this value space containing all the possible immutable objects (integers, floats, strings) plus all the mutable ones you create (lists, dicts, objects). Of course, their existence is made concrete only when you involve them (that means, if you never use the number 42 in your program, no allocated space exist for the value 42 in the \"value space\")\nIt does that because the number it is referring to is a immutable object. 4 is 4 no matter what.\ndef foo(num): # here, num is referring to the immutable entity 4\n num *= 2 # num now refers to the immutable entity 8\n\na = 4 # a now is pointing to the immutable entity 4\nfoo(a) # a is still referring to the same entity 4\n\nprint(a) # prints what a refers to, still 4\n\nHowever, if you do this\ndef foo(l): # here, l refers to the list it receives\n l.append(5) # the list is appended with the number 5\n\na = [] # a now is pointing to a specific mutable list \nfoo(a) # a is still referring to the same specific mutable list\n\nprint(a) # prints what a refers to, the specific mutable list which now contains [5]\n\n", "There is disagreement on terminology here. In the Java community, they say that everything is passed by value: primitives are passed by value; references are passed by value. (Just search this site for Java and pass by reference if you don't believe this.) Note that \"objects\" are not values in the language; only references to objects are.\nThe distinction that they use is that, in Java, when you pass a reference, the original reference variable in the caller's scope can never be changed (i.e. made to point to a different object) by the callee, which should be possible in pass by reference. Only the object pointed to by the reference may be mutated, but that is irrelevant.\nPython values work the exact same way as references in Java. If we use the same definition, then we would say that everything in Python is a reference, and everything is passed by value. Of course, some in the Python community use a different definition.\nThe disagreement on terminology is the source of most of the confusion.\nSince you mention C++, the Python code you have would be equivalent to something like this in C++:\nvoid foo(const int *num) {\n num = new int(*num * 2);\n}\n\nconst int *a = new int(4);\nfoo(a);\n\nprint(a);\n\nNote that the argument is a pointer, which is most similar to references in Java and Python.\n", "In response to your edit, it is because integers are immutable in Python. So a is not changed for the same reason it is not changed when running this code:\na = 4\nnum = a\nnum *= 2\nprint(a)\n\nYou aren't changing num (and therefore a) in place, you are creating a new number and assigning it to num.\n", "Arguments are actually passed by value. The function is passed the object the variable refers to, not the variable itself. A function cannot rebind a caller's variables. A function cannot change an immutable object, but can change (request changes to) a mutable one.\n", "Everything is passed by reference. Everything is an object, too.\n", "This is not really about the function call semantics but the assignment semantics. 
In Python assignment is done by rebinding the reference, not by overwriting the original object. This is why the example code prints 4 instead of 8 - it has nothing to do with mutability of objects as such, more that the *= operator is not a mutator but a multiplication followed by an assignment. Here the num *= 2 is essentially rebinding the 'num' name in that function to a new object of value 'num * 2'. The original value you passed in is left unaltered throughout.\n" ]
[ 11, 9, 3, 3, 1, 1 ]
[]
[]
[ "pointers", "python", "reference" ]
stackoverflow_0001342953_pointers_python_reference.txt
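The rebinding described in the last answer is easy to observe with id(), which returns an object's identity: num *= 2 makes the local name point at a new object, while the caller's name keeps pointing at the old one:

def foo(num):
    print 'inside, before:', id(num)
    num *= 2
    print 'inside, after: ', id(num)  # a different object now

a = 4
print 'outside, before:', id(a)
foo(a)
print 'outside, after: ', id(a)  # unchanged; a still refers to 4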
Q: Threaded Django task doesn't automatically handle transactions or db connections? I've got Django set up to run some recurring tasks in their own threads, and I noticed that they were always leaving behind unfinished database connection processes (pgsql "Idle In Transaction"). I looked through the Postgres logs and found that the transactions weren't being completed (no ROLLBACK). I tried using the various transaction decorators on my functions, no luck. I switched to manual transaction management and did the rollback manually, that worked, but still left the processes as "Idle". So then I called connection.close(), and all is well. But I'm left wondering, why doesn't Django's typical transaction and connection management work for these threaded tasks that are being spawned from the main Django thread? A: After weeks of testing and reading the Django source code, I've found the answer to my own question: Transactions Django's default autocommit behavior still holds true for my threaded function. However, it states in the Django docs: As soon as you perform an action that needs to write to the database, Django produces the INSERT/UPDATE/DELETE statements and then does the COMMIT. There’s no implicit ROLLBACK. That last sentence is very literal. It DOES NOT issue a ROLLBACK command unless something in Django has set the dirty flag. Since my function was only doing SELECT statements it never set the dirty flag and didn't trigger a COMMIT. This goes against the fact that PostgreSQL thinks the transaction requires a ROLLBACK because Django issued a SET command for the timezone. In reviewing the logs, I threw myself off because I kept seeing these ROLLBACK statements and assumed Django's transaction management was the source. Turns out it's not, and that's OK. Connections The connection management is where things do get tricky. It turns out Django uses signals.request_finished.connect(close_connection) to close the database connection it normally uses. Since nothing normally happens in Django that doesn't involve a request, you take this behavior for granted. In my case, though, there was no request because the job was scheduled. No request means no signal. No signal means the database connection was never closed. Going back to transactions, it turns out that simply issuing a call to connection.close() in the absence of any changes to the transaction management issues the ROLLBACK statement in the PostgreSQL log that I'd been looking for. Solution The solution is to allow the normal Django transaction management to proceed as normal and to simply close the connection one of three ways: Write a decorator that closes the connection and wrap the necessary functions in it. Hook into the existing request signals to have Django close the connection. Close the connection manually at the end of the function. Any of those three will (and do) work. This has driven me crazy for weeks. I hope this helps someone else in the future!
Threaded Django task doesn't automatically handle transactions or db connections?
I've got Django set up to run some recurring tasks in their own threads, and I noticed that they were always leaving behind unfinished database connection processes (pgsql "Idle In Transaction"). I looked through the Postgres logs and found that the transactions weren't being completed (no ROLLBACK). I tried using the various transaction decorators on my functions, no luck. I switched to manual transaction management and did the rollback manually, that worked, but still left the processes as "Idle". So then I called connection.close(), and all is well. But I'm left wondering, why doesn't Django's typical transaction and connection management work for these threaded tasks that are being spawned from the main Django thread?
[ "After weeks of testing and reading the Django source code, I've found the answer to my own question:\nTransactions\nDjango's default autocommit behavior still holds true for my threaded function. However, it states in the Django docs:\n\nAs soon as you perform an action that needs to write to the database, Django produces the INSERT/UPDATE/DELETE statements and then does the COMMIT. There’s no implicit ROLLBACK.\n\nThat last sentence is very literal. It DOES NOT issue a ROLLBACK command unless something in Django has set the dirty flag. Since my function was only doing SELECT statements it never set the dirty flag and didn't trigger a COMMIT.\nThis goes against the fact that PostgreSQL thinks the transaction requires a ROLLBACK because Django issued a SET command for the timezone. In reviewing the logs, I threw myself off because I kept seeing these ROLLBACK statements and assumed Django's transaction management was the source. Turns out it's not, and that's OK.\nConnections\nThe connection management is where things do get tricky. It turns out Django uses signals.request_finished.connect(close_connection) to close the database connection it normally uses. Since nothing normally happens in Django that doesn't involve a request, you take this behavior for granted.\nIn my case, though, there was no request because the job was scheduled. No request means no signal. No signal means the database connection was never closed.\nGoing back to transactions, it turns out that simply issuing a call to connection.close() in the absence of any changes to the transaction management issues the ROLLBACK statement in the PostgreSQL log that I'd been looking for.\nSolution\nThe solution is to allow the normal Django transaction management to proceed as normal and to simply close the connection one of three ways:\n\nWrite a decorator that closes the connection and wrap the necessary functions in it.\nHook into the existing request signals to have Django close the connection.\nClose the connection manually at the end of the function.\n\nAny of those three will (and do) work.\nThis has driven me crazy for weeks. I hope this helps someone else in the future!\n" ]
[ 111 ]
[]
[]
[ "database", "django", "multithreading", "python", "transactions" ]
stackoverflow_0001303654_database_django_multithreading_python_transactions.txt
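The first solution from the answer, a decorator that closes the connection when the task finishes, can look like this sketch (django.db.connection is the thread-local connection the answer is talking about; per the answer, closing it triggers the ROLLBACK and frees the idle backend):

from django.db import connection

def close_db_connection(func):
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        finally:
            connection.close()
    return wrapper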
Q: How to find out whether computer is connected to internet? How to find out whether computer is connected to internet in python? A: If you have python2.6 you can set a timeout. Otherwise the connection might block for a long time. try: urllib2.urlopen("http://example.com", timeout=2) except urllib2.URLError: # There is no connection A: Try import urllib file = urllib.urlopen("http://stackoverflow.com/") html = file.read() and see if that works, or if it throws an exception. Even if you don't use the exact code, you should get the idea.
How to find out whether computer is connected to internet?
How to find out whether computer is connected to internet in python?
[ "If you have python2.6 you can set a timeout. Otherwise the connection might block for a long time.\ntry:\n urllib2.urlopen(\"http://example.com\", timeout=2)\nexcept urllib2.URLError:\n # There is no connection\n\n", "Try\nimport urllib\nfile = urllib.urlopen(\"http://stackoverflow.com/\")\nhtml = file.read()\n\nand see if that works, or if it throws an exception. Even if you don't use the exact code, you should get the idea.\n" ]
[ 16, 7 ]
[]
[]
[ "internet_connection", "python" ]
stackoverflow_0001346575_internet_connection_python.txt
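A lower-level variant of the answers above is to open a plain TCP connection instead of fetching a whole page, which avoids downloading any HTML. socket.create_connection is available from Python 2.6, and the host and port here are illustrative choices:

import socket

def is_connected(host='example.com', port=80, timeout=2):
    try:
        socket.create_connection((host, port), timeout).close()
        return True
    except socket.error:  # also catches socket.timeout
        return False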
Q: Haskell equivalent of Python's "Construct" Construct is a DSL implemented in Python used to describe data structures (binary and textual). Once you have the data structure described, construct can parse and build it for you. Which is good ("DRY", "Declarative", "Denotational-Semantics"...) Usage example: # code from construct.formats.graphics.png itxt_info = Struct("itxt_info", CString("keyword"), UBInt8("compression_flag"), compression_method, CString("language_tag"), CString("translated_keyword"), OnDemand( Field("text", lambda ctx: ctx._.length - (len(ctx.keyword) + len(ctx.language_tag) + len(ctx.translated_keyword) + 5), ), ), ) I am in need of such a tool for Haskell and I wonder if something like this exists. I know of: Data.Binary: User implements parsing and building separately Parsec: Only for parsing? Only for text? I guess one must use Template Haskell to achieve this?
Haskell equivalent of Python's "Construct"
Construct is a DSL implemented in Python used to describe data structures (binary and textual). Once you have the data structure described, construct can parse and build it for you. Which is good ("DRY", "Declarative", "Denotational-Semantics"...) Usage example: # code from construct.formats.graphics.png itxt_info = Struct("itxt_info", CString("keyword"), UBInt8("compression_flag"), compression_method, CString("language_tag"), CString("translated_keyword"), OnDemand( Field("text", lambda ctx: ctx._.length - (len(ctx.keyword) + len(ctx.language_tag) + len(ctx.translated_keyword) + 5), ), ), ) I am in need of such a tool for Haskell and I wonder if something like this exists. I know of: Data.Binary: User implements parsing and building separately Parsec: Only for parsing? Only for text? I guess one must use Template Haskell to achieve this?
[ "I'd say it depends what you want to do, and if you need to comply with any existing format.\nData.Binary will (surprise!) help you with binary data, both reading and writing.\nYou can either write the code to read/write yourself, or let go of the details and generate the required code for your data structures using some additional tools like DrIFT or Derive. DrIFT works as a preprocessor, while Derive can work as a preprocessor and with TemplateHaskell.\nParsec will only help you with parsing text. No binary data (as easily), and no writing. Work is done with regular Strings. There are ByteString equivalents on hackage.\nFor your example above I'd use Data.Binary and write custom put/geters myself.\nHave a look at the parser category at hackage for more options.\n", "Currently (afaik) there is no equivalent to Construct in Haskell.\nOne can be implemented using Template Haskell.\n" ]
[ 1, 0 ]
[ "I don't know anything about Python or Construct, so this is probably not what you are searching for, but for simple data structures you can always just derive read:\ndata Test a = I Int | S a deriving (Read,Show)\n\nNow, for the expression\nread \"S 123\" :: Test Double\n\nGHCi will emit: S 123.0\nFor anything more complex, you can make an instance of Read using Parsec.\n" ]
[ -1 ]
[ "construct", "dsl", "haskell", "parsing", "python" ]
stackoverflow_0001225053_construct_dsl_haskell_parsing_python.txt
Q: command line arg parsing through introspection I'm developing a management script that does a fairly large amount of work via a plethora of command-line options. The first few iterations of the script have used optparse to collect user input and then just run down the page, testing the value of each option in the appropriate order, and doing the action if necessary. This has resulted in a jungle of code that's really hard to read and maintain. I'm looking for something better. My hope is to have a system where I can write functions in more or less normal python fashion, and then when the script is run, have options (and help text) generated from my functions, parsed, and executed in the appropriate order. Additionally, I'd REALLY like to be able to build django-style sub-command interfaces, where myscript.py install works completely separately from myscript.py remove (separate options, help, etc.) I've found simon willison's optfunc and it does a lot of this, but seems to just miss the mark — I want to write each OPTION as a function, rather than try to compress the whole option set into a huge string of options. I imagine an architecture involving a set of classes for major functions, and each defined method of the class corresponding to a particular option in the command line. This structure provides the advantage of having each option reside near the functional code it modifies, easing maintenance. The thing I don't know quite how to deal with is the ordering of the commands, since the ordering of class methods is not deterministic. Before I go reinventing the wheel: Are there any other existing bits of code that behave similarly? Other things that would be easy to modify? Asking the question has clarified my own thinking on what would be nice, but feedback on why this is a terrible idea, or how it should work would be welcome. A: Don't waste time on "introspection". Each "Command" or "Option" is an object with two sets of method functions or attributes. Provide setup information to optparse. Actually do the work. Here's the superclass for all commands class Command( object ): name= "name" def setup_opts( self, parser ): """Add any options to the parser that this command needs.""" pass def execute( self, context, options, args ): """Execute the command in some application context with some options and args.""" raise NotImplementedError You create subclasses for Install and Remove and every other command you need. Your overall application looks something like this. commands = [ Install(), Remove(), ] def main(): parser= optparse.OptionParser() for c in commands: c.setup_opts( parser ) options, args = parser.parse_args() command= None for c in commands: if c.name.startswith(args[0].lower()): command= c break if command: status= command.execute( context, options, args[1:] ) else: logger.error( "Command %r is unknown", args[0] ) status= 2 sys.exit( status ) A: The WSGI library werkzeug provides Management Script Utilities which may do what you want, or at least give you a hint how to do the introspection yourself.
from werkzeug import script

# actions go here
def action_test():
    "sample with no args"
    pass

def action_foo(name=2, value="test"):
    "do some foo"
    pass

if __name__ == '__main__':
    script.run()

Which will generate the following help message:

$ python /tmp/test.py --help
usage: test.py <action> [<options>]
       test.py --help

actions:
  foo:
    do some foo

    --name    integer    2
    --value   string     test

  test:
    sample with no args

An action is a function in the same module starting with "action_" which takes a number of arguments where every argument has a default. The type of the default value specifies the type of the argument. Arguments can then be passed by position or using --name=value from the shell.
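A middle ground between the two answers (plain functions as sub-commands, with options generated by introspection, as the question asks) could look like the sketch below. It is illustrative only: the cmd_ naming convention, the sample commands, and the type map are my assumptions, not part of either answer.

import inspect
import optparse
import sys

# Hypothetical sub-commands: one plain function per command; every
# option is a keyword argument with a default (as in werkzeug's scheme).
def cmd_install(force=False, prefix="/usr/local"):
    "install the package"

def cmd_remove(purge=False):
    "remove the package"

OPT_TYPES = {int: "int", float: "float", str: "string"}

def build_parser(func):
    # One OptionParser per sub-command; options are derived from the
    # function's keyword arguments and their default values.
    parser = optparse.OptionParser(description=func.__doc__)
    names, _, _, defaults = inspect.getargspec(func)
    for name, default in zip(names, defaults or ()):
        if isinstance(default, bool):
            parser.add_option("--" + name, action="store_true", default=default)
        else:
            parser.add_option("--" + name, default=default,
                              type=OPT_TYPES.get(type(default), "string"))
    return parser

def main(argv):
    # Collect cmd_* functions from the module namespace by introspection.
    commands = dict((n[4:], f) for n, f in globals().items()
                    if n.startswith("cmd_") and inspect.isfunction(f))
    if not argv or argv[0] not in commands:
        sys.exit("usage: myscript.py (%s) [options]" % "|".join(sorted(commands)))
    func = commands[argv[0]]
    options, _ = build_parser(func).parse_args(argv[1:])
    func(**vars(options))

if __name__ == "__main__":
    main(sys.argv[1:])

Run as myscript.py install --force or myscript.py remove --purge; per-sub-command help text falls out of optparse for free.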
command line arg parsing through introspection
I'm developing a management script that does a fairly large amount of work via a plethora of command-line options. The first few iterations of the script have used optparse to collect user input and then just run down the page, testing the value of each option in the appropriate order, and doing the action if necessary. This has resulted in a jungle of code that's really hard to read and maintain. I'm looking for something better. My hope is to have a system where I can write functions in more or less normal Python fashion, and then when the script is run, have options (and help text) generated from my functions, parsed, and executed in the appropriate order. Additionally, I'd REALLY like to be able to build Django-style sub-command interfaces, where myscript.py install works completely separately from myscript.py remove (separate options, help, etc.) I've found Simon Willison's optfunc and it does a lot of this, but seems to just miss the mark — I want to write each OPTION as a function, rather than try to compress the whole option set into a huge string of options. I imagine an architecture involving a set of classes for major functions, and each defined method of the class corresponding to a particular option in the command line. This structure provides the advantage of having each option reside near the functional code it modifies, easing maintenance. The thing I don't know quite how to deal with is the ordering of the commands, since the ordering of class methods is not deterministic. Before I go reinventing the wheel: Are there any other existing bits of code that behave similarly? Other things that would be easy to modify? Asking the question has clarified my own thinking on what would be nice, but feedback on why this is a terrible idea, or how it should work, would be welcome.
[ "Don't waste time on \"introspection\". \nEach \"Command\" or \"Option\" is an object with two sets of method functions or attributes.\n\nProvide setup information to optparse.\nActually do the work.\n\nHere's the superclass for all commands\nclass Command( object ):\n name= \"name\"\n def setup_opts( self, parser ):\n \"\"\"Add any options to the parser that this command needs.\"\"\"\n pass\n def execute( self, context, options, args ):\n \"\"\"Execute the command in some application context with some options and args.\"\"\"\n raise NotImplemented\n\nYou create sublcasses for Install and Remove and every other command you need.\nYour overall application looks something like this.\ncommands = [ \n Install(),\n Remove(),\n]\ndef main():\n parser= optparse.OptionParser()\n for c in commands:\n c.setup_opts( parser )\n options, args = parser.parse()\n command= None\n for c in commands:\n if c.name.startswith(args[0].lower()):\n command= c\n break\n if command:\n status= command.execute( context, options, args[1:] )\n else:\n logger.error( \"Command %r is unknown\", args[0] )\n status= 2\n sys.exit( status )\n\n", "The WSGI library werkzeug provides Management Script Utilities which may do what you want, or at least give you a hint how to do the introspection yourself.\nfrom werkzeug import script\n\n# actions go here\ndef action_test():\n \"sample with no args\"\n pass\n\ndef action_foo(name=2, value=\"test\"):\n \"do some foo\"\n pass\n\nif __name__ == '__main__':\n script.run()\n\nWhich will generate the following help message:\n$ python /tmp/test.py --help\nusage: test.py <action> [<options>]\n test.py --help\n\nactions:\n foo:\n do some foo\n\n --name integer 2\n --value string test\n\n test:\n sample with no args\n\nAn action is a function in the same module starting with \"action_\" which takes a number of arguments where every argument has a default. The type of the default value specifies the type of the argument.\nArguments can then be passed by position or using --name=value from the shell.\n" ]
[ 4, 0 ]
[]
[]
[ "command_line", "parsing", "python" ]
stackoverflow_0001345448_command_line_parsing_python.txt
Q: Nice exception handling when re-trying code I have some test cases. The test cases rely on data which takes time to compute. To speed up testing, I've cached the data so that it doesn't have to be recomputed. I now have foo(), which looks at the cached data. I can't tell ahead of time what it will look at, as that depends a lot on the test case. If a test case fails because it doesn't find the right cached data, I don't want it to fail - I want it to compute the data and then try again. I also don't know what exception in particular it will throw because of missing data. My code right now looks like this:

if cacheExists:
    loadCache()
    dataComputed = False
else:
    calculateData()
    dataComputed = True

try:
    foo()
except:
    if not dataComputed:
        calculateData()
        dataComputed = True
        try:
            foo()
        except:
            #error handling code
    else:
        #the same error handling code

What's the best way to re-structure this code?

A: I disagree with the key suggestion in the existing answers, which basically boils down to treating exceptions in Python as you would in, say, C++ or Java -- that's NOT the preferred style in Python, where often the good old idea that "it's better to ask forgiveness than permission" (attempt an operation and deal with the exception, if any, rather than obscuring your code's main flow and incurring overhead by thorough preliminary checks). I do agree with Gabriel that a bare except is hardly ever a good idea (unless all it does is some form of logging followed by a raise to let the exception propagate). So, say you have a tuple with all the exception types that you do expect and want to handle the same way, say:

expected_exceptions = KeyError, AttributeError, TypeError

and always use except expected_exceptions: rather than bare except:. So, with that out of the way, one slightly less-repetitious approach to your needs is:

try:
    foo1()
except expected_exceptions:
    try:
        if condition:
            foobetter()
        else:
            raise
    except expected_exceptions:
        handleError()

A different approach is to use an auxiliary function to wrap the try/except logic:

def may_raise(expected_exceptions, somefunction, *a, **k):
    try:
        return False, somefunction(*a, **k)
    except expected_exceptions:
        return True, None

Such a helper may often come in useful in several different situations, so it's pretty common to have something like this somewhere in a project's "utilities" modules. Now, for your case (no arguments, no results) you could use:

failed, _ = may_raise(expected_exceptions, foo1)
if failed and condition:
    failed, _ = may_raise(expected_exceptions, foobetter)
if failed:
    handleError()

which I would argue is more linear and therefore simpler. The only issue with this general approach is that an auxiliary function such as may_raise does not FORCE you to deal in some way or other with exceptions, so you might just forget to do so (just like the use of return codes, instead of exceptions, to indicate errors, is prone to those return values mistakenly being ignored); so, use it sparingly...!-)

A: Using blanket exceptions isn't usually a great idea. What kind of Exception are you expecting there? Is it a KeyError, AttributeError, TypeError... Once you've identified what type of error you're looking for, you can use something like hasattr() or the in operator or many other things that will test for your condition before you have to deal with exceptions. That way you can clean up your logic flow and save your exception handling for things that are really broken!

A: Sometimes there's no nice way to express a flow, it's just complicated. But here's a way to call foo() in only one place, and have the error handling in only one place:

if cacheExists:
    loadCache()
    dataComputed = False
else:
    calculateData()
    dataComputed = True

while True:
    try:
        foo()
        break
    except:
        if not dataComputed:
            calculateData()
            dataComputed = True
            continue
        else:
            #the error handling code
            break

You may not like the loop, YMMV... Or:

if cacheExists:
    loadCache()
    dataComputed = False
else:
    calculateData()
    dataComputed = True

done = False
while not done:
    try:
        foo()
        done = True
    except:
        if not dataComputed:
            calculateData()
            dataComputed = True
            continue
        else:
            #the error handling code
            done = True

A: I like the alternative approach proposed by Alex Martelli. What do you think about using a list of functions as the argument of may_raise? The functions would be executed until one succeeds! Here is the code

def foo(x):
    raise Exception("Arrrgh!")
    return 0

def foobetter(x):
    print "Hello", x
    return 1

def try_many(functions, expected_exceptions, *a, **k):
    ret = None
    for f in functions:
        try:
            ret = f(*a, **k)
        except expected_exceptions, e:
            print e
        else:
            break
    return ret

print try_many((foo, foobetter), Exception, "World")

result is

Arrrgh!
Hello World
1

A: Is there a way to tell if you want to do foobetter() before making the call? If you get an exception it should be because something unexpected (exceptional!) happened. Don't use exceptions for flow control.
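To keep the retry logic in one place without a loop, the compute-once-and-retry flow can also be folded into a small helper. A minimal sketch using the question's names, with expected_exceptions as in the first answer; the helper name is my own:

def run_with_recompute(dataComputed):
    # Call foo(); on an expected failure with possibly stale cached
    # data, recompute the data once and retry.
    try:
        return foo()
    except expected_exceptions:
        if dataComputed:
            raise  # data was freshly computed, so this is a real failure
        calculateData()
        return foo()  # a second failure propagates to the caller

try:
    run_with_recompute(dataComputed)
except expected_exceptions:
    handleError()  # the error handling code, written exactly once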
Nice exception handling when re-trying code
I have some test cases. The test cases rely on data which takes time to compute. To speed up testing, I've cached the data so that it doesn't have to be recomputed. I now have foo(), which looks at the cached data. I can't tell ahead of time what it will look at, as that depends a lot on the test case. If a test case fails because it doesn't find the right cached data, I don't want it to fail - I want it to compute the data and then try again. I also don't know what exception in particular it will throw because of missing data. My code right now looks like this:

if cacheExists:
    loadCache()
    dataComputed = False
else:
    calculateData()
    dataComputed = True

try:
    foo()
except:
    if not dataComputed:
        calculateData()
        dataComputed = True
        try:
            foo()
        except:
            #error handling code
    else:
        #the same error handling code

What's the best way to re-structure this code?
[ "I disagree with the key suggestion in the existing answers, which basically boils down to treating exceptions in Python as you would in, say, C++ or Java -- that's NOT the preferred style in Python, where often the good old idea that \"it's better to ask forgiveness than permission\" (attempt an operation and deal with the exception, if any, rather than obscuring your code's main flow and incurring overhead by thorough preliminary checks). I do agree with Gabriel that a bare except is hardly ever a good idea (unless all it does is some form of logging followed by a raise to let the exception propagate). So, say you have a tuple with all the exception types that you do expect and want to handle the same way, say:\nexpected_exceptions = KeyError, AttributeError, TypeError\n\nand always use except expected_exceptions: rather than bare except:.\nSo, with that out of the way, one slightly less-repetitious approach to your needs is:\ntry:\n foo1()\nexcept expected_exceptions:\n try:\n if condition:\n foobetter()\n else:\n raise\n except expected_exceptions:\n handleError()\n\nA different approach is to use an auxiliary function to wrap the try/except logic:\ndef may_raise(expected_exceptions, somefunction, *a, **k):\n try:\n return False, somefunction(*a, **k)\n except expected_exceptions:\n return True, None\n\nSuch a helper may often come in useful in several different situations, so it's pretty common to have something like this somewhere in a project's \"utilities\" modules. Now, for your case (no arguments, no results) you could use:\nfailed, _ = may_raise(expected_exceptions, foo1)\nif failed and condition:\n failed, _ = may_raise(expected_exceptions, foobetter)\nif failed:\n handleError()\n\nwhich I would argue is more linear and therefore simpler. The only issue with this general approach is that an auxiliary function such as may_raise does not FORCE you to deal in some way or other with exceptions, so you might just forget to do so (just like the use of return codes, instead of exceptions, to indicate errors, is prone to those return values mistakenly being ignored); so, use it sparingly...!-)\n", "Using blanket exceptions isn't usually a great idea. What kind of Exception are you expecting there? Is it a KeyError, AttributeError, TypeError...\nOnce you've identified what type of error you're looking for you can use something like hasattr() or the in operator or many other things that will test for your condition before you have to deal with exceptions.\nThat way you can clean up your logic flow and save your exception handling for things that are really broken!\n", "Sometimes there's no nice way to express a flow, it's just complicated. But here's a way to call foo() in only one place, and have the error handling in only one place:\nif cacheExists:\n loadCache()\n dataComputed = False\nelse:\n calculateData()\n dataComputed = True\n\nwhile True:\n try:\n foo()\n break\n except:\n if not dataComputed:\n calculateData()\n dataComputed = True\n continue \n else:\n #the error handling code\n break\n\nYou may not like the loop, YMMV...\nOr:\nif cacheExists:\n loadCache()\n dataComputed = False\nelse:\n calculateData()\n dataComputed = True\n\ndone = False\nwhile !done:\n try:\n foo()\n done = True\n except:\n if not dataComputed:\n calculateData()\n dataComputed = True\n continue \n else:\n #the error handling code\n done = True\n\n", "I like the alternative approach proposed by Alex Martelli.\nWhat do you think about using a list of functions as argument of the may_raise. 
The functions would be executed until one succeeds!\nHere is the code\n\ndef foo(x):\n raise Exception(\"Arrrgh!\")\n return 0\n\ndef foobetter(x):\n print \"Hello\", x\n return 1\n\ndef try_many(functions, expected_exceptions, *a, **k):\n ret = None\n for f in functions:\n try:\n ret = f(*a, **k)\n except expected_exceptions, e:\n print e\n else:\n break\n return ret\n\nprint try_many((foo, foobetter), Exception, \"World\")\n\nresult is \n\nArrrgh!\nHello World\n1\n\n", "Is there a way to tell if you want to do foobetter() before making the call? If you get an exception it should be because something unexpected (exceptional!) happened. Don't use exceptions for flow control.\n" ]
[ 4, 1, 1, 1, 0 ]
[]
[]
[ "code_formatting", "exception", "exception_handling", "python" ]
stackoverflow_0001343541_code_formatting_exception_exception_handling_python.txt
Q: Is there a FileIO in Python? I know there is a StringIO stream in Python, but is there such a thing as a file stream in Python? Also, is there a better way for me to look up these things? Documentation, etc... I am trying to pass a "stream" to a "writer" object I made. I was hoping that I could pass a file handle/stream to this writer object.

A: I am guessing you are looking for open(). http://docs.python.org/library/functions.html#open

outfile = open("/path/to/file", "w")
[...]
outfile.write([...])

Documentation on all the things you can do with streams (these are called "file objects" or "file-like objects" in Python): http://docs.python.org/library/stdtypes.html#file-objects

A: There is a builtin file() which works much the same way. Here are the docs: http://docs.python.org/library/functions.html#file and http://python.org/doc/2.5.2/lib/bltin-file-objects.html.
If you want to print all the lines of the file, do:

for line in file('yourfile.txt'):
    print line

Of course there is more, like .seek(), .close(), .read(), .readlines(), ... basically the same protocol as for StringIO.
Edit: You should use open() instead of file(), which has the same API - file() is gone in Python 3.

A: In Python, all the I/O operations are wrapped in a high-level API: the file-like objects. It means that any file-like object will behave the same, and can be used in a function expecting them. This is called duck typing, and for file-like objects you can expect the following behavior:

open / close / IO exceptions
iteration
buffering
reading / writing / seeking

StringIO, File, and all the file-like objects can really be replaced with each other, and you don't have to care about managing the I/O yourself. As a little demo, let's see what you can do with stdout, the standard output, which is a file-like object:

import sys
# replace the standard output by a real opened file
sys.stdout = open("out.txt", "w")
# printing won't print anything, it will write in the file
print "test"

All the file-like objects behave the same, and you should use them the same way:

# try to open it
# do not bother with checking whether the stream is available or not

try:
    stream = open("file.txt", "w")
except IOError:
    # if it doesn't work, too bad!
    # this error is the same for StringIO, file, etc.
    # use it and your code gets highly flexible!
    pass
else:
    stream.write("yeah !")
    stream.close()

# in Python 2.5 and later, you can do the same using a context manager:

with open("file2.txt", "w") as stream:
    stream.write("yeah !")

# the rest is taken care of automatically

Note that the file-like objects' methods share a common behavior, but the way to create a file-like object is not standard:

import urllib
# urllib doesn't use "open" and doesn't raise only IOError exceptions
stream = urllib.urlopen("www.google.com")

# but this is a file-like object and you can rely on that:
for line in stream:
    print line

One last word: it's not because two objects work the same way that the underlying behavior is the same. It's important to understand what you are working with. In the last example, using the "for" loop on an Internet resource is very dangerous: you can't know that you won't end up with an infinite stream of data. In that case, using:

print stream.read(10000) # another file-like object method

is safer. High abstractions are powerful, but they don't save you from the need to know how the stuff works.
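Tying the answers back to the question: a writer that only calls write() will accept an open file, a StringIO, or sys.stdout interchangeably. The Writer class below is a made-up stand-in for the question's writer object, shown only to illustrate the duck typing:

try:
    from cStringIO import StringIO  # Python 2
except ImportError:
    from io import StringIO         # Python 3

class Writer(object):
    """Writes records to any object exposing a write() method."""
    def __init__(self, stream):
        self.stream = stream

    def write_record(self, fields):
        self.stream.write(",".join(fields) + "\n")

# Works with a real file...
f = open("out.txt", "w")
Writer(f).write_record(["a", "b"])
f.close()

# ...and with an in-memory buffer, which is handy in tests.
buf = StringIO()
Writer(buf).write_record(["a", "b"])
assert buf.getvalue() == "a,b\n"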
Is there a FileIO in Python?
I know there is a StringIO stream in Python, but is there such a thing as a file stream in Python? Also, is there a better way for me to look up these things? Documentation, etc... I am trying to pass a "stream" to a "writer" object I made. I was hoping that I could pass a file handle/stream to this writer object.
[ "I am guessing you are looking for open(). http://docs.python.org/library/functions.html#open\noutfile = open(\"/path/to/file\", \"w\")\n[...]\noutfile.write([...])\n\nDocumentation on all the things you can do with streams (these are called \"file objects\" or \"file-like objects\" in Python): http://docs.python.org/library/stdtypes.html#file-objects\n", "There is a builtin file() which works much the same way. Here are the docs: http://docs.python.org/library/functions.html#file and http://python.org/doc/2.5.2/lib/bltin-file-objects.html.\nIf you want to print all the lines of the file do:\nfor line in file('yourfile.txt'):\n print line\n\nOf course there is more, like .seek(), .close(), .read(), .readlines(), ... basically the same protocol as for StringIO.\nEdit: You should use open() instead of file(), which has the same API - file() goes in Python 3.\n", "In Python, all the I/O operations are wrapped in a hight level API : the file likes objects.\nIt means that any file likes object will behave the same, and can be used in a function expecting them. This is called duck typing, and for file like objects you can expect the following behavior :\n\nopen / close / IO Exceptions\niteration\nbuffering\nreading / writing / seeking\n\nStringIO, File, and all the file like objects can really be replaced with each others, and you don't have to care about managing the I/O yourself.\nAs a little demo, let's see what you can do with stdout, the standard output, which is a file like object :\nimport sys\n# replace the standar ouput by a real opened file\nsys.stdout = open(\"out.txt\", \"w\")\n# printing won't print anything, it will write in the file\nprint \"test\"\n\nAll the file like objects behave the same, and you should use them the same way :\n# try to open it\n# do not bother with checking wheter stream is available or not\n\ntry :\n stream = open(\"file.txt\", \"w\")\nexcept IOError :\n # if it doesn't work, too bad !\n # this error is the same for stringIO, file, etc\n # use it and your code get hightly flexible !\n pass\nelse :\n stream.write(\"yeah !\")\n stream.close()\n\n# in python 3, you'd do the same using context :\n\nwith open(\"file2.txt\", \"w\") as stream :\n stream.write(\"yeah !\")\n\n# the rest is taken care automatically\n\nNote that a the file like objects methods share a common behavior, but the way to create a file like object is not standard :\nimport urllib\n# urllib doesn't use \"open\" and doesn't raises only IOError exceptions\nstream = urllib.urlopen(\"www.google.com\")\n\n# but this is a file like object and you can rely on that :\nfor line in steam :\n print line\n\nUn last world, it's not because it works the same way that the underlying behavior is the same. It's important to understand what you are working with. In the last example, using the \"for\" loop on an Internet resource is very dangerous. Indeed, you know is you won't end up with a infinite stream of data.\nIn that case, using :\nprint steam.read(10000) # another file like object method\n\nis safer. Hight abstractions are powerful, but doesn't save you the need to know how the stuff works.\n" ]
[ 8, 5, 1 ]
[]
[]
[ "file_io", "python", "stream" ]
stackoverflow_0001343666_file_io_python_stream.txt
Q: Use QAction without adding to menu (or toolbar) I'm trying to develop an application with a very modular approach to commands and thought it would be nice, since I'm using PyQt, to use QActions to bind shortcuts to the commands. However, it seems that an action's shortcut only works when the action is visible in a menu or toolbar. Does anyone know a way to get this action to work without it being visible? Below is some example code that shows what I'm trying to do. Thanks, André

from PyQt4 import *
from PyQt4.QtCore import *
from PyQt4.QtGui import *
import sys

class TesteMW(QMainWindow):
    def __init__(self, *args):
        QMainWindow.__init__(self, *args)
        self.create_action()

    def create_action(self):
        self.na = QAction(self)
        self.na.setText('Teste')
        self.na.setShortcut('Ctrl+W')
        self.connect(self.na, SIGNAL('triggered()'), self.action_callback)
        # uncomment the next line for the action to work
        # self.menuBar().addMenu("Teste").addAction(self.na)

    def action_callback(self):
        print 'action called!'

app = QApplication(sys.argv)
mw = TesteMW()
mw.show()
app.exec_()

A: You need to add your action to a widget before it will be processed. From the Qt documentation for QAction:

Actions are added to widgets using QWidget::addAction() or QGraphicsWidget::addAction(). Note that an action must be added to a widget before it can be used; this is also true when the shortcut should be global (i.e., Qt::ApplicationShortcut as Qt::ShortcutContext).

This does not mean that they will be visible as a menu item or whatever - just that they will be processed as part of the widget's event loop.
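Following the answer, the smallest fix to the question's code is to add the action to the window itself; widening the shortcut context is optional but makes the shortcut fire application-wide rather than only while the window has focus. A sketch of the changed method:

def create_action(self):
    self.na = QAction(self)
    self.na.setText('Teste')
    self.na.setShortcut('Ctrl+W')
    # Optional: make the shortcut work anywhere in the application.
    self.na.setShortcutContext(Qt.ApplicationShortcut)
    self.connect(self.na, SIGNAL('triggered()'), self.action_callback)
    # The key line: the action must belong to some widget for its
    # shortcut to be processed - no menu or toolbar required.
    self.addAction(self.na)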
Use QAction without adding to menu (or toolbar)
I'm trying to develop an application with a very modular approach to commands and thought it would be nice, since I'm using PyQt, to use QActions to bind shortcuts to the commands. However, it seems that an action's shortcut only works when the action is visible in a menu or toolbar. Does anyone know a way to get this action to work without it being visible? Below is some example code that shows what I'm trying to do. Thanks, André

from PyQt4 import *
from PyQt4.QtCore import *
from PyQt4.QtGui import *
import sys

class TesteMW(QMainWindow):
    def __init__(self, *args):
        QMainWindow.__init__(self, *args)
        self.create_action()

    def create_action(self):
        self.na = QAction(self)
        self.na.setText('Teste')
        self.na.setShortcut('Ctrl+W')
        self.connect(self.na, SIGNAL('triggered()'), self.action_callback)
        # uncomment the next line for the action to work
        # self.menuBar().addMenu("Teste").addAction(self.na)

    def action_callback(self):
        print 'action called!'

app = QApplication(sys.argv)
mw = TesteMW()
mw.show()
app.exec_()
[ "You need to add your action to a widget before it will be processed. From the QT documentation for QAction:\n\nActions are added to widgets using\n QWidget::addAction() or\n QGraphicsWidget::addAction(). Note\n that an action must be added to a\n widget before it can be used; this is\n also true when the shortcut should be\n global (i.e., Qt::ApplicationShortcut\n as Qt::ShortcutContext).\n\nThis does not mean that they will be visible as a menu item or whatever - just that they will be processes as part of the widgets event loop.\n" ]
[ 7 ]
[]
[]
[ "pyqt", "python", "qt" ]
stackoverflow_0001346964_pyqt_python_qt.txt
Q: Django hitting MySQL even after select_related()? I'm trying to optimize the database calls coming from a fairly small Django app. Currently I have a couple of models, Inquiry and InquiryStatus. When selecting all of the records from MySQL, I get a nice JOIN statement on the two tables, followed by many requests to the InquiryStatus table. Why is Django still making individual requests if I've already done a select_related()? The models look like so:

class InquiryStatus(models.Model):
    status = models.CharField(max_length=25)
    status_short = models.CharField(max_length=5)

    class Meta:
        ordering = ["-default_status", "status", "status_short"]

class Inquiry(models.Model):
    ts = models.DateTimeField(auto_now_add=True)
    type = models.CharField(max_length=50)
    status = models.ForeignKey(InquiryStatus)

    class Meta:
        ordering = ["-ts"]

The view I threw together for debugging looks like so:

def inquiries_list(request, template_name="inquiries/list_inquiries.js"):
    ## Notice the "print" on the following line. Forces evaluation.
    print models.Inquiry.objects.select_related('status').all()
    return HttpResponse("CRAPSTICKS")

I've tried using select_related(depth=1), with no change. Each of the extraneous requests to the database is selecting one specific id in the WHERE clause.
Update: So there was one bit of very important code which should have been put in with the models:

from fullhistory import register_model

register_model(Inquiry)
register_model(InquiryStatus)

As a result, fullhistory was (for reasons I cannot fathom) pulling each individual result and parsing it.

A: I believe this has to do with lazy evaluation. Django only hits the DB if and when necessary, not when you invoke models.Inquiry.objects.select_related('status').all()
http://docs.djangoproject.com/en/dev/topics/db/queries/#id3
A: The code you've shown shouldn't actually generate any queries at all - QuerySets are only evaluated when necessary, not when they're defined, and you don't use the value anywhere so the execution won't be done. Please show us a template or some other code that actually evaluates the qs - slices it, iterates, prints, or anything.
A: It seems that fullhistory ends up serializing the object, which evaluates each field in the instance to give it a base to compare to. Take a look at the get_all_data function: http://code.google.com/p/fullhistory/source/browse/trunk/fullhistory/fullhistory.py
If anybody wants to write up a detailed reason why this happens, I'll gladly mark that answer correct.
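For this kind of investigation, Django records every executed query while settings.DEBUG is True, which makes it easy to see whether select_related() (or a history app such as fullhistory) is responsible for the extra SELECTs. A sketch, reusing the question's Inquiry model:

from django.db import connection, reset_queries

reset_queries()  # queries are only tracked when settings.DEBUG = True
inquiries = list(Inquiry.objects.select_related('status'))
for inquiry in inquiries:
    inquiry.status.status  # cached by select_related; should not hit the DB

print len(connection.queries)   # expect 1 if the JOIN did its job
for q in connection.queries:
    print q['sql']              # any per-id SELECTs show up here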
Django hitting MySQL even after select_related()?
I'm trying to optimize the database calls coming from a fairly small Django app. Currently I have a couple of models, Inquiry and InquiryStatus. When selecting all of the records from MySQL, I get a nice JOIN statement on the two tables, followed by many requests to the InquiryStatus table. Why is Django still making individual requests if I've already done a select_related()? The models look like so:

class InquiryStatus(models.Model):
    status = models.CharField(max_length=25)
    status_short = models.CharField(max_length=5)

    class Meta:
        ordering = ["-default_status", "status", "status_short"]

class Inquiry(models.Model):
    ts = models.DateTimeField(auto_now_add=True)
    type = models.CharField(max_length=50)
    status = models.ForeignKey(InquiryStatus)

    class Meta:
        ordering = ["-ts"]

The view I threw together for debugging looks like so:

def inquiries_list(request, template_name="inquiries/list_inquiries.js"):
    ## Notice the "print" on the following line. Forces evaluation.
    print models.Inquiry.objects.select_related('status').all()
    return HttpResponse("CRAPSTICKS")

I've tried using select_related(depth=1), with no change. Each of the extraneous requests to the database is selecting one specific id in the WHERE clause.
Update: So there was one bit of very important code which should have been put in with the models:

from fullhistory import register_model

register_model(Inquiry)
register_model(InquiryStatus)

As a result, fullhistory was (for reasons I cannot fathom) pulling each individual result and parsing it.
[ "I believe this has to do with lazy evaluation. Django only hits the DB if and when necessary, not when you invoke models.Inquiry.objects.select_related('status').all()\nhttp://docs.djangoproject.com/en/dev/topics/db/queries/#id3\n", "The code you've shown shouldn't actually generate any queries at all - QuerySets are only evaluated when necessary, not when they're defined, and you don't use the value anywhere so the execution won't be done.\nPlease show us a template or some other code that actually evaluates the qs - slices it, iterates, prints, or anything.\n", "It seems that fullhistory ends up serializing the object, which evaluates each field in the instance to give it a base to compare to.\nTake a look at the get_all_data function:\nhttp://code.google.com/p/fullhistory/source/browse/trunk/fullhistory/fullhistory.py\nIf anybody wants to write up a detailed reason why this happens, I'll gladly mark that answer correct.\n" ]
[ 0, 0, 0 ]
[]
[]
[ "django", "django_models", "django_select_related", "python" ]
stackoverflow_0001344016_django_django_models_django_select_related_python.txt
Q: creating class instances from a list Using Python... I have a list that contains names. I want to use each item in the list to create instances of a class. I can't use these items in their current condition (they're strings). Does anyone know how to do this in a loop?

class trap(movevariables):
    def __init__(self):
        movevariables.__init__(self)
        if self.X==0:
            self.X=input('Move Distance(mm) ')
        if self.Vmax==0:
            self.Vmax=input('Max Velocity? (mm/s) ')
        if self.A==0:
            percentg=input('Acceleration as decimal percent of g ')
            self.A=percentg*9806.65
        self.Xmin=((self.Vmax**2)/(2*self.A))
        self.calc()

    def calc(self):
        if (self.X/2)>self.Xmin:
            self.ta=2*((self.Vmax)/self.A)  # to reach maximum velocity, the move is a symmetrical trapezoid and the (acceleration time*2) is used
            self.halfta=self.ta/2.  # to calculate the total amount of time consumed by acceleration and deceleration
            self.xa=.5*self.A*(self.halfta)**2
        else:  # if the move is not a trap, MaxV is not reached and the acceleration time is set to zero for subsequent calculations
            self.ta=0
        if (self.X/2)<self.Xmin:
            self.tva=(self.X/self.A)**.5
            self.halftva=self.tva/2
            self.Vtriang=self.A*self.halftva
        else:
            self.tva=0
        if (self.X/2)>self.Xmin:
            self.tvc=(self.X-2*self.Xmin)/(self.Vmax)  # calculate the constant velocity time if you DO get to it
        else:
            self.tvc=0
        self.t=(self.ta+self.tva+self.tvc)
        print self

I'm a mechanical engineer. The trap class describes a motion profile that is common throughout the design of our machinery. There are many independent axes (trap classes) in our equipment so I need to distinguish between them by creating unique instances. The trap class inherits from movevariables many getter/setter functions structured as properties. In this way I can edit the variables by using the instance names. I'm thinking that I can initialize many machine axes at once by looping through the list instead of typing each one.

A: You could use a dict, like:

classes = {"foo" : foo, "bar" : bar}

then you could do:

myvar = classes[somestring]()

this way you'll have to initialize and keep the dict, but will have control over which classes can be created.
A: The getattr approach seems right, a bit more detail:

def forname(modname, classname):
    ''' Returns a class of "classname" from module "modname". '''
    module = __import__(modname)
    classobj = getattr(module, classname)
    return classobj

From a blog post by Ben Snider.
A: If it is a list of classes in string form, you can:

classes = ['foo', 'bar']
for class_name in classes:
    obj = eval(class_name)

and to create an instance you simply do this:

instance = obj(arg1, arg2, arg3)

A: EDIT If you want to create several instances of the class trap, here is what to do:

namelist=['lane1', 'lane2']
traps = dict((name, trap()) for name in namelist)

That will create a dictionary that maps each name to the instance. Then to access each instance by name you do:

traps['lane1'].Vmax

A: you're probably looking for getattr.
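Putting the answers together for the question's use case (many independent axes, one trap instance per axis), the dict and getattr approaches would look like the sketch below. The lane names and the motionlib module are illustrative assumptions, not part of the question:

# Safer than eval(): an explicit registry of allowed classes.
classes = {'trap': trap}

namelist = ['lane1', 'lane2', 'lane3']
axes = dict((name, classes['trap']()) for name in namelist)

axes['lane1'].Vmax = 200   # each instance keeps its own motion variables

# Or resolve a class from its string name on a module:
import motionlib                      # hypothetical module defining trap
cls = getattr(motionlib, 'trap')      # raises AttributeError if the name is wrong
axes['lane4'] = cls()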
creating class instances from a list
Using Python... I have a list that contains names. I want to use each item in the list to create instances of a class. I can't use these items in their current condition (they're strings). Does anyone know how to do this in a loop?

class trap(movevariables):
    def __init__(self):
        movevariables.__init__(self)
        if self.X==0:
            self.X=input('Move Distance(mm) ')
        if self.Vmax==0:
            self.Vmax=input('Max Velocity? (mm/s) ')
        if self.A==0:
            percentg=input('Acceleration as decimal percent of g ')
            self.A=percentg*9806.65
        self.Xmin=((self.Vmax**2)/(2*self.A))
        self.calc()

    def calc(self):
        if (self.X/2)>self.Xmin:
            self.ta=2*((self.Vmax)/self.A)  # to reach maximum velocity, the move is a symmetrical trapezoid and the (acceleration time*2) is used
            self.halfta=self.ta/2.  # to calculate the total amount of time consumed by acceleration and deceleration
            self.xa=.5*self.A*(self.halfta)**2
        else:  # if the move is not a trap, MaxV is not reached and the acceleration time is set to zero for subsequent calculations
            self.ta=0
        if (self.X/2)<self.Xmin:
            self.tva=(self.X/self.A)**.5
            self.halftva=self.tva/2
            self.Vtriang=self.A*self.halftva
        else:
            self.tva=0
        if (self.X/2)>self.Xmin:
            self.tvc=(self.X-2*self.Xmin)/(self.Vmax)  # calculate the constant velocity time if you DO get to it
        else:
            self.tvc=0
        self.t=(self.ta+self.tva+self.tvc)
        print self

I'm a mechanical engineer. The trap class describes a motion profile that is common throughout the design of our machinery. There are many independent axes (trap classes) in our equipment so I need to distinguish between them by creating unique instances. The trap class inherits from movevariables many getter/setter functions structured as properties. In this way I can edit the variables by using the instance names. I'm thinking that I can initialize many machine axes at once by looping through the list instead of typing each one.
[ "You could use a dict, like:\nclasses = {\"foo\" : foo, \"bar\" : bar}\n\nthen you could do:\nmyvar = classes[somestring]()\n\nthis way you'll have to initialize and keep the dict, but will have control on which classes can be created.\n", "The getattr approach seems right, a bit more detail:\ndef forname(modname, classname):\n ''' Returns a class of \"classname\" from module \"modname\". '''\n module = __import__(modname)\n classobj = getattr(module, classname)\n return classobj\n\nFrom a blog post by Ben Snider.\n", "If it a list of classes in a string form you can:\nclasses = ['foo', 'bar']\nfor class in classes:\n obj = eval(class)\n\nand to create an instance you simply do this:\ninstance = obj(arg1, arg2, arg3)\n\n", "EDIT\nIf you want to create several instances of the class trap, here is what to do:\nnamelist=['lane1', 'lane2']\ntraps = dict((name, trap()) for name in namelist)\n\nThat will create a dictionary that maps each name to the instance.\nThen to access each instance by name you do:\ntraps['lane1'].Vmax\n\n", "you're probably looking for getattr.\n" ]
[ 2, 2, 1, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001346969_python.txt
Q: Moving to Python 2.6.x My stuff is developed and running on Python 2.5.2. I want to move some code to 3.x, but that isn't feasible because so many of the external packages I use are not there yet (like numpy, for instance). So, I'll do the intermediate step and go to 2.6.2. My question: If an external module runs on 2.5.2, but doesn't explicitly state that it works with 2.6.x, can I assume it'll be fine? Or not?
A: Most likely they will work just fine. Some things might cause DeprecationWarnings, for example the sha module, but they can be ignored safely. This is my gut feeling; of course you can hit some specific thing causing problems. Anyway, a quick look over these should tell pretty fast whether your code needs work or not:

Improved and deprecated modules
Porting to Python 2.6

A: The main issue will come with any C-coded extensions you may be using: depending on your system, but especially on Windows, such extensions, compiled for 2.5, are likely to not work at all (or at least not quietly and reliably) with 2.6. That's not particularly different from, e.g., migrating from 2.4 to 2.5 in the past. The simplest solution (IMHO) is to get the sources for any such extensions and reinstall them. On most platforms, and for most extensions, python setup.py install (possibly with a sudo or logged in as administrator, depending on your installation) will work -- you may need to download and install proper "developer" packages, again depending on what system exactly you're using and what you have already installed (for example, on Mac OS X you need to install XCode -- or at least the gcc subset thereof, but it's simplest to install it all -- which in turn requires you to sign up for free at Apple Developer Connection and download the large XCode package). I'm not sure how hassle-free this approach is on Windows at this time -- i.e., whether you can use free-as-in-beer compilers such as mingw or Microsoft's "express" edition of VS, or have to shell out $$ to MS to get the right compiler. However, most developers of third party extensions do go out of their way to supply ready Windows binaries, exactly because having the users recompile is (or at least used to be) a hassle on Windows, and 2.6 is already widely supported by third-party extension maintainers (since after all it IS just about a simple recompile for them, too;-), so you may be in luck and find all the precompiled binaries you need already available for the extensions you use.
A: You can't assume that. However, you should be able to easily test if it works or not. Also, do not bother trying to move to 3.x for another year or two. 2.6 has many of 3.0's features back-ported to it already, so the transition won't be that bad, once you do make it.
A: It is probably worth reading the What's New section of the 2.6 documentation. While 2.6 is designed to be backwards compatible, there are a few changes that could catch code, particularly code that is doing something odd (example: hasattr() used to swallow all errors, now it swallows all but SystemExit and KeyboardInterrupt; not something that most people would notice, but there may be odd code where it would make a difference). Also, that document indicates changes you can make going forward that will make it easier to move to 3.x when your packages are ready (such as distinguishing between str and bytes even though they are synonyms in 2.6).
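One practical way to act on the deprecation warnings mentioned above is to turn them into errors while exercising your code under 2.6, so every use of a deprecated module or feature fails loudly instead of scrolling past. A small sketch:

import warnings

# Equivalent to running: python -W error::DeprecationWarning yourscript.py
warnings.simplefilter("error", DeprecationWarning)

try:
    import sha  # deprecated in 2.6 in favour of hashlib
except DeprecationWarning, e:
    print "needs porting:", e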
Moving to Python 2.6.x
My stuff is developed and running on Python 2.5.2 I want to move some code to 3.x, but that isn't feasible because so many of the external packages I use are not there yet. (Like numpy for instance). So, I'll do the intermediate step and go to 2.6.2. My question: If an external module runs on 2.5.2, but doesn't explicitly state that it works with 2.6.x, can I assume it'll be fine? Or not?
[ "Most likely they will work just fine. Some things might cause DeprecationWarnings, for example sha module, but they can be ignored safely. This is my gut feeling, of course you can hit some specific thing causing problems. Anyway, a quick look over these should tell pretty fast whether your code needs work or not:\n\nImproved and deprecated modules\nPorting to Python 2.6\n\n", "The main issue will come with any C-coded extensions you may be using: depending on your system, but especially on Windows, such extensions, compiled for 2.5, are likely to not work at all (or at least not quietly and reliably) with 2.6. That's not particularly different from, e.g., migrating from 2.4 to 2.5 in the past.\nThe simplest solution (IMHO) is to get the sources for any such extensions and reinstall them. On most platforms, and for most extensions, python setup.py install (possibly with a sudo or logged in as administrator, depending on your installation) will work -- you may need to download and install proper \"developer\" packages, again depending on what system exactly you're using and what you have already installed (for example, on Mac OS X you need to install XCode -- or at least the gcc subset thereof, but it's simplest to install it all -- which in turn requires you to sign up for free at Apple Developer Connection and download the large XCode package).\nI'm not sure how hassle-free this approach is on Windows at this time -- i.e., whether you can use free-as-in-beer compilers such as mingw or Microsoft's \"express\" edition of VS, or have to shell out $$ to MS to get the right compiler. However, most developers of third party extensions do go out on their way to supply ready Windows binaries, exactly because having the users recompile is (or at least used to be) a hassle on Windows, and 2.6 is already widely supported by third-party extension maintainers (since after all it IS just about a simple recompile for them, too;-), so you may be in luck and find all the precompiled binaries you need already available for the extensions you use.\n", "You can't assume that. However, you should be able to easily test if it works or not.\nAlso, do not bother trying to move to 3.x for another year or two. 2.6 has many of 3.0's features back-ported to it already, so the transition won't be that bad, once you do make it.\n", "It is probably worth reading the What's New section of the 2.6 documentation. While 2.6 is designed to be backwards compatible, there are a few changes that could catch code, particular code that is doing something odd (example: hasattr() used to swallow all errors, now it swallows all but SystemExit and KeyboardInterrupt; not something that most people would notice, but there may be odd code where it would make a difference).\nAlso, that code indicates changes you can make going forward that will make it easier to move to 3.x when your packages are read (such as distinguishing between str and bytes even though they are synonyms in 2.6).\n" ]
[ 8, 3, 2, 1 ]
[]
[]
[ "python", "python_2.6" ]
stackoverflow_0001347168_python_python_2.6.txt
Q: Generate test coverage information from pyunit unittests? I have some pyunit unit tests for a simple command line programme I'm writing. Is it possible for me to generate test coverage numbers? I want to see what lines aren't being covered by my tests. A: I regularly use Ned Batchelder's coverage.py tool for exactly this purpose. A: If you run your tests with testoob you can get a coverage report with --coverage. Can install with easy_install. No changes to your tests necessary: testoob alltests.py --coverage
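coverage.py can be driven from the command line (coverage run your_tests.py, then coverage report -m to list the missed line numbers) or from Python. A sketch of the in-process route; test_mymodule is a hypothetical module holding the TestCases, and note that older coverage.py releases spell the class coverage.coverage() rather than coverage.Coverage():

import coverage
import unittest

cov = coverage.Coverage()
cov.start()

import test_mymodule  # import under coverage so module-level code is measured
suite = unittest.defaultTestLoader.loadTestsFromModule(test_mymodule)
unittest.TextTestRunner().run(suite)

cov.stop()
cov.save()
cov.report(show_missing=True)  # the "Missing" column lists uncovered lines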
Generate test coverage information from pyunit unittests?
I have some pyunit unit tests for a simple command line programme I'm writing. Is it possible for me to generate test coverage numbers? I want to see what lines aren't being covered by my tests.
[ "I regularly use Ned Batchelder's coverage.py tool for exactly this purpose.\n", "If you run your tests with testoob you can get a coverage report with --coverage. Can install with easy_install. No changes to your tests necessary:\ntestoob alltests.py --coverage\n\n" ]
[ 9, 1 ]
[]
[]
[ "code_coverage", "python", "python_unittest", "testing", "unit_testing" ]
stackoverflow_0001347727_code_coverage_python_python_unittest_testing_unit_testing.txt
Q: Improving Python list slicing I've wondered why the extend/append methods of Python lists don't return a reference to the result list. To build strings of all combinations of a list with its last element, I would like to write simply:

for i in range(l, 0, -1):
    yield " ".join(src[0:i-1].append(src[-1]))

But I got a TypeError. Instead, the following code with an intermediate variable is used:

for i in range(l, 0, -1):
    sub = src[0:i-1]
    sub.append(src[-1])
    yield " ".join(sub)

Please correct me if I'm wrong.
A: The reason mutating methods in Python do NOT return a reference to the object they've mutated can be found in the Command-Query Separation principle (CQS for short). Python does not apply CQS as thoroughly as Meyer's Eiffel language does (since -- as per the Zen of Python, aka import this, "practicality beats purity"): for example, somelist.pop() does return the just-popped element (still NOT the container that was just mutated;-), while in Eiffel popping a stack has no return value (in the common case in which you need to pop and use the top element, you first use a "query" to peek at the top, and later a "command" to make the top go away). The deep motivation of CQS is not really "mutators should return nothing useful": rather, it's "queries should have no side effect". Keeping the distinction (be it rigidly or "as more of a guideline than a rule") is supposed to help you keep it in mind, and it does work to some extent (catching some accidental errors) though it can feel inconvenient at times if you're used to smoothly flowing "expressions and statements are the same thing" languages. Another aspect of CQS (broadly speaking...) in Python is the distinction between statements and expressions. Again, that's not rigidly applied -- an expression can be used wherever a statement can, which does occasionally hide errors, e.g. when somebody forgets that to call a function they need foo(), NOT just foo;-). But, for example (and drastically different from C, Perl, etc), you can't easily assign something while at the same time testing it (if(a=foo())...), which is occasionally inconvenient but does catch other kinds of accidental errors.
A: Hm, maybe replace:

src[0:i-1].append(src[-1])

with:

src[0:i-1] + src[-1:] #note the trailing ":", we want a list not an element

A: The general reasoning is that the return type is None, to indicate the list is being modified in-place.
A:

for i in range(l-1, 0, -1):
    yield ' '.join(src[:i] + src[-1:])

will do. Extend/append methods modify the list in place and therefore don't return the list.
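To make the difference concrete: slicing and concatenation build a new list (usable inside join()), while append() mutates in place and evaluates to None, which is exactly why the original one-liner raised TypeError. A small runnable check, with sample data of my own:

src = ['a', 'b', 'c', 'd']

def tails_with_last(src):
    for i in range(len(src) - 1, 0, -1):
        # src[:i] + src[-1:] is a fresh list, so join() is happy with it;
        # src[:i].append(src[-1]) would evaluate to None instead.
        yield " ".join(src[:i] + src[-1:])

print list(tails_with_last(src))
# ['a b c d', 'a b d', 'a d']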
Improving Python list slicing
I've wondered why the extend/append methods of Python lists don't return a reference to the result list. To build strings of all combinations of a list with its last element, I would like to write simply:

for i in range(l, 0, -1):
    yield " ".join(src[0:i-1].append(src[-1]))

But I got a TypeError. Instead, the following code with an intermediate variable is used:

for i in range(l, 0, -1):
    sub = src[0:i-1]
    sub.append(src[-1])
    yield " ".join(sub)

Please correct me if I'm wrong.
[ "The reason mutating methods in Python do NOT return a reference to the object they've mutated can be found in the Command-Query Separation principle (CQS for short). Python does not apply CQS as thoroughly as Meyer's Eiffel language does (since -- as per the Zen of Python, aka import this, \"practicality beats purity\"): for example, somelist.pop() does return the just-popped element (still NOT the container that was just mutated;-), while in Eiffel popping a stack has no return value (in the common case in which you need to pop and use the top element, your first use a \"query\" to peek at the top, and later a \"command\" to make the top go away).\nThe deep motivation of CQS is not really \"mutators should return nothing useful\": rather, it's \"queries should have no side effect\". Keeping the distinction (be it rigidly or \"as more of a guideline than a rule\") is supposed to help you keep it in mind, and it does work to some extent (catching some accidental errors) though it can feel inconvenient at times if you're used to smoothly flowing \"expressions and statements are the same thing\" languages.\nAnother aspect of CQS (broadly speaking...) in Python is the distinction between statements and expressions. Again, that's not rigidly applied -- an expression can be used wherever a statement can, which does occasionally hide errors, e.g. when somebody forgets that to call a function they need foo(), NOT just foo;-). But, for example (and drastically different from C, Perl, etc), you can't easily assign something while at the same testing it (if(a=foo())...), which is occasionally inconvenient but does catch other kinds of accidental errors.\n", "Hm, maybe replace:\nsrc[0:i-1].append(src[-1])\n\nwith:\nsrc[0:i-1] + src[-1:] #note the trailing \":\", we want a list not an element\n\n", "The general reasoning is that the return type in None to indicate the list is being modified in-place.\n", "for i in range(l-1, 0, -1):\n yield ' '.join(src[:i] + src[-1:])\n\nwill do.\nExtend/append methods modify list in place and therefore don't return the list.\n" ]
[ 8, 7, 1, 0 ]
[ "To operate on the list and then return it, you can use the or construction:\ndef append_and_return(li, x):\n \"\"\"silly example\"\"\"\n return (li.append(x) or li)\n\nHere it is so that X or Y evaluates X, if X is true, returns X, else evaluates and returns Y. X needs to be always negative.\nHowever, if you are only acting on a temporary list, the concatenation operation already suggested is just as good or better.\nEdit: It is not worthless\n>>> li = [1, 2, 3]\n>>> newli = append_and_return(li, 10)\n>>> li\n[1, 2, 3, 10]\n>>> newli is li\nTrue\n\n" ]
[ -1 ]
[ "list", "python" ]
stackoverflow_0001347085_list_python.txt