Dataset columns:
content: string (85 to 101k chars)
title: string (0 to 150 chars)
question: string (15 to 48k chars)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (35 to 137 chars)
Q: Dropdown menus in forms containing database primary keys In a framework like Django or Pylons you can set up a function to handle form submissions. If your form involves a dropdown menu (i.e. a select tag) populated with objects from a database, you can set the values equal to the primary key for the record, like: <select> <option value="1">Volvo</option> <option value="2">Saab</option> <option value="3">Mercedes</option> <option value="4">Audi</option> </select> Is this a safe practice? Is there anything wrong with using a primary key? If you were not to use the primary key for the value, how else could you make this form? A: Using the primary key is fine. What exactly are you concerned with? This is an implementation detail that won't show up to the user in the actual rendered page.
Dropdown menus in forms containing database primary keys
In a framework like Django or Pylons you can set up a function to handle form submissions. If your form involves a dropdown menu (i.e. a select tag) populated with objects from a database, you can set the values equal to the primary key for the record, like: <select> <option value="1">Volvo</option> <option value="2">Saab</option> <option value="3">Mercedes</option> <option value="4">Audi</option> </select> Is this a safe practice? Is there anything wrong with using a primary key? If you were not to use the primary key for the value, how else could you make this form?
[ "Using the primary key is fine. What exactly are you concerned with? This is an implementation detail that won't show up to the user in the actual rendered page.\n" ]
[ 3 ]
[]
[]
[ "django", "pylons", "python" ]
stackoverflow_0001091668_django_pylons_python.txt
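A hedged sketch of how such a dropdown is typically built in Django, which the question mentions: django.forms.ModelChoiceField renders a select whose option values are the rows' primary keys, and on submission it validates that the posted value really belongs to the queryset. The Car model and app name below are hypothetical stand-ins.

from django import forms
from myapp.models import Car  # hypothetical app and model

class CarForm(forms.Form):
    # Renders <option value="<pk>">...</option> for each row and,
    # on submit, validates the posted pk against the queryset.
    car = forms.ModelChoiceField(queryset=Car.objects.all())

In a view, form.cleaned_data['car'] is then the Car instance matching the submitted primary key, which is one answer to the safety concern: a user can only select rows that the queryset exposes.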
Q: Are multiple classes in a single file recommended? Possible Duplicate: How many Python classes should I put in one file? Coming from a C++ background I've grown accustomed to organizing my classes such that, for the most part, there's a 1:1 ratio between classes and files. By making it so that a single file contains a single class I find the code more navigable. As I introduce myself to Python I'm finding lots of examples where a single file contains multiple classes. Is that the recommended way of doing things in Python? If so, why? Am I missing this convention in PEP 8? A: Here are some possible reasons: Python is not exclusively class-based - the natural unit of code decomposition in Python is the module. Modules are just as likely to contain functions (which are first-class objects in Python) as classes. In Java, the unit of decomposition is the class. Hence, Python has one module=one file, and Java has one (public) class=one file. Python is much more expressive than Java, and if you restrict yourself to one class per file (which Python does not prevent you from doing) you will end up with lots of very small files - more to keep track of with very little benefit. An example of roughly equivalent functionality: Java's log4j => a couple of dozen files, ~8000 SLOC. Python logging => 3 files, ~2800 SLOC. A: There's a mantra, "flat is better than nested," that generally discourages an overuse of hierarchy. I'm not sure there are any hard and fast rules as to when you want to create a new module -- for the most part, people just use their discretion to group logically related functionality (classes and functions that pertain to a particular problem domain). Good thread from the Python mailing list, and a quote by Fredrik Lundh: even more important is that in Python, you don't use classes for everything; if you need factories, singletons, multiple ways to create objects, polymorphic helpers, etc, you use plain functions, not classes or static methods. once you've gotten over the "it's all classes", use modules to organize things in a way that makes sense to the code that uses your components. make the import statements look good. A: The book Expert Python Programming has some related discussion in Chapter 4: Choosing Good Names: "Building the Namespace Tree" and "Splitting the Code". My crude one-line summary: collecting related classes into one module (source file), and related modules into one package, helps with code maintenance. A: In Python, classes can also be used for small tasks (just for grouping, etc.). Maintaining a 1:1 relation would result in having too many files with small or little functionality. A: There is no specific convention for this - do whatever makes your code the most readable and maintainable. A: A good example of not having separate files for each class might be the models.py file within a django app. Each django app may have a handful of classes that are related to that app, and putting them into individual files just makes more work. Similarly, having each view in a different file is again likely to be counterproductive.
Are multiple classes in a single file recommended?
Possible Duplicate: How many Python classes should I put in one file? Coming from a C++ background I've grown accustomed to organizing my classes such that, for the most part, there's a 1:1 ratio between classes and files. By making it so that a single file contains a single class I find the code more navigable. As I introduce myself to Python I'm finding lots of examples where a single file contains multiple classes. Is that the recommended way of doing things in Python? If so, why? Am I missing this convention in PEP 8?
[ "Here are some possible reasons:\n\nPython is not exclusively class-based - the natural unit of code decomposition in Python is the module. Modules are just as likely to contain functions (which are first-class objects in Python) as classes. In Java, the unit of decomposition is the class. Hence, Python has one module=one file, and Java has one (public) class=one file.\nPython is much more expressive than Java, and if you restrict yourself to one class per file (which Python does not prevent you from doing) you will end up with lots of very small files - more to keep track of with very little benefit.\n\nAn example of roughly equivalent functionality: Java's log4j => a couple of dozen files, ~8000 SLOC. Python logging => 3 files, ~ 2800 SLOC.\n", "There's a mantra, \"flat is better than nested,\" that generally discourages an overuse of hierarchy. I'm not sure there's any hard and fast rules as to when you want to create a new module -- for the most part, people just use their discretion to group logically related functionality (classes and functions that pertain to a particular problem domain).\nGood thread from the Python mailing list, and a quote by Fredrik Lundh:\n\neven more important is that in Python,\n you don't use classes for every-\n thing; if you need factories,\n singletons, multiple ways to create \n objects, polymorphic helpers, etc, you\n use plain functions, not classes or\n static methods.\nonce you've gotten over the \"it's all\n classes\", use modules to organize \n things in a way that makes sense to\n the code that uses your components.\n make the import statements look good.\n\n", "the book Expert Python Programming has something related discussion\nChapter 4: Choosing Good Names:\"Building the Namespace Tree\" and \"Splitting the Code\"\nMy line crude summary: collect some related class to one module(source file),and \ncollect some related module to one package, is helpful for code maintain.\n", "In python, class can also be used for small tasks (just for grouping etc). maintaining a 1:1 relation would result in having too many files with small or little functionality.\n", "There is no specific convention for this - do whatever makes your code the most readable and maintainable.\n", "A good example of not having seperate files for each class might be the models.py file within a django app. Each django app may have a handful of classes that are related to that app, and putting them into individual files just makes more work.\nSimilarly, having each view in a different file again is likely to be counterproductive.\n" ]
[ 71, 21, 7, 5, 4, 2 ]
[]
[]
[ "class", "python" ]
stackoverflow_0001091756_class_python.txt
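To make the module-as-unit idea from the answers concrete, here is a small hypothetical module that groups related classes and a plain function in one file; all the names are invented for illustration.

# shapes.py -- several small, related classes plus a helper function in
# one module, so callers can just do: from shapes import Circle, total_area
import math

class Circle(object):
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return math.pi * self.radius ** 2

class Square(object):
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

def total_area(shapes):
    # A plain module-level function, not a static method -- idiomatic Python.
    return sum(shape.area() for shape in shapes)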
Q: What is the correct way to "pipe" Maven's output in Python to the screen when used in a Python shell script? My Python utility script contains UNIX system calls such as status, output = commands.getstatusoutput("ls -ltr") print "Output: ", output print "Status: ", status Which work fine and print the output to the console but as soon as I run Maven from the same script, status, output = commands.getstatusoutput("mvn clean install -s./../../foo/bar/settings.xml -Dportal -Dmain.dir=${PWD}/../.. -o") print "Output: ", output print "Status: ", status The output is not read out like the previous regular system commands. The Maven command in quotes works fine if I simply execute it from the command line. What is the correct way to pipe Maven's output to the screen? A: Are you sure that ${PWD} is properly expanded? If not, try: status, output = commands.getstatusoutput("mvn clean install -s./../../foo/bar/settings.xml -Dportal -Dmain.dir=%s/../.. -o" % os.getcwd ())
What is the correct way to "pipe" Maven's output in Python to the screen when used in a Python shell script?
My Python utility script contains UNIX system calls such as status, output = commands.getstatusoutput("ls -ltr") print "Output: ", output print "Status: ", status Which work fine and print the output to the console but as soon as I run Maven from the same script, status, output = commands.getstatusoutput("mvn clean install -s./../../foo/bar/settings.xml -Dportal -Dmain.dir=${PWD}/../.. -o") print "Output: ", output print "Status: ", status The output is not read out like the previous regular system commands. The Maven command in quotes works fine if I simply execute it from the command line. What is the correct way to pipe Maven's output to the screen?
[ "Are you sure that ${PWD} is properly expanded? If not, try:\nstatus, output = commands.getstatusoutput(\"mvn clean install -s./../../foo/bar/settings.xml -Dportal -Dmain.dir=%s/../.. -o\" % os.getcwd ())\n\n" ]
[ 0 ]
[]
[]
[ "maven_2", "python", "shell", "unix" ]
stackoverflow_0001092299_maven_2_python_shell_unix.txt
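If the goal is to see Maven's output as it is produced rather than after the command finishes, one alternative (a sketch, not the only way) is the subprocess module, available since Python 2.4; the Maven command below is a shortened hypothetical version of the one in the question.

import subprocess

proc = subprocess.Popen(
    "mvn clean install -o",        # hypothetical, shortened command
    shell=True,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,      # merge stderr so nothing is lost
)
while True:
    line = proc.stdout.readline() # stream output line by line
    if not line:
        break
    print line,                   # Python 2 print; comma avoids doubled newlines
print "Status: ", proc.wait()

Because readline() returns each line as soon as the child flushes it, this avoids the buffer-everything behavior of commands.getstatusoutput.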
Q: Make your program USE a GUI I'd like to write a program able to "use" other programs by taking control of the mouse/keyboard and being able to "see" what's on the screen. I used AutoIt to do something similar, but I had to cheat sometimes because the language is not that powerful, or maybe it's just that I suck and I'm not able to do that much with it :P So... I need to: Take screenshots, then I will compare them to make the program "understand", but it needs to "see" Use the mouse: move, click and release, it's simple, isn't it? Use the keyboard: pressing some keys, or key combinations, including special keys like Alt, Ctrl, etc. How can I do that in Python? Does it work on both Linux and Windows? (this could be really really cool, but it is not necessary) A: You can use WATSUP under Windows. A: I've had some luck with similar tasks using PyWinAuto. pywinauto is a set of python modules to automate the Microsoft Windows GUI. At its simplest it allows you to send mouse and keyboard actions to windows dialogs and controls. It also has some support for capturing images of dialogs and such using the Python Imaging Library PIL. A: AutoIt is completely capable of doing everything you mentioned. When I'm wanting to do some automation but use the features of Python, I find it easiest to use AutoItX which is a DLL/COM control. Taken from this answer of mine: import win32com.client oAutoItX = win32com.client.Dispatch( "AutoItX3.Control" ) oAutoItX.Opt("WinTitleMatchMode", 2) #Match text anywhere in a window title width = oAutoItX.WinGetClientSizeWidth("Firefox") height = oAutoItX.WinGetClientSizeHeight("Firefox") print width, height A: If you are comfortable with Pascal, a really powerful keyboard/mouse/screen-reading program is SCAR: http://freddy1990.com/index.php?page=product&name=scar It can do OCR, bitmap finding, color finding, etc. It's often used for automating online games, but it can be used for any situation where you want to simulate a human reading the screen and giving input. A: I've used the Windows (only) Input API to write a VNC-like remote-control application in the past. It lets you fake keyboard and mouse input nicely at a system level (ie not just posting events to a single application). If you're trying to do any sort of automated testing of whole systems at the GUI level, this excellent USENIX paper describing automated responsiveness testing is a must-read.
Make your program USE a GUI
I'd like to write a program able to "use" other programs by taking control of the mouse/keyboard and being able to "see" what's on the screen. I used AutoIt to do something similar, but I had to cheat sometimes because the language is not that powerful, or maybe it's just that I suck and I'm not able to do that much with it :P So... I need to: Take screenshots, then I will compare them to make the program "understand", but it needs to "see" Use the mouse: move, click and release, it's simple, isn't it? Use the keyboard: pressing some keys, or key combinations, including special keys like Alt, Ctrl, etc. How can I do that in Python? Does it work on both Linux and Windows? (this could be really really cool, but it is not necessary)
[ "You can use WATSUP under Windows.\n", "I've had some luck with similar tasks using PyWinAuto.\n\npywinauto is a set of python modules\n to automate the Microsoft Windows GUI.\n At it's simplest it allows you to send\n mouse and keyboard actions to windows\n dialogs and controls.\n\nIt also has some support for capturing images of dialogs and such using the Python Imaging Library PIL.\n", "AutoIt is completely capable of doing everything you mentioned. When I'm wanting to do some automation but use the features of Python, I find it easiest to use AutoItX which is a DLL/COM control.\nTaken from this answer of mine:\nimport win32com.client\noAutoItX = win32com.client.Dispatch( \"AutoItX3.Control\" )\n\noAutoItX.Opt(\"WinTitleMatchMode\", 2) #Match text anywhere in a window title\n\nwidth = oAutoItX.WinGetClientSizeWidth(\"Firefox\")\nheight = oAutoItX.WinGetClientSizeHeight(\"Firefox\")\n\nprint width, height\n\n", "If you are comfortable with pascal, a really powerful keyboard/mouse/screen-reading program is SCAR: http://freddy1990.com/index.php?page=product&name=scar It can do OCR, bitmap finding, color finding, etc. It's often used for automating online games, but it can be used for any situation where you want to simulate a human reading the screen and giving input.\n", "I've used the Windows (only) Input API to write a VNC-like remote-control application in the past. It lets you fake keyboard and mouse input nicely at a system level (ie not just posting events to a single application).\nIf you're trying to do any sort of automated testing of whole systems at the GUI level, this excellent USENIX paper describing automated responsiveness testing is a must-read.\n" ]
[ 2, 2, 2, 1, 0 ]
[]
[]
[ "python", "remote_control", "user_interface" ]
stackoverflow_0001084514_python_remote_control_user_interface.txt
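As a rough illustration of the pywinauto approach mentioned above (method names have varied across pywinauto versions, so treat this as a sketch rather than a definitive recipe):

from pywinauto.application import Application

# Launch Notepad and drive it via window/control name lookups.
app = Application().start("notepad.exe")
app.UntitledNotepad.Edit.type_keys("hello world", with_spaces=True)
app.UntitledNotepad.menu_select("File->Exit")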
Q: Where do I find the Python Crypto package when installing Paramiko on Windows? I am trying to SFTP from Python running on Windows and installed Paramiko as was recommended here. Unfortunately, it asks for Crypto.Util.randpool so I need to install the Crypto package. I found RPMs for Linux, but can't find anything or source code for Windows. The readme for Paramiko states: pycrypto compiled for Win32 can be downloaded from the HashTar homepage: http://nitace.bsd.uchicago.edu:8080/hashtar. Unfortunately, that link does not work. Neither does the link given on PyCrypto's homepage. Any idea how to overcome this? A: See here for Win32 binaries for Python 2.2 through to 2.7
Where do I find the Python Crypto package when installing Paramiko on Windows?
I am trying to SFTP from Python running on Windows and installed Paramiko as was recommended here. Unfortunately, it asks for Crypto.Util.randpool so I need to install the Crypto package. I found RPMs for Linux, but can't find anything or source code for Windows. The readme for Paramiko states: pycrypto compiled for Win32 can be downloaded from the HashTar homepage: http://nitace.bsd.uchicago.edu:8080/hashtar. Unfortunately, that link does not work. Neither does the link given on PyCrypto's homepage. Any idea how to overcome this?
[ "See here for Win32 binaries for Python 2.2 through to 2.7\n" ]
[ 6 ]
[]
[]
[ "paramiko", "python" ]
stackoverflow_0001092402_paramiko_python.txt
Q: Want to get the MAC address of a remote PC I have my web page in Python. I am able to get the IP address of the user who accesses our web page. We want to get the MAC address of the user's PC. Is that possible in Python? We are using a Linux PC, and we want to get it on Linux. A: I have a small, signed Java Applet, which requires Java 6 runtime on the remote computer to do this. It uses the getHardwareAddress() method on NetworkInterface to obtain the MAC address. I use javascript to access a method in the applet that calls this and returns a JSON object containing the address. This gets stuffed into a hidden field in the form and posted with the rest of the fields. A: from Active code #!/usr/bin/env python import ctypes import socket import struct def get_macaddress(host): """ Returns the MAC address of a network host, requires >= WIN2K. """ # Check for api availability try: SendARP = ctypes.windll.Iphlpapi.SendARP except: raise NotImplementedError('Usage only on Windows 2000 and above') # Doesn't work with loopbacks, but let's try and help. if host == '127.0.0.1' or host.lower() == 'localhost': host = socket.gethostname() # gethostbyname blocks, so use it wisely. try: inetaddr = ctypes.windll.wsock32.inet_addr(host) if inetaddr in (0, -1): raise Exception except: hostip = socket.gethostbyname(host) inetaddr = ctypes.windll.wsock32.inet_addr(hostip) buffer = ctypes.c_buffer(6) addlen = ctypes.c_ulong(ctypes.sizeof(buffer)) if SendARP(inetaddr, 0, ctypes.byref(buffer), ctypes.byref(addlen)) != 0: raise WindowsError('Retrieval of mac address(%s) - failed' % host) # Convert binary data into a string. macaddr = '' for intval in struct.unpack('BBBBBB', buffer): if intval > 15: replacestr = '0x' else: replacestr = 'x' macaddr = ''.join([macaddr, hex(intval).replace(replacestr, '')]) return macaddr.upper() if __name__ == '__main__': print 'Your mac address is %s' % get_macaddress('localhost') A: All you can access is what the user sends to you. MAC address is not part of that data. A: The dpkt package was already mentioned on SO. It allows for parsing TCP/IP packets. I have not yet used it for your case, though.
Want to get the MAC address of a remote PC
I have my web page in Python. I am able to get the IP address of the user who accesses our web page. We want to get the MAC address of the user's PC. Is that possible in Python? We are using a Linux PC, and we want to get it on Linux.
[ "I have a small, signed Java Applet, which requires Java 6 runtime on the remote computer to do this. It uses the getHardwareAddress() method on NetworkInterface to obtain the MAC address. I use javascript to access a method in the applet that calls this and returns a JSON object containing the address. This gets stuffed into a hidden field in the form and posted with the rest of the fields.\n", "from Active code \n#!/usr/bin/env python\n\nimport ctypes\nimport socket\nimport struct\n\ndef get_macaddress(host):\n \"\"\" Returns the MAC address of a network host, requires >= WIN2K. \"\"\"\n\n # Check for api availability\n try:\n SendARP = ctypes.windll.Iphlpapi.SendARP\n except:\n raise NotImplementedError('Usage only on Windows 2000 and above')\n\n # Doesn't work with loopbacks, but let's try and help.\n if host == '127.0.0.1' or host.lower() == 'localhost':\n host = socket.gethostname()\n\n # gethostbyname blocks, so use it wisely.\n try:\n inetaddr = ctypes.windll.wsock32.inet_addr(host)\n if inetaddr in (0, -1):\n raise Exception\n except:\n hostip = socket.gethostbyname(host)\n inetaddr = ctypes.windll.wsock32.inet_addr(hostip)\n\n buffer = ctypes.c_buffer(6)\n addlen = ctypes.c_ulong(ctypes.sizeof(buffer))\n if SendARP(inetaddr, 0, ctypes.byref(buffer), ctypes.byref(addlen)) != 0:\n raise WindowsError('Retreival of mac address(%s) - failed' % host)\n\n # Convert binary data into a string.\n macaddr = ''\n for intval in struct.unpack('BBBBBB', buffer):\n if intval > 15:\n replacestr = '0x'\n else:\n replacestr = 'x'\n macaddr = ''.join([macaddr, hex(intval).replace(replacestr, '')])\n\n return macaddr.upper()\n\nif __name__ == '__main__':\n print 'Your mac address is %s' % get_macaddress('localhost')\n\n", "All you can access is what the user sends to you.\nMAC address is not part of that data.\n", "The dpkt package was already mentioned on SO. It allows for parsing TCP/IP packets. I have not yet used it for your case, though.\n" ]
[ 5, 3, 1, 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0001092379_python_python_3.x.txt
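Since web visitors arrive over IP, only hosts on the server's own subnet ever expose a MAC address to it; still, for that local-network case on Linux, a minimal sketch is to parse /proc/net/arp, the kernel's ARP cache (the address below is a made-up example):

def mac_from_arp_cache(ip):
    # /proc/net/arp columns: IP address, HW type, Flags, HW address, Mask, Device.
    # Only lists same-subnet hosts the kernel has recently talked to;
    # remote web visitors will never appear here.
    for line in open('/proc/net/arp'):
        fields = line.split()
        if fields and fields[0] == ip:
            return fields[3]
    return None

print mac_from_arp_cache('192.168.1.1')  # hypothetical LAN address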
Q: How do I set up a basic website with registration in Python on Dreamhost? I need to write a basic website on Dreamhost. It needs to be done in Python. I discovered Dreamhost permits me to write .py files, and read them. Example: #!/usr/bin/python print "Content-type: text/html\n\n" print "hello world" So now I am looking for a basic framework, or a set of files that has already programmed the whole registration, to be able to kick off the project in a simple way. By registration I mean the files to register a new account, log in, verify the email (by sending a mail), and edit the user information. All this possibly using MySQL. A: Let me share my own experience with django. My prerequisites: average knowledge of python very weak idea of how the web works (no js skills, just a bit of css) my day job is filled with coding in C and I just wanted to try something different, so there certainly was a passion to learn (I think this is the most important one) Why I've chosen django: I already knew bits and pieces of python django has excellent documentation, including a tutorial, which explained everything in a very clear and simple manner It is worth reading the complete manual first (it took me two or three weekends). I remember I could not remember/understand everything at first pass, but it helped me to learn where the information can be found when needed. There is also another source of documentation called djangobook. Djangobook contains the same information as the manual, but things are explained more in detail. It's worth reading it also; it helps to catch up with the MVC concept, if you have not tried that before. And finally to answer your question best: there are already OpenId modules ready for you. I'm considering using django-authopenid for my new project. It supports OpenId, while providing a fallback to locally managed users. There is a certain learning curve if you are going to learn django. The less you know about the web and python, the steeper the curve is. I had to also learn bits and pieces of javascript and it also took me some time. If you are able to spend full time learning django, then you can expect you'll be able to deliver first results within 4-6 weeks. It took me 6 months, since I was doing my django studies in my free time. A: There are several blog entries &c pointing out some problems with Python on Dreamhost and how to work around them to run several web frameworks that could suit you. (Most of the posts are over a year old so it may be that dreamhost has fixed some of the issues since then, of course, but the only way to really find out is to try!-). Start with this page, dreamhost's own wikipage about Python -- at least you know it's quite current (was last updated earlier today!-). It gives instructions on using virtualenv, building a custom Python &c if you absolutely need that, and running WSGI apps -- WSGI is the common underpinning of all modern Python web frameworks, including Django which everybody's recommending but also Pylons &c. Some notes on running Pylons on Dreamhost are here (but it does look like Dreamhost has since fixed some issues, e.g. flup is now the dreamhost-recommended FCGI layer for WSGI as you'll see at the previously mentioned URL) and links therefrom. If you do go with Pylons, here is the best place to start considering how best to do auth (authentication and authorization) with it. I'm trying to play devil's advocate since everybody else is recommending django, but for a beginner django may in fact be better than pylons (still, spending a day or so lightly researching each main alternative, before you commit to one, is a good investment of your time!-). For Django, again there's an official dreamhost wiki page and it's pretty thorough -- be sure to read through it and briefly to the other URLs it points to. The contributed auth module is no doubt the best way to do authentication and authorization if you do decide to go with Django. And, whichever way you do choose -- best of luck! A: You can try starting with django-registration. EDIT: You can probably hack something up on your own faster than learning Django. However, learning a framework will serve you better. You'll be able to easily ask a large community when you have problems, and build on work that's already been done. And of course, if you're doing something new in the future, your knowledge of the framework can be more easily reapplied. A: django framework A: Django is the way to go. You can try it locally on your PC and see if you like it. It is a very nice framework and allows you to quickly build your applications. If you want to give Django a quick go to see how it feels you can download Portable Python where everything is preinstalled and ready to use. You can also do what you are trying to do with the apache module mod_python (which is also used to run Django) but it would require more coding. Your code snippet would work with mod_python (http://www.modpython.org/) right away. I think mod_python comes pre-installed on Dreamhost so you can try it. A: For a more complete basic setup (with lots of preprogrammed features) I would point you at Pinax, which is a web site built on top of Django (which I praise of course; see the dedicated page on the dreamhost Wiki at http://wiki.dreamhost.com/Django) The introduction on the project's web site (pinaxproject.com): Pinax is an open-source platform built on the Django Web Framework. By integrating numerous reusable Django apps to take care of the things that many sites have in common, it lets you focus on what makes your site different. There you will have a complete web site to customize and add features to. A: I've noticed that a lot of people recommend Django. If you're running on a shared host on Dreamhost, the performance will not be satisfactory. This is a known issue with Dreamhost shared hosting. I have installed web2py on my Dreamhost shared account and it seems to work okay; search the google groups for an install FAQ. Later edit: google Dreamhost Django performance for an idea of what I mean. A: Another voice to the choir. Go for django. It's very good and easy to use.
How do I set up a basic website with registration in Python on Dreamhost?
I need to write a basic website on Dreamhost. It needs to be done in Python. I discovered Dreamhost permits me to write .py files, and read them. Example: #!/usr/bin/python print "Content-type: text/html\n\n" print "hello world" So now I am looking for a basic framework, or a set of files that has already programmed the whole registration, to be able to kick off the project in a simple way. By registration I mean the files to register a new account, log in, verify the email (by sending a mail), and edit the user information. All this possibly using MySQL.
[ "Let me share my own experience with django. My prerequisits:\n\naverage knowledge of python\nvery weak idea of how web works (no js skills, just a bit of css)\nmy day job is filled with coding in C and I just wanted to try something different,\n so there certainly was a passion to learn (I think this is the most important one)\n\nWhy I've chosen django:\n\nI've already knew bits and pieces of python\ndjango has excelent documentation, including tutorial, which explained everything\n in very clear and simple manner\n\nIt is worth to read complete manual first (it took me two or three weekends. I remember I could not remember/understand everything at first pass, but it helped me to learn where\nthe information can be found when needed. There is also another source of documentaion\ncalled djangobook. Djangobook contains same information as manual, but things are explained more in detail. It's worth to read it also, it helps to catch up with MVC concept, if you have not tried that before.\nAnd finally to answer your question best: there are already also OpenId modules ready for you. I'm considering to use django-authopenid for my new project. It supports OpenId, while providing fallback to locally managed users.\nThere is certain learning curve if you are going learn django. The more you know about the web and python the steeper the curve is. I had to also learn bits and pieces of javascript and it took me also some time. If you are able to spend full time learning django, then\nyou can expect you'll be able to deliver first results within 4-6 weeks. It took me 6 months, since I was doing my django studies in free time.\n", "There are several blog entries &c pointing out some problems with Python on Dreamhost and how to work around them to run several web frameworks that could suit you. (Most of the posts are over a year old so it may be that dreamhost has fixed some of the issues since then, of course, but the only way to really find out is to try!-).\nStart with this page, dreamhost's own wikipage about Python -- at least you know it's quite current (was last updated earlier today!-). It gives instructions on using virtual env, building a custom Python &c if you absolutely need that, and running WSGI apps -- WSGI is the common underpinning of all modern Python web frameworks, including Django which everybody's recommending but also Pylons &c.\nSome notes on running Pylons on Dreamhost are here (but it does look like Dreamhost has since fixed some issues, e.g. flup is now the dreamhost-recommended FCGI layer for WSGI as you'll see at the previously mentioned URL) and links therefrom. If you do go with Pylons, here is the best place to start considering how best to do auth (authentication and authorization) with it. I'm trying to play devil's advocate since everybody else's recommending django, but for a beginner django may in fact be better than pylons (still, spending a day or so lightly researching each main alternative, before you commit to one, is a good investment of your time!-).\nFor Django, again there's an official dreamhost wiki page and it's pretty thorough -- be sure to read through it and briefly to the other URLs it points to. The contributed auth module is no doubt the best way to do authentication and authorization if you do decide to go with Django.\nAnd, whichever way you do choose -- best of luck!\n", "You can try starting with django-registration.\nEDIT: You can probably hack something up on your own faster than learning Django. 
However, learning a framework will serve you better. You'll be able to easily ask a large community when you have problems, and build on work that's already been done. And of course, if you're doing something new in the future, your knowledge of the framework can be more easily reapplied.\n", "django framework\n", "Django is the way to go. You can try it locally on your PC and see do you like it. It is very nice framework and allows you to quickly build your applications.\nIf you want to give Django quick go to see how it feels you can download Portable Python where everything is preinstalled and ready to use.\nYou can also do what you are trying to do with apache module mod_python (which is also used to run Django) but it would require more coding. Your code snippet would work with mod_python (http://www.modpython.org/) right away. I think mod_python comes pre-installed on Dreamhost so you can try it.\n", "For a more complete basic setup (with lots of preprogrammed features) I would point you at Pinax which is a web site on top of Django (which I praise of course, see the dedicated page on dreamhost Wiki at http://wiki.dreamhost.com/Django)\nThe introduction on the project's web site (pinaxproject.com) : \n\nPinax is an open-source platform built on the Django Web Framework.\nBy integrating numerous reusable\n Django apps to take care of the things\n that many sites have in common, it\n lets you focus on what makes your site\n different.\n\nThere you will have a complete web site to customize and add features to.\n", "I've noticed that a lot of people recommend Django. If you're running on a shared host on Dreamhost, the performance will not be satisfactory. \nThis is a known issue with Dreamhost shared hosting. I have installed web2py on my Dreamhost shared account and it seems to work okay; search the google groups for an install FAQ.\nLater edit: google Dreamhost Django performance for an idea of what I mean.\n", "Another voice to the choir.\nGo for django. It's very good and easy to use.\n" ]
[ 4, 2, 1, 1, 1, 1, 1, 0 ]
[]
[]
[ "dreamhost", "python" ]
stackoverflow_0001026030_dreamhost_python.txt
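For the login/logout half of "registration", Django of that era ships ready-made views in django.contrib.auth; below is a minimal sketch of wiring them up in a Django 1.0-style URLconf. The URL prefixes are conventional choices, not requirements, and account signup itself is what django-registration adds on top.

# urls.py
from django.conf.urls.defaults import patterns

urlpatterns = patterns('',
    (r'^accounts/login/$',  'django.contrib.auth.views.login'),
    (r'^accounts/logout/$', 'django.contrib.auth.views.logout'),
)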
Q: AppEngine server cannot import atom module I have the gdata library installed on my Arch Linux system, and a simple application which imports the atom library at the beginning. When I run the App Engine dev server and access that web app, $ python2.5 ./dev_appserver.py ~/myapp it throws the exception 'No module named atom'. But when I run 'import atom' in Python 2.5 interactive mode, it works well. How can I import the atom module in my App Engine application? A: Add atom.py to the same directory you keep your GAE Python sources in, and make sure it's uploaded to the server when you upload your app. (The upload happens when you do appcfg.py update myapp/ unless you go out of your way to stop it; use the --verbose flag on the command to see exactly what's being uploaded or updated). (Or, if it's a large file, make a zipfile with it and in your handler append that zipfile to sys.path; see zipimport for example). This assumes that you have a single file atom.py which is what you're importing; if that file in turn imports others you'll have to make those others available too in similar ways, and so on (see modulefinder in Python's standard library for ways to find all modules you need). If atom is not a module but a package, then what you get on import is the __init__.py file in the directory that's the package; so the same advice applies (and zipimport becomes much more attractive since you can easily package up any directory structure e.g. with a zip -r command from the Linux command line). If at any point (as modulefinder will help you discover) there is a dependency on a third party C-coded extension (a .so or .pyd file that Python can use but is not written in pure Python) that is not in the short list supplied with GAE (see here), then that Python code is not usable on GAE, as GAE supports only pure-Python. If this is the case then you must look for alternatives that are supported on GAE, i.e. pure-Python ways to obtain the same functionality you require.
AppEngine server cannot import atom module
I have the gdata library installed on my Arch Linux system, and a simple application which imports the atom library at the beginning. When I run the App Engine dev server and access that web app, $ python2.5 ./dev_appserver.py ~/myapp it throws the exception 'No module named atom'. But when I run 'import atom' in Python 2.5 interactive mode, it works well. How can I import the atom module in my App Engine application?
[ "Add atom.py to the same directory you keep you GAE Python sources in, and make sure it's uploaded to the server when you upload your app. (The upload happens when you do appcfg.py update myapp/ unless you go out of your way to stop it; use the --verbose flag on the command to see exactly what's being uploaded or updated).\n(Or, if it's a large file, make a zipfile with it and in your handler append that zipfile to sys.path; see zipimport for example).\nThis assumes that you have a single file atom.py which is what you're importing; if that file in turns imports others you'll have to make those others available too in similar ways, and so on (see modulefinder in Python's standard library for ways to find all modules you need).\nIf atom is not a module but a package, then what you get on import is the __init__.py file in the directory that's the package; so the same advice applies (and zipimport becomes much more attractive since you can easily package up any directory structure e.g. with a zip -r command from the Linux command line).\nIf at any point (as modulefinder will help you discover) there is a dependency on a third party C-coded extension (a .so or .pyd file that Python can use but is not written in pure Python) that is not in the short list supplied with GAE (see here), then that Python code is not usable on GAE, as GAE supports only pure-Python. If this is the case then you must look for alternatives that are supported on GAE, i.e. pure-Python ways to obtain the same functionality you require.\n" ]
[ 11 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0001092648_google_app_engine_python.txt
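A sketch of the zipimport route the answer mentions: zip the library, upload the archive alongside your app, and put the zip itself on sys.path before importing (Python's import machinery treats zip files as path entries; 'atom.zip' stands for whatever you named the archive).

import os
import sys

# Must run before 'import atom' -- e.g. at the top of your handler script.
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'atom.zip'))
import atom  # now resolved from inside the zip archive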
Q: Beanstalkd on Solaris doesn't return anything when called from the Python library I am using Solaris 10 (x86). I installed beanstalkd and it starts fine using the command "beanstalkd -d -l hostip -p 11300". I have Python 2.4.4 on my system, and I installed the YAML and beanstalkc Python libraries to connect beanstalkd with Python. My problem is when I try to write some code: import beanstalkc beanstalk = beanstalkc.Connection(host='hostip', port=11300) No error so far, but when I try to do something on beanstalk, like say listing queues, nothing happens. beanstalk.tubes() just hangs and nothing returns. If I cancel the operation (using Ctrl+C in the Python environment) or stop the server, I immediately see an output: Traceback (most recent call last): File "<stdin>", line 1, in ? File "/usr/lib/python2.4/site-packages/beanstalkc-0.1.1-py2.4.egg/beanstalkc.py", line 134, in tubes return self._interact_yaml('list-tubes\r\n', ['OK']) File "/usr/lib/python2.4/site-packages/beanstalkc-0.1.1-py2.4.egg/beanstalkc.py", line 83, in _interact_yaml size, = self._interact(command, expected_ok, expected_err) File "/usr/lib/python2.4/site-packages/beanstalkc-0.1.1-py2.4.egg/beanstalkc.py", line 57, in _interact status, results = self._read_response() File "/usr/lib/python2.4/site-packages/beanstalkc-0.1.1-py2.4.egg/beanstalkc.py", line 66, in _read_response response = self.socket_file.readline().split() File "/usr/lib/python2.4/socket.py", line 332, in readline data = self._sock.recv(self._rbufsize) Any idea what's going on? I am a Unix newbie, so I have no idea what I set up wrong to cause this. Edit: It seems like the problem lies within beanstalkd itself. Has anyone used this on Solaris 10? If so, which version did you use? The one labeled v1.3 doesn't compile on Solaris, while the latest from the git code repository compiles but causes the above problem (or perhaps there is some configuration to do on Solaris?). Edit 2: I installed and compiled the same components (beanstalkd, PyYAML, python beanstalkc and libevent) on an Ubuntu machine and it works fine. The problem seems to be with the compilation of beanstalkd on Solaris; I have yet to produce or read any solution. A: It seems that the Python client listens to the server, but the server has nothing to say. Is there something to read for the client? Is there a consumer AND a producer? Look at this A: After looking in the code (beanstalkc): your client has sent its 'list-tubes' message, and is waiting for an answer (until you kill it). Your server doesn't answer or can't send the answer to the client (or the answer is shorter than the client expects). Is there a network admin at your side (or site)? :-) A: I might know what is wrong: don't start it in daemon (-d) mode. I have experienced the same, and by accident I found out what is wrong. Or rather, I don't know what is wrong, but it works without running it in daemon mode. ./beanstalkd -p 9977 & as an alternative.
Beanstalkd on Solaris doesn't return anything when called from the Python library
I am using Solaris 10 (x86). I installed beanstalkd and it starts fine using the command "beanstalkd -d -l hostip -p 11300". I have Python 2.4.4 on my system, and I installed the YAML and beanstalkc Python libraries to connect beanstalkd with Python. My problem is when I try to write some code: import beanstalkc beanstalk = beanstalkc.Connection(host='hostip', port=11300) No error so far, but when I try to do something on beanstalk, like say listing queues, nothing happens. beanstalk.tubes() just hangs and nothing returns. If I cancel the operation (using Ctrl+C in the Python environment) or stop the server, I immediately see an output: Traceback (most recent call last): File "<stdin>", line 1, in ? File "/usr/lib/python2.4/site-packages/beanstalkc-0.1.1-py2.4.egg/beanstalkc.py", line 134, in tubes return self._interact_yaml('list-tubes\r\n', ['OK']) File "/usr/lib/python2.4/site-packages/beanstalkc-0.1.1-py2.4.egg/beanstalkc.py", line 83, in _interact_yaml size, = self._interact(command, expected_ok, expected_err) File "/usr/lib/python2.4/site-packages/beanstalkc-0.1.1-py2.4.egg/beanstalkc.py", line 57, in _interact status, results = self._read_response() File "/usr/lib/python2.4/site-packages/beanstalkc-0.1.1-py2.4.egg/beanstalkc.py", line 66, in _read_response response = self.socket_file.readline().split() File "/usr/lib/python2.4/socket.py", line 332, in readline data = self._sock.recv(self._rbufsize) Any idea what's going on? I am a Unix newbie, so I have no idea what I set up wrong to cause this. Edit: It seems like the problem lies within beanstalkd itself. Has anyone used this on Solaris 10? If so, which version did you use? The one labeled v1.3 doesn't compile on Solaris, while the latest from the git code repository compiles but causes the above problem (or perhaps there is some configuration to do on Solaris?). Edit 2: I installed and compiled the same components (beanstalkd, PyYAML, python beanstalkc and libevent) on an Ubuntu machine and it works fine. The problem seems to be with the compilation of beanstalkd on Solaris; I have yet to produce or read any solution.
[ "It seems that the python-client listens to the server,\nbut the server has nothing to say.\nIs there something to read for the client?\nIs there a consumer AND a producer ?\nLook at this\n", "After looking in the code (beanstalkc):\nyour client has send his 'list-tubes' message, and is waiting for an answer.\n(until you kill it)\nyour server doesn't answer or can't send the answer to the client.\n(or the answer is shorter than the client expect)\nis a network-admin at your side (or site) :-)\n", "I might know what is wrong: don't start it in daemon (-d) mode. I have experienced the same and by accident I found out what is wrong.\nOr rather, I don't know what is wrong, but it works without running it in daemon mode.\n./beanstalkd -p 9977 & \nas an alternative.\n" ]
[ 1, 1, 1 ]
[]
[]
[ "beanstalkd", "python", "solaris", "yaml" ]
stackoverflow_0001044473_beanstalkd_python_solaris_yaml.txt
Q: Accidental overwrite of OSX Python system framework I got ahead of myself and downloaded and installed the OSX Python 2.6 package from www.python.org/download/ on my OSX 10.5.5 Intel Mac and installed the full package contents. Only after this did I come across http://wiki.python.org/moin/MacPython/Leopard stating that you should do a partial install of the package to avoid interfering with the system install. I'm afraid I've already overwritten the system framework through that installer and I remember reading somewhere after discovering this that I'd lose certain elements included in the OSX system install and not Python distributions. Is there any way to reverse this or restore anything I may have lost? What exactly have I lost and is it going to be a problem? A: You may have overwritten the system framework but it is more likely that you just overwrote the symlinks in /usr/bin to point to the new version. Try going to /usr/bin and seeing (with something like ls -alsh) where the python symlink points to. It may be python2.6 or 3.0, which is in turn a symlink to /System/Library/Frameworks/Python.framework/Versions ... etc. First try resetting the python symlink to the stable or expected version, i.e., sudo ln -s /usr/bin/python2.5 python (from the /usr/bin dir.) A: I just ran into the same thing myself. I did find that the MacPython installer modified my search $PATH and added '/Library/Frameworks/Python.framework/Versions/Current/bin' which caused the python executable there to be found before the one in '/usr/bin'. Hope this helps anyone else!
Accidental overwrite of OSX Python system framework
I got ahead of myself and downloaded and installed the OSX Python 2.6 package from www.python.org/download/ on my OSX 10.5.5 Intel Mac and installed the full package contents. Only after this did I come across http://wiki.python.org/moin/MacPython/Leopard stating that you should do a partial install of the package to avoid interfering with the system install. I'm afraid I've already overwritten the system framework through that installer and I remember reading somewhere after discovering this that I'd lose certain elements included in the OSX system install and not Python distributions. Is there any way to reverse this or restore anything I may have lost? What exactly have I lost and is it going to be a problem?
[ "You may have overwritten the system framework but it is more likely that you just overwrote the symlinks in /usr/bin to point to the new version. Try going to /usr/bin and seeing (with something like ls -alsh) where the python symlink points to. It may be python2.6 or 3.0, which is in turn a ln to /System/Library/Frameworks/Python.framework/Versions ... etc. First try resetting the python symlink to the stable or expected version, ie, sudo ln -s /usr/bin/python2.5 python (from the /usr/bin dir.)\n", "I just ran into the same thing myself. I did find that the MacPython installer modified my search $PATH and added '/Library/Frameworks/Python.framework/Versions/Current/bin' which caused the python executable there to be found before the one in '/usr/bin'.\nHope this helps anyone else!\n" ]
[ 5, 0 ]
[ "Restore from a recent Time Machine backup or somehow from DVD?\n" ]
[ -1 ]
[ "frameworks", "macos", "osx_leopard", "python" ]
stackoverflow_0000311621_frameworks_macos_osx_leopard_python.txt
Q: how can I not distribute my secret key (facebook api) while using python? I'm writing a facebook desktop application for the first time using the PyFacebook api. Up until now, since I've been experimenting, I just passed the secret key along with the api key to the Facebook constructor like so: import facebook fb = facebook.Facebook("my_api_key", "my_secret_key") and then logged in (fb.login() opens a browser) without any trouble. But now, I want to distribute the code, and since it's Python and open source, I want to have some way of protecting my secret key. The wiki mentions I can use a server and ask for my secret key using the server each time my app runs (as I understand), but I have no clue as to how to start doing this, and how this should be done. I have never done web programming and don't know where I can get a server, and how to get the server to do what is needed in this case, and I don't know how I can use that server. I would really appreciate some help! Thank you.
how can I not distribute my secret key (facebook api) while using python?
I'm writing a facebook desktop application for the first time using the PyFacebook api. Up until now, since I've been experimenting, I just passed the secret key along with the api key to the Facebook constructor like so: import facebook fb = facebook.Facebook("my_api_key", "my_secret_key") and then logged in (fb.login() opens a browser) without any trouble. But now, I want to distribute the code, and since it's Python and open source, I want to have some way of protecting my secret key. The wiki mentions I can use a server and ask for my secret key using the server each time my app runs (as I understand), but I have no clue as to how to start doing this, and how this should be done. I have never done web programming and don't know where I can get a server, and how to get the server to do what is needed in this case, and I don't know how I can use that server. I would really appreciate some help! Thank you.
[ "The relevant page on the FB developer wiki recommends a server component that just keeps your secret key and handles auth.getSession(), then gives your desktop app a session key. See that link for details.\n", "EDIT: cmb's session keys approach is better than the proxy described below. Config files and GAE are still applicable. /EDIT\nYou could take a couple approaches. If your code is open-source and will be used by other developers, you could allow the secret key to be set in a configuration file. When you distribute the code, place a dummy key in the file and create some instructions on how to obtain and set the key in the config file.\nAlternately, if you want to do the server approach, you'll basically be creating a proxy* that will take requests, add the secret key and then forward them on to Facebook. A good, free (unless/until your app gets a lot of users) Python-based service is Google App Engine. They also have a bunch of tutorial videos to get you started. \n* E.g., when myservice.appspot.com/getUserInfo?uid=12345 is called, your service will execute something like the following.\nuserinfo = fb.users.getInfo(self.request.get('uid')...)\n\nIdeally, you'd want to abstract it enough that you don't have to explicitly implement every FB API call you make.\nOne last thing to keep in mind is that many FB API calls do not require the secret key to be passed.\n" ]
[ 3, 1 ]
[]
[]
[ "facebook", "python" ]
stackoverflow_0001092942_facebook_python.txt
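To make the proxy idea concrete, here is a rough sketch on the old App Engine Python runtime the answer recommends. The pyfacebook call shape inside the handler is an assumption, and authenticating your own desktop clients plus error handling are omitted entirely; treat this as an outline, not a finished service.

from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app
import facebook

# The secret key lives only here, server-side; the desktop app never sees it.
fb = facebook.Facebook("my_api_key", "my_secret_key")

class GetUserInfo(webapp.RequestHandler):
    def get(self):
        uid = self.request.get('uid')
        # Assumed pyfacebook call shape -- forwards the request with
        # the secret attached and returns the result to the client.
        info = fb.users.getInfo([uid], ['name'])
        self.response.out.write(str(info))

application = webapp.WSGIApplication([('/getUserInfo', GetUserInfo)])
run_wsgi_app(application)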
Q: What are some interesting features of the EveryBlock.com source code? The source code behind EveryBlock.com, a major Django-powered website founded by Adrian Holovaty, one of the co-Benevolent Dictators For Life of the Django framework, was recently open-sourced. The source is available as tarballs and on github. This large body of code from an originator of Django should have some interesting features, patterns, tricks, or techniques. What is your favorite? A: Some of the things that I noticed: The publishing system ebpub uses a custom Django authentication and user system, and hence cannot use django-admin. Although it uses the relational database PostgreSQL, the data items for various data entries are stored in a single table, with the types of fields defined in another table, for scalability. (An alternative to key-value pair storage systems such as CouchDB.) The system uses a custom database backend, so that such a modified database can be accessed conveniently in views. The blog application is very small and sweet; just one Entry model and no custom views, only generic views. Some of the bots present could be used for multiple purposes, with little tweaking. A: The massive regular expression monster they use to geocode locations from English text. Sentences from news stories like "Shooting was heard in the area East of 3rd between Locust and Pine St" will generate a PostGIS polygon that they then match against the users' locations and layer onto maps. It is a great site, I have been excited about this but hadn't seen the announcement til now. A: Very cool that they released the source. It's a nice bit of code and I think there is something to learn from checking it out. I'm most interested in the geo/mapping functionality, but I also find the scraper routines used to harvest public data from various public sources quite interesting. A: I see that they have one scraper per site per city. I found this to be too much work, as there are better ways of using one generic scraper with various directives which will be site-specific. This is the way I have written my scraper for newjoblist.com I like the look & feel for sure. The map is too washed out. I think the source is good to look at and learn what to do or not do. A great reference.
What are some interesting features of the EveryBlock.com source code?
The source code behind EveryBlock.com, a major Django-powered website founded by Adrian Holovaty, one of the co-Benevolent Dictators For Life of the Django framework, was recently open-sourced. The source is available as tarballs and on github. This large body of code from an originator of Django should have some interesting features, patterns, tricks, or techniques. What is your favorite?
[ "Some of the things that I noticed:\n\nThe publishing system ebpub uses custom django Authentication and user system, hence cannot use django-admin.\nAltho' it uses Relational Database PostgreSQL, the data items for various data entries are stored in a single table, with types of fields defined in another table, for scalability. (An alternative to key-value pair storing systems, CouchDB)\nThe system uses custom database backend, so that such a modified form database can accessed with convenience in views.\nThe blog application is very small and sweet; Just 1 Entry model and no views, Only generic views.\nSome of the bots present could be used for multiple purposes, with little tweaking.\n\n", "The massive regular expression monster they use to geocode locations from English text. Sentences from news stories like \"Shooting was heard in the area East of 3rd between Locust and Pine St\" will generate a PostGIS polygon that they then match against the users' locations and layer onto maps. It is a great site, I have been excited about this but hadn't seen the announcement til now.\n", "Very cool that they released the source. It's a nice bit of code and I think there is something to learn from checking it out.\nI'm most interested in the geo/mapping functionality, but I also find the scraper routines used to harvest public data from various public sources quite interesting. \n", "I see that they have one scraper per site per city. \n\nI found this to be too much work as there are better way of using one generic scraper with various directives which will be site specific. This is the way I have written my scraper for newjoblist.com\n\nI like the look & feel for sure.\n\nMap is too washed out\n\nI think the source is good to look at and learn what to do or not do. A great reference.\n" ]
[ 3, 1, 1, 0 ]
[]
[]
[ "django", "open_source", "python" ]
stackoverflow_0001067360_django_open_source_python.txt
Q: Objective-C string manipulation Currently I am working on a piece of software, which is currently written 100% in Objective-C using the iPhone 3.0 SDK. I have come to a cross roads where I need to do quite a bit of string concatenation, more specifically, NSString concatenation and so far I have been doing it like this: Objective-C string concatenation: NSString *resultantString = (NSMutableString*)[@"Hello " stringByAppendingString: @"World"]; Now as you can imagine this gets quite difficult to read when I try to concatenate 6 NSStrings together. At first I contemplated mixing in an Objective-C++ class to do my string concatenation and hand it back to my Objective-C class as then I could use C++'s easy string concatenation like: C++ string concatenation: string cppString = "Hello" + "World" + "see" + "easy!"; I could use C char arrays but that would be a little more difficult to read. It then struck me that I could use a Python or Ruby bridge in Cocoa (which provide the added bonus of Regular expressions and superior string handling than the C based languages do). This made sense to me even though I have coded only small amounts of Ruby and Python, because they are billed as string manipulation languages. Perl I decided to skip because it isn't directly supported in Xcode. I am also interested in performance, so I am looking for the fastest way to do my string concatenation. So what should I do? Is there some deviously easy way I am missing in Objective-C to concatenate many strings at once, say 10 strings? Or is my idea to use Python or Ruby class methods that return concatenated or regex-modified strings not as incredibly insane as it sounds? Perhaps I even missed some other way to do this? Update: Yes. It seems I was rather crazy for thinking of pulling in another language runtime to do string manipulation, especially since I noted that I was concerned with speed. Adding a bridge would probably be much slower than simply using NSString/NSMutableString. A: For fixed size concatenation, you can use [NSString stringWithFormat:] like: NSString *str = [NSString stringWithFormat:@"%@ %@ %@", @"Hello", @"World", @"Yay!"]; A: You can use a join operation. NSArray *chunks = ... get an array, say by splitting it; string = [chunks componentsJoinedByString: @" :-) "]; would produce something like oop :-) ack :-) bork :-) greeble :-) ponies A: Have you seen the appendString method from the NSMutableString class? appendFormat from the same class will let you do many concatenations with one statement if that is what you're really interested in. A: I would avoid mixing languages, particularly if you don't know Python or Ruby well. What you gain in readable code in your Objective-C you will lose by having to read multiple languages to understand your own code base. Seems like a maintainability nightmare to me. I'd strongly suggest taking one of the suggestions of how to do this in Objective-C directly. A: Dragging in C++ just for this seems quite heavy-handed. Using stringWithFormat: on NSString or appendFormat: with an NSMutableString, as others have suggested, is much more natural and not particularly hard to read. Also, in order to use the strings with Cocoa, you'll have to add extra code to convert back and forth from C++ strings.
Objective-C string manipulation
Currently I am working on a piece of software, which is currently written 100% in Objective-C using the iPhone 3.0 SDK. I have come to a crossroads where I need to do quite a bit of string concatenation, more specifically, NSString concatenation, and so far I have been doing it like this: Objective-C string concatenation: NSString *resultantString = (NSMutableString*)[@"Hello " stringByAppendingString: @"World"]; Now as you can imagine this gets quite difficult to read when I try to concatenate 6 NSStrings together. At first I contemplated mixing in an Objective-C++ class to do my string concatenation and hand it back to my Objective-C class, as then I could use C++'s easy string concatenation like: C++ string concatenation: string cppString = "Hello" + "World" + "see" + "easy!"; I could use C char arrays but that would be a little more difficult to read. It then struck me that I could use a Python or Ruby bridge in Cocoa (which provides the added bonus of regular expressions and better string handling than the C-based languages do). This made sense to me even though I have coded only small amounts of Ruby and Python, because they are billed as string manipulation languages. Perl I decided to skip because it isn't directly supported in Xcode. I am also interested in performance, so I am looking for the fastest way to do my string concatenation. So what should I do? Is there some deviously easy way I am missing in Objective-C to concatenate many strings at once, say 10 strings? Or is my idea to use Python or Ruby class methods that return concatenated or regex-modified strings not as incredibly insane as it sounds? Perhaps I even missed some other way to do this? Update: Yes. It seems I was rather crazy for thinking of pulling in another language runtime to do string manipulation, especially since I noted that I was concerned with speed. Adding a bridge would probably be much slower than simply using NSString/NSMutableString.
[ "For fixed size concatenation, you can use [NSString stringWithFormat:] like:\nNSString *str = [NSString stringWithFormat:@\"%@ %@ %@\",\n @\"Hello\", @\"World\", @\"Yay!\"];\n\n", "you can use join operation.\nNSArray *chunks = ... get an array, say by splitting it;\nstring = [chunks componentsJoinedByString: @\" :-) \"];\n\nwould produce something like\noop :-) ack :-) bork :-) greeble :-) ponies\n", "Have you seen the appendString method from the NSMutableString class?\nappendFormat from the same class will let you do many concatenations with one statement if that is what you're really interested in.\n", "I would avoid mixing languages, particularly if you don't know Python or Ruby well. What you gain in readable code in your Objective-C you will loose have having to read multiple languages to understand your own code base. Seems like a maintainability nightmare to me.\nI'd strongly suggest taking one of the suggestions of how to do this in Objective-C directly.\n", "Dragging in C++ just for this seems quite heavy-handed. Using stringWithFormat: on NSString or appendFormat: with an NSMutableString, as others have suggested, is much more natural and not particularly hard to read. Also, in order to use the strings with Cocoa, you'll have to add extra code to convert back and forth from C++ strings.\n" ]
[ 9, 9, 4, 3, 2 ]
[]
[]
[ "objective_c", "objective_c++", "python", "ruby", "string" ]
stackoverflow_0001093805_objective_c_objective_c++_python_ruby_string.txt
Q: Python Error: TypeError: not all arguments converted during string formatting First python script and I'm getting an error I can't seem to get around using a config file. The first part of the script takes user input and puts that into a mysql database with no problem..Then I get to the filesystem work and things go a bit pear shaped..I can get it to work without using the config file options but I'd like to keep it consistent and pull from that file: vshare = str(raw_input('Share the user needs access to: ')) vrights = str(raw_input('Should this user be Read Only? (y/n): ')) f = open("%s/%s" % (config['vsftp']['user_dir'], (vusername), 'wr')) #f = open("/etc/vsftpd_user_conf/%s" % (vusername) , 'wr' ) f.write("local_root=%s/%s" % (config['vsftp']['local_root_dir'], vshare)) if vrights.lower() in ['y', 'ye', 'yes']: buffer = [] for line in f.readlines(): if 'write_enable=' in line: buffer.append('write_enable=NO') else: buffer.append(line) f.writelines(buffer) f.close() The error I'm getting is: TypeError: not all arguments converted during string formatting If I uncomment the commented line it works and makes it a bit further and errors out as well..But I'll deal with that once I get this hiccup sorted. A: Your tuple is misshaped f = open("%s/%s" % (config['vsftp']['user_dir'], (vusername), 'wr')) Should be f = open("%s/%s" % (config['vsftp']['user_dir'], (vusername)), 'wr') A: The error is here: open("%s/%s" % (config['vsftp']['user_dir'], (vusername), 'wr')) You have three parameters, but only two %s in the string. You probably meant to say: open("%s/%s" % (config['vsftp']['user_dir'], vusername), 'wr') Although 'wr' is unclear, you probably mean w+ or r+. http://docs.python.org/library/functions.html#open A: f = open("%s/%s" % (config['vsftp']['user_dir'], (vusername), 'wr')) You are passing three arguments (config['vsftp']['user_dir'], (vusername), 'wr') to a format string expecting two: "%s/%s". So the error is telling you that there is an argument to the format string that is not being used. A: I think you have a wrong parenthesis, your line should be: f = open("%s/%s" % (config['vsftp']['user_dir'], (vusername)), 'wr') A: It looks like this line should be: f = open("%s/%s" % (config['vsftp']['user_dir'], vusername), 'wr') (I moved the closing parenthesis over.)
Python Error: TypeError: not all arguments converted during string formatting
First python script and I'm getting an error I can't seem to get around using a config file. The first part of the script takes user input and puts that into a mysql database with no problem..Then I get to the filesystem work and things go a bit pear shaped..I can get it to work without using the config file options but I'd like to keep it consistent and pull from that file: vshare = str(raw_input('Share the user needs access to: ')) vrights = str(raw_input('Should this user be Read Only? (y/n): ')) f = open("%s/%s" % (config['vsftp']['user_dir'], (vusername), 'wr')) #f = open("/etc/vsftpd_user_conf/%s" % (vusername) , 'wr' ) f.write("local_root=%s/%s" % (config['vsftp']['local_root_dir'], vshare)) if vrights.lower() in ['y', 'ye', 'yes']: buffer = [] for line in f.readlines(): if 'write_enable=' in line: buffer.append('write_enable=NO') else: buffer.append(line) f.writelines(buffer) f.close() The error I'm getting is: TypeError: not all arguments converted during string formatting If I uncomment the commented line it works and makes it a bit further and errors out as well..But I'll deal with that once I get this hiccup sorted.
[ "Your tuple is misshaped\nf = open(\"%s/%s\" % (config['vsftp']['user_dir'], (vusername), 'wr'))\n\nShould be\nf = open(\"%s/%s\" % (config['vsftp']['user_dir'], (vusername)), 'wr')\n\n", "The error is here:\nopen(\"%s/%s\" % (config['vsftp']['user_dir'], (vusername), 'wr'))\n\nYou have three parameters, but only two %s in the string. You probably meant to say:\nopen(\"%s/%s\" % (config['vsftp']['user_dir'], vusername), 'wr')\n\nAlthough 'wr' is unclear, you probably mean w+ or r+.\nhttp://docs.python.org/library/functions.html#open\n", "f = open(\"%s/%s\" % (config['vsftp']['user_dir'], (vusername), 'wr'))\n\nYou are passing three arguments (config['vsftp']['user_dir'], (vusername), 'wr') to a format string expecting two: \"%s/%s\". So the error is telling you that there is an argument to the format string that is not being used.\n", "I think you have a wrong parenthesis, your line should be:\nf = open(\"%s/%s\" % (config['vsftp']['user_dir'], (vusername)), 'wr')\n\n", "It looks like this line should be:\nf = open(\"%s/%s\" % (config['vsftp']['user_dir'], vusername), 'wr')\n\n(I moved the closing parenthesis over.)\n" ]
[ 3, 2, 0, 0, 0 ]
[]
[]
[ "python", "typeerror" ]
stackoverflow_0001093884_python_typeerror.txt
Q: Windows XP - mute/unmute audio programmatically in Python My machine has two audio inputs: a mic in that I use for gaming, and a line in that I use for guitar. When using one it's important that the other be muted to remove hiss/static, so I was hoping to write a small script that would toggle which one was muted (it's fairly inconvenient to click through the tray icon, switch to my input device, mute and unmute). I thought perhaps I could do this with pywin32, but everything I could find seemed specific to setting the output volume rather than input, and I'm not familiar enough with win32 to even know where to look for better info. Could anybody point me in the right direction?
Windows XP - mute/unmute audio programmatically in Python
My machine has two audio inputs: a mic in that I use for gaming, and a line in that I use for guitar. When using one it's important that the other be muted to remove hiss/static, so I was hoping to write a small script that would toggle which one was muted (it's fairly inconvenient to click through the tray icon, switch to my input device, mute and unmute). I thought perhaps I could do this with pywin32, but everything I could find seemed specific to setting the output volume rather than input, and I'm not familiar enough with win32 to even know where to look for better info. Could anybody point me in the right direction?
[ "Disclaimer: I'm not a windows programming guru by any means...but here's my best guess\nPer the pywin32 FAQ:\n\nHow do I use the exposed Win32 functions to do xyz?\nIn general, the trick is to not\n consider it a Python/PyWin32 question\n at all, but to search for\n documentation or examples of your\n problem, regardless of the language. \n This will generally give you the\n information you need to perform the\n same operations using these\n extensions. The included\n documentation will tell you the\n arguments and return types of the\n functions so you can easily determine\n the correct way to \"spell\" things in\n Python.\n\nSounds like you're looking to control the \"endpoint device\" volumes (i.e. your sound card / line-in). Here's the API reference in that direction.\nHere's a slightly broader look at controlling audio devices in windows if the previous wasn't what you're looking for.\nHere's a blog entry from someone who did what you're trying to do in C# (I know you specified python, but you might be able to extract the correct API calls from the code).\nGood luck! And if you do get working code, I'm interested to see it.\n", "I had a similar problem and couldn't figure out how to use Windows API's to do what I wanted. I ended up just automating the GUI with AutoIt. I think that will be the fastest and easiest solution (albeit a \"hacky\" one). As I answered earlier today, you can use AutoIT from within Python.\n", "You are probably better off using ctypes - pywin32 is good if you are using one of the already included APIs, but I think you'll be out of luck with the sound APIs. Together with the example code from the C# link provided by tgray, use ctypes and winmm.dll, or alternatively, use SWIG to wrap winmm.dll. This may well be quicker as you won't have to build C structure mapping types in ctypes for the types such as MIXERCONTROLDETAILS which are used in the API calls.\n", "tgray seems to have pointed you in the right direction, but once you find out the right Win32 APIs to deal with, you have a couple of options:\n1) Try using pywin32...but it may or may not wrap the functionality you need (it probably doesn't). So you probably only want to do this if you need to use COM to get at the functionality you need.\n2) Use ctypes. It's generally pretty easy to wrap just about any C functionality with ctypes.\n3) If the C# example looks like what you need, you should be able to translate it to IronPython with fairly little effort. Might be easier than using the C API. YMMV, of course.\n" ]
[ 6, 3, 1, 0 ]
[]
[]
[ "audio", "python", "pywin32", "winapi", "windows" ]
stackoverflow_0001092466_audio_python_pywin32_winapi_windows.txt
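To make the ctypes suggestion above concrete, here is a minimal sketch; winmm.dll's waveOutSetVolume is a real call, but it only drives output volume -- muting a specific input line needs the mixer* family (mixerOpen, mixerGetLineInfo, mixerSetControlDetails) with the structs from the C# example rebuilt as ctypes.Structure subclasses:

    import ctypes

    winmm = ctypes.windll.winmm            # load winmm.dll

    winmm.waveOutSetVolume(0, 0x00000000)  # device 0: silence both channels
    winmm.waveOutSetVolume(0, 0xFFFFFFFF)  # full volume; left/right words packed in one DWORD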
Q: How to specify relations using SQLAlchemy declarative syntax? I can't find any proper documentation on how to specify relations using the declarative syntax of SQLAlchemy.. Is it unsupported? That is, should I use the "traditional" syntax? I am looking for a way to specify relations at a higher level, avoiding having to mess with foreign keys etc.. I'd like to just declare "addresses = OneToMany(Address)" and let the framework handle the details.. I know that Elixir can do that, but I was wondering if "plain" SQLA could do it too. Thanks for your help! A: Assuming you are referring to the declarative plugin, where everything I am about to say is documented with examples: class User(Base): __tablename__ = 'users' id = Column('id', Integer, primary_key=True) addresses = relation("Address", backref="user") class Address(Base): __tablename__ = 'addresses' id = Column('id', Integer, primary_key=True) user_id = Column('user_id', Integer, ForeignKey('users.id')) A: Look at the "Configuring Relations" section of the Declarative docs. Not quite as high level as "OneToMany" but better than fully specifying the relation. class Address(Base): __tablename__ = 'addresses' id = Column(Integer, primary_key=True) email = Column(String(50)) user_id = Column(Integer, ForeignKey('users.id'))
How to specify relations using SQLAlchemy declarative syntax?
I can't find any proper documentation on how to specify relations using the declarative syntax of SQLAlchemy.. Is it unsupported? That is, should I use the "traditional" syntax? I am looking for a way to specify relations at a higher level, avoiding having to mess with foreign keys etc.. I'd like to just declare "addresses = OneToMany(Address)" and let the framework handle the details.. I know that Elixir can do that, but I was wondering if "plain" SQLA could do it too. Thanks for your help!
[ "Assuming you are referring to the declarative plugin, where everything I am about to say is documented with examples:\nclass User(Base):\n __tablename__ = 'users'\n\n id = Column('id', Integer, primary_key=True)\n addresses = relation(\"Address\", backref=\"user\")\n\nclass Address(Base):\n __tablename__ = 'addresses'\n\n id = Column('id', Integer, primary_key=True)\n user_id = Column('user_id', Integer, ForeignKey('users.id'))\n\n", "Look at the \"Configuring Relations\" section of the Declarative docs. Not quite as high level as \"OneToMany\" but better than fully specifying the relation. \nclass Address(Base):\n __tablename__ = 'addresses'\n\n id = Column(Integer, primary_key=True)\n email = Column(String(50))\n user_id = Column(Integer, ForeignKey('users.id'))\n\n" ]
[ 3, 0 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0000250398_python_sqlalchemy.txt
Q: Elegant way to abstract multiple function calls? Example: >>> def write_to_terminal(fmt, *args): ... print fmt % args >>> LOG = logging.getLogger(__name__) >>> info = multicall(write_to_terminal, LOG.info) >>> debug = multicall(write_debug_to_terminal, LOG.debug) >>> ... >>> info('Hello %s', 'guido') # display in terminal *and* log the message Is there an elegant way to write multicall? Perhaps with the help of the standard library .. without reinventing the wheel? A: Something like this? def multicall(*functions): def call_functions(*args, **kwds): for function in functions: function(*args, **kwds) return call_functions And if you want to aggregate the results: def multicall(*functions): def call_functions(*args, **kwds): return [function(*args, **kwds) for function in functions] return call_functions EDIT Decorators were suggested; in that case it would look like this: def appendcalls(*functions): def decorator(decorated_function): all_functions = [decorated_function] + list(functions) def call_functions(*args, **kwds): for function in all_functions: function(*args, **kwds) return call_functions return decorator LOG = logging.getLogger(__name__) @appendcalls(LOG.info) def info(fmt, *args): print fmt % args info('Hello %s', 'guido') appendcalls() takes any number of functions to be called after the decorated function. You may want to implement the decorator differently, depending on what return value you want -- the original from the decorated function, a list of all function results or nothing at all. A: You could look into Python decorators. A clear description is here: http://www.artima.com/weblogs/viewpost.jsp?thread=240808
Elegant way to abstract multiple function calls?
Example: >>> def write_to_terminal(fmt, *args): ... print fmt % args >>> LOG = logging.getLogger(__name__) >>> info = multicall(write_to_terminal, LOG.info) >>> debug = multicall(write_debug_to_terminal, LOG.debug) >>> ... >>> info('Hello %s', 'guido') # display in terminal *and* log the message Is there an elegant way to write multicall? Perhaps with the help of the standard library .. without reinventing the wheel?
[ "Something like this?\ndef multicall(*functions):\n def call_functions(*args, **kwds):\n for function in functions:\n function(*args, **kwds)\n return call_functions\n\nAnd if you want to aggregate the results:\ndef multicall(*functions):\n def call_functions(*args, **kwds):\n return [function(*args, **kwds) for function in functions]\n return call_functions\n\nEDIT\nDecorators were suggested; in that case it would look like this:\ndef appendcalls(*functions):\n def decorator(decorated_function):\n all_functions = [decorated_function] + list(functions)\n def call_functions(*args, **kwds):\n for function in all_functions:\n function(*args, **kwds)\n return call_functions\n return decorator\n\n\nLOG = logging.getLogger(__name__)\n\n@appendcalls(LOG.info)\ndef info(fmt, *args):\n print fmt % args\n\ninfo('Hello %s', 'guido')\n\nappendcalls() takes any number of functions to be called after the decorated function. You may want to implement the decorator differently, depending on what return value you want -- the original from the decorated function, a list of all function results or nothing at all.\n", "You could look into Python decorators.\nA clear description is here: http://www.artima.com/weblogs/viewpost.jsp?thread=240808\n" ]
[ 5, 1 ]
[]
[]
[ "function", "python" ]
stackoverflow_0001094611_function_python.txt
Q: Is there a Python language specification? Is there anything in Python akin to Java's JLS or C#'s spec? A: There's no specification per se. The closest thing is the Python Language Reference, which details the syntax and semantics of the language. A: You can check out the Python Reference. A: No, Python is defined by its implementation.
Is there a Python language specification?
Is there anything in Python akin to Java's JLS or C#'s spec?
[ "There's no specification per se. The closest thing is the Python Language Reference, which details the syntax and semantics of the language.\n", "You can check out the Python Reference\n", "No, python is defined by its implementation.\n" ]
[ 35, 2, 0 ]
[]
[]
[ "python", "specifications" ]
stackoverflow_0001094961_python_specifications.txt
Q: Python strings / match case I have a CSV file which has the following format: id,case1,case2,case3 Here is a sample: 123,null,X,Y 342,X,X,Y 456,null,null,null 789,null,null,X For each line I need to know which of the cases is not null. Is there an easy way to find out which case(s) are not null without splitting the string and going through each element? This is what the result should look like: 123,case2:case3 342,case1:case2:case3 456:None 789:case3 A: You probably want to take a look at the CSV module, which has readers and writers that will enable you to create transforms. >>> from StringIO import StringIO >>> from csv import DictReader >>> fh = StringIO(""" ... id,case1,case2,case3 ... ... 123,null,X,Y ... ... 342,X,X,Y ... ... 456,null,null,null ... ... 789,null,null,X ... """.strip()) >>> dr = DictReader(fh) >>> dr.next() {'case1': 'null', 'case3': 'Y', 'case2': 'X', 'id': '123'} At which point you can do something like: >>> from csv import DictWriter >>> out_fh = StringIO() >>> writer = DictWriter(out_fh, fieldnames=dr.fieldnames) >>> for mapping in dr: ... writer.writerow(dict((k, v) for k, v in mapping.items() if v != 'null')) ... The last bit is just pseudocode -- not sure dr.fieldnames is actually a property. Replace out_fh with the filehandle that you'd like to output to. A: Any way you slice it, you are still going to have to go through the list. There are more and less elegant ways to do it. Depending on the Python version you are using, you can use list comprehensions. ids=line.split(",") print "%s:%s" % (ids[0], ":".join(["case%d" % x for x in range(1, len(ids)) if ids[x] != "null"])) A: Why do you treat splitting as a problem? For performance reasons? Literally you could avoid splitting with smart regexps (like: \d+,null,\w+,\w+ \d+,\w+,null,\w+ ... but I find it a worse solution than reparsing the data into lists. A: You could use the Python csv module, which comes with the standard installation of Python... It will not be much easier, though...
Python strings / match case
I have a CSV file which has the following format: id,case1,case2,case3 Here is a sample: 123,null,X,Y 342,X,X,Y 456,null,null,null 789,null,null,X For each line I need to know which of the cases is not null. Is there an easy way to find out which case(s) are not null without splitting the string and going through each element? This is what the result should look like: 123,case2:case3 342,case1:case2:case3 456:None 789:case3
[ "You probably want to take a look at the CSV module, which has readers and writers that will enable you to create transforms.\n>>> from StringIO import StringIO\n>>> from csv import DictReader\n>>> fh = StringIO(\"\"\"\n... id,case1,case2,case3\n... \n... 123,null,X,Y\n... \n... 342,X,X,Y\n... \n... 456,null,null,null\n... \n... 789,null,null,X\n... \"\"\".strip())\n>>> dr = DictReader(fh)\n>>> dr.next()\n{'case1': 'null', 'case3': 'Y', 'case2': 'X', 'id': '123'}\n\nAt which point you can do something like:\n>>> from csv import DictWriter\n>>> out_fh = StringIO()\n>>> writer = DictWriter(fh, fieldnames=dr.fieldnames)\n>>> for mapping in dr:\n... writer.write(dict((k, v) for k, v in mapping.items() if v != 'null'))\n...\n\nThe last bit is just pseudocode -- not sure dr.fieldnames is actually a property. Replace out_fh with the filehandle that you'd like to output to.\n", "Anyway you slice it, you are still going to have to go through the list. There are more and less elegant ways to do it. Depending on the python version you are using, you can use list comprehensions.\nids=line.split(\",\")\nprint \"%s:%s\" % (ids[0], \":\".join([\"case%d\" % x for x in range(1, len(ids)) if ids[x] != \"null\"])\n\n", "Why do you treat spliting as a problem? For performance reasons?\nLiterally you could avoid splitting with smart regexps (like:\n\\d+,null,\\w+,\\w+\n\\d+,\\w+,null,\\w+\n...\n\nbut I find it a worse solution than reparsing the data into lists.\n", "You could use the Python csv module, comes in with the standard installation of python... It will not be much easier, though...\n" ]
[ 2, 1, 0, 0 ]
[]
[]
[ "csv", "python" ]
stackoverflow_0001095026_csv_python.txt
Q: Can I send SIGINT to a Python subprocess on Windows? I've got a Python script managing a gdb process on Windows, and I need to be able to send a SIGINT to the spawned process in order to halt the target process (managed by gdb) It appears that there is only SIGTERM available in Win32, but clearly if I run gdb from the console and Ctrl+C, it thinks it's receiving a SIGINT. Is there a way I can fake this such that the functionality is available on all platforms? (I am using the subprocess module, and python 2.5/2.6) A: Windows doesn't have the unix signals IPC mechanism. I would look at sending a CTRL-C to the gdb process.
Can I send SIGINT to a Python subprocess on Windows?
I've got a Python script managing a gdb process on Windows, and I need to be able to send a SIGINT to the spawned process in order to halt the target process (managed by gdb) It appears that there is only SIGTERM available in Win32, but clearly if I run gdb from the console and Ctrl+C, it thinks it's receiving a SIGINT. Is there a way I can fake this such that the functionality is available on all platforms? (I am using the subprocess module, and python 2.5/2.6)
[ "Windows doesn't have the unix signals IPC mechanism.\nI would look at sending a CTRL-C to the gdb process.\n" ]
[ 1 ]
[]
[]
[ "python", "sigint", "signal_handling", "subprocess", "windows" ]
stackoverflow_0001095549_python_sigint_signal_handling_subprocess_windows.txt
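A commonly cited sketch of faking Ctrl+C on Windows with ctypes -- treat it as a starting point, not a guaranteed recipe: gdb must share the script's console, and because process-group id 0 signals every process attached to that console, the script shields itself first:

    import ctypes, signal

    CTRL_C_EVENT = 0                                     # winbase.h constant
    signal.signal(signal.SIGINT, signal.SIG_IGN)         # don't interrupt ourselves
    ctypes.windll.kernel32.GenerateConsoleCtrlEvent(CTRL_C_EVENT, 0)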
Q: Does get_or_create() have to save right away? (Django) I need to use something like get_or_create() but the problem is that I have a lot of fields and I don't want to set defaults (which don't make sense anyway), and if I don't set defaults it returns an error, because it saves the object right away apparently. I can set the fields to null=True, but I don't want null fields. Is there any other method or any extra parameter that can be sent to get_or_create() so that it instantiates an object but doesn't save it until I call save() on it? Thanks. A: You can just do: try: obj = Model.objects.get(**kwargs) except Model.DoesNotExist: obj = Model(**dict((k,v) for (k,v) in kwargs.items() if '__' not in k)) which is pretty much what get_or_create does, sans commit.
Does get_or_create() have to save right away? (Django)
I need to use something like get_or_create() but the problem is that I have a lot of fields and I don't want to set defaults (which don't make sense anyway), and if I don't set defaults it returns an error, because it saves the object right away apparently. I can set the fields to null=True, but I don't want null fields. Is there any other method or any extra parameter that can be sent to get_or_create() so that it instantiates an object but doesn't save it until I call save() on it? Thanks.
[ "You can just do:\ntry:\n obj = Model.objects.get(**kwargs)\nexcept Model.DoesNotExist:\n obj = Model(**dict((k,v) for (k,v) in kwargs.items() if '__' not in k))\n\nwhich is pretty much what get_or_create does, sans commit.\n" ]
[ 39 ]
[]
[]
[ "django", "django_models", "django_orm", "python" ]
stackoverflow_0001095663_django_django_models_django_orm_python.txt
Q: Alternative Python imaging libraries on Google App Engine? I am thinking about uploading images to Google App Engine, but I need to brighten parts of the image. I am not sure if the App Engine imaging API will be sufficient. I am considering trying an overlay with a white image and partial opacity. However, if that does not yield the desired results, would there be another Python imaging library that works with App Engine? Basically it would have to be pure Python (no associated C code or anything). A: PNGcanvas might help, if PNG input and output is satisfactory -- it doesn't directly offer the "brighten" functionality you require, but it does let you load and save PNG files into memory and access them directly from Python, and it IS a single, simple Python source file.
Alternative Python imaging libraries on Google App Engine?
I am thinking about uploading images to Google App Engine, but I need to brighten parts of the image. I am not sure if the App Engine imaging API will be sufficient. I am considering trying an overlay with a white image and partial opacity. However, if that does not yield the desired results, would there be another Python imaging library that works with App Engine? Basically it would have to be pure Python (no associated C code or anything).
[ "PNGcanvas might help, if PNG input and output is satisfactory -- it doesn't directly offer the \"brighten\" functionality you require, but it does let you load and save PNG files into memory and access them directly from Python, and it IS a single, simple Python source file.\n" ]
[ 4 ]
[]
[]
[ "google_app_engine", "python", "python_imaging_library" ]
stackoverflow_0001095325_google_app_engine_python_python_imaging_library.txt
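For the white-overlay idea in the question, the stock App Engine Images API may already suffice via images.composite; base_png and white_png below stand for image byte strings you supply, and the offsets and opacity are made-up values:

    from google.appengine.api import images

    result = images.composite(
        [(base_png, 0, 0, 1.0, images.TOP_LEFT),      # opaque base layer
         (white_png, 40, 40, 0.4, images.TOP_LEFT)],  # 40%-opaque white patch at (40, 40)
        width, height)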
Q: Finding partial strings in a list of strings - python I am trying to check if a user is a member of an Active Directory group, and I have this: ldap.set_option(ldap.OPT_REFERRALS, 0) try: con = ldap.initialize(LDAP_URL) con.simple_bind_s(userid+"@"+ad_settings.AD_DNS_NAME, password) ADUser = con.search_ext_s(ad_settings.AD_SEARCH_DN, ldap.SCOPE_SUBTREE, \ "sAMAccountName=%s" % userid, ad_settings.AD_SEARCH_FIELDS)[0][1] except ldap.LDAPError: return None ADUser returns a list of strings: {'givenName': ['xxxxx'], 'mail': ['[email protected]'], 'memberOf': ['CN=group1,OU=Projects,OU=Office,OU=company,DC=domain,DC=com', 'CN=group2,OU=Projects,OU=Office,OU=company,DC=domain,DC=com', 'CN=group3,OU=Projects,OU=Office,OU=company,DC=domain,DC=com', 'CN=group4,OU=Projects,OU=Office,OU=company,DC=domain,DC=com'], 'sAMAccountName': ['myloginid'], 'sn': ['Xxxxxxxx']} Of course in the real world the group names are verbose and of varied structure, and users will belong to tens or hundreds of groups. If I get the list of groups out as ADUser.get('memberOf')[0], what is the best way to check if any members of a separate list exist in the main list? For example, the check list would be ['group2', 'group16'] and I want to get a true/false answer as to whether any of the smaller list exist in the main list. A: If the format example you give is somewhat reliable, something like: import re grps = re.compile(r'CN=(\w+)').findall def anyof(short_group_list, adu): all_groups_of_user = set(g for gs in adu.get('memberOf',()) for g in grps(gs)) return sorted(all_groups_of_user.intersection(short_group_list)) where you pass your list such as ['group2', 'group16'] as the first argument, your ADUser dict as the second argument; this returns an alphabetically sorted list (possibly empty, meaning "none") of the groups, among those in short_group_list, to which the user belongs. It's probably not much faster to just a bool, but, if you insist, changing the second statement of the function to: return any(g for g in short_group_list if g in all_groups_of_user) might possibly save a certain amount of time in the "true" case (since any short-circuits) though I suspect not in the "false" case (where the whole list must be traversed anyway). If you care about the performance issue, best is to benchmark both possibilities on data that's realistic for your use case! If performance isn't yet good enough (and a bool yes/no is sufficient, as you say), try reversing the looping logic: def anyof_v2(short_group_list, adu): gset = set(short_group_list) return any(g for gs in adu.get('memberOf',()) for g in grps(gs) if g in gset) any's short-circuit abilities might prove more useful here (at least in the "true" case, again -- because, again, there's no way to give a "false" result without examining ALL the possibilities anyway!-). A: You can use set intersection (& operator) once you parse the group list out. For example: > memberOf = 'CN=group1,OU=Projects,OU=Office,OU=company,DC=domain,DC=com' > groups = [token.split('=')[1] for token in memberOf.split(',')] > groups ['group1', 'Projects', 'Office', 'company', 'domain', 'com'] > checklist1 = ['group1', 'group16'] > set(checklist1) & set(groups) set(['group1']) > checklist2 = ['group2', 'group16'] > set(checklist2) & set(groups) set([]) Note that a conditional evaluation on a set works the same as for lists and tuples. True if there are any elements in the set, False otherwise. So, "if set(checklist2) & set(groups): ..." would not execute since the condition evaluates to False in the above example (the opposite is true for the checklist1 test). Also see: http://docs.python.org/library/sets.html
Finding partial strings in a list of strings - python
I am trying to check if a user is a member of an Active Directory group, and I have this: ldap.set_option(ldap.OPT_REFERRALS, 0) try: con = ldap.initialize(LDAP_URL) con.simple_bind_s(userid+"@"+ad_settings.AD_DNS_NAME, password) ADUser = con.search_ext_s(ad_settings.AD_SEARCH_DN, ldap.SCOPE_SUBTREE, \ "sAMAccountName=%s" % userid, ad_settings.AD_SEARCH_FIELDS)[0][1] except ldap.LDAPError: return None ADUser returns a list of strings: {'givenName': ['xxxxx'], 'mail': ['[email protected]'], 'memberOf': ['CN=group1,OU=Projects,OU=Office,OU=company,DC=domain,DC=com', 'CN=group2,OU=Projects,OU=Office,OU=company,DC=domain,DC=com', 'CN=group3,OU=Projects,OU=Office,OU=company,DC=domain,DC=com', 'CN=group4,OU=Projects,OU=Office,OU=company,DC=domain,DC=com'], 'sAMAccountName': ['myloginid'], 'sn': ['Xxxxxxxx']} Of course in the real world the group names are verbose and of varied structure, and users will belong to tens or hundreds of groups. If I get the list of groups out as ADUser.get('memberOf')[0], what is the best way to check if any members of a separate list exist in the main list? For example, the check list would be ['group2', 'group16'] and I want to get a true/false answer as to whether any of the smaller list exist in the main list.
[ "If the format example you give is somewhat reliable, something like:\nimport re\ngrps = re.compile(r'CN=(\\w+)').findall\n\ndef anyof(short_group_list, adu):\n all_groups_of_user = set(g for gs in adu.get('memberOf',()) for g in grps(gs))\n return sorted(all_groups_of_user.intersection(short_group_list))\n\nwhere you pass your list such as ['group2', 'group16'] as the first argument, your ADUser dict as the second argument; this returns an alphabetically sorted list (possibly empty, meaning \"none\") of the groups, among those in short_group_list, to which the user belongs. \nIt's probably not much faster to just a bool, but, if you insist, changing the second statement of the function to:\n return any(g for g in short_group_list if g in all_groups_of_user)\n\nmight possibly save a certain amount of time in the \"true\" case (since any short-circuits) though I suspect not in the \"false\" case (where the whole list must be traversed anyway). If you care about the performance issue, best is to benchmark both possibilities on data that's realistic for your use case!\nIf performance isn't yet good enough (and a bool yes/no is sufficient, as you say), try reversing the looping logic:\ndef anyof_v2(short_group_list, adu):\n gset = set(short_group_list)\n return any(g for gs in adu.get('memberOf',()) for g in grps(gs) if g in gset)\n\nany's short-circuit abilities might prove more useful here (at least in the \"true\" case, again -- because, again, there's no way to give a \"false\" result without examining ALL the possibilities anyway!-).\n", "You can use set intersection (& operator) once you parse the group list out. For example:\n> memberOf = 'CN=group1,OU=Projects,OU=Office,OU=company,DC=domain,DC=com'\n\n> groups = [token.split('=')[1] for token in memberOf.split(',')]\n\n> groups\n['group1', 'Projects', 'Office', 'company', 'domain', 'com']\n\n> checklist1 = ['group1', 'group16']\n\n> set(checklist1) & set(groups)\nset(['group1'])\n\n> checklist2 = ['group2', 'group16']\n\n> set(checklist2) & set(groups)\nset([])\n\nNote that a conditional evaluation on a set works the same as for lists and tuples. True if there are any elements in the set, False otherwise. So, \"if set(checklist2) & set(groups): ...\" would not execute since the condition evaluates to False in the above example (the opposite is true for the checklist1 test).\nAlso see:\nhttp://docs.python.org/library/sets.html\n" ]
[ 2, 1 ]
[]
[]
[ "list", "python", "regex" ]
stackoverflow_0001095270_list_python_regex.txt
Q: Comparing and updating array values in Python I'm developing a Sirius XM radio desktop player in Python, in which I want the ability to display a table of all the channels and what is currently playing on each of them. This channel data is obtained from their website as a JSON string. I'm looking for the best data structure that would allow the cleanest way to compare and update the channel data. Arrays are problematic because I would want to be able to refer to an item by its channel number, but if I manually set each index I lose the ability to sort the array, as it would remap the index sequentially (while the channels aren't in a perfect sequence). The other possibility (I can see) is using Sqlite, however I'm not sure if this is overkill. Is there a cleaner approach for referring to and maintaining this data? A: Why not a dict, with channel number as the key and "what's playing" as the value? Easy to make from JSON, easy to sort (sorted(thedict) sorts by channel, sorted(thedict, key=thedict.get) sorts by value) -- all operations are pretty easy (if you specify better exactly what operations you want to do I'll be happy to show corresponding code samples). A: In this kind of situation, I often use a dict. It looks to me like the simplest solution. I think that Sqlite will cause some unnecessary overhead. However it would give you persistence of data. But I guess that your app needs to be online so you don't really need persistence.
Comparing and updating array values in Python
I'm developing a Sirius XM radio desktop player in Python, in which I want the ability to display a table of all the channels and what is currently playing on each of them. This channel data is obtained from their website as a JSON string. I'm looking for the best data structure that would allow the cleanest way to compare and update the channel data. Arrays are problematic because I would want to be able to refer to an item by its channel number, but if I manually set each index I lose the ability to sort the array, as it would remap the index sequentially (while the channels aren't in a perfect sequence). The other possibility (I can see) is using Sqlite, however I'm not sure if this is overkill. Is there a cleaner approach for referring to and maintaining this data?
[ "Why not a dict, with channel number as the key and \"what's playing\" as the value? Easy to make from JSON, easy to sort (sorted(thedict) sorts by channel, sorted(thedict, key=thedict.get) sorts by value -- all operations are pretty easy (if you specify better exactly what operations you want to do I'll be happy to show corresponding code samples).\n", "In this kind of situation, I often use a dict. It looks to me as the simplest solution. \nI think that Sqlite will cause some unecessary overhead. However it would give you persistence of data. But I guess that your app needs to be online so you don't really need persistence\n" ]
[ 4, 2 ]
[]
[]
[ "arrays", "data_structures", "list", "python" ]
stackoverflow_0001096003_arrays_data_structures_list_python.txt
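A sketch of the dict-based compare-and-update step, assuming you have already pulled (channel_number, now_playing) pairs out of the JSON feed:

    channels = dict(pairs)          # e.g. {2: 'Song A', 16: 'Song B', ...}

    def changed(old, new):
        # channel numbers whose now-playing entry differs since the last poll
        return dict((num, title) for num, title in new.items()
                    if old.get(num) != title)

    for num in sorted(channels):    # render table rows in channel order
        print num, channels[num]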
Q: What is the easiest way to handle dates/times in Python? My use case is that I'm just making a website that I want people all over the world to be able to use, and I want to be able to say things like "This happened at 5:33pm on October 5" and also "This happened 5 minutes ago," etc. Should I use the datetime module? Or just strftime? Or something fancier that isn't part of the std distro of Python? A: Take a look at the dateutil module: http://labix.org/python-dateutil It's good at doing the types of things you're looking for - see some of the examples in the documentation. A: You may have a look at Django's humanize module. It is part of Django, but I think it would be quite easy to adapt it to your needs. A: If you're going to use datetime, make sure you read this recent and most excellent article: Tips on using python's datetime module. datetime will take care of most of the niceties of handling time arithmetic, but it won't give you the English-language pretty printing you're looking for. A: The datetime module in Python will allow you to get/set/manipulate dates and times. A question about relative date formatting in Python has already been asked: Stack Overflow Post but with very little response. A: Try the relativeDates module. It brings you exactly the stuff you wanted. A: I have always been very happy using the datetime package. You get a lot of stuff for free, and it's pretty easy to create datetime objects as well, calculate durations, etc. A: There is also the Time module.
What is the easiest way to handle dates/times in Python?
My use case is that I'm just making a website that I want people all over the world to be able to use, and I want to be able to say things like "This happened at 5:33pm on October 5" and also "This happened 5 minutes ago," etc. Should I use the datetime module? Or just strftime? Or something fancier that isn't part of the std distro of Python?
[ "Take a look at the dateutil module:\nhttp://labix.org/python-dateutil\nIt's good at doing the types of things you're looking for - see some of the examples in the documentation.\n", "You may have a look at Django's humanize module.\nIt is part of Django, but I think it would be quite easy to adapt it to your needs.\n", "If you're going to use datetime, make sure you read this recent and most excellent article:\n Tips on using python's datetime module. datetime will take care of most of the niceties of handling time arithmetic, but it won't give you the English-language pretty printing you're looking for.\n", "The datetime module in Python will allow you to get/set/manipulate dates and times. \nA question about relative date formatting in Python has already been asked: Stack Overflow Post\nbut with very little responce. \n", "Try relativeDates Module module. It exactly brings you the stuff you wanted.\n", "I have always been very happy using the datetime package. You get a lot of stuff for free, and it's pretty easy to create datetime objects as well, calculate duration ect.\n", "There is also the Time module.\n" ]
[ 2, 2, 1, 1, 1, 0, 0 ]
[]
[]
[ "datetime", "python" ]
stackoverflow_0001096396_datetime_python.txt
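A small sketch combining both display styles with datetime plus dateutil's relativedelta (the timestamp is invented for the example):

    from datetime import datetime
    from dateutil.relativedelta import relativedelta

    posted = datetime(2009, 10, 5, 17, 33)
    age = relativedelta(datetime.now(), posted)
    if (age.years, age.months, age.days, age.hours) == (0, 0, 0, 0):
        print '%d minutes ago' % age.minutes
    else:
        print posted.strftime('This happened at %I:%M%p on %B %d')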
Q: Can I view the doc string of a function in Python using VIM? Is there any way to view a function's doc string when writing Python in VIM? For instance: def MyFunction(spam): """A function that foobars the spam returns eggs""" return foobar(spam).eggs() I'd like to be able to type MyFunction(spam0) and see the doc string, either as a tooltip or in the status bar or any other way that VIM allows. A: The pythoncomplete script is probably what you are looking for.
Can I view the doc string of a function in Python using VIM?
Is there any way to view a function's doc string when writing Python in VIM? For instance: def MyFunction(spam): """A function that foobars the spam returns eggs""" return foobar(spam).eggs() I'd like to be able to type MyFunction(spam0) and see the doc string, either as a tooltip or in the status bar or any other way that VIM allows.
[ "The pythoncomplete script is probably what you are looking for.\n" ]
[ 2 ]
[]
[]
[ "docstring", "python", "vim" ]
stackoverflow_0001096912_docstring_python_vim.txt
Q: Incorporate custom template into the django admin interface and session I have made a custom formwizard and incorporated it into my admin interface. Basically I have taken the change_form.html and left it under the admin interface url: (r'^admin/compilation/evaluation/add/$', EvaluationWizard([EvaluationForm1, EvaluationForm2])), It works, but the admin "session" is not kept. I can access the page without being logged in to the admin interface, and the admin variables like the breadcrumbs are not working. How do I incorporate it under the "admin interface session" so to speak? Thanks, John A: If you need to make sure only authorised users access the page, you need to check for an admin user in your request handler. This will be the __call__ method in your EvaluationWizard class. Basically, the logic used by the admin is available for viewing here. Look for this in the AdminSite class: if not self.has_permission(request): return self.login(request) and use similar logic, or whatever you need. You'll need a similar statement at the top of your __call__ method. The has_permission method of AdminSite is a one-liner, which you can use as-is, but you'll need to adapt the login method to your specific needs.
Incorporate custom template into the django admin interface and session
I have made a custom formwizard and incorporated it into my admin interface. Basically I have taken the change_form.html and left it under the admin interface url: (r'^admin/compilation/evaluation/add/$', EvaluationWizard([EvaluationForm1, EvaluationForm2])), It works, but the admin "session" is not kept. I can access the page without being logged in to the admin interface, and the admin variables like the breadcrumbs are not working. How do I incorporate it under the "admin interface session" so to speak? Thanks, John
[ "If you need to make sure only authorised users access the page, you need to check for an admin user in your request handler. This will be the __call__ method in your EvaluationWizard class.\nBasically, the logic used by the admin is available for viewing here. Look for this in the AdminSite class:\nif not self.has_permission(request): \n return self.login(request) \n\nand use similar logic, or whatever you need. You'll need a similar statement at the top of your __call__ method. The has_permission method of AdminSite is a one-liner, which you can use as-is, but you'll need to adapt the login method to your specific needs.\n" ]
[ 1 ]
[]
[]
[ "django", "django_admin", "django_templates", "python" ]
stackoverflow_0001096607_django_django_admin_django_templates_python.txt
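Folding that check into the wizard itself might look roughly like this -- a sketch against the pre-1.0-era formtools/auth APIs the answer quotes, so verify the method names on your Django version:

    from django.contrib.formtools.wizard import FormWizard
    from django.contrib.auth.views import redirect_to_login

    class EvaluationWizard(FormWizard):
        def __call__(self, request, *args, **kwargs):
            # the same test AdminSite.has_permission performs
            if not (request.user.is_authenticated() and request.user.is_staff):
                return redirect_to_login(request.get_full_path())
            return super(EvaluationWizard, self).__call__(request, *args, **kwargs)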
Q: Issue with Facebook showAddSectionButton I am going absolutely batty trying to figure out how to get showAddSectionButton to work. The problem: I'm trying to get the 'add section button' to show up. There is nothing showing up right now. My code: <html xmlns="http://www.w3.org/1999/xhtml" xmlns:fb="http://www.facebook.com/2008/fbml" > <body> <div id="s1"></div> <script type="text/javascript" src="{{ fb_js }}"></script> <script type="text/javascript"> window.onload = function() { FB_RequireFeatures(["XFBML"], function() { FB.Facebook.init('{{ api_key }}','{{ receiver_path }}', null); FB.Connect.showAddSectionButton("profile", document.getElementById("s1")); }); }; </script> <div id="s2"></div> </body> </html> Stuff I've tried: Copy-pasted the code from the working Facebook example app Smiley and made only the minimal changes to customize it to my settings Manually checked to make sure all of the links (js library, xd_receiver) work receiver_path is a relative path confirmed that the facebook js include is supposed to be in the body of page I'm pretty new at firebug, but I've taken a poke around, and it looks like the facebook js has re-written the HTML, specifically, there is an iframe inside of the [div id="s1"][/div] which looks like it should be a button. Unfortunately, I don't see anything displayed. Any help would be greatly appreciated. A: I did finally figure out the problem. Facebook doesn't allow anything but FBML on the profile page (for some reason I thought I could use an iframe), which means you have to call setFBML with the profile_main property filled first. Once I did that, the button popped right up.
Issue with Facebook showAddSectionButton
I am going absolutely batty trying to figure out how to get showAddSectionButton to work. The problem: I'm trying to get the 'add section button' to show up. There is nothing showing up right now. My code: <html xmlns="http://www.w3.org/1999/xhtml" xmlns:fb="http://www.facebook.com/2008/fbml" > <body> <div id="s1"></div> <script type="text/javascript" src="{{ fb_js }}"></script> <script type="text/javascript"> window.onload = function() { FB_RequireFeatures(["XFBML"], function() { FB.Facebook.init('{{ api_key }}','{{ receiver_path }}', null); FB.Connect.showAddSectionButton("profile", document.getElementById("s1")); }); }; </script> <div id="s2"></div> </body> </html> Stuff I've tried: Copy-pasted the code from the working Facebook example app Smiley and made only the minimal changes to customize it to my settings Manually checked to make sure all of the links (js library, xd_receiver) work receiver_path is a relative path confirmed that the facebook js include is supposed to be in the body of page I'm pretty new at firebug, but I've taken a poke around, and it looks like the facebook js has re-written the HTML, specifically, there is an iframe inside of the [div id="s1"][/div] which looks like it should be a button. Unfortunately, I don't see anything displayed. Any help would be greatly appreciated.
[ "I did finally figure out the problem.\nFacebook doesn't allow anything but FBML on the profile page (for some reason I thought I could use an iframe), which means you have to call setFBML with the profile_main property filled first.\nOnce I did that, the button popped right up.\n" ]
[ 0 ]
[]
[]
[ "facebook", "google_app_engine", "python" ]
stackoverflow_0001052613_facebook_google_app_engine_python.txt
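In code, the fix the answer describes amounts to publishing FBML into the profile_main slot before rendering the button; with the old REST API the method is profile.setFBML, though the wrapper spelling below is purely illustrative and varies by client library:

    # hypothetical Python client call mirroring the REST method profile.setFBML
    fb.profile.setFBML(uid=user_id,
                       profile_main='<b>Hello from my app</b>')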
Q: Override namespace in Python Say there is a folder, '/home/user/temp/a40bd22344'. The name is completely random and changes in every iteration. I need to be able to import this folder in Python using a fixed name, say 'project'. I know I can add this folder to sys.path to enable import lookup, but is there a way to replace 'a40bd22344' with 'project'? Maybe some clever hacks in __init__.py? Added: It needs to be global - that is, other scripts loading 'project' via the standard: import project Have to work properly, loading a40bd22344 instead. A: Here's one way to do it, without touching sys.path, using the imp module in Python: import imp f, filename, desc = imp.find_module('a40bd22344', ['/home/user/temp/']) project = imp.load_module('a40bd22344', f, filename, desc) project.some_func() Here is a link to some good documentation on the imp module: imp — Access the import internals A: You first import it with import: >>> __import__('temp/a40bd22344') <module 'temp/a40bd22344' from 'temp/a40bd22344/__init__.py'> Then you make sure that this module gets known to Python as project: >>> import sys >>> sys.modules['project'] = sys.modules.pop('temp/a40bd22344') After this, anything importing project in the current Python session will get the original module >>> import project >>> project <module 'temp/a40bd22344' from 'temp/a40bd22344/__init__.py'> This will work also for sub-modules: if you have a foobar.py in the same location you'll get >>> import project.foobar >>> project.foobar <module 'project.foobar' from 'temp/a40bd22344/foobar.py'> Addendum. Here's what I'm running: >>> print sys.version 2.5.2 (r252:60911, Jul 31 2008, 17:28:52) [GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] A: Sure, project = __import__('a40bd22344') after sys.path is set properly will just work. Suppose you want to do it in a function taking the full path as an argument and setting the global import of project properly (as well as magically making import project work afterwards in other modules). Piece of cake: def weirdimport(fullpath): global project import os import sys sys.path.append(os.path.dirname(fullpath)) try: project = __import__(os.path.basename(fullpath)) sys.modules['project'] = project finally: del sys.path[-1] this also leaves sys.path as it found it.
Override namespace in Python
Say there is a folder, '/home/user/temp/a40bd22344'. The name is completely random and changes in every iteration. I need to be able to import this folder in Python using a fixed name, say 'project'. I know I can add this folder to sys.path to enable import lookup, but is there a way to replace 'a40bd22344' with 'project'? Maybe some clever hacks in __init__.py? Added: It needs to be global - that is, other scripts loading 'project' via the standard: import project Have to work properly, loading a40bd22344 instead.
[ "Here's one way to do it, without touching sys.path, using the imp module in Python:\nimport imp\n\nf, filename, desc = imp.find_module('a40bd22344', ['/home/user/temp/'])\nproject = imp.load_module('a40bd22344', f, filename, desc)\n\nproject.some_func()\n\nHere is a link to some good documentation on the imp module:\n\nimp — Access the import internals\n\n", "You first import it with import:\n>>> __import__('temp/a40bd22344')\n<module 'temp/a40bd22344' from 'temp/a40bd22344/__init__.py'>\n\nThen you make sure that this module gets known to Python as project:\n>>> import sys\n>>> sys.modules['project'] = sys.modules.pop('temp/a40bd22344')\n\nAfter this, anything importing project in the current Python session will get the original module\n>>> import project\n>>> project\n<module 'temp/a40bd22344' from 'temp/a40bd22344/__init__.py'>\n\nThis will work also for sub-modules: if you have a foobar.py in the same location you'll get\n>>> import project.foobar\n>>> project.foobar\n<module 'project.foobar' from 'temp/a40bd22344/foobar.py'>\n\nAddendum. Here's what I'm running:\n>>> print sys.version\n2.5.2 (r252:60911, Jul 31 2008, 17:28:52) \n[GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)]\n\n", "Sure, project = __import__('a40bd22344') after sys.path is set properly will just work.\nSuppose you want to do it in a function taking the full path as an argument and setting the global import of project properly (as well as magically making import project work afterwards in other modules). Piece of cake:\ndef weirdimport(fullpath):\n global project\n\n import os\n import sys\n sys.path.append(os.path.dirname(fullpath))\n try:\n project = __import__(os.path.basename(fullpath))\n sys.modules['project'] = project\n finally:\n del sys.path[-1]\n\nthis also leaves sys.path as it found it.\n" ]
[ 25, 19, 17 ]
[]
[]
[ "python", "python_import" ]
stackoverflow_0001096216_python_python_import.txt
Q: How do I upload pickled data to django FileField? I would like to store large dataset generated in Python in a Django model. My idea was to pickle the data to a string and upload it to FileField of my model. My django model is: #models.py from django.db import models class Data(models.Model): label = models.CharField(max_length=30) file = models.FileField(upload_to="data") In my Python program I would like to do the following: import random, pickle data_entry = Data(label="somedata") somedata = [random.random() for i in range(10000)] # Next line does NOT work #data_entry.file.save(filename, pickle.dumps(somedata)) How should I modify the last line to store somedata in file preserving the paths defined with upload_to parameter? A: Based on the answers to the questions I came up with the following solution: from django.core.files.base import ContentFile import pickle content = pickle.dumps(somedata) fid = ContentFile(content) data_entry.file.save(filename, fid) fid.close() All of it is done on the server side and users are NOT allowed to upload pickles. I tested it and it works all fine, but I am open to any suggestions. A: In your database the file attribute is just a path to the file. So, since you are not doing an actual upload you need to store the file on the disk and then save the path in the database. f = open(filename, 'w') pickle.dump(somedata, f) f.close() data_entry.file=filename data_entry.save() A: Might you not be better off storing your data in a text field? It's not a file upload, after all. A: I've never done this, but based on reading a bit of the relevant code, I'd start by looking into creating an instance of django.core.files.base.ContentFile and assigning that as the value of the field. A: NOTE: See other answers and comments below - old info and broken links removed (can't delete a once-accepted answer). Marty Alchin has a section on this in chapter 3 of Pro Django, review here.
How do I upload pickled data to django FileField?
I would like to store a large dataset generated in Python in a Django model. My idea was to pickle the data to a string and upload it to FileField of my model. My django model is: #models.py from django.db import models class Data(models.Model): label = models.CharField(max_length=30) file = models.FileField(upload_to="data") In my Python program I would like to do the following: import random, pickle data_entry = Data(label="somedata") somedata = [random.random() for i in range(10000)] # Next line does NOT work #data_entry.file.save(filename, pickle.dumps(somedata)) How should I modify the last line to store somedata in file preserving the paths defined with upload_to parameter?
[ "Based on the answers to the questions I came up with the following solution:\nfrom django.core.files.base import ContentFile\nimport pickle\n\ncontent = pickle.dumps(somedata)\nfid = ContentFile(content)\ndata_entry.file.save(filename, fid)\nfid.close()\n\nAll of it is done on the server side and and users are NOT allowed to upload pickles. I tested it and it works all fine, but I am open to any suggestions.\n", "In you database the file attribute is just a path to the file. So, since you are not doing an actual upload you need to store the file on the disk and then save the path in database.\nf = open(filename, 'w')\npickle.dump(somedata, f)\nf.close()\ndata_entry.file=filename\ndata_entry.save()\n\n", "Might you not be better off storing your data in a text field? It's not a file upload, after all.\n", "I've never done this, but based on reading a bit of the relevant code, I'd start by looking into creating an instance of django.core.files.base.ContentFile and assigning that as the value of the field.\n", "NOTE: See other answers and comments below - old info and broken links removed (can't delete a once-accepted answer).\nMarty Alchin has a section on this in chapter 3 of Pro Django, review here.\n" ]
[ 10, 1, 0, 0, -3 ]
[]
[]
[ "django", "file", "pickle", "python", "upload" ]
stackoverflow_0000847904_django_file_pickle_python_upload.txt
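A short round-trip sketch of the ContentFile approach from the accepted answer, including reading the pickle back out of the field later; it assumes the Data model exactly as defined in the question:

import pickle
from django.core.files.base import ContentFile

entry = Data(label="somedata")
somedata = [0.25, 0.5, 0.75]
# Saving: wrap the pickled bytes in a ContentFile, as in the accepted answer.
entry.file.save("somedata.pkl", ContentFile(pickle.dumps(somedata)))

# Loading it back later:
entry.file.open('rb')
restored = pickle.loads(entry.file.read())
entry.file.close()
assert restored == somedata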
Q: Programmatic control of python optimization? I've been playing with pyglet. It's very nice. However, if I run my code, which is in an executable file (call it game.py) prefixed with the usual #!/usr/bin/env python by doing ./game.py then it's a bit clunky. But if I run it with python -O ./game.py or PYTHONOPTIMIZE=1 ./game.py then it's super-smooth. I don't care much why it runs slow without optimization; pyglet's documentation mentions that optimizing disables numerous asserts and also OpenGL's error checking, and I'm happy to leave it at that. My question is: how do people distributing Python code make sure the end users (with zero interest in debugging or modifying the code) run the optimized version of the code. Surely there's some better way than just telling people to make sure they use optimization in the release notes (which they probably won't read anyway) ? On Linux I can easily provide a ./game script to run the file for end users: #!/bin/sh PYTHONOPTIMIZE=1 ./game.py $* but that's not very cross-platform. I have an idea I ought to be able to change the #! line to #!/usr/bin/env PYTHONOPTIMIZE=1 python or #!/usr/bin/env python -O but those don't seem to work as expected, and I'm not sure what they'd do on Windows. Is there some way of controlling optimization from within the code I'm unaware of ? Something like: import runtime runtime.optimize(True) What's considered best-practice in this area by people shipping multi-platform python code ? A: "On Linux I can easily provide a ./game script to run the file for end users:" Correct. "but that's not very cross-platform." Half-correct. There are exactly two shell languages that matter. Standard Linux "sh" and Non-standard Windows "bat" (a/k/a cmd.exe) and that's all there is nowadays. [When I was a kid, there was Open VMS DCL and Data General's weird shell language and RSX-11 and all kinds of great stuff. Thank God for the Posix standard.] game.sh python -O game.py game.bat python -O game.py Interestingly the files are the same, only the extension (and the file format) had to be changed to make the various OS's happy. If you want true one-size-fits-all cross platform, you have to remember that Python is a shell language. This kind of thing works, also. game-startup.py import subprocess subprocess.Popen( "python -O game.py" ) A: Answering your question (as opposed to fixing your problem, which S. Lott did perfectly), I think a lot of the time people who distribute Python code don't worry about this, because it's rare for the optimisation flag to have any effect. I believe Pyglet is the only exception I've heard of in years of using Python. Quoting from the Python docs, "The optimizer currently doesn’t help much; it only removes assert statements".
Programmatic control of python optimization?
I've been playing with pyglet. It's very nice. However, if I run my code, which is in an executable file (call it game.py) prefixed with the usual #!/usr/bin/env python by doing ./game.py then it's a bit clunky. But if I run it with python -O ./game.py or PYTHONOPTIMIZE=1 ./game.py then it's super-smooth. I don't care much why it runs slow without optimization; pyglet's documentation mentions that optimizing disables numerous asserts and also OpenGL's error checking, and I'm happy to leave it at that. My question is: how do people distributing Python code make sure the end users (with zero interest in debugging or modifying the code) run the optimized version of the code. Surely there's some better way than just telling people to make sure they use optimization in the release notes (which they probably won't read anyway) ? On Linux I can easily provide a ./game script to run the file for end users: #!/bin/sh PYTHONOPTIMIZE=1 ./game.py $* but that's not very cross-platform. I have an idea I ought to be able to change the #! line to #!/usr/bin/env PYTHONOPTIMIZE=1 python or #!/usr/bin/env python -O but those don't seem to work as expected, and I'm not sure what they'd do on Windows. Is there some way of controlling optimization from within the code I'm unaware of ? Something like: import runtime runtime.optimize(True) What's considered best-practice in this area by people shipping multi-platform python code ?
[ "\"On Linux I can easily provide a ./game script to run the file for end users:\"\nCorrect.\n\"but that's not very cross-platform.\"\nHalf-correct. There are exactly two shell languages that matter. Standard Linux \"sh\" and Non-standard Windows \"bat\" (a/k/a cmd.exe) and that's all there is nowadays. [When I was a kid, there was Open VMS DCL and Data General's weird shell language and RSX-11 and all kinds of great stuff. Thank God for the Posix standard.]\ngame.sh\npython -O game.py\n\ngame.bat\npython -O game.py\n\nInterestingly the files are the same, only the extension (and the file format) had to be changed to make the various OS's happy.\nIf you want true one-size-fits-all cross platform, you have to remember that Python is a shell language. This kind of thing works, also.\ngame-startup.py\nimport subprocess\nsubprocess.Popen( \"python -O game.py\" )\n\n", "Answering your question (as opposing to fixing your problem, which S. Lott did perfectly), I think a lot of the time people who distribute Python code don't worry about this, because it's rare for the optimisation flag to have any effect. I believe Pyglet is the only exception I've heard of in years of using Python. Quoting from the Python docs, \"The optimizer currently doesn’t help much; it only removes assert statements\".\n" ]
[ 14, 2 ]
[]
[]
[ "multiplatform", "optimization", "pyglet", "python" ]
stackoverflow_0001092243_multiplatform_optimization_pyglet_python.txt
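If a pair of launcher scripts feels too clunky, a related trick (hedged: it needs no launcher, but it replaces the running process, and the sys.argv handling here is simplified) is to have game.py re-exec itself under -O when it detects that optimization is off:

import os
import sys

if __debug__:  # True unless the interpreter was started with -O
    # Replace this process with an optimized interpreter running the same script.
    os.execv(sys.executable, [sys.executable, '-O'] + sys.argv)

# From here on, asserts (and pyglet's debug checks) are stripped.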
Q: Combining two JSON objects into one I have two JSON objects. One is a Python array which is converted using json.dumps() and the other contains records from the database and is serialized using the json serializer. I want to combine them into a single JSON object. For eg: obj1 = ["a1", "a2", "a3"] obj2 = [{ "pk": "e1", "model": "AB.abc", "fields": { "e_desc": "abcd" } }, { "pk": "e1", "model": "AB.abc", "fields": { "e_desc": "hij" } } ] I want to merge them into a single object as below: finalObj = { obj1: ["a1", "a2", "a3"], obj2: [{ "pk": "e1", "model": "AB.abc", "fields": { "e_desc": "abcd" } }, { "pk": "e1", "model": "AB.abc", "fields": { "e_desc": "hij" } } ] } How can I do this? A: You can't do it once they're in JSON format - JSON is just text. You need to combine them in Python first: data = { 'obj1' : obj1, 'obj2' : obj2 } json.dumps(data) A: Not sure if I'm missing something, but I think this works (tested in python 2.5) with the output you specify: import simplejson finalObj = { 'obj1': obj1, 'obj2': obj2 } simplejson.dumps(finalObj) A: You have two techniques. The list version suffers from the limitation that the order matters. However, the JSON is slightly simpler-looking. The dictionary version has nested data, which looks more complex. data = { 'obj1' : obj1, 'obj2' : obj2 } json.dumps(data,indent=2) data = [ obj1, obj2 ] json.dumps(data,indent=2)
Combining two JSON objects into one
I have two JSON objects. One is a Python array which is converted using json.dumps() and the other contains records from the database and is serialized using the json serializer. I want to combine them into a single JSON object. For eg: obj1 = ["a1", "a2", "a3"] obj2 = [{ "pk": "e1", "model": "AB.abc", "fields": { "e_desc": "abcd" } }, { "pk": "e1", "model": "AB.abc", "fields": { "e_desc": "hij" } } ] I want to merge them into a single object as below: finalObj = { obj1: ["a1", "a2", "a3"], obj2: [{ "pk": "e1", "model": "AB.abc", "fields": { "e_desc": "abcd" } }, { "pk": "e1", "model": "AB.abc", "fields": { "e_desc": "hij" } } ] } How can I do this?
[ "You can't do it once they're in JSON format - JSON is just text. You need to combine them in Python first:\ndata = { 'obj1' : obj1, 'obj2' : obj2 }\njson.dumps(data)\n\n", "Not sure if I'm missing something, but I think this works (tested in python 2.5) with the output you specify:\nimport simplejson\n\nfinalObj = { 'obj1': obj1, 'obj2': obj2 }\nsimplejson.dumps(finalObj)\n\n", "You have two techniques. The list version suffers from the limitation that the order matters. However, the JSON is slightly simpler-looking. The dictionary version has nested data, which looks more complex.\ndata = { 'obj1' : obj1, 'obj2' : obj2 }\njson.dumps(data,indent=2)\n\n\ndata = [ obj1, obj2 ]\njson.dumps(data,indent=2)\n\n" ]
[ 22, 6, 0 ]
[]
[]
[ "json", "python" ]
stackoverflow_0001096554_json_python.txt
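One wrinkle worth noting with the dictionary approach: if obj2 arrives already serialized (a JSON string from the database serializer, as the question describes), decode it with json.loads first, otherwise it will be embedded as one big quoted string. A small sketch:

import json  # or 'import simplejson as json' on pre-2.6 Pythons

obj1 = ["a1", "a2", "a3"]
obj2_json = '[{"pk": "e1", "model": "AB.abc", "fields": {"e_desc": "abcd"}}]'

final = json.dumps({
    'obj1': obj1,
    'obj2': json.loads(obj2_json),  # decode so it nests as real JSON
})
print final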
Q: Display triangles in 3D using Python Disclaimer: The context is a project I'm working on as part of my Master's degree. I guess it qualifies as homework. Introduction (feel free to skip to the bottom line) Curved 3D surfaces are commonly displayed as a large set of very small triangles. Each triangle has the following properties: 3 corners uniform color Such that when they're all displayed together, you get the illusion of a smooth surface. This is similar to the way pixels of a uniform color are used to get the illusion of a smooth image. My project involves generating and displaying all the triangles that make up a given surface. Assuming that I have code that generates a set of triangles, how can I display them? The code that generates the set of triangles is in Python. I'd prefer to use Python to display the triangles, but I'm not picky. Bottom line How can I display a triangle in 3D using Python, when the input is the coordinates of the 3 corners of the triangle. A: Well, that depends very much on things like which OS you are using, whether you need to be portable, etc. But a generic answer is probably to use something like OpenGL, which is a portable API, and has Python bindings. http://pyopengl.sourceforge.net/ On Windows you can use Direct3D, but that isn't particularly portable, and I wouldn't be surprised if there is something special for OS X too. A: Here's an example integrating OpenGL with wxPython. http://code.activestate.com/recipes/325392/ A: EDIT: I misread your question. This is only for 2D: You can use a wx.Canvas of wxpython or implement your application using pygame.
Display triangles in 3D using Python
Disclaimer: The context is a project I'm working on as part of my Master's degree. I guess it qualifies as homework. Introduction (feel free to skip to the bottom line) Curved 3D surfaces are commonly displayed as a large set of very small triangles. Each triangle has the following properties: 3 corners uniform color Such that when they're all displayed together, you get the illusion of a smooth surface. This is similar to the way pixels of a uniform color are used to get the illusion of a smooth image. My project involves generating and displaying all the triangles that make up a given surface. Assuming that I have code that generates a set of triangles, how can I display them? The code that generates the set of triangles is in Python. I'd prefer to use Python to display the triangles, but I'm not picky. Bottom line How can I display a triangle in 3D using Python, when the input is the coordinates of the 3 corners of the triangle.
[ "Well, that depends, very much, like which OS you are using, do you need to be portable, etc.\nBut a generic answer is probably to use something like OpenGL, which is a portable API, and has Python bindings. http://pyopengl.sourceforge.net/\nOn Windows you can use Direct3D, but that isn't particularily portable, and I wouldn't be surprised if there is something special for OS X too.\n", "Here's an example integrating OpenGL with wxPython.\nhttp://code.activestate.com/recipes/325392/\n", "EDIT: I misread your question. This is only for 2D:\nYou can use a wx.Canvas of wxpython or implement your application using pygame. \n" ]
[ 7, 2, 0 ]
[]
[]
[ "3d", "geometry", "python" ]
stackoverflow_0001096476_3d_geometry_python.txt
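To make the OpenGL suggestion concrete, a rough immediate-mode PyOpenGL sketch of just the drawing step; it assumes a GL context already exists (created via pygame, GLUT, or wxPython) and that each triangle is a pair of (three corner tuples, one RGB color). Immediate mode is slow for large meshes but fine for a first version:

from OpenGL.GL import glBegin, glEnd, glColor3f, glVertex3f, GL_TRIANGLES

def draw_triangles(triangles):
    # triangles: iterable of (corners, color); corners is three (x, y, z) tuples
    glBegin(GL_TRIANGLES)
    for corners, (r, g, b) in triangles:
        glColor3f(r, g, b)  # one uniform color per triangle
        for x, y, z in corners:
            glVertex3f(x, y, z)
    glEnd()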
Q: Pylons/Routes rewrite POST or GET to fancy URL The behavior I propose: A user loads up my "search" page, www.site.com/search, types their query into a form, clicks submit, and then ends up at www.site.com/search/the+query instead of www.site.com/search?q=the+query. I've gone through a lot of the Pylons documentation already and just finished reading the Routes documentation and am wondering if this can/should happen at the Routes layer. I have already set up my application to perform a search when given www.site.com/search/the+query, but can not figure out how to send a form to this destination. Or is this something that should happen inside a controller with a redirect_to()? Or somewhere else? Followup: This is less an actual "set in stone" desire right now and more a curiosity for brainstorming future features. I'm designing an application which uses a Wikipedia dump and have observed that when a user performs a search on Wikipedia and the search isn't too ambiguous it redirects directly to an article link: en.wikipedia.org/wiki/Apple. It is actually performing an in-between HTTP 302 redirect step, and I am just curious if there's a more elegant/cute way of doing this in Pylons. A: HTML forms are designed to go to a specific URL with a query string (?q=) or an equivalent body in a POST -- either you write clever and subtle Javascript to intercept the form submission and rewrite it in your preferred weird way, or use redirect_to (and the latter will take some doing). But why do you need such weird behavior rather than just following the standard?! Please explain your use case in terms of application-level needs...! A: You can send whatever content you want for any URL, but if you want a particular URL to appear in the browser's address bar, you have to use a redirect. This is independent of whether you use Pylons, Django or Rails on the server side. In the handling for /search (whether POST or GET), one would normally run the query in the back end, and if there was only one search result (or one overwhelmingly relevant result) you would redirect to that result, otherwise to a page showing links to the top N results. That's just normal practice, AFAIK.
Pylons/Routes rewrite POST or GET to fancy URL
The behavior I propose: A user loads up my "search" page, www.site.com/search, types their query into a form, clicks submit, and then ends up at www.site.com/search/the+query instead of www.site.com/search?q=the+query. I've gone through a lot of the Pylons documentation already and just finished reading the Routes documentation and am wondering if this can/should happen at the Routes layer. I have already set up my application to perform a search when given www.site.com/search/the+query, but can not figure out how to send a form to this destination. Or is this something that should happen inside a controller with a redirect_to()? Or somewhere else? Followup: This is less an actual "set in stone" desire right now and more a curiosity for brainstorming future features. I'm designing an application which uses a Wikipedia dump and have observed that when a user performs a search on Wikipedia and the search isn't too ambiguous it redirects directly to an article link: en.wikipedia.org/wiki/Apple. It is actually performing an in-between HTTP 302 redirect step, and I am just curious if there's a more elegant/cute way of doing this in Pylons.
[ "HTML forms are designed to go to a specific URL with a query string (?q=) or an equivalent body in a POST -- either you write clever and subtle Javascript to intercept the form submission and rewrite it in your preferred weird way, or use redirect_to (and the latter will take some doing).\nBut why do you need such weird behavior rather than just following the standard?! Please explain your use case in terms of application-level needs...!\n", "You can send whatever content you want for any URL, but if you want a particular URL to appear in the browser's address bar, you have to use a redirect. This is independent of whether you use Pylons, Django or Rails on the server side.\nIn the handling for /search (whether POST or GET), one would normally run the query in the back end, and if there was only one search result (or one overwhelmingly relevant result) you would redirect to that result, otherwise to a page showing links to the top N results. That's just normal practice, AFAIK.\n" ]
[ 2, 2 ]
[]
[]
[ "pylons", "python", "routes" ]
stackoverflow_0001096300_pylons_python_routes.txt
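For concreteness, a hedged sketch of the redirect pattern both answers describe, using the redirect_to helper the question itself mentions; import paths and controller wiring vary by Pylons version, so treat the details as illustrative rather than canonical:

import urllib
from pylons import request
from pylons.controllers.util import redirect_to  # location varies by version

class SearchController(BaseController):  # BaseController per Pylons convention
    def submit(self):
        # The form targets the plain URL, /search?q=the+query ...
        query = request.params.get('q', '').strip()
        # ... and we 302 to the fancy URL, /search/the+query
        redirect_to('/search/' + urllib.quote_plus(query))

    def results(self, query):
        pass  # /search/the+query routes here; run the real search on query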
Q: Reallocating list in python Ok this is my problem. I am trying something like this: for i in big_list: del glist[:] for j in range(0, val): glist.append(blah[j]) The idea is to reset the list and reuse it for the next set of data points. The problem is, for some reason, if the first list has 3 points, glist[0] glist[1] glist[2] The next list will continue from index 3 and store the last 3 elements in those indexes glist[0] = 4th elem of new list glist[1] = 5th elem of new list glist[2] = 6th elem of new list glist[3] = 1st elem of new list glist[4] = 2nd elem of new list glist[5] = 3rd elem of new list I'm sure it is an issue with allocated space. But how can I achieve this del g_list[:] so the result is, glist[0] = 1st elem of new list glist[1] = 2nd elem of new list glist[2] = 3rd elem of new list glist[3] = 4th elem of new list glist[4] = 5th elem of new list glist[5] = 6th elem of new list Allocating variable from within loop is not an option. Any ideas? A: Change del glist[:] to glist = []. You don't need to "reuse" or "reallocate" in Python, the garbage collector will take care of that for you. Also, you use 'i' as the loop variable in both loops. That's going to confuse you sooner or later. :) A: you can try glist=[] A: del glist[:] works fine for clearing a list. You need to show us your exact code. As shown below, the behavior you're describing does not happen. The append after the del a[:] puts the item at index 0. >>> a = [1,2,3] >>> del a[:] >>> a [] >>> a.append(4) >>> a [4]
Reallocating list in python
Ok this is my problem. I am trying something like this: for i in big_list: del glist[:] for j in range(0, val): glist.append(blah[j]) The idea is to reset the list and reuse it for the next set of data points. The problem is, for some reason, if the first list has 3 points, glist[0] glist[1] glist[2] The next list will continue from index 3 and store the last 3 elements in those indexes glist[0] = 4th elem of new list glist[1] = 5th elem of new list glist[2] = 6th elem of new list glist[3] = 1st elem of new list glist[4] = 2nd elem of new list glist[5] = 3rd elem of new list I'm sure it is an issue with allocated space. But how can I achieve this del g_list[:] so the result is, glist[0] = 1st elem of new list glist[1] = 2nd elem of new list glist[2] = 3rd elem of new list glist[3] = 4th elem of new list glist[4] = 5th elem of new list glist[5] = 6th elem of new list Allocating variable from within loop is not an option. Any ideas?
[ "Change del glist[:] to glist = []. You don't need to \"reuse\" or \"reallocate\" in Python, the garbagecollector will take care of that for you.\nAlso, you use 'i' as the loop variable in both loops. That's going to confuse you sooner or later. :)\n", "you can try \nglist=[]\n\n", "del glist[:] works fine for clearing a list. You need to show us your exact code. As shown below, the behavior you're describing does not happen. The append after the del a[:] puts the item at index 0.\n>>> a = [1,2,3]\n>>> del a[:]\n>>> a\n[]\n>>> a.append(4)\n>>> a\n[4]\n\n" ]
[ 5, 1, 1 ]
[]
[]
[ "collections", "list", "memory_management", "python" ]
stackoverflow_0001098202_collections_list_memory_management_python.txt
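The aliasing point behind the first answer, shown concretely: del glist[:] empties the existing list object in place, so every name bound to it sees the change, while glist = [] rebinds the name and leaves old references untouched:

glist = [1, 2, 3]
alias = glist
del glist[:]   # clears the list object itself
print alias    # prints []: the alias was emptied too

glist = [1, 2, 3]
alias = glist
glist = []     # rebinds the name only
print alias    # prints [1, 2, 3]: the alias still holds the old list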
Q: Looking for elegant glob-like DNA string expansion I'm trying to make a glob-like expansion of a set of DNA strings that have multiple possible bases. The base of my DNA strings contains the letters A, C, G, and T. However, I can have special characters like M which could be an A or a C. For example, say I have the string: ATMM I would like to take this string as input and output the four possible matching strings: ATAA ATAC ATCA ATCC Rather than brute force a solution, I feel like there must be some elegant Python/Perl/Regular Expression trick to do this. Thank you for any advice. Edit, thanks cortex for the product operator. This is my solution: Still a Python newbie, so I bet there's a better way to handle each dictionary key than another for loop. Any suggestions would be great. import sys from itertools import product baseDict = dict(M=['A','C'],R=['A','G'],W=['A','T'],S=['C','G'], Y=['C','T'],K=['G','T'],V=['A','C','G'], H=['A','C','T'],D=['A','G','T'],B=['C','G','T']) def glob(str): strings = [str] ## this loop visits every possible base in the dictionary ## probably a cleaner way to do it for base in baseDict: oldstrings = strings strings = [] for string in oldstrings: strings += map("".join,product(*[baseDict[base] if x == base else [x] for x in string])) return strings for line in sys.stdin.readlines(): line = line.rstrip('\n') permutations = glob(line) for x in permutations: print x A: Agree with other posters that it seems like a strange thing to want to do. Of course, if you really want to, there is (as always) an elegant way to do it in Python (2.6+): from itertools import product map("".join, product(*[['A', 'C'] if x == "M" else [x] for x in "GMTTMCA"])) Full solution with input handling: import sys from itertools import product base_globs = {"M":['A','C'], "R":['A','G'], "W":['A','T'], "S":['C','G'], "Y":['C','T'], "K":['G','T'], "V":['A','C','G'], "H":['A','C','T'], "D":['A','G','T'], "B":['C','G','T'], } def base_glob(glob_sequence): production_sequence = [base_globs.get(base, [base]) for base in glob_sequence] return map("".join, product(*production_sequence)) for line in sys.stdin.readlines(): productions = base_glob(line.strip()) print "\n".join(productions) A: You probably could do something like this in python using the yield operator def glob(str): if str=='': yield '' return if str[0]!='M': for tail in glob(str[1:]): yield str[0] + tail else: for c in ['A','G','C','T']: for tail in glob(str[1:]): yield c + tail return EDIT: As correctly pointed out I was making a few mistakes. Here is a version which I tried out and works. A: This isn't really an "expansion" problem and it's almost certainly not doable with any sensible regular expression. I believe what you're looking for is "how to generate permutations". A: You could for example do this recursively. Pseudo-code: printSequences(sequence s) switch "first special character in sequence" case ... case M: s1 = s, but first M replaced with A printSequences(s1) s2 = s, but first M replaced with C printSequences(s2) case none: print s; A: Regexps match strings, they're not intended to be turned into every string they might match. Also, you're looking at a lot of strings being output from this - for instance: MMMMMMMMMMMMMMMM (16 M's) produces 65,536 16 character strings - and I'm guessing that DNA sequences are usually longer than that. Arguably any solution to this is pretty much 'brute force' from a computer science perspective, because your algorithm is O(2^n) on the original string length. There's actually quite a lot of work to be done. Why do you want to produce all the combinations? What are you going to do with them? (If you're thinking to produce every string possibility and then look for it in a large DNA sequence, then there are much better ways of doing that.)
Looking for elegant glob-like DNA string expansion
I'm trying to make a glob-like expansion of a set of DNA strings that have multiple possible bases. The base of my DNA strings contains the letters A, C, G, and T. However, I can have special characters like M which could be an A or a C. For example, say I have the string: ATMM I would like to take this string as input and output the four possible matching strings: ATAA ATAC ATCA ATCC Rather than brute force a solution, I feel like there must be some elegant Python/Perl/Regular Expression trick to do this. Thank you for any advice. Edit, thanks cortex for the product operator. This is my solution: Still a Python newbie, so I bet there's a better way to handle each dictionary key than another for loop. Any suggestions would be great. import sys from itertools import product baseDict = dict(M=['A','C'],R=['A','G'],W=['A','T'],S=['C','G'], Y=['C','T'],K=['G','T'],V=['A','C','G'], H=['A','C','T'],D=['A','G','T'],B=['C','G','T']) def glob(str): strings = [str] ## this loop visits every possible base in the dictionary ## probably a cleaner way to do it for base in baseDict: oldstrings = strings strings = [] for string in oldstrings: strings += map("".join,product(*[baseDict[base] if x == base else [x] for x in string])) return strings for line in sys.stdin.readlines(): line = line.rstrip('\n') permutations = glob(line) for x in permutations: print x
[ "Agree with other posters that it seems like a strange thing to want to do. Of course, if you really want to, there is (as always) an elegant way to do it in Python (2.6+):\nfrom itertools import product\nmap(\"\".join, product(*[['A', 'C'] if x == \"M\" else [x] for x in \"GMTTMCA\"]))\n\nFull solution with input handling:\nimport sys\nfrom itertools import product\n\nbase_globs = {\"M\":['A','C'], \"R\":['A','G'], \"W\":['A','T'],\n \"S\":['C','G'], \"Y\":['C','T'], \"K\":['G','T'],\n\n \"V\":['A','C','G'], \"H\":['A','C','T'],\n \"D\":['A','G','T'], \"B\":['C','G','T'],\n }\n\ndef base_glob(glob_sequence):\n production_sequence = [base_globs.get(base, [base]) for base in glob_sequence]\n return map(\"\".join, product(*production_sequence))\n\nfor line in sys.stdin.readlines():\n productions = base_glob(line.strip())\n print \"\\n\".join(productions)\n\n", "You probably could do something like this in python using the yield operator\ndef glob(str):\n if str=='': \n yield ''\n return \n\n if str[0]!='M':\n for tail in glob(str[1:]): \n yield str[0] + tail \n else:\n for c in ['A','G','C','T']:\n for tail in glob(str[1:]):\n yield c + tail \n return\n\nEDIT: As correctly pointed out I was making a few mistakes. Here is a version which I tried out and works.\n", "This isn't really an \"expansion\" problem and it's almost certainly not doable with any sensible regular expression.\nI believe what you're looking for is \"how to generate permutations\".\n", "You could for example do this recursively. Pseudo-code:\nprintSequences(sequence s)\n switch \"first special character in sequence\"\n case ...\n case M:\n s1 = s, but first M replaced with A\n printSequences(s1)\n s2 = s, but first M replaced with C\n printSequences(s2)\n case none:\n print s;\n\n", "Regexps match strings, they're not intended to be turned into every string they might match.\nAlso, you're looking at a lot of strings being output from this - for instance:\nMMMMMMMMMMMMMMMM (16 M's)\n\nproduces 65,536 16 character strings - and I'm guessing that DNA sequences are usually longer than that. \nArguably any solution to this is pretty much 'brute force' from a computer science perspective, because your algorithm is O(2^n) on the original string length. There's actually quite a lot of work to be done.\nWhy do you want to produce all the combinations? What are you going to do with them? (If you're thinking to produce every string possibility and then look for it in a large DNA sequence, then there are much better ways of doing that.)\n" ]
[ 2, 1, 0, 0, 0 ]
[]
[]
[ "dna_sequence", "glob", "permutation", "python" ]
stackoverflow_0001098461_dna_sequence_glob_permutation_python.txt
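The nested bookkeeping in the question's own solution can be collapsed into one pass per input string; a sketch of the lazy variant hinted at in the accepted answer, yielding one expansion at a time instead of building the whole (potentially 2^n-sized) list in memory:

from itertools import product

base_dict = {'M': 'AC', 'R': 'AG', 'W': 'AT', 'S': 'CG', 'Y': 'CT',
             'K': 'GT', 'V': 'ACG', 'H': 'ACT', 'D': 'AGT', 'B': 'CGT'}

def expand(seq):
    # One choice string per position; plain bases expand to themselves.
    choices = [base_dict.get(base, base) for base in seq]
    for combo in product(*choices):  # lazy: one tuple of characters at a time
        yield ''.join(combo)

for s in expand('ATMM'):
    print s  # ATAA, ATAC, ATCA, ATCC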
Q: Extract only numbers from data in Jython Here is my problem. I'm working on a Jython program and I have to extract numbers from a PyJavaInstance: [{string1="foo", xxx1, xxx2, ..., xxxN, string2="bar"}] (where xxx are the floating point numbers). My question is how can I extract the numbers and put them in a simpler structure like a python list. Thank you in advance. A: A PyJavaInstance is a Jython wrapper around a Java instance; how you extract numbers from it depends on what it is. If you need to get a bunch of stuff - some of which are strings and some of which are floats, then: float_list = [] for item in instance_properties: try: float_list.append(float(item)) except ValueError: pass A: can you iterate and check whether an item is float? The method you're looking for is isinstance. I hope it's implemented in Jython. A: Thank you Vinay. It's also the kind of solution I've just found: new_inst=[] for element in instance: try: float(element) new_inst.append(float(element)) except ValueError: del(element) @SilentGhost: Good suggestion. The issue was to find what method could determine if each element I iterate is a float number.
Extract only numbers from data in Jython
Here is my problem. I'm working on a Jython program and I have to extract numbers from a PyJavaInstance: [{string1="foo", xxx1, xxx2, ..., xxxN, string2="bar"}] (where xxx are the floating point numbers). My question is how can I extract the numbers and put them in a simpler structure like a python list. Thank you in advance.
[ "A PyJavaInstance is a Jython wrapper around a Java instance; how you extract numbers from it depends on what it is. If you need to get a bunch of stuff - some of which are strings and some of which are floats, then:\nfloat_list = []\nfor item in instance_properties:\n try:\n float_list.append(float(item))\n except ValueError:\n pass\n\n", "can you iterate and check whether an item is float? The method you're looking for is isinstance. I hope it's implemented in Jython.\n", "Thank you Vinay. It's also the kind of solution I've just found:\n new_inst=[]\nfor element in instance:\n try:\n float(element)\n new_inst.append(float(element))\n except ValueError:\n del(element)\n\n@SilentGhost: Good suggestion. The issue was to find what method could determine if each element I iterate is a float number. \n" ]
[ 2, 1, 0 ]
[]
[]
[ "extract", "jython", "list", "python" ]
stackoverflow_0001098613_extract_jython_list_python.txt
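A compact version of the try/except idea running through the answers, usable as-is in Jython or CPython; the sample list here merely stands in for the mixed PyJavaInstance contents described in the question:

def extract_floats(items):
    result = []
    for item in items:
        try:
            result.append(float(item))  # numbers and numeric strings pass
        except (TypeError, ValueError):
            pass  # non-numeric values such as "foo" and "bar" are skipped
    return result

print extract_floats(["foo", 1.5, 2.25, "bar"])  # prints [1.5, 2.25]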
Q: Help me lambda-nize this To help me better understand lambda I wrote this short snippet that rotates and transforms a quad (I hope I got the math right). Now, I want to replace the three steps below with one liner lambdas, possibly in conjunction with map(). I'm using a vector class but hopefully, the functions are clear as to what they do. self.orientation = vector(1,0) self.orientation.rotate(90.0) #the four corners of a quad points = (vector(-1,-1),vector(1,-1),vector(1,1),vector(-1,1)) print points #apply rotation to points according to orientation rot_points = [] for i in points: rot_points.append(i.rotated(self.orientation.get_angle())) print rot_points #transform the point according to world position and scale real_points = [] for i in rot_points: real_points.append(self.pos+i*self.scale) print real_points return real_points A: You could use map, reduce, et al, but nowadays list comprehensions are the preferred way to do things in Python: rot_points = (i.rotated(self.orientation.get_angle()) for i in points) real_points = [self.pos+i*self.scale for i in rot_points] Notice how I used (parentheses) instead of [brackets] in the first line. That is called a generator expression. It allows rot_points to be constructed on the fly as the points are used in the second line rather than constructing all of the rot_points in memory first and then iterating through them. It could save some unnecessary memory usage, basically, if that's a concern. A: Note that you're unnecessarily calling get_angle() for each point, when really it's constant over the life of the loop. I'd try this: angle = self.orientation.get_angle() real_points = [self.pos+point.rotated(angle)*self.scale for point in points] I don't think it's a bad idea to create a helper function in this case either, since you're doing quite a bit to each point. A new function is more readable: angle = self.orientation.get_angle() def adjust_point(point): point = point.rotated(angle) point *= self.scale point += self.pos return point real_points = [adjust_point(p) for p in points]
Help me lambda-nize this
To help me better understand lambda I wrote this short snippet that rotates and transforms a quad (I hope I got the math right). Now, I want to replace the three steps below with one liner lambdas, possibly in conjunction with map(). I'm using a vector class but hopefully, the functions are clear as to what they do. self.orientation = vector(1,0) self.orientation.rotate(90.0) #the four corners of a quad points = (vector(-1,-1),vector(1,-1),vector(1,1),vector(-1,1)) print points #apply rotation to points according to orientation rot_points = [] for i in points: rot_points.append(i.rotated(self.orientation.get_angle())) print rot_points #transform the point according to world position and scale real_points = [] for i in rot_points: real_points.append(self.pos+i*self.scale) print real_points return real_points
[ "You could use map, reduce, et al, but nowadays list comprehensions are the preferred way to do things in Python:\nrot_points = (i.rotated(self.orientation.get_angle()) for i in points)\nreal_points = [self.pos+i*self.scale for i in rot_points]\n\nNotice how I used (parentheses) instead of [brackets] in the first line. That is called a generator expression. It allows rot_points to be constructed on the fly as the points are used in the second line rather than constructing all of the rot_points in memory first and then iterating through them. It could save some unnecessary memory usage, basically, if that's a concern.\n", "Note that you're unnecessarily calling get_angle() for each point, when really it's constant over the life of the loop. \nI'd try this:\nangle = self.orientation.get_angle()\nreal_points = [self.pos+point.rotated(angle)*self.scale for point in points]\n\nI don't think it's a bad idea to create a helper function in this case either, since you're doing quite a bit to each point. A new function is more readable:\nangle = self.orientation.get_angle()\n\ndef adjust_point(point):\n point = point.rotated(angle)\n point *= self.scale\n point += self.pos\n return point\n\nreal_points = [adjust_point(p) for p in point]\n\n" ]
[ 8, 0 ]
[]
[]
[ "lambda", "python" ]
stackoverflow_0001098841_lambda_python.txt
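Since the question asked about lambda and map specifically, the same hoisted-angle transform can also be written that way; this assumes the question's vector API (a rotated() method, plus the quad's pos and scale attributes):

angle = self.orientation.get_angle()  # computed once, not once per point
real_points = map(
    lambda p: self.pos + p.rotated(angle) * self.scale,
    points,
)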
Q: Replacing variable with function/class indicating dynamic value In my program, I draw some quads. I want to add the functionality for them to scale up, then down, then go back to being static (to draw attention). In the quads I have: self.scale = 10 Making scale change according to sin would be nice. But adding frequency, amplitude and logic to my already bloated quad class is something I take as a challenge to avoid. Something like this: class mysin: def __init__(self): self.tick = 0.0 self.freq = 1.0 self.ampl = 1.0 def update(self, amount): self.tick += amount def value(self): return math.sin(self.tick) That class would also add itself to the logic system (getting update calls every frame). I would then do: quad.scale = 10 # for static quad quad.scale = mysin() # for cool scaling quad The problem is that some calculations expect scale to hold a value. I could of course add another class where value() returns a (previously saved) constant value and adapt all the calculations. What I want to know now is... does this have a name, is it a valid technique? I read the wiki article on functional programming and this idea sprung to mind as a wacky implementation (although I'm not sure it qualifies as FP). I could very well have been driven mad by that article. Put me back in line fellow coders. A: The distinction between quad.scale= 10 and quad.scale= MySin() Is minor. Within the Quad class definition the "scale" attribute can be a property with proper getter and setter functions. class Quad( object ): @property def scale( self ): return self._scale @scale.setter def scale( self, value ): # handle numeric and MySin() values appropriately. Alternate version with the explicit property function (which I prefer). class Quad( object ): def get_scale( self ): return self._scale def set_scale( self, value ): # Handle numeric and MySin() values scale = property( get_scale, set_scale ) Any other class should NOT know or care what type of value scale has. If some client does this quad.scale * 2 Then you have design issues. You haven't properly encapsulated your design and Quad's client classes are too friendly with Quad. If you absolutely must do this -- because you can't write a method function of Quad to encapsulate this -- then you have to make MySin a proper numeric class so it can respond to quad.scale * 2 requests properly. A: It sounds like you want your quads to be dumb, and to have an animator class which is smart. So, here are some suggestions: Give the quads an attribute which indicates how to animate them (in addition to the scale and whatever else). In an Animator class, on a frame update, iterate over your quads and decide how to treat each one, based on that attribute. In the treatment of a quad, update the scale property of each dynamically changing quad to the appropriate float value. For static quads it never changes, for dynamic ones it changes based on any algorithm you like. One advantage of this approach is that it allows you to vary different attributes (scale, opacity, fill colour ... you name it) while keeping the logic in the animator. A: It's sort of like lazy-evaluation. It is definitely a valid technique when used properly, but I don't think this is the right place to use it. It makes the code kind of confusing. A: It sure is a valid technique, but a name? Having an object.value() instead of an int? Uhm. Object orientation? :) If the methods that use this value require an integer, and won't call any method on it, you could in fact create your own integer class, that behaves exactly like an integer, but changes the value.
Replacing variable with function/class indicating dynamic value
In my program, I draw some quads. I want to add the functionality for them to scale up, then down, then go back to being static (to draw attention). In the quads I have: self.scale = 10 Making scale change according to sin would be nice. But adding frequency, amplitude and logic to my already bloated quad class is something I take as a challenge to avoid. Something like this: class mysin: def __init__(self): self.tick = 0.0 self.freq = 1.0 self.ampl = 1.0 def update(self, amount): self.tick += amount def value(self): return math.sin(self.tick) That class would also add itself to the logic system (getting update calls every frame). I would then do: quad.scale = 10 # for static quad quad.scale = mysin() # for cool scaling quad The problem is that some calculations expect scale to hold a value. I could of course add another class where value() returns a (previously saved) constant value and adapt all the calculations. What I want to know now is... does this have a name, is it a valid technique? I read the wiki article on functional programming and this idea sprung to mind as a wacky implementation (although I'm not sure it qualifies as FP). I could very well have been driven mad by that article. Put me back in line fellow coders.
[ "The distinction between\nquad.scale= 10\n\nand\nquad.scale= MySin()\n\nIs minor. Within the Quad class definition the \"scale\" attribute can be a property with proper getter and setter functions.\nclass Quad( object ):\n @property\n def scale( self ):\n return self._scale\n\n @scale.setter\n def set_scale( self, value ):\n # handle numeric and MySin() values appropriately.\n\nAlternate version with the explicit property function (which I prefer).\nclass Quad( object ):\n def get_scale( self ):\n return self._scale\n def set_scale( self, value )\n # Handle numeric and MySin() values \n scale = property( get_scale, set_scale )\n\nAny other class should NOT know or care what type of value scale has. If some client does this\nquad.scale * 2\n\nThen you have design issues. You haven't properly encapsulated your design and Quad's client classes are too friendly with Quad. \nIf you absolutely must do this -- because you can't write a method function of Quad to encapsulate this -- then you have to make MySin a proper numeric class so it can respond to quad.scale * 2 requests properly.\n", "It sounds like you want your quads to be dumb, and to have an animator class which is smart. So,here are some suggestions:\n\nGive the quads an attribute which indicates how to animate them (in addition to the scale and whatever else).\nIn an Animator class, on a frame update, iterate over your quads and decide how to treat each one, based on that attribute.\nIn the treatment of a quad, update the scale property of each dynamically changing quad to the appropriate float value. For static quads it never changes, for dynamic ones it changes based on any algorithm you like.\n\nOne advantage this approach is that it allows you to vary different attributes (scale, opacity, fill colour ... you name it) while keeping the logic in the animator.\n", "It's sort of like lazy-evaluation. It is definitely a valid tecnique when used properly, but I don't think this is the right place to use it. It makes the code kind of confusing.\n", "It sure is a valid technique, but a name? Having an object.value() instead of an int? Uhm. Object orientation? :)\nIf the methods that use this value requires an integer, and won't call any method on it, you could in fact create your own integer class, that behaves exactly like an integer, but changes the value.\n" ]
[ 4, 1, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001099328_python.txt
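A small sketch of the property idea from the first answer, accepting either a plain number or a mysin-style object exposing value(); callers just read quad.scale and always get a number (the names follow the question, and the duck-typing check is an assumption of this sketch):

class Quad(object):
    def __init__(self, scale=1.0):
        self._scale = scale

    def get_scale(self):
        # Dynamic values expose .value(); plain numbers pass straight through.
        if hasattr(self._scale, 'value'):
            return self._scale.value()
        return self._scale

    def set_scale(self, value):
        self._scale = value

    scale = property(get_scale, set_scale)

q = Quad()
q.scale = 10
print q.scale * 2  # prints 20; client code never knows which kind it was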
Q: for statement and i.find in list for a in ('90','52.6', '26.5'): if a == '90': z = (' 0',) elif a == '52.6': z = ('0', '5') else: z = ('25') for b in z: cmd = exepath + ' -a ' + str(a) + ' -b ' + str(b) process = Popen(cmd, shell=True, stderr=STDOUT, stdout=PIPE) outputstring = process.communicate()[0] outputlist = outputstring.splitlines() for i in outputlist: if i.find('The student says') != -1: print i Working on an assignment and this is a snippet of my code. There is a portion above this code but all it's doing is defining exepath and just printing exepath to the screen. When I run this, I don't get an error or anything but the program just ends when put into the command prompt. Why? And how do I fix it? EDIT: Sorry for the missing quotes. I updated the code to fix that, but it still gives me nothing back; it just exits... What could the problem be? A: you are missing quotes around your first for statement; try for a in ('90','52.6', '26.5'): A: Once you've fixed the missing quote, you'll get weird behavior from this part: else: z = ('25') for b in z: the parentheses here do nothing at all, and b in the loop will be '2' then '5'. You probably mean to use, instead: z = ('25',) which makes z a tuple with just one item (the trailing comma here is what tells the Python compiler that it's a tuple -- would work just as well w/o the parentheses), so b in the loop will be '25'. A: Looking at your code it will only actually output something if the i.find('The student says') successfully matches so you could either run this in a debugger or add some print statements to see what is in outputstring for each time round the loop.
for statement and i.find in list
for a in ('90','52.6', '26.5'): if a == '90': z = (' 0',) elif a == '52.6': z = ('0', '5') else: z = ('25') for b in z: cmd = exepath + ' -a ' + str(a) + ' -b ' + str(b) process = Popen(cmd, shell=True, stderr=STDOUT, stdout=PIPE) outputstring = process.communicate()[0] outputlist = outputstring.splitlines() for i in outputlist: if i.find('The student says') != -1: print i Working on an assignment and this is a snippet of my code. There is a portion above this code but all it's doing is defining exepath and just printing exepath to the screen. When I run this, I don't get an error or anything but the program just ends when put into the command prompt. Why? And how do I fix it? EDIT: Sorry for the missing quotes. I updated the code to fix that, but it still gives me nothing back; it just exits... What could the problem be?
[ "you are missing quotes around you first for statement try\nfor a in ('90','52.6', '26.5'):\n\n", "Once you've fixed the missing quote, you'll get weird behavior from this part:\nelse:\n z = ('25')\n\nfor b in z:\n\nthe parentheses here do nothing at all, and b in the loop will be '2' then '5'. You probably mean to use, instead:\n z = ('25',)\n\nwhich makes z a tuple with just one item (the trailing comma here is what tells the Python compiler that it's a tuple -- would work just as well w/o the parentheses), so b in the loop will be '25'.\n", "Looking at your code it will only actually output something if the i.find('The student says') successfully matches so you could either run this in a debugger or add some print statements to see what is in outputstring for each time round the loop.\n" ]
[ 6, 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001098970_python.txt
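One likely culprit the answers circle around is the assembled command string itself; a hedged debugging sketch that passes Popen an argument list (no shell, no quoting surprises) and prints the raw output so you can see what the program returned even when the match fails. exepath, a, and b are the question's own variables:

from subprocess import Popen, PIPE, STDOUT

cmd = [exepath, '-a', str(a), '-b', str(b)]  # list form avoids shell quoting
process = Popen(cmd, stdout=PIPE, stderr=STDOUT)
output = process.communicate()[0]
print repr(output)  # inspect the raw output while debugging
for line in output.splitlines():
    if 'The student says' in line:  # idiomatic alternative to .find() != -1
        print line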
Q: How to properly do one-to-many joins on an (Python) Google App Engine datasource? I have some models set up like: class Apps(db.Model): name = db.StringProperty(multiline=False) description = db.TextProperty() class AppScreenshots(db.Model): image_file = db.StringProperty(multiline=False) description = db.StringProperty(multiline=False) app = db.ReferenceProperty(Apps) I'm trying to reference a "parent" app in a screenshot like so: a = Apps.get(app_key) ss = AppScreenshots( image_file = 'foo', description = 'bar', app = a ) ss.put() But it complains to me saying: BadArgumentError('_app should be a string; received ag1raWxsZXItcm9ib3RzcgoLEgRBcHBzGAkM (a Key):',) I've tried going over a few examples on the internet and they all seem to work JUST like the above. One set of documentation Google has up suggests doing it a little differently, like this: a = Apps.get(app_key) ss = AppScreenshots( image_file = 'foo', description = 'bar', app = a.key() ) ss.put() But that gives me the exact same error. What am I doing wrong? A: The problem I found when trying to run your code was that apparently you need to change the name of 'app' in AppScreenshots to something else such as 'apps'. The word 'app' must be reserved in this context. Try this Query instead. You could do .filter() too on this if you don't want the first entity. class AppScreenshots(db.Model): image_file = db.StringProperty() description = db.StringProperty() apps = db.ReferenceProperty(Apps) appsObject = db.Query(Apps).get() ss = AppScreenshots(image_file = 'foo', description = 'bar',apps = appsObject) Here is a helpful article on modeling relationships link. Also a related question here on SO
How to properly do one-to-many joins on an (Python) Google App Engine datasource?
I have some models set up like: class Apps(db.Model): name = db.StringProperty(multiline=False) description = db.TextProperty() class AppScreenshots(db.Model): image_file = db.StringProperty(multiline=False) description = db.StringProperty(multiline=False) app = db.ReferenceProperty(Apps) I'm trying to reference a "parent" app in a screenshot like so: a = Apps.get(app_key) ss = AppScreenshots( image_file = 'foo', description = 'bar', app = a ) ss.put() But it complains to me saying: BadArgumentError('_app should be a string; received ag1raWxsZXItcm9ib3RzcgoLEgRBcHBzGAkM (a Key):',) I've tried going over a few examples on the internet and they all seem to work JUST like the above. One set of documentation Google has up suggests doing it a little differently, like this: a = Apps.get(app_key) ss = AppScreenshots( image_file = 'foo', description = 'bar', app = a.key() ) ss.put() But that gives me the exact same error. What am I doing wrong?
[ "The problem I found when trying to run your code was that apparently you need to change the name of 'app' in AppScreenshots to something else such as 'apps'. The word 'app' must be reserved in this context.\nTry this Query instead. You could do .filter() too on this if you don't want the first entity.\nclass AppScreenshots(db.Model):\n image_file = db.StringProperty()\n description = db.StringProperty()\n apps = db.ReferenceProperty(Apps)\n\nappsObject = db.Query(Apps).get()\n\nss = AppScreenshots(image_file = 'foo', description = 'bar',apps = appsObject)\n\nHere is a helpful article on modeling relationships link.\nAlso a related question here on SO\n" ]
[ 5 ]
[]
[]
[ "google_app_engine", "gql", "python" ]
stackoverflow_0001100472_google_app_engine_gql_python.txt
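The reverse lookup, listing a parent's children, is the usual next step with this model; a sketch using the renamed apps reference from the answer and the standard datastore query API:

# Assumes the Apps / AppScreenshots models as adjusted in the answer above.
app = Apps.get(app_key)
shots = AppScreenshots.all().filter('apps =', app)
for shot in shots:
    print shot.image_file, shot.description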
Q: Are there any class diagram generating tools for python source code? Are there any reverse-engineering UML tools for Python? A: This looks good: http://www.andypatterns.com/index.php?cID=65 How about this? http://www.greenteapress.com/thinkpython/swampy/lumpy.html A: pyUML may be what you are looking for. Although it is an Eclipse plugin for PyDev. A: Sparx Enterprise Architect can reverse engineer Python code, at least according to the documentation. I've never tried it, so I don't know for certain.
Are there any class diagram generating tools for python source code?
Are there any reverse-engineering UML tools for Python?
[ "This looks good: http://www.andypatterns.com/index.php?cID=65\nHow about this? http://www.greenteapress.com/thinkpython/swampy/lumpy.html\n", "pyUML may be what you are looking for. Although it is an Eclipse plugin for PyDev.\n", "Sparx Enterprise Architect can reverse engineer Python code, at least according to the documentation. I've never tried it, so I don't know for certain.\n" ]
[ 5, 2, 2 ]
[]
[]
[ "python", "uml" ]
stackoverflow_0001101301_python_uml.txt
Q: Why does my Python test generator simply not work? This is a sample script to test the use of yield... am I doing it wrong? It always returns '1'... #!/usr/bin/python def testGen(): for a in [1,2,3,4,5,6,7,8,9,10]: yield a w = 0 while w < 10: print testGen().next() w += 1 A: You're creating a new generator each time. You should only call testGen() once and then use the object returned. Try: w = 0 g = testGen() while w < 10: print g.next() w += 1 Then of course there's the normal, idiomatic generator usage: for n in testGen(): print n Note that this will only call testGen() once at the start of the loop, not once per iteration.
Why does my Python test generator simply not work?
This is a sample script to test the use of yield... am I doing it wrong? It always returns '1'... #!/usr/bin/python def testGen(): for a in [1,2,3,4,5,6,7,8,9,10]: yield a w = 0 while w < 10: print testGen().next() w += 1
[ "You're creating a new generator each time. You should only call testGen() once and then use the object returned. Try:\nw = 0\ng = testGen()\nwhile w < 10:\n print g.next()\n w += 1\n\nThen of course there's the normal, idiomatic generator usage:\nfor n in testGen():\n print n\n\nNote that this will only call testGen() once at the start of the loop, not once per iteration.\n" ]
[ 10 ]
[]
[]
[ "generator", "python", "testing", "yield" ]
stackoverflow_0001101550_generator_python_testing_yield.txt
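To make the pitfall visible, a tiny side-by-side demonstration: calling testGen() repeatedly creates a fresh generator each time, while holding on to one generator object lets it advance:

def testGen():
    for a in [1, 2, 3]:
        yield a

print testGen().next(), testGen().next()  # prints 1 1: two brand-new generators

g = testGen()
print g.next(), g.next(), g.next()  # prints 1 2 3: one generator advancing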
Q: Help needed improving Python code using List Comprehensions I've been writing little Python programs at home to learn more about the language. The most recent feature I've tried to understand is List Comprehensions. I created a little script that estimates when my car needs its next oil change based on how frequently I've gotten the oil changed in the past. In the code snippet below, oil_changes is a list of the mileages at which I got the oil changed. # Compute a list of the mileage differences between each oil change. diffs = [j - i for i, j in zip(oil_changes[:-1], oil_changes[1:])] # Use the average difference between oil changes to estimate the next change. next_oil = oil_changes[-1] + sum(diffs) / len(diffs) The code produces the right answer (did the math by hand to check) but it doesn't feel quite Pythonic yet. Am I doing a lot of needless copying of the original list in the first line? I feel like there's a much better way to do this but I don't know what it is. A: Try this: assert len(oil_changes) >= 2 sum_of_diffs = oil_changes[-1] - oil_changes[0] number_of_diffs = len(oil_changes) - 1 average_diff = sum_of_diffs / float(number_of_diffs) A: As other answers pointed out, you don't really need to worry unless your oil_changes list is extremely long. However, as a fan of "stream-based" computing, I think it's interesting to point out that itertools offers all the tools you need to compute your next_oil value in O(1) space (and O(N) time of course!-) no matter how big N, that is, len(oil_changes), gets. izip per se is insufficient, because it only reduces a bit the multiplicative constant but leaves your space demands as O(N). The key idea to bring those demands down to O(1) is to pair izip with tee -- and avoiding the list comprehension, which would be O(N) in space anyway, in favor of a good simple old-fashioned loop!-). Here comes: it = iter(oil_changes) a, b = itertools.tee(it) b.next() thesum = 0 for thelen, (i, j) in enumerate(itertools.izip(a, b)): thesum += j - i last_one = j next_oil = last_one + thesum / (thelen + 1) Instead of taking slices from the list, we take an iterator on it, tee it (making two independently advanceable clones thereof), and advance, once, one of the clones, b. tee takes space O(x) where x is the maximum absolute difference between the advancement of the various clones; here, the two clones' advancement only differs by 1 at most, so the space requirement is clearly O(1). izip makes a one-at-a-time "zipping" of the two slightly-askew clone iterators, and we dress it up in enumerate so we can track how many times we go through the loop, i.e. the length of the iterable we're iterating on (we need the +1 in the final expression, because enumerate starts from 0!-). We compute the sum with a simple +=, which is fine for numbers (sum is even better, but it wouldn't track the length!-). It's tempting after the loop to use last_one = a.next(), but that would not work because a is actually exhausted -- izip advances its argument iterables left to right, so it has advanced a one last time before it realizes b is over!-). That's OK, because Python loop variables are NOT limited in scope to the loop itself -- after the loop, j still has the value that was last extracted by advancing b before izip gave up (just like thelen still has the last count value returned by enumerate). I'm still naming the value last_one rather than using j directly in the final expression, because I think it's clearer and more readable. So there it is -- I hope it was instructive!-) -- although for the solution of the specific problem that you posed this time, it's almost certain to be overkill. We Italians have an ancient proverb -- "Impara l'Arte, e mettila da parte!"... "Learn the Art, and then set it aside" -- which I think is quite applicable here: it's a good thing to learn advanced and sophisticated ways to solve very hard problems, in case you ever meet them, but for all that you need to go for simplicity and directness in the vastly more common case of simple, ordinary problems -- not apply advanced solutions that most likely won't be needed!-) A: The itertools package provides additional generator-style functions. For instance, you can use izip in place of zip to save on some memory. You could also perhaps write an average function so you can turn diffs into a generator instead of a list comprehension: from itertools import izip def average(items): sum, count = 0, 0 for item in items: sum += item count += 1 return sum / count diffs = (j - i for i, j in izip(oil_changes[:-1], oil_changes[1:])) next_oil = oil_changes[-1] + average(diffs) Alternatively, you could change your definition of diffs to: diffs = [oil_changes[i] - oil_changes[i-1] for i in xrange(1, len(oil_changes))] I dunno, it's not really a huge improvement. Your code is pretty good as is. A: It seems fine, really. Not everything is simple (you've got several steps in an otherwise simple calculation, no matter how you frame it). There are options to reduce the copies, like using itertools.islice and itertools.izip, but (aside from izip) the extra steps in the code would just complicate it further. Not everything needs to be a list comprehension, but it is a judgement call sometimes. What looks cleaner to you? What will the next guy that reads it understand best? What will you understand when you come back to fix that bug in three months? A: Am I doing a lot of needless copying of the original list in the first line? Technically, yes. Realistically, no. Unless you've changed your oil literally millions of times, the speed penalty is unlikely to be significant. You could change zip to izip, but it hardly seems worth it (and in python 3.0, zip effectively is izip). Insert that old quote by Knuth here. (you could also replace oil_changes[:-1] with just oil_changes, since zip() truncates to the length of the shortest input sequence anyway)
Help needed improving Python code using List Comprehensions
I've been writing little Python programs at home to learn more about the language. The most recent feature I've tried to understand is List Comprehensions. I created a little script that estimates when my car needs its next oil change based on how frequently I've gotten the oil changed in the past. In the code snippet below, oil_changes is a list of the mileages at which I got the oil changed. # Compute a list of the mileage differences between each oil change. diffs = [j - i for i, j in zip(oil_changes[:-1], oil_changes[1:])] # Use the average difference between oil changes to estimate the next change. next_oil = oil_changes[-1] + sum(diffs) / len(diffs) The code produces the right answer (did the math by hand to check) but it doesn't feel quite Pythonic yet. Am I doing a lot of needless copying of the original list in the first line? I feel like there's a much better way to do this but I don't know what it is.
[ "Try this: \nassert len(oil_changes) >= 2\nsum_of_diffs = oil_changes[-1] - oil_changes[0]\nnumber_of_diffs = len(oil_changes) - 1\naverage_diff = sum_of_diffs / float(number_of_diffs)\n\n", "As other answers pointed out, you don't really need to worry unless your oil_changes list is extremely long. However, as a fan of \"stream-based\" computing, I think it's interesting to point out that itertools offers all the tools you need to compute your next_oil value in O(1) space (and O(N) time of course!-) no matter how big N, that is, len(oil_changes), gets. \nizip per se is insufficient, because it only reduces a bit the multiplicative constant but leaves your space demands as O(N). The key idea to bring those demands down to O(1) is to pair izip with tee -- and avoiding the list comprehension, which would be O(N) in space anyway, in favor of a good simple old-fashioned loop!-). Here comes:\n it = iter(oil_changes)\n a, b = itertools.tee(it)\n b.next()\n thesum = 0\n for thelen, (i, j) in enumerate(itertools.izip(a, b)):\n thesum += j - i\n last_one = j\n next_oil = last_one + thesum / (thelen + 1)\n\nInstead of taking slices from the list, we take an iterator on it, tee it (making two independently advanceable clones thereof), and advance, once, one of the clones, b. tee takes space O(x) where x is the maximum absolute difference between the advancement of the various clones; here, the two clones' advancement only differs by 1 at most, so the space requirement is clearly O(1).\nizip makes a one-at-a-time \"zipping\" of the two slightly-askew clone iterators, and we dress it up in enumerate so we can track how many times we go through the loop, i.e. the length of the iterable we're iterating on (we need the +1 in the final expression, because enumerate starts from 0!-). We compute the sum with a simple +=, which is fine for numbers (sum is even better, but it wouldn't track the length!-).\nIt's tempting after the loop to use last_one = a.next(), but that would not work because a is actually exhausted -- izip advances its argument iterables left to right, so it has advanced a one last time before it realizes b is over!-). That's OK, because Python loop variables are NOT limited in scope to the loop itself -- after the loop, j still has the value that was last extracted by advancing b before izip gave up (just like thelen still has the last count value returned by enumerate). I'm still naming the value last_one rather than using j directly in the final expression, because I think it's clearer and more readable.\nSo there it is -- I hope it was instructive!-) -- although for the solution of the specific problem that you posed this time, it's almost certain to be overkill. We Italians have an ancient proverb -- \"Impara l'Arte, e mettila da parte!\"... \"Learn the Art, and then set it aside\" -- which I think is quite applicable here: it's a good thing to learn advanced and sophisticated ways to solve very hard problems, in case you ever meet them, but for all that you need to go for simplicity and directness in the vastly more common case of simple, ordinary problems -- not apply advanced solutions that most likely won't be needed!-)\n", "The itertools package provides additional generator-style functions. 
For instance, you can use izip in place of zip to save on some memory.\nYou could also perhaps write an average function so you can turn diffs into a generator instead of a list comprehension:\nfrom itertools import izip\n\ndef average(items):\n sum, count = 0, 0\n\n for item in items:\n sum += item\n count += 1\n\n return sum / count\n\ndiffs = (j - i for i, j in izip(oil_changes[:-1], oil_changes[1:]))\nnext_oil = oil_changes[-1] + average(diffs)\n\nAlternatively, you could change your definition of diffs to:\ndiffs = [oil_changes[i] - oil_changes[i-1] for i in xrange(1, len(oil_changes))]\n\nI dunno, it's not really a huge improvement. Your code is pretty good as is.\n", "It seems fine, really. Not everything is simple (you've got several steps in an otherwise simple calculation, no matter how you frame it). There are options to reduce the copies, like using itertools.islice and itertools.izip, but (aside from izip) the extra steps in the code would just complicate it further. Not everything needs to be a list comprehension, but it is a judgement call sometimes. What looks cleaner to you? What will the next guy that reads it understand best? What will you understand when you come back to fix that bug in three months?\n", "\nAm I doing a lot of needless copying\n of the original list in the first\n line?\n\nTechnically, yes. Realistically, no. Unless you've changed your oil literally millions of times, the speed penalty is unlikely to be significant. You could change zip to izip, but it hardly seems worth it (and in python 3.0, zip effectively is izip).\nInsert that old quote by Knuth here.\n(you could also replace oil_changes[:-1] with just oil_changes, since zip() truncates to the length of the shortest input sequence anyway)\n" ]
[ 9, 8, 3, 2, 2 ]
[]
[]
[ "list_comprehension", "python" ]
stackoverflow_0001101611_list_comprehension_python.txt
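A minimal, self-contained sketch of the streaming pairwise average discussed in the thread above, assuming Python 2.x (izip and tee live in itertools); the helper name pairwise_average and the example mileages are invented for the illustration:
from itertools import izip, tee

def pairwise_average(values):
    # Two iterators over the same data, offset by one element, so the
    # averaging runs in O(1) extra space no matter how long the input is.
    a, b = tee(iter(values))
    b.next()
    total, count = 0, 0
    for i, j in izip(a, b):
        total += j - i
        count += 1
    return total / float(count)

oil_changes = [1000, 4200, 7300, 10100]  # example mileages
print oil_changes[-1] + pairwise_average(oil_changes)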
Q: Create PyQt menu from a list of strings I have a list of strings and want to create a menu entry for each of those strings. When the user clicks on one of the entries, always the same function shall be called with the string as an argument. After some trying and research I came up with something like this: import sys from PyQt4 import QtGui, QtCore class MainWindow(QtGui.QMainWindow): def __init__(self): QtGui.QMainWindow.__init__(self) self.menubar = self.menuBar() menuitems = ["Item 1","Item 2","Item 3"] menu = self.menubar.addMenu('&Stuff') for item in menuitems: entry = menu.addAction(item) self.connect(entry,QtCore.SIGNAL('triggered()'), lambda: self.doStuff(item)) menu.addAction(entry) print "init done" def doStuff(self, item): print item app = QtGui.QApplication(sys.argv) main = MainWindow() main.show() sys.exit(app.exec_()) Now the problem is that each of the menu items will print the same output: "Item 3" instead of the corresponding one. I'm thankful for any ideas about how I can get this right. Thanks. A: You're meeting what's been often referred to (maybe not entirely pedantically-correctly;-) as the "scoping problem" in Python -- the binding is late (lexical lookup at call-time) while you'd like it early (at def-time). So where you now have: for item in menuitems: entry = menu.addAction(item) self.connect(entry,QtCore.SIGNAL('triggered()'), lambda: self.doStuff(item)) try instead: for item in menuitems: entry = menu.addAction(item) self.connect(entry,QtCore.SIGNAL('triggered()'), lambda item=item: self.doStuff(item)) This "anticipates" the binding, since default values (as the item one here) get computed once and for all at def-time. Adding one level of function nesting (e.g. a double lambda) works too, but it's a bit of an overkill here!-) You could alternatively use functools.partial(self.doStuff, item) (with an import functools at the top of course) which is another fine solution, but I think I'd go for the simplest (and most common) "fake default-value for argument" idiom. A: This should work, but I'm pretty sure there was a better way that I can't recall right now. def do_stuff_caller(self, item): return lambda: self.doStuff(item) ... self.connect(entry, QtCore.SIGNAL('triggered()'), self.do_stuff_caller(item)) Edit: Shorter version, that still isn't what I'm thinking about... or maybe it was in another language? :) (lambda x: lambda: self.do_stuff(x))(item)
Create PyQt menu from a list of strings
I have a list of strings and want to create a menu entry for each of those strings. When the user clicks on one of the entries, always the same function shall be called with the string as an argument. After some trying and research I came up with something like this: import sys from PyQt4 import QtGui, QtCore class MainWindow(QtGui.QMainWindow): def __init__(self): QtGui.QMainWindow.__init__(self) self.menubar = self.menuBar() menuitems = ["Item 1","Item 2","Item 3"] menu = self.menubar.addMenu('&Stuff') for item in menuitems: entry = menu.addAction(item) self.connect(entry,QtCore.SIGNAL('triggered()'), lambda: self.doStuff(item)) menu.addAction(entry) print "init done" def doStuff(self, item): print item app = QtGui.QApplication(sys.argv) main = MainWindow() main.show() sys.exit(app.exec_()) Now the problem is that each of the menu items will print the same output: "Item 3" instead of the corresponding one. I'm thankful for any ideas about how I can get this right. Thanks.
[ "You're meeting what's been often referred to (maybe not entirely pedantically-correctly;-) as the \"scoping problem\" in Python -- the binding is late (lexical lookup at call-time) while you'd like it early (at def-time). So where you now have:\n for item in menuitems:\n entry = menu.addAction(item)\n self.connect(entry,QtCore.SIGNAL('triggered()'), lambda: self.doStuff(item))\n\ntry instead:\n for item in menuitems:\n entry = menu.addAction(item)\n self.connect(entry,QtCore.SIGNAL('triggered()'), lambda item=item: self.doStuff(item))\n\nThis \"anticipates\" the binding, since default values (as the item one here) get computed once and for all at def-time. Adding one level of function nesting (e.g. a double lambda) works too, but it's a bit of an overkill here!-)\nYou could alternatively use functools.partial(self.doStuff, item) (with an import functools at the top of course) which is another fine solution, but I think I'd go for the simplest (and most common) \"fake default-value for argument\" idiom.\n", "This should work, but I'm pretty sure there was a better way that I can't recall right now.\ndef do_stuff_caller(self, item):\n return lambda: self.doStuff(item)\n\n...\nself.connect(entry, QtCore.SIGNAL('triggered()'), self.do_stuff_caller(item))\n\nEdit:\nShorter version, that still isn't what I'm thinking about... or maybe it was in another language? :)\n(lambda x: lambda: self.do_stuff(x))(item)\n" ]
[ 25, 3 ]
[]
[]
[ "pyqt", "python", "qt", "signals_slots" ]
stackoverflow_0001100775_pyqt_python_qt_signals_slots.txt
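A runnable sketch of the functools.partial variant that the accepted answer mentions, assuming PyQt4; it is the asker's program with only the connect line changed:
import sys
import functools
from PyQt4 import QtGui, QtCore

class MainWindow(QtGui.QMainWindow):
    def __init__(self):
        QtGui.QMainWindow.__init__(self)
        menu = self.menuBar().addMenu('&Stuff')
        for item in ["Item 1", "Item 2", "Item 3"]:
            entry = menu.addAction(item)
            # partial freezes the current value of item at loop time,
            # so each action triggers doStuff with its own string.
            self.connect(entry, QtCore.SIGNAL('triggered()'),
                         functools.partial(self.doStuff, item))

    def doStuff(self, item):
        print item

app = QtGui.QApplication(sys.argv)
main = MainWindow()
main.show()
sys.exit(app.exec_())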
Q: Unicode problem Django-Python-URLLIB-MySQL I am fetching a webpage (http://autoweek.com) and trying to process it but getting encoding error. Autoweek declares "iso-8859-1" encoding and has the word "Nürburgring" (u with umlaut) I do: # -*- encoding: utf-8 -*- import urllib webpage = urllib.urlopen(feed.crawl_url).read() webpage.decode("utf-8") it gives me the following error: 'utf8' codec can't decode bytes in position 7768-7773: unsupported Unicode code range" if I bypass .decode step and do some parsing with lxml library, it raises an error when I am saving parsed title to database: 'utf8' codec can't decode bytes in position 45-50: unsupported Unicode code range My database has character set utf8 and collation utf-general-ci My settings: Django Python 2.4.3 MySQL 5.0.22 MySQL-python 1.2.1 mod_python 3.2.8 A: If the webpage declares encoding iso-8859-1, can't you just do webpage.decode("iso-8859-1")? At that point, webpage is decoded for your app. When it is written into the database, the mapping there should handle the char-to-utf8 encoding. To get the correct encoding, either tell the webserver that you only accept, say, UTF-8 and then that's what you'll (hopefully) always get, since just about everyone reads UTF-8 (or you could try it with ISO-8859-1); or use .info to inspect the encoding name of the stream returned. See urllib2 - The Missing Manual and Quick reference to HTTP headers for details. A: autoweek.com seems confused about its own encoding. It declares conflicting charset definitions: <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> and later... <meta charset=iso-8859-1"/>. iso-8859-1 is the correct one since this is returned in the header from the web server and by the .info() method (and it actually decodes), but this demonstrates that you can't necessarily rely on the Content-Type declaration in web pages. You should follow the method described by lavinio.
Unicode problem Django-Python-URLLIB-MySQL
I am fetching a webpage (http://autoweek.com) and trying to process it but getting encoding error. Autoweek declares "iso-8859-1" encoding and has the word "Nürburgring" (u with umlaut) I do: # -*- encoding: utf-8 -*- import urllib webpage = urllib.urlopen(feed.crawl_url).read() webpage.decode("utf-8") it gives me the following error: 'utf8' codec can't decode bytes in position 7768-7773: unsupported Unicode code range" if I bypass .decode step and do some parsing with lxml library, it raises an error when I am saving parsed title to database: 'utf8' codec can't decode bytes in position 45-50: unsupported Unicode code range My database has character set utf8 and collation utf-general-ci My settings: Django Python 2.4.3 MySQL 5.0.22 MySQL-python 1.2.1 mod_python 3.2.8
[ "If the webpage declares encoding iso-8859-1, can't you just do webpage.decode(\"iso-8859-1\")?\nAt that point, webpage is decoded for your app. When it is written into the database, the mapping there should handle the char-to-utf8 encoding.\nTo get the correct encoding, either tell the webserver that you only accept, say, UTF-8 and then that's what you'll (hopefully) always get, since just about everyone reads UTF-8 (or you could try it with ISO-8859-1); or use .info to inspect the encoding name of the stream returned.\nSee urllib2 - The Missing Manual and Quick reference to HTTP headers for details.\n", "autoweek.com seems confused about its own encoding. It declares conflicting charset definitions:\n<meta http-equiv=\"Content-Type\" content=\"text/html; charset=utf-8\" /> \n\nand later...\n<meta charset=iso-8859-1\"/>.\n\niso-8859-1 is the correct one since this is returned in the header from the web server and by the .info() method (and it actually decodes), but this demonstrates that you can't necessarily rely on the Content-Type declaration in web pages. You should follow the method described by lavinio.\n" ]
[ 3, 0 ]
[]
[]
[ "encoding", "python", "unicode", "urllib", "utf_8" ]
stackoverflow_0001101715_encoding_python_unicode_urllib_utf_8.txt
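A sketch of the approach both answers converge on -- trust the charset in the HTTP header rather than the page's conflicting meta tags -- assuming Python 2.x; the iso-8859-1 fallback is an assumption, not something the thread specifies:
import urllib

response = urllib.urlopen("http://autoweek.com")
raw = response.read()

# e.g. "text/html; charset=iso-8859-1"
content_type = response.info().getheader("Content-Type", "")
charset = "iso-8859-1"  # assumed fallback when the server sends no charset
if "charset=" in content_type:
    charset = content_type.split("charset=")[-1].split(";")[0].strip()

page = raw.decode(charset)  # a real unicode object from here on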
Q: Is there a way to plan and diagram an architecture for dynamic scripting languages like groovy or python? Say I want to write a large application in groovy, and take advantage of closures, categories and other concepts (that I regularly use to separate concerns). Is there a way to diagram or otherwise communicate in a simple way the architecture of some of this stuff? How do you detail (without verbose documentation) the things that a map of closures might do, for example? I understand that dynamic language features aren't usually recommended on a larger scale because they are seen as complex but does that have to be the case? A: UML isn't too well equipped to handle such things, but you can still use it to communicate your design if you are willing to do some mental mapping. You can find an isomorphism between most dynamic concepts and UML's static object-model. For example you can think of a closure as an object implementing a one method interface. It's probably useful to model such interfaces as something a bit more specific than interface Callable { call(args[0..*]: Object) : Object }. Duck typing can similarly be thought of as an interface. If you have a method that takes something that can quack, model it as taking an object that is a specialization of the interface Quackable { quack() }. You can use your imagination for other concepts. Keep in mind that the purpose of design diagrams is to communicate ideas. So don't get overly pedantic about modeling everything 100%, think about what you want your diagrams to say, make sure that they say that and eliminate any extraneous detail that would dilute the message. And if you use some concepts that aren't obvious to your target audience, explain them. Also, if UML really can't handle what you want to say, try other ways to visualize your message. UML is only a good choice because it gives you a common vocabulary so you don't have to explain every concept on your diagram. A: If you don't want to generate verbose documentation, a picture is worth a thousand words. I've found tools like FreeMind useful, both for clarifying my ideas and for communicating them to others. And if you are willing to invest in a medium (or at least higher) level of documentation, I would recommend Sphinx. It is pretty easy to use, and although it's oriented towards documentation of Python modules, it can generate completely generic documentation which looks professional and easy on the eye. Your documentation can contain diagrams such as those created using Graphviz.
Is there a way to plan and diagram an architecture for dynamic scripting languages like groovy or python?
Say I want to write a large application in groovy, and take advantage of closures, categories and other concepts (that I regularly use to separate concerns). Is there a way to diagram or otherwise communicate in a simple way the architecture of some of this stuff? How do you detail (without verbose documentation) the things that a map of closures might do, for example? I understand that dynamic language features aren't usually recommended on a larger scale because they are seen as complex but does that have to be the case?
[ "UML isn't too well equipped to handle such things, but you can still use it to communicate your design if you are willing to do some mental mapping. You can find an isomorphism between most dynamic concepts and UML's static object-model.\nFor example you can think of a closure as an object implementing a one method interface. It's probably useful to model such interfaces as something a bit more specific than interface Callable { call(args[0..*]: Object) : Object }.\nDuck typing can similarly be thought of as an interface. If you have a method that takes something that can quack, model it as taking an object that is a specialization of the interface Quackable { quack() }.\nYou can use your imagination for other concepts. Keep in mind that the purpose of design diagrams is to communicate ideas. So don't get overly pedantic about modeling everything 100%, think about what you want your diagrams to say, make sure that they say that and eliminate any extraneous detail that would dilute the message. And if you use some concepts that aren't obvious to your target audience, explain them.\nAlso, if UML really can't handle what you want to say, try other ways to visualize your message. UML is only a good choice because it gives you a common vocabulary so you don't have to explain every concept on your diagram.\n", "If you don't want to generate verbose documentation, a picture is worth a thousand words. I've found tools like FreeMind useful, both for clarifying my ideas and for communicating them to others. And if you are willing to invest in a medium (or at least higher) level of documentation, I would recommend Sphinx. It is pretty easy to use, and although it's oriented towards documentation of Python modules, it can generate completely generic documentation which looks professional and easy on the eye. Your documentation can contain diagrams such as those created using Graphviz.\n" ]
[ 3, 1 ]
[]
[]
[ "architecture", "dynamic_languages", "groovy", "python", "uml" ]
stackoverflow_0001102134_architecture_dynamic_languages_groovy_python_uml.txt
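To make the closure-to-interface mapping from the first answer concrete, here is a tiny Python sketch (names invented for illustration) showing the same behaviour as a closure and as the one-method object you would actually draw in UML:
# Closure form: the function carries its environment with it.
def make_quacker(sound):
    def quack():
        print sound
    return quack

# Equivalent one-method-object form -- an "interface Quackable { quack() }".
class Quacker(object):
    def __init__(self, sound):
        self.sound = sound
    def quack(self):
        print self.sound

make_quacker("quack!")()    # prints quack!
Quacker("quack!").quack()   # prints quack!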
Q: Moving files under python I'm confused with file moving under python. Under windows commandline, if I have directory c:\a and a directory c:\b, I can do move c:\a c:\b which moves a to b result is directory structure c:\b\a If I try this with os.rename or shutil.move: os.rename("c:/a", "c:/b") I get WindowsError: [Error 17] Cannot create a file when that file already exists If I move a single file under c:\a, it works. In python how do I move a directory to another existing directory? A: os.rename("c:/a", "c:/b/a") is equivalent to move c:\a c:\b under windows commandline A: You can try using the Shutil module. A: os.rename("c:/a/", "c:/b/") --> Renames folder a to folder b os.rename("c:/a/", "c:/b/a") --> Puts folder a into folder b A: When I need many file system operations I prefer using 'path' module: http://pypi.python.org/pypi/path.py/2.2 It's quite a good and lightweight wrapper around built-in 'os.path' module. Also code: last_part = os.path.split(src)[1] is a bit strange, because there is a special function for this: last_part = os.path.basename(src)
Moving files under python
I'm confused with file moving under python. Under windows commandline, if I have directory c:\a and a directory c:\b, I can do move c:\a c:\b which moves a to b result is directory structure c:\b\a If I try this with os.rename or shutil.move: os.rename("c:/a", "c:/b") I get WindowsError: [Error 17] Cannot create a file when that file already exists If I move a single file under c:\a, it works. In python how do I move a directory to another existing directory?
[ "os.rename(\"c:/a\", \"c:/b/a\") \n\nis equivalent to \nmove c:\\a c:\\b\n\nunder windows commandline\n", "You can try using the Shutil module.\n", "os.rename(\"c:/a/\", \"c:/b/\") --> Renames folder a to folder b\nos.rename(\"c:/a/\", \"c:/b/a\") --> Puts folder a into folder b\n", "When I need many file system operations I prefer using 'path' module:\nhttp://pypi.python.org/pypi/path.py/2.2\nIt's quite a good and lightweight wrapper around built-in 'os.path' module.\nAlso code:\nlast_part = os.path.split(src)[1]\n\nis a bit strange, because there is a special function for this:\nlast_part = os.path.basename(src)\n\n" ]
[ 16, 7, 1, 0 ]
[ "You will need to state the full path it's being moved to:\nsrc = 'C:\\a'\ndst_dir = 'C:\\b'\nlast_part = os.path.split(src)[1]\nos.rename(src, os.path.join(dst_dir, last_part))\n\nActually, it looks like shutil.move will do what you want by looking at its documentation:\n\nIf the destination is a directory or a symlink to a directory, the\n source\n is moved inside the directory.\n\n(And its source.)\n", "Using Twisted's FilePath:\nfrom twisted.python.filepath import FilePath\nFilePath(\"c:/a\").moveTo(FilePath(\"c:/b/a\"))\n\nor, more generally:\nfrom twisted.python.filepath import FilePath\ndef moveToExistingDir(fileOrDir, existingDir):\n fileOrDir.moveTo(existingDir.child(fileOrDir.basename()))\nmoveToExistingDir(FilePath(\"c:/a\"), FilePath(\"c:/b\"))\n\n" ]
[ -1, -1 ]
[ "move", "python", "windows" ]
stackoverflow_0001102825_move_python_windows.txt
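A short runnable sketch of the idea from the non-answers above, using shutil.move so the existing-destination-directory case is handled for you (the paths are placeholders):
import os
import shutil

src = r"c:\a"
dst_dir = r"c:\b"

# When dst_dir is an existing directory, shutil.move puts src *inside* it,
# matching the behaviour of `move c:\a c:\b` on the Windows command line.
shutil.move(src, dst_dir)
assert os.path.isdir(os.path.join(dst_dir, "a"))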
Q: Python data/file Crc I want to generate and store a CRC (or similar) value for a given list of files which can be used as a comparison at a later point. Writing a function to do this is simple enough, but is there a more standard way to do it within the Python libs? The value generated does not need to be of any particular standard. A: I recommend hashlib; it implements a common interface to many different secure hash and message digest algorithms. Included are the FIPS secure hash algorithms SHA1 and MD5. demo code: import hashlib m = hashlib.md5() for line in open('data.txt', 'rb'): m.update(line) print m.hexdigest() ## output 1ab8ad413648c44aa9b90ce5abe50eea A: If you don't need one-way security you could also use zlib.crc32 or zlib.adler32, as documented here.
Python data/file Crc
I want to generate and store a CRC (or similar) value for a given list of files which can be used as a comparison at a later point. Writing a function to do this is simple enough, but is there a more standard way to do it within the Python libs? The value generated does not need to be of any particular standard.
[ "I recommend hashlib; it implements a common interface to many different secure hash and message digest algorithms. Included are the FIPS secure hash algorithms SHA1 and MD5.\ndemo code:\nimport hashlib\nm = hashlib.md5()\nfor line in open('data.txt', 'rb'):\n m.update(line)\nprint m.hexdigest()\n## output\n1ab8ad413648c44aa9b90ce5abe50eea\n\n", "If you don't need one-way security you could also use zlib.crc32 or zlib.adler32, as documented here.\n" ]
[ 6, 1 ]
[]
[]
[ "crc", "file", "hashlib", "md5", "python" ]
stackoverflow_0001103104_crc_file_hashlib_md5_python.txt
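If a non-cryptographic checksum is enough, here is a sketch of the zlib.crc32 route from the second answer, reading in chunks so large files never have to fit in memory (the chunk size is an arbitrary choice):
import zlib

def file_crc32(path, chunk_size=65536):
    value = 0
    f = open(path, 'rb')
    try:
        # crc32 accepts a running value, so the file can be fed chunk by chunk.
        chunk = f.read(chunk_size)
        while chunk:
            value = zlib.crc32(chunk, value)
            chunk = f.read(chunk_size)
    finally:
        f.close()
    return value & 0xFFFFFFFF  # mask so the result is consistent across platforms

print file_crc32('data.txt')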
Q: python how to create different instances of the same class into an iteration My problem is: I would like to add to a Composite class Leaf objects created at runtime inside a Composite routine like this: def update(self, tp, msg, stt): """It updates composite objects """ d = Leaf() d.setDict(tp, msg, stt) self.append_child(d) return self.status() Inside main: import lib.composite c = Composite() for i in range(0,10): c.update(str(i), msg, stt) and the Composite is: class Composite(Component): def __init__(self, *args, **kw): super(Composite, self).__init__() self.children = [] def append_child(self, child): self.children.append(child) def update(self, tp, msg, stt): d = Leaf() d.setDict(tp, msg, stt) self.append_child(d) return self.status() def status(self): for child in self.children: ret = child.status() if type(child) == Leaf: p_out("Leaf: %s has value %s" % (child, ret)) class Component(object): def __init__(self, *args, **kw): if type(self) == Component: raise NotImplementedError("Component couldn't be " "instantiated directly") def status(self, *args, **kw): raise NotImplementedError("Status method " "must be implemented") class Leaf(Component): def __init__(self): super(Leaf, self).__init__() self._dict = {} def setDict(self, type, key, value): self._dict = { type : { key : value } } def status(self): return self._dict But in this way I found always that my composite has just one leaf ("d") added even if update was called many times. How can I code such a routine so as to be able to fill the composite at runtime? A: "But in this way I found always that my composite has just one leaf ("d") added even if update was called many times." No, that code makes Composite having ten children. >>> c.children [<__main__.Leaf object at 0xb7da77ec>, <__main__.Leaf object at 0xb7da780c>, <__main__.Leaf object at 0xb7da788c>, <__main__.Leaf object at 0xb7da78ac>, <__main__.Leaf object at 0xb7da78cc>, <__main__.Leaf object at 0xb7da792c>, <__main__.Leaf object at 0xb7da794c>, <__main__.Leaf object at 0xb7da798c>, <__main__.Leaf object at 0xb7da79ac>, <__main__.Leaf object at 0xb7da79cc>] So why you think it only has one is strange. A: What is doing the append_child? I think it should store the leafs in a list. Does it? Update: you shouldn't pass self as first argument in the main function. I think that it raises an exception. See code below that seems to work ok class Component(object): def __init__(self, *args, **kw): pass def setDict(self, *args, **kw): pass class Leaf(Component): def __init__(self, *args, **kw): Component.__init__(self, *args, **kw) class Composite(Component): def __init__(self, *args, **kw): Component.__init__(self, *args, **kw) self.children = [] def update(self, tp, msg, stt): """It updates composite objects """ d = Leaf() d.setDict(tp, msg, stt) self.append_child(d) return 0 def append_child(self, child): self.children.append(child) def remove_child(self, child): self.children.remove(child) c =Composite() for i in range(0,10): c.update(str(i), "", 0) print len(c.children)
python how to create different instances of the same class into an iteration
My problem is: I would like to add to a Composite class Leaf objects created at runtime inside a Composite routine like this: def update(self, tp, msg, stt): """It updates composite objects """ d = Leaf() d.setDict(tp, msg, stt) self.append_child(d) return self.status() Inside main: import lib.composite c = Composite() for i in range(0,10): c.update(str(i), msg, stt) and the Composite is: class Composite(Component): def __init__(self, *args, **kw): super(Composite, self).__init__() self.children = [] def append_child(self, child): self.children.append(child) def update(self, tp, msg, stt): d = Leaf() d.setDict(tp, msg, stt) self.append_child(d) return self.status() def status(self): for child in self.children: ret = child.status() if type(child) == Leaf: p_out("Leaf: %s has value %s" % (child, ret)) class Component(object): def __init__(self, *args, **kw): if type(self) == Component: raise NotImplementedError("Component couldn't be " "instantiated directly") def status(self, *args, **kw): raise NotImplementedError("Status method " "must be implemented") class Leaf(Component): def __init__(self): super(Leaf, self).__init__() self._dict = {} def setDict(self, type, key, value): self._dict = { type : { key : value } } def status(self): return self._dict But in this way I found always that my composite has just one leaf ("d") added even if update was called many times. How can I code such a routine so as to be able to fill the composite at runtime?
[ "\"But in this way I found always that my composite has just one leaf (\"d\") added even if update was called many times.\"\nNo, that code makes Composite having ten children.\n>>> c.children\n[<__main__.Leaf object at 0xb7da77ec>, <__main__.Leaf object at 0xb7da780c>,\n <__main__.Leaf object at 0xb7da788c>, <__main__.Leaf object at 0xb7da78ac>,\n <__main__.Leaf object at 0xb7da78cc>, <__main__.Leaf object at 0xb7da792c>,\n <__main__.Leaf object at 0xb7da794c>, <__main__.Leaf object at 0xb7da798c>,\n <__main__.Leaf object at 0xb7da79ac>, <__main__.Leaf object at 0xb7da79cc>]\n\nSo why you think it only has one is strange.\n", "What is doing the append_child? I think it should store the leafs in a list. Does it?\nUpdate: you shouldn't pass self as first argument in the main function. I think that it raises an exception.\nSee code below that seems to work ok\n\nclass Component(object):\n def __init__(self, *args, **kw):\n pass\n\n def setDict(self, *args, **kw):\n pass\n\nclass Leaf(Component):\n def __init__(self, *args, **kw):\n Component.__init__(self, *args, **kw)\n\nclass Composite(Component):\n def __init__(self, *args, **kw):\n Component.__init__(self, *args, **kw)\n self.children = []\n\n def update(self, tp, msg, stt):\n \"\"\"It updates composite objects\n \"\"\"\n d = Leaf()\n d.setDict(tp, msg, stt)\n self.append_child(d)\n\n return 0\n\n def append_child(self, child):\n self.children.append(child)\n\n def remove_child(self, child):\n self.children.remove(child)\n\nc =Composite()\nfor i in range(0,10):\n c.update(str(i), \"\", 0)\nprint len(c.children)\n\n" ]
[ 3, 1 ]
[]
[]
[ "composite", "design_patterns", "python" ]
stackoverflow_0001102822_composite_design_patterns_python.txt
Q: Starting and Controlling an External Process via STDIN/STDOUT with Python I need to launch an external process that is to be controlled via messages sent back and forth via stdin and stdout. Using subprocess.Popen I am able to start the process but am unable to control the execution via stdin as I need to. The flow of what I'm trying to complete is to: Start the external process Iterate for some number of steps Tell the external process to complete the next processing step by writing a new-line character to its stdin Wait for the external process to signal it has completed the step by writing a new-line character to its stdout Close the external process's stdin to indicate to the external process that execution has completed. I have come up with the following so far: process = subprocess.Popen([PathToProcess], stdin=subprocess.PIPE, stdout=subprocess.PIPE) for i in xrange(StepsToComplete): print "Forcing step # %s" % i process.communicate(input='\n') When I run the above code the '\n' is not communicated to the external process, and I never get beyond step #0. The code blocks at process.communicate() and does not proceed any further. Am I using the communicate() method incorrectly? Also how would I implement the "wait until the external process writes a new line" piece of functionality?
Starting and Controlling an External Process via STDIN/STDOUT with Python
I need to launch an external process that is to be controlled via messages sent back and forth via stdin and stdout. Using subprocess.Popen I am able to start the process but am unable to control the execution via stdin as I need to. The flow of what I'm trying to complete is to: Start the external process Iterate for some number of steps Tell the external process to complete the next processing step by writing a new-line character to its stdin Wait for the external process to signal it has completed the step by writing a new-line character to its stdout Close the external process's stdin to indicate to the external process that execution has completed. I have come up with the following so far: process = subprocess.Popen([PathToProcess], stdin=subprocess.PIPE, stdout=subprocess.PIPE) for i in xrange(StepsToComplete): print "Forcing step # %s" % i process.communicate(input='\n') When I run the above code the '\n' is not communicated to the external process, and I never get beyond step #0. The code blocks at process.communicate() and does not proceed any further. Am I using the communicate() method incorrectly? Also how would I implement the "wait until the external process writes a new line" piece of functionality?
[ "process.communicate(input='\\n') is wrong. If you will notice from the Python docs, it writes your string to the stdin of the child, then reads all output from the child until the child exits. From doc.python.org:\n\nPopen.communicate(input=None) Interact\n with process: Send data to stdin. Read\n data from stdout and stderr, until\n end-of-file is reached. Wait for\n process to terminate. The optional\n input argument should be a string to\n be sent to the child process, or None,\n if no data should be sent to the\n child.\n\nInstead, you want to just write to the stdin of the child. Then read from it in your loop.\nSomething more like:\nprocess=subprocess.Popen([PathToProcess],stdin=subprocess.PIPE,stdout=subprocess.PIPE);\nfor i in xrange(StepsToComplete):\n print \"Forcing step # %s\"%i\n process.stdin.write(\"\\n\")\n result=process.stdout.readline()\n\nThis will do something more like what you want.\n", "You could use Twisted, by using reactor.spawnProcess and LineReceiver.\n" ]
[ 9, 0 ]
[]
[]
[ "python", "subprocess" ]
stackoverflow_0001087799_python_subprocess.txt
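Building on the accepted answer, a sketch of the full handshake described in the question -- one newline per step, explicitly flushed so it reaches the child, and stdin closed at the end to signal completion (the child command and step count are placeholders):
import subprocess

steps_to_complete = 5  # however many steps your process needs
process = subprocess.Popen(["./external_tool"],  # placeholder path
                           stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE)

for i in xrange(steps_to_complete):
    process.stdin.write("\n")
    process.stdin.flush()            # pipes are buffered; push the newline through
    ack = process.stdout.readline()  # blocks until the child reports the step done

process.stdin.close()                # tells the child no more steps are coming
process.wait()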
Q: Running both python 2.6 and 3.1 on the same machine I'm currently toying with python at home and I'm planning to switch to python 3.1. The fact is that I have some scripts that use python 2.6 and I can't convert them since they use some modules that aren't available for python 3.1 atm. So I'm considering installing python 3.1 along with my python 2.6. I only found people on the internet that achieve that by compiling python from the source and use make altinstall instead of the classic make install. Anyway, I think compiling from the source is a bit complicated. I thought running two different versions of a program is easy on Linux (I run fedora 11 for the record). Any hint? Thanks for reading. A: On my Linux system (Ubuntu Jaunty), I have Python 2.5, 2.6 and 3.0 installed, just by installing the binary (deb) packages 'python2.5', 'python2.6' and 'python3.0' using apt-get. Perhaps Fedora packages them and names them as RPMs in a similar way. I can run the one I need from the command line just by typing e.g. python2.6. So I can also specify the one I want at the top of my script by putting e.g.: #!/usr/bin/python2.6 A: Download the python version you want to have as an alternative, untar it, and when you configure it, use --prefix=/my/alt/dir Cheers Nik A: You're not supposed to need to run them together. 2.6 already has all of the 3.0 features. You can enable those features with from __future__ import statements. It's much simpler run 2.6 (with some from __future__ import) until everything you need is in 3.x, then switch. A: Why do you need to use make install at all? After having done make to compile python 3.x, just move the python folder somewhere, and create a symlink to the python executable in your ~/bin directory. Add that directory to your path if it isn't already, and you'll have a working python development version ready to be used. As long as the symlink itself is not named python (I've named mine py), you'll never experience any clashes. An added benefit is that if you want to change to a new release of python 3.x, for example if you're following the beta releases, you simply download, compile and replace the folder with the new one. It's slightly messy, but the messiness is confined to one directory, and I find it much more convenient than thinking about altinstalls and the like.
Running both python 2.6 and 3.1 on the same machine
I'm currently toying with python at home and I'm planning to switch to python 3.1. The fact is that I have some scripts that use python 2.6 and I can't convert them since they use some modules that aren't available for python 3.1 atm. So I'm considering installing python 3.1 along with my python 2.6. I only found people on the internet that achieve that by compiling python from the source and use make altinstall instead of the classic make install. Anyway, I think compiling from the source is a bit complicated. I thought running two different versions of a program is easy on Linux (I run fedora 11 for the record). Any hint? Thanks for reading.
[ "On my Linux system (Ubuntu Jaunty), I have Python 2.5, 2.6 and 3.0 installed, just by installing the binary (deb) packages 'python2.5', 'python2.6' and 'python3.0' using apt-get. Perhaps Fedora packages them and names them as RPMs in a similar way.\nI can run the one I need from the command line just by typing e.g. python2.6. So I can also specify the one I want at the top of my script by putting e.g.:\n#!/usr/bin/python2.6\n\n", "Download the python version you want to have as an alternative, untar it, and when you configure it, use --prefix=/my/alt/dir\nCheers\nNik\n\n", "You're not supposed to need to run them together.\n2.6 already has all of the 3.0 features. You can enable those features with from __future__ import statements.\nIt's much simpler run 2.6 (with some from __future__ import) until everything you need is in 3.x, then switch.\n", "Why do you need to use make install at all? After having done make to compile python 3.x, just move the python folder somewhere, and create a symlink to the python executable in your ~/bin directory. Add that directory to your path if it isn't already, and you'll have a working python development version ready to be used. As long as the symlink itself is not named python (I've named mine py), you'll never experience any clashes.\nAn added benefit is that if you want to change to a new release of python 3.x, for example if you're following the beta releases, you simply download, compile and replace the folder with the new one.\nIt's slightly messy, but the messiness is confined to one directory, and I find it much more convenient than thinking about altinstalls and the like.\n" ]
[ 4, 2, 1, 0 ]
[]
[]
[ "linux", "python", "python_3.x" ]
stackoverflow_0001082692_linux_python_python_3.x.txt
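Whichever way the interpreters are installed, a script that genuinely needs one version can guard itself rather than fail later with a confusing SyntaxError; a small sketch:
import sys

# Fail fast with a readable message if run under the wrong interpreter.
if sys.version_info < (2, 6) or sys.version_info >= (3, 0):
    sys.exit("This script requires Python 2.6.x; found %s" % sys.version.split()[0])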
Q: Is there any way to get python omnicomplete to work with non-system modules in vim? The only thing I can get python omnicomplete to work with are system modules. I get nothing for help with modules in my site-packages or modules that I'm currently working on. A: Once I generated ctags for one of my site-packages, it started working for that package -- so I'm guessing that the omnicomplete function depends on ctags for non-sys modules. EDIT: Not true at all. Here's the problem -- poor testing on my part -- omnicomplete WAS working for parts of my project, just not most of it. The issue was that I'm working on a django project, and in order to import django.db, you need to have an environment variable set. Since I couldn't import django.db, any class that inherited from django.db, or any module that imported a class that inherited from django.db wouldn't complete. A: Just ran across this on Python reddit tonight: PySmell. Looks like what you're looking for. PySmell is a python IDE completion helper. It tries to statically analyze Python source code, without executing it, and generates information about a project’s structure that IDE tools can use. A: I get completion for my own modules in my PYTHONPATH or site-packages. I'm not sure what version of the pythoncomplete.vim script you're using, but you may want to make sure it's the latest. EDIT: Here are some examples of what I'm seeing on my system... This file (mymodule.py), I put in a directory in PYTHONPATH, and then in site-packages. Both times I was able to get the screenshot below. myvar = 'test' def myfunction(foo='test'): pass class MyClass(object): pass A: While it's important to note that you must properly set your PYTHONPATH environmental variable, per the previous answer, there is a notable bug in Vim which prevents omnicompletion from working when an import fails. As of Vim 7.2.79, this bug hasn't been fixed. A: Trouble-shooting tip: verify that the module you are trying to omni-complete can be imported by VIM. I had some syntactically correct Python that VIM didn't like: :python import {module-name} Traceback (most recent call last): File "<string>", line 1, in ? File "modulename/__init__.py", line 9 class empty_paranthesis(): ^ SyntaxError: invalid syntax Case-in-point, removing the parenthesis from my class definition allowed VIM to import the module, and subsequently OmniComplete on that module started to work. A: I think you're after the pydiction script. It lets you add your own stuff and site-packages to omni complete. While you're at it, add the following to your python.vim file... set iskeyword+=. This will let you auto-complete package functions e.g. if you enter... os.path. and then [CTRL][N], you'll get a list of the functions for os.path.
Is there any way to get python omnicomplete to work with non-system modules in vim?
The only thing I can get python omnicomplete to work with are system modules. I get nothing for help with modules in my site-packages or modules that I'm currently working on.
[ "Once I generated ctags for one of my site-packages, it started working for that package -- so I'm guessing that the omnicomplete function depends on ctags for non-sys modules.\nEDIT: Not true at all.\nHere's the problem -- poor testing on my part -- omnicomplete WAS working for parts of my project, just not most of it.\nThe issue was that I'm working on a django project, and in order to import django.db, you need to have an environment variable set. Since I couldn't import django.db, any class that inherited from django.db, or any module that imported a class that inherited from django.db wouldn't complete.\n", "Just ran across this on Python reddit tonight: PySmell. Looks like what you're looking for.\n\nPySmell is a python IDE completion helper.\nIt tries to statically analyze Python source code, without executing it, and generates information about a project’s structure that IDE tools can use.\n\n", "I get completion for my own modules in my PYTHONPATH or site-packages. I'm not sure what version of the pythoncomplete.vim script you're using, but you may want to make sure it's the latest.\nEDIT: Here are some examples of what I'm seeing on my system...\nThis file (mymodule.py), I put in a directory in PYTHONPATH, and then in site-packages. Both times I was able to get the screenshot below.\nmyvar = 'test'\n\ndef myfunction(foo='test'):\n pass\n\nclass MyClass(object):\n pass\n\n", "While it's important to note that you must properly set your PYTHONPATH environmental variable, per the previous answer, there is a notable bug in Vim which prevents omnicompletion from working when an import fails. As of Vim 7.2.79, this bug hasn't been fixed.\n", "Trouble-shooting tip: verify that the module you are trying to omni-complete can be imported by VIM. I had some syntactically correct Python that VIM didn't like:\n:python import {module-name}\n Traceback (most recent call last):\n File \"<string>\", line 1, in ?\n File \"modulename/__init__.py\", line 9\n class empty_paranthesis():\n ^\n SyntaxError: invalid syntax\n\nCase-in-point, removing the parenthesis from my class definition allowed VIM to import the module, and subsequently OmniComplete on that module started to work.\n", "I think you're after the pydiction script. It lets you add your own stuff and site-packages to omni complete. \nWhile you're at it, add the following to your python.vim file...\n set iskeyword+=.\n\nThis will let you auto-complete package functions e.g. if you enter...\n os.path.\n\nand then [CTRL][N], you'll get a list of the functions for os.path.\n" ]
[ 3, 2, 2, 2, 2, 0 ]
[]
[]
[ "omnicomplete", "python", "vim" ]
stackoverflow_0000199180_omnicomplete_python_vim.txt
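Following the trouble-shooting tip above, this snippet can be pasted into Vim's :python prompt (or run standalone) to check whether the module omnicomplete chokes on is importable from the interpreter Vim embeds; mymodule is a placeholder:
import sys
print '\n'.join(sys.path)  # is your project or site-packages directory listed?

try:
    import mymodule        # placeholder for the module that fails to complete
except Exception, e:       # ImportError, SyntaxError, missing env vars, ...
    print 'import failed: %r' % (e,)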
Q: Run (remote) php script from (local) python script How do I make python (local) run php script on a remote server? I don't want to process its output with python script or anything, just execute it and meanwhile quit python (while php script will be already working and doing its job). edit: What I'm trying to achieve: python script connects to ftp server and uploads php script (I already have this part of code) it runs php script (that's the part of the code I'm asking about) python script continues to do something else python script quits (but the php script probably still hasn't finished its work, so I don't want it to end when the python script exits) python script quits, php script still continues its task (I don't plan to do anything with php output in python - python just has to upload php script and make it start working) Hope I'm more clear now. Sorry if my question wasn't specific enough. another edit: Also please note that I don't have shell access on remote server. I have only ftp and control panel (cpanel); trying to use ftp for it. A: os.system("php yourscript.php") Another alternative would be: # will return new process' id os.spawnlp(os.P_NOWAIT, "php", "php", "yourscript.php") You can check all os module documentation here. A: If python is on a different physical machine than the PHP script, I'd make sure the PHP script is web-accessible and use urllib2 to call to that url import urllib2 urllib2.urlopen("http://remotehost.com/myscript.php") A: I'll paraphrase the answer to How do I include a PHP script in Python?. import subprocess def php(script_path): p = subprocess.Popen(['php', script_path] )
Run (remote) php script from (local) python script
How do I make python (local) run php script on a remote server? I don't want to process its output with python script or anything, just execute it and meanwhile quit python (while php script will be already working and doing its job). edit: What I'm trying to achieve: python script connects to ftp server and uploads php script (I already have this part of code) it runs php script (that's the part of the code I'm asking about) python script continues to do something else python script quits (but the php script probably still hasn't finished its work, so I don't want it to end when the python script exits) python script quits, php script still continues its task (I don't plan to do anything with php output in python - python just has to upload php script and make it start working) Hope I'm more clear now. Sorry if my question wasn't specific enough. another edit: Also please note that I don't have shell access on remote server. I have only ftp and control panel (cpanel); trying to use ftp for it.
[ "os.system(\"php yourscript.php\")\n\nAnother alternative would be:\n# will return new process' id\nos.spawnlp(os.P_NOWAIT, \"php\", \"php\", \"yourscript.php\")\n\nYou can check all os module documentation here.\n", "If python is on a different physical machine than the PHP script, I'd make sure the PHP script is web-accessible and use urllib2 to call to that url\nimport urllib2\n\nurllib2.urlopen(\"http://remotehost.com/myscript.php\")\n\n", "I'll paraphrase the answer to How do I include a PHP script in Python?.\nimport subprocess\n\ndef php(script_path):\n p = subprocess.Popen(['php', script_path] )\n\n" ]
[ 5, 4, 0 ]
[]
[]
[ "php", "python" ]
stackoverflow_0001104064_php_python.txt
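Putting the pieces together for the no-shell-access setup described in the question -- upload over FTP, kick the script off over HTTP, and move on (host, credentials, and paths are placeholders; whether the PHP keeps running after the client disconnects depends on the server, e.g. PHP's ignore_user_abort setting):
import socket
import ftplib
import urllib2

# 1. Upload the script (the asker already has this part working).
ftp = ftplib.FTP("ftp.example.com", "user", "password")
ftp.storbinary("STOR public_html/myscript.php", open("myscript.php", "rb"))
ftp.quit()

# 2. Trigger it via its public URL; don't wait around for a long-running
#    script (the timeout argument needs Python 2.6+).
try:
    urllib2.urlopen("http://example.com/myscript.php", timeout=5)
except (urllib2.URLError, socket.timeout):
    pass  # we only needed to kick it off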
Q: Why would Django's cache work with locmem but fail with memcached? Using Django's cache with locmem (with simple Python classes as values stored in lists/tuples/maps) works perfectly but does not work with memcached. Only a fraction of the keys (despite ample memory allocated and large timeouts) make their way into memcached, and none of them appear to have any associated value. When they are retrieved, no value is returned and they are removed from the cache. Forcing a value of "hi" makes those that appear in the cache retrievable, but does not account for why most of the keys are simply not there. Questions: Why do only certain keys end up in memcached and others not, even when all values are set to "hi"? Is there any way to enable more logging or error reporting? (everything seems to fail silently) Why do the Python classes serialize correctly to locmem but do not end up in Memcached? A: To find out what's going on, run memcached -vv 2>/tmp/mc_debug_log (I'm assuming you're on some sort of Unixy system) and run it for a short time -- you'll find detailed information in that logfile when you're done. Depending on what Python interface to memcached you're using, it may be that only strings are supported as values (as in the StringClient module in cmemcache) or that all pickleable objects are (with the overhead of pickling and unpickling of course), as in the more general Client module in the same cmemcache, GAE's memcache, and python-memcached; if you're only able to use strings as values, presumably you're using an interface of the former type? A: Apparently, keys can not have spaces in them: http://code.djangoproject.com/ticket/6447 http://blog.pos.thum.us/2009/05/22/memcached-keys-cant-have-spaces-in-them/ As soon as I used a key with a space in it, everything became unpredictable.
Why would Django's cache work with locmem but fail with memcached?
Using Django's cache with locmem (with simple Python classes as values stored in lists/tuples/maps) works perfectly but does not work with memcached. Only a fraction of the keys (despite ample memory allocated and large timeouts) make their way into memcached, and none of them appear to have any associated value. When they are retrieved, no value is returned and they are removed from the cache. Forcing a value of "hi" makes those that appear in the cache retrievable, but does not account for why most of the keys are simply not there. Questions: Why do only certain keys end up in memcached and others not, even when all values are set to "hi"? Is there any way to enable more logging or error reporting? (everything seems to fail silently) Why do the Python classes serialize correctly to locmem but do not end up in Memcached?
[ "To find out what's going on, run memcached -vv 2>/tmp/mc_debug_log (I'm assuming you're on some sort of Unixy system) and run it for a short time -- you'll find detailed information in that logfile when you're done.\nDepending on what Python interface to memcached you're using, it may be that only strings are supported as values (as in the StringClient module in cmemcache) or that all pickleable objects are (with the overhead of pickling and unpickling of course), as in the more general Client module in the same cmemcache, GAE's memcache, and python-memcached; if you're only able to use strings as values, presumably you're using an interface of the former type?\n", "Apparently, keys can not have spaces in them:\nhttp://code.djangoproject.com/ticket/6447\nhttp://blog.pos.thum.us/2009/05/22/memcached-keys-cant-have-spaces-in-them/\nAs soon as I used a key with a space in it, everything became unpredictable.\n" ]
[ 3, 3 ]
[]
[]
[ "caching", "django", "memcached", "python" ]
stackoverflow_0001101049_caching_django_memcached_python.txt
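Given the spaces-in-keys gotcha from the second answer, a small defensive sketch that normalises keys before they reach memcached (the key scheme is just an example, not part of the thread):
import hashlib
from django.core.cache import cache

def safe_key(raw_key):
    # memcached keys must be short and contain no whitespace or control
    # characters, so hash anything that would violate that.
    if len(raw_key) > 250 or any(c.isspace() for c in raw_key):
        return 'hashed:' + hashlib.md5(raw_key).hexdigest()
    return raw_key

cache.set(safe_key('Played Mirrors Edge stats'), {'score': 42}, 300)
print cache.get(safe_key('Played Mirrors Edge stats'))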
Q: Pythonic way to get some rows of a matrix I was thinking about some code that I wrote a few years ago in Python, at some point it had to get just some elements, by index, of a list of lists. I remember I did something like this: def getRows(m, row_indices): tmp = [] for i in row_indices: tmp.append(m[i]) return tmp Now that I've learnt a little bit more since then, I'd use a list comprehension like this: [m[i] for i in row_indices] But I'm still wondering if there's an even more pythonic way to do it. Any ideas? I would also like to know about alternatives with numpy or any other array libraries. A: It's the clean and obvious way. So, I'd say it doesn't get more Pythonic than that. A: It's worth looking at NumPy for its slicing syntax. Scroll down in the linked page until you get to "Indexing, Slicing and Iterating". A: As Curt said, it seems that Numpy is a good tool for this. Here's an example, from numpy import * a = arange(16).reshape((4,4)) b = a[:, [1,2]] c = a[[1,2], :] print a print b print c gives [[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11] [12 13 14 15]] [[ 1 2] [ 5 6] [ 9 10] [13 14]] [[ 4 5 6 7] [ 8 9 10 11]]
Pythonic way to get some rows of a matrix
I was thinking about some code that I wrote a few years ago in Python, at some point it had to get just some elements, by index, of a list of lists. I remember I did something like this: def getRows(m, row_indices): tmp = [] for i in row_indices: tmp.append(m[i]) return tmp Now that I've learnt a little bit more since then, I'd use a list comprehension like this: [m[i] for i in row_indices] But I'm still wondering if there's an even more pythonic way to do it. Any ideas? I would also like to know about alternatives with numpy or any other array libraries.
[ "It's the clean and obvious way. So, I'd say it doesn't get more Pythonic than that.\n", "It's worth looking at NumPy for its slicing syntax. Scroll down in the linked page until you get to \"Indexing, Slicing and Iterating\".\n", "As Curt said, it seems that Numpy is a good tool for this. Here's an example,\nfrom numpy import *\n\na = arange(16).reshape((4,4))\nb = a[:, [1,2]]\nc = a[[1,2], :]\n\nprint a\nprint b\nprint c\n\ngives\n[[ 0 1 2 3]\n [ 4 5 6 7]\n [ 8 9 10 11]\n [12 13 14 15]]\n[[ 1 2]\n [ 5 6]\n [ 9 10]\n [13 14]]\n[[ 4 5 6 7]\n [ 8 9 10 11]]\n\n" ]
[ 4, 4, 2 ]
[]
[]
[ "coding_style", "filtering", "list", "python" ]
stackoverflow_0001105101_coding_style_filtering_list_python.txt
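One more plain-Python option alongside the list comprehension and the NumPy slicing shown above -- operator.itemgetter, which reads nicely when picking several rows (note it returns the bare row, not a tuple, when given a single index):
from operator import itemgetter

m = [[0, 1], [2, 3], [4, 5], [6, 7]]
row_indices = [1, 3]

rows = list(itemgetter(*row_indices)(m))
print rows  # [[2, 3], [6, 7]]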
Q: python - Problem storing Unicode character to MySQL with Django I have the string u"Played Mirror's Edge\u2122" Which should be shown as Played Mirror's Edge™ But that is another issue. My problem at hand is that I'm putting it in a model and then trying to save it to a database. AKA: a = models.Achievement(name=u"Played Mirror's Edge\u2122") a.save() And I'm getting : 'ascii' codec can't encode character u'\u2122' in position 13: ordinal not in range(128) full stack trace (as requested) : Traceback: File "/var/home/ptarjan/django/mysite/django/core/handlers/base.py" in get_response 86. response = callback(request, *callback_args, **callback_kwargs) File "/var/home/ptarjan/django/mysite/yourock/views/alias.py" in import_all 161. types.import_all(type, alias) File "/var/home/ptarjan/django/mysite/yourock/types/types.py" in import_all 52. return modules[type].import_all(siteAlias, alias) File "/var/home/ptarjan/django/mysite/yourock/types/xbox.py" in import_all 117. achiever = self.add_achievement(dict, siteAlias, alias) File "/var/home/ptarjan/django/mysite/yourock/types/base_profile.py" in add_achievement 130. owner = siteAlias, File "/var/home/ptarjan/django/mysite/django/db/models/query.py" in get 304. num = len(clone) File "/var/home/ptarjan/django/mysite/django/db/models/query.py" in __len__ 160. self._result_cache = list(self.iterator()) File "/var/home/ptarjan/django/mysite/django/db/models/query.py" in iterator 275. for row in self.query.results_iter(): File "/var/home/ptarjan/django/mysite/django/db/models/sql/query.py" in results_iter 206. for rows in self.execute_sql(MULTI): File "/var/home/ptarjan/django/mysite/django/db/models/sql/query.py" in execute_sql 1734. cursor.execute(sql, params) File "/var/home/ptarjan/django/mysite/django/db/backends/util.py" in execute 19. return self.cursor.execute(sql, params) File "/var/home/ptarjan/django/mysite/django/db/backends/mysql/base.py" in execute 83. return self.cursor.execute(query, args) File "/usr/lib/pymodules/python2.5/MySQLdb/cursors.py" in execute 151. query = query % db.literal(args) File "/usr/lib/pymodules/python2.5/MySQLdb/connections.py" in literal 247. return self.escape(o, self.encoders) File "/usr/lib/pymodules/python2.5/MySQLdb/connections.py" in string_literal 180. return db.string_literal(obj) Exception Type: UnicodeEncodeError at /import/xbox:bob Exception Value: 'ascii' codec can't encode character u'\u2122' in position 13: ordinal not in range(128) And the pertinent part of the model : class Achievement(MyBaseModel): name = models.CharField(max_length=100, help_text="A human readable achievement name") I'm using a MySQL backend with this in my settings.py DEFAULT_CHARSET = 'utf-8' So basically, how the heck should I deal with all this unicode stuff? I was hoping it would all "just work" if I stayed away from funny character sets and stuck to UTF8. Alas, it seems to not be just that easy. A: Thank you to everyone who was posting here. It really helps my unicode knowledge (and hopefully other people learned something). We seemed to be all barking up the wrong tree since I tried to simplify my problem and didn't give ALL information. It seems that I wasn't using "REAL" unicode strings, but rather BeautifulSoup.NavigableString which repr themselves as unicode strings. So all the printouts looked like unicode, but they weren't. Somewhere deep in the MySQLDB library they couldn't deal with these strings. 
This worked : >>> Achievement.objects.get(name = u"Mirror's Edge\u2122") <Achievement: Mirror's Edge™> On the other hand : >>> b = BeautifulSoup(u"<span>Mirror's Edge\u2122</span>").span.string >>> Achievement.objects.get(name = b) ... Exceptoins ... UnicodeEncodeError: 'ascii' codec can't encode character u'\u2122' in position 13: ordinal not in range(128) But this works : >>> Achievement.objects.get(name = unicode(b)) <Achievement: Mirror's Edge™> So, thanks again for all the unicode help, I'm sure it will come in handy. But for now ... WARNING : BeautifulSoup doesn't return REAL unicode strings and should be coerced with unicode() before doing anything meaningful with them. A: A few remarks: Python 2.x has two string types "str", which is basically a byte array (so you can store anything you like in it) "unicode" , which is UCS2/UCS4 encoded unicode internally Instances of these types are considered "decoded" data. The internal representation is the reference, so you "decode" external data into it, and "encode" into some external format. A good strategy is to decode as early as possible when data enters the system, and encode as late as possible. Try to use unicode for the strings in your system as much as possible. (I disagree with Nikolai in this regard). This encoding aspect applies to Nicolai's answer. He takes the original unicode string, and encodes it into utf-8. But this doesn't solve the problem (at least not generally), because the resulting byte buffer can still contain bytes outside the range(127) (I haven't checked for \u2122), which means you will hit the same exception again. Still Nicolai's analysis holds that you are passing a unicode string, but somewhere down in the system this is regarded a str instance. It suffices if somewhere the str() function is applied to your unicode argument. In that case Python uses the so called default encoding which is ascii if you don't change it. There is a function sys.setdefaultencoding which you can use to switch to e.g. utf-8, but the function is only available in a limited context, so you cannot easily use it in application code. My feeling is the problem is somewhere deeper in the layers you are calling. Unfortunately, I cannot comment on Django or MySQL/SQLalchemy, but I wonder if you could specify a unicode type when declaring the 'name' attribute in your model. It would be good DB practice to handle type information on the field level. Maybe there is an alternative to CharField?! And yes, you can safely embed a single quote (') in a double quoted (") string, and vice versa. A: You are using strings of type 'unicode'. If your model or SQL backend does not support them or does not know how to convert to UTF-8, simply do the conversion yourself. Stick with simple strings (python type str) and convert like in a = models.Achievement(name=u"Played Mirror's Edge\u2122".encode("UTF-8")) A: I was working on this yesterday, and I found that adding "charset=utf8" and "use_unicode=1" to the connection string made it work (using SQLAlchemy, guess it's the same problem). So my string looks like: "mysql://user:pass@host:3306/database?use_unicode=1&charset=utf8" A: I agree with Nikolai. I already encountered problem to use UTF-8, even in pure Python (2.5). I finally used the unicode function(?): entry = unicode(sys.stdin, ENCODING) ENCODING was depending on the locale, if I remember well: import sys, locale ENCODING = locale.getdefaultlocale()[1] DEFAULT_ENCODING = sys.getdefaultencoding() Maybe take a look at the Python Unicode HOWTO ? 
A: I was having similar problems with mysql and postgres but no problems with sqlite. This is how I solved the problem with postgres (didn't test this trick with mysql but I'd assume it would solve it as well). In the file where you are dealing with the unicode string do a from django.utils.safestring import SafeUnicode and assume unistr is the variable containing the string, do a unistr = SafeUnicode(unistr) In my case I was scraping from a website; the original code which was giving problems (ht is a beautifulsoup object):- keyword = ht.a.string the fix:- keyword = SafeUnicode(ht.a.string) I don't know why or what SafeUnicode is doing, all I know is it solved my problems.
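To make the WARNING above concrete, here is a minimal Python 2 / BeautifulSoup 3 sketch of the coercion; the markup, helper name, and assertion are illustrative, not taken from the original code:

# -*- coding: utf-8 -*-
# NavigableString is a unicode subclass, but some DB drivers choke on
# the subclass; unicode() flattens it to a plain unicode object.
from BeautifulSoup import BeautifulSoup   # BeautifulSoup 3 import style

def to_plain_unicode(value):
    return None if value is None else unicode(value)

soup = BeautifulSoup(u"<span>Mirror's Edge\u2122</span>")
name = to_plain_unicode(soup.span.string)
assert type(name) is unicode   # a plain unicode object, safe to hand to the ORM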
python - Problem storing Unicode character to MySQL with Django
I have the string u"Played Mirror's Edge\u2122" Which should be shown as Played Mirror's Edge™ But that is another issue. My problem at hand is that I'm putting it in a model and then trying to save it to a database. AKA: a = models.Achievement(name=u"Played Mirror's Edge\u2122") a.save() And I'm getting: 'ascii' codec can't encode character u'\u2122' in position 13: ordinal not in range(128) full stack trace (as requested): Traceback: File "/var/home/ptarjan/django/mysite/django/core/handlers/base.py" in get_response 86. response = callback(request, *callback_args, **callback_kwargs) File "/var/home/ptarjan/django/mysite/yourock/views/alias.py" in import_all 161. types.import_all(type, alias) File "/var/home/ptarjan/django/mysite/yourock/types/types.py" in import_all 52. return modules[type].import_all(siteAlias, alias) File "/var/home/ptarjan/django/mysite/yourock/types/xbox.py" in import_all 117. achiever = self.add_achievement(dict, siteAlias, alias) File "/var/home/ptarjan/django/mysite/yourock/types/base_profile.py" in add_achievement 130. owner = siteAlias, File "/var/home/ptarjan/django/mysite/django/db/models/query.py" in get 304. num = len(clone) File "/var/home/ptarjan/django/mysite/django/db/models/query.py" in __len__ 160. self._result_cache = list(self.iterator()) File "/var/home/ptarjan/django/mysite/django/db/models/query.py" in iterator 275. for row in self.query.results_iter(): File "/var/home/ptarjan/django/mysite/django/db/models/sql/query.py" in results_iter 206. for rows in self.execute_sql(MULTI): File "/var/home/ptarjan/django/mysite/django/db/models/sql/query.py" in execute_sql 1734. cursor.execute(sql, params) File "/var/home/ptarjan/django/mysite/django/db/backends/util.py" in execute 19. return self.cursor.execute(sql, params) File "/var/home/ptarjan/django/mysite/django/db/backends/mysql/base.py" in execute 83. return self.cursor.execute(query, args) File "/usr/lib/pymodules/python2.5/MySQLdb/cursors.py" in execute 151. query = query % db.literal(args) File "/usr/lib/pymodules/python2.5/MySQLdb/connections.py" in literal 247. return self.escape(o, self.encoders) File "/usr/lib/pymodules/python2.5/MySQLdb/connections.py" in string_literal 180. return db.string_literal(obj) Exception Type: UnicodeEncodeError at /import/xbox:bob Exception Value: 'ascii' codec can't encode character u'\u2122' in position 13: ordinal not in range(128) And the pertinent part of the model: class Achievement(MyBaseModel): name = models.CharField(max_length=100, help_text="A human readable achievement name") I'm using a MySQL backend with this in my settings.py DEFAULT_CHARSET = 'utf-8' So basically, how the heck should I deal with all this unicode stuff? I was hoping it would all "just work" if I stayed away from funny character sets and stuck to UTF8. Alas, it seems to not be just that easy.
[ "Thank you to everyone who was posting here. It really helps my unicode knowledge (and hoepfully other people learned something).\nWe seemed to be all barking up the wrong tree since I tried to simplify my problem and didn't give ALL information. It seems that I wasn't using \"REAL\" unicode strings, but rather BeautifulSoup.NavigableString which repr themselves as unicode strings. So all the printouts looked like unicode, but they weren't.\nSomewhere deep in the MySQLDB library they couldn't deal with these strings. \nThis worked :\n>>> Achievement.objects.get(name = u\"Mirror's Edge\\u2122\")\n<Achievement: Mirror's Edge™>\n\nOn the other hand :\n>>> b = BeautifulSoup(u\"<span>Mirror's Edge\\u2122</span>\").span.string\n>>> Achievement.objects.get(name = b)\n... Exceptoins ...\nUnicodeEncodeError: 'ascii' codec can't encode character u'\\u2122' in position 13: ordinal not in range(128)\n\nBut this works :\n>>> Achievement.objects.get(name = unicode(b))\n<Achievement: Mirror's Edge™>\n\nSo, thanks again for all the unicode help, I'm sure it will come in handy. But for now ...\nWARNING : BeautifulSoup doesn't return REAL unicode strings and should be coerced with unicode() before doing anything meaningful with them.\n", "A few remarks: \n\nPython 2.x has two string types\n\n\"str\", which is basically a byte array (so you can store anything you like in it)\n\"unicode\" , which is UCS2/UCS4 encoded unicode internally\n\nInstances of these types are considered \"decoded\" data. The internal representation is the reference, so you \"decode\" external data into it, and \"encode\" into some external format.\nA good strategy is to decode as early as possible when data enters the system, and encode as late as possible. Try to use unicode for the strings in your system as much as possible. (I disagree with Nikolai in this regard).\nThis encoding aspect applies to Nicolai's answer. He takes the original unicode string, and encodes it into utf-8. But this doesn't solve the problem (at least not generally), because the resulting byte buffer can still contain bytes outside the range(127) (I haven't checked for \\u2122), which means you will hit the same exception again.\nStill Nicolai's analysis holds that you are passing a unicode string, but somewhere down in the system this is regarded a str instance. It suffices if somewhere the str() function is applied to your unicode argument.\nIn that case Python uses the so called default encoding which is ascii if you don't change it. There is a function sys.setdefaultencoding which you can use to switch to e.g. utf-8, but the function is only available in a limited context, so you cannot easily use it in application code.\nMy feeling is the problem is somewhere deeper in the layers you are calling. Unfortunately, I cannot comment on Django or MySQL/SQLalchemy, but I wonder if you could specify a unicode type when declaring the 'name' attribute in your model. It would be good DB practice to handle type information on the field level. Maybe there is an alternative to CharField?!\nAnd yes, you can safely embed a single quote (') in a double quoted (\") string, and vice versa.\n\n", "You are using strings of type 'unicode'. If your model or SQL backend does not support them or does not know how to convert to UTF-8, simply do the conversion yourself. 
Stick with simple strings (python type str) and convert like in\na = models.Achievement(name=u\"Played Mirror's Edge\\u2122\".encode(\"UTF-8\"))\n\n", "I was working on this yesterday, and I found that adding \"charset=utf8\" and \"use_unicode=1\" to the connection string made it work (using SQLAlchemy, guess it's the same problem).\nSo my string looks like:\n\"mysql://user:pass@host:3306/database?use_unicode=1&charset=utf8\"\n", "I agree with Nikolai. I already encountered problem to use UTF-8, even in pure Python (2.5).\nI finally used the unicode function(?): \nentry = unicode(sys.stdin, ENCODING)\n\nENCODING was depending on the locale, if I remember well:\nimport sys, locale\n\nENCODING = locale.getdefaultlocale()[1]\nDEFAULT_ENCODING = sys.getdefaultencoding()\n\nMaybe take a look at the Python Unicode HOWTO ?\n", "I was having similar problems with mysql and postgres but no problems with sqllite.\nThis is how i solved the problem with postgres (didnt test this trick with mysql but id asume it would solve it as well)\nin the file where u are dealing with the unicode string do a \nfrom django.utils.safestring import SafeUnicode\n\nand assume unistr is the variable containing the string, do a \nunistr = SafeUnicode(unistr)\n\nin my case i was scraping from a website\noriginal code which was giving problems (ht is beautifulsoup object):- \nkeyword = ht.a.string\n\nthe fix:-\nkeyword = SafeUnicode(ht.a.string)\n\nI dont know why or what SafeUnicode is doing, all i know is it solved my problems.\n" ]
[ 12, 4, 3, 1, 0, 0 ]
[ "To me the apostrophe looks strange, should it not be escapded like so:\nu\"Played Mirror\\'s Edge\\u2122\"\n\n" ]
[ -1 ]
[ "django", "django_models", "mysql", "python", "unicode" ]
stackoverflow_0001102465_django_django_models_mysql_python_unicode.txt
Q: How are these type of python decorators written? I'd like to write a decorator that would limit the number of times a function can be executed, something along the following syntax : @max_execs(5) def my_method(*a,**k): # do something here pass I think it's possible to write this type of decorator, but I don't know how. I think a function won't be this decorator's first argument, right? I'd like a "plain decorator" implementation, not some class with a call method. The reason for this is to learn how they are written. Please explain the syntax, and how that decorator works. A: This is what I whipped up. It doesn't use a class, but it does use function attributes: def max_execs(n=5): def decorator(fn): fn.max = n fn.called = 0 def wrapped(*args, **kwargs): fn.called += 1 if fn.called <= fn.max: return fn(*args, **kwargs) else: # Replace with your own exception, or something # else that you want to happen when the limit # is reached raise RuntimeError("max executions exceeded") return wrapped return decorator max_execs returns a function called decorator, which in turn returns wrapped. The decorator stores the max execs and current number of execs in two function attributes, which then get checked in wrapped. Translation: When using the decorator like this: @max_execs(5) def f(): print "hi!" You're basically doing something like this: f = max_execs(5)(f) A: Decorator is merely a callable that transforms a function into something else. In your case, max_execs(5) must be a callable that transforms a function into another callable object that will count and forward the calls. class helper: def __init__(self, i, fn): self.i = i self.fn = fn def __call__(self, *args, **kwargs): if self.i > 0: self.i = self.i - 1 return self.fn(*args, **kwargs) class max_execs: def __init__(self, i): self.i = i def __call__(self, fn): return helper(self.i, fn) I don't see why you would want to limit yourself to a function (and not a class). But if you really want to... def max_execs(n): return lambda fn, i=n: helper(i, fn) A: There are two ways of doing it. The object-oriented way is to make a class: class max_execs: def __init__(self, max_executions): self.max_executions = max_executions self.executions = 0 def __call__(self, func): @wraps(func) def maybe(*args, **kwargs): if self.executions < self.max_executions: self.executions += 1 return func(*args, **kwargs) else: print "fail" return maybe See this question for an explanation of wraps. I prefer the above OOP approach for this kind of decorator, since you've basically got a private count variable tracking the number of executions. However, the other approach is to use a closure, such as def max_execs(max_executions): executions = [0] def actual_decorator(func): @wraps(func) def maybe(*args, **kwargs): if executions[0] < max_executions: executions[0] += 1 return func(*args, **kwargs) else: print "fail" return maybe return actual_decorator This involves three functions. The max_execs function is given a parameter for the number of executions and returns a decorator that will restrict you to that many calls. That function, the actual_decorator, does the same thing as our __call__ method in the OOP example. The only weirdness is that since we don't have a class with private variables, we need to mutate the executions variable which is in the outer scope of our closure.
A: Without relying to a state in a class, you have to save the state (count) in the function itself: def max_execs(count): def new_meth(meth): meth.count = count def new(*a,**k): meth.count -= 1 print meth.count if meth.count>=0: return meth(*a,**k) return new return new_meth @max_execs(5) def f(): print "invoked" [f() for _ in range(10)] It gives: 5 invoked 4 invoked 3 invoked 2 invoked 1 invoked 0 -1 -2 -3 -4 A: This method does not modify function internals, instead wraps it into a callable object. Using class slows down execution by ~20% vs using the patched function! def max_execs(n=1): class limit_wrapper: def __init__(self, fn, max): self.calls_left = max self.fn = fn def __call__(self,*a,**kw): if self.calls_left > 0: self.calls_left -= 1 return self.fn(*a,**kw) raise Exception("max num of calls is %d" % self.i) def decorator(fn): return limit_wrapper(fn,n) return decorator @max_execs(2) def fun(): print "called" A: I know you said you didn't want a class, but unfortunately that's the only way I can think of how to do it off the top of my head. class mymethodwrapper: def __init__(self): self.maxcalls = 0 def mymethod(self): self.maxcalls += 1 if self.maxcalls > 5: return #rest of your code print "Code fired!" Fire it up like this a = mymethodwrapper for x in range(1000): a.mymethod() The output would be: >>> Code fired! >>> Code fired! >>> Code fired! >>> Code fired! >>> Code fired!
How are these type of python decorators written?
I'd like to write a decorator that would limit the number of times a function can be executed, something along the following syntax : @max_execs(5) def my_method(*a,**k): # do something here pass I think it's possible to write this type of decorator, but I don't know how. I think a function won't be this decorator's first argument, right? I'd like a "plain decorator" implementation, not some class with a call method. The reason for this is to learn how they are written. Please explain the syntax, and how that decorator works.
[ "This is what I whipped up. It doesn't use a class, but it does use function attributes:\ndef max_execs(n=5):\n def decorator(fn):\n fn.max = n\n fn.called = 0\n def wrapped(*args, **kwargs):\n fn.called += 1\n if fn.called <= fn.max:\n return fn(*args, **kwargs)\n else:\n # Replace with your own exception, or something\n # else that you want to happen when the limit\n # is reached\n raise RuntimeError(\"max executions exceeded\")\n return wrapped\n return decorator\n\nmax_execs returns a functioned called decorator, which in turn returns wrapped. decoration stores the max execs and current number of execs in two function attributes, which then get checked in wrapped.\nTranslation: When using the decorator like this:\n@max_execs(5)\ndef f():\n print \"hi!\"\n\nYou're basically doing something like this:\nf = max_execs(5)(f)\n\n", "Decorator is merely a callable that transforms a function into something else. In your case, max_execs(5) must be a callable that transforms a function into another callable object that will count and forward the calls.\nclass helper:\n def __init__(self, i, fn):\n self.i = i\n self.fn = fn\n def __call__(self, *args, **kwargs):\n if self.i > 0:\n self.i = self.i - 1\n return self.fn(*args, **kwargs)\n\nclass max_execs:\n def __init__(self, i):\n self.i = i\n def __call__(self, fn):\n return helper(self.i, fn)\n\nI don't see why you would want to limit yourself to a function (and not a class). But if you really want to...\ndef max_execs(n):\n return lambda fn, i=n: return helper(i, fn)\n\n", "There are two ways of doing it. The object-oriented way is to make a class:\nclass max_execs:\n def __init__(self, max_executions):\n self.max_executions = max_executions\n self.executions = 0\n\n def __call__(self, func):\n @wraps(func)\n def maybe(*args, **kwargs):\n if self.executions < self.max_executions:\n self.executions += 1\n return func(*args, **kwargs)\n else:\n print \"fail\"\n return maybe\n\nSee this question for an explanation of wraps.\nI prefer the above OOP approach for this kind of decorator, since you've basically got a private count variable tracking the number of executions. However, the other approach is to use a closure, such as\ndef max_execs(max_executions):\n executions = [0]\n def actual_decorator(func):\n @wraps(func)\n def maybe(*args, **kwargs):\n if executions[0] < max_executions:\n executions[0] += 1\n return func(*args, **kwargs)\n else:\n print \"fail\"\n return maybe\n return actual_decorator\n\nThis involved three functions. The max_execs function is given a parameter for the number of executions and returns a decorator that will restrict you to that many calls. That function, the actual_decorator, does the same thing as our __call__ method in the OOP example. The only weirdness is that since we don't have a class with private variables, we need to mutate the executions variable which is in the outer scope of our closure. 
Python 3.0 supports this with the nonlocal statement, but in Python 2.6 or earlier, we need to wrap our executions count in a list so that it can be mutated.\n", "Without relying to a state in a class, you have to save the state (count) in the function itself:\ndef max_execs(count):\n def new_meth(meth):\n meth.count = count\n def new(*a,**k):\n meth.count -= 1\n print meth.count \n if meth.count>=0:\n return meth(*a,**k)\n return new\n return new_meth\n\n@max_execs(5)\ndef f():\n print \"invoked\"\n\n[f() for _ in range(10)]\n\nIt gives:\n5\ninvoked\n4\ninvoked\n3\ninvoked\n2\ninvoked\n1\ninvoked\n0\n-1\n-2\n-3\n-4\n\n", "This method does not modify function internals, instead wraps it into a callable object.\nUsing class slows down execution by ~20% vs using the patched function!\ndef max_execs(n=1):\n class limit_wrapper:\n def __init__(self, fn, max):\n self.calls_left = max\n self.fn = fn\n def __call__(self,*a,**kw):\n if self.calls_left > 0:\n self.calls_left -= 1\n return self.fn(*a,**kw)\n raise Exception(\"max num of calls is %d\" % self.i)\n\n\n def decorator(fn):\n return limit_wrapper(fn,n)\n\n return decorator\n\n@max_execs(2)\ndef fun():\n print \"called\"\n\n", "I know you said you didn't want a class, but unfortunately that's the only way I can think of how to do it off the top of my head.\nclass mymethodwrapper:\n def __init__(self):\n self.maxcalls = 0\n def mymethod(self):\n self.maxcalls += 1\n if self.maxcalls > 5:\n return\n #rest of your code\n print \"Code fired!\"\n\nFire it up like this\na = mymethodwrapper\nfor x in range(1000):\n a.mymethod()\n\nThe output would be:\n>>> Code fired!\n>>> Code fired!\n>>> Code fired!\n>>> Code fired!\n>>> Code fired!\n\n" ]
[ 12, 4, 3, 2, 1, 0 ]
[]
[]
[ "decorator", "language_features", "python" ]
stackoverflow_0001106223_decorator_language_features_python.txt
Q: How does os.path map to posixpath.pyc and not os/path.py? What is the underlying mechanism in Python that handles such "aliases"? >>> import os.path >>> os.path.__file__ '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/posixpath.pyc' A: Taken from os.py on CPython 2.6: sys.modules['os.path'] = path from os.path import (curdir, pardir, sep, pathsep, defpath, extsep, altsep, devnull) path is defined earlier as the platform-specific module: if 'posix' in _names: name = 'posix' linesep = '\n' from posix import * try: from posix import _exit except ImportError: pass import posixpath as path import posix __all__.extend(_get_exports_list(posix)) del posix elif 'nt' in _names: # ... A: Perhaps os uses import as? import posixpath as path
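The trick generalizes: any package can publish a submodule alias by planting an entry in sys.modules. A tiny sketch, with every name here (mypkg, backend, answer) made up for illustration:

import sys, types

mypkg = types.ModuleType("mypkg")
backend = types.ModuleType("mypkg.backend")
backend.answer = 42

mypkg.path = backend                  # attribute access: mypkg.path
sys.modules["mypkg"] = mypkg
sys.modules["mypkg.path"] = backend   # makes "import mypkg.path" resolve

import mypkg.path
print(mypkg.path.answer)              # 42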
How does os.path map to posixpath.pyc and not os/path.py?
What is the underlying mechanism in Python that handles such "aliases"? >>> import os.path >>> os.path.__file__ '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/posixpath.pyc'
[ "Taken from os.py on CPython 2.6:\nsys.modules['os.path'] = path\nfrom os.path import (curdir, pardir, sep, pathsep, defpath, extsep, altsep,\n devnull)\n\npath is defined earlier as the platform-specific module:\nif 'posix' in _names:\n name = 'posix'\n linesep = '\\n'\n from posix import *\n try:\n from posix import _exit\n except ImportError:\n pass\n import posixpath as path\n\n import posix\n __all__.extend(_get_exports_list(posix))\n del posix\n\nelif 'nt' in _names:\n# ...\n\n", "Perhaps os uses import as?\nimport posixpath as path\n\n" ]
[ 6, 0 ]
[]
[]
[ "alias", "import", "module", "path", "python" ]
stackoverflow_0001106455_alias_import_module_path_python.txt
Q: Python: better way to open lots of sockets I have the following program to open lots of sockets, and hold them open to stress test one of our servers. There are several problems with this. I think it could be a lot more efficient than a recursive call, and it's really still opening sockets in a serial fashion rather than parallel fashion. I realize there are tools like ab that could probably simulate what I'm trying to do, but I'm hoping to increase my python knowledge. Is this something I should be rewriting as either multi-threaded or multi-process? > #!/usr/bin/env python > > import socket, time, sys > sys.setrecursionlimit(2000) > > def open_socket(counter): > sockname = "s" + str(counter) > port = counter + 3000 > sockname = socket.socket() > sockname.bind(('localhost', port)) > sockname.listen(1) > if counter == 2: > time.sleep(20) > elif counter > 2: > counter = counter -1 > open_socket(counter) > > open_socket(1500) A: I was puzzled why you would use recursion instead of a simple loop. My guess is that with a simple loop, you would have overwritten the variable sockname again and again, so that Python's garbage collection would actually close the previous socket after you created the next one. The solution is to store them all in a list, to prevent Python from garbage-collecting them: def open_socket(counter): sockets = [] for i in range(counter): s = socket.socket() s.bind(('localhost', i+3000)) s.listen(1) sockets.append(s) time.sleep(20) Also notice that in your code, the first assignment to sockname is completely redundant, as it is overwritten by the second assignment. A: You can try using Twisted for this. It greatly simplifies networking on Python. Their site has some tutorials to get you started. However, you could easily see using Python as overkill for this task. A faster option to hack up would be to just open 1500 instances of nc: for i in {3000..4500}; do nc -l -p $i & done A: Well, it's already multi-process - put a sleep in before calling open_socket, and run, say, 500 of them from a shell: for i in `seq 500`; do ./yourprogram & done You're not actually connecting to anything though - it seems you're setting up server sockets? If you need to connect to something you surely should test it with parallelism (multiple threads, running many processes as shown above, or using asynchronous connects). This should be of interest to read:
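Separately, an iterative sketch of the whole script; the port range and hold time mirror the question, while SO_REUSEADDR and the per-port error handling are additions:

#!/usr/bin/env python
import socket, time

def open_sockets(first_port=3000, count=1500, hold_seconds=20):
    held = []   # keep references so the sockets stay open
    for port in range(first_port, first_port + count):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind(('localhost', port))
            s.listen(1)
            held.append(s)
        except socket.error:
            s.close()   # port already in use; skip it
    time.sleep(hold_seconds)
    for s in held:
        s.close()

if __name__ == '__main__':
    open_sockets()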
Python: better way to open lots of sockets
I have the following program to open lots of sockets, and hold them open to stress test one of our servers. There are several problems with this. I think it could be a lot more efficient than a recursive call, and it's really still opening sockets in a serial fashion rather than parallel fashion. I realize there are tools like ab that could probably simulate what I'm trying to do, but I'm hoping to increase my python knowledge. Is this something I should be rewriting as either multi-threaded or multi-process? > #!/usr/bin/env python > > import socket, time, sys > sys.setrecursionlimit(2000) > > def open_socket(counter): > sockname = "s" + str(counter) > port = counter + 3000 > sockname = socket.socket() > sockname.bind(('localhost', port)) > sockname.listen(1) > if counter == 2: > time.sleep(20) > elif counter > 2: > counter = counter -1 > open_socket(counter) > > open_socket(1500)
[ "I was puzzled why you would use recursion instead of a simple loop. My guess is that with a simple loop, you would have overwritten the variable sockname again and again, so that Python's garbage collection would actually close the previous socket after you created the next one. The solution is to store them all in a list, to prevent Python from garbage-collecting them:\ndef open_socket(counter):\n sockets = []\n for i in range(counter):\n s = socket.socket()\n s.bind(('localhost', i+3000))\n s.listen(1)\n sockets.append(s)\n time.sleep(20)\n\nAlso notice that in your code, the first assignment to sockname is completely redundant, as it is overwritten by the second assignment.\n", "You can try using Twisted for this. It greatly simplifies networking on Python. Their site has some tutorials to get you started.\nHowever, you could easily see using Python an overkill for this task. A faster option to hack up would be to just open 1500 instances of nc:\nfor i in {3000..4500};\ndo\n nc -l -p $i &\ndone\n\n", "Well it's already multi process - put a sleep in before calling open_socket, and run , say , 500 of them from a shell:\nfor i in `seq 500`; do ./yourprogram & done\n\nYou're not a actually connecting to something though - seems you're setting up server sockets ? If you need to connect to something you surly should test it with parallelism(multiple threads, or run many processes like shown above or using asyncronous connect's). This should be of interest to read:\n" ]
[ 4, 2, 0 ]
[]
[]
[ "python", "sockets" ]
stackoverflow_0001106433_python_sockets.txt
Q: Is 'if element in aList' possible with Django templates? Does something like the python if "a" in ["a", "b", "c"]: pass exist in Django templates? If not, is there an easy way to implement it? A: This is something you usually do in your view functions. aList = ["a", "b", "c"] listAndFlags = [ (item,item in aList) for item in someQuerySet ] Now you have a simple two-element list that you can display {% for item, flag in someList %} <tr><td class="{{flag}}">{{item}}</td></tr> {% endfor %} A: Not directly, there is no if x in iterable template tag included. This is not typically something needed inside the templates themselves. Without more context about the surrounding problem a good answer cannot be given. We can guess and say that you want to either pass a nested list like the above comment, or you really just need to do more calculation in the view and pass a single list (testing for empty if you don't want it to do anything). Hope this helps.
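If the membership test really must happen in the template, a small custom filter is one conventional route; the module and filter names below are assumptions, and later Django releases added operator support (including in) directly to {% if %}:

# Sketch: yourapp/templatetags/list_extras.py
from django import template

register = template.Library()

@register.filter
def is_in(value, container):
    # usage in a template: {% if "a"|is_in:my_list %} ... {% endif %}
    return value in container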
Is 'if element in aList' possible with Django templates?
Does something like the python if "a" in ["a", "b", "c"]: pass exist in Django templates? If not, is there an easy way to implement it?
[ "This is something you usually do in your view functions.\naList = [\"a\", \"b\", \"c\"]\nlistAndFlags = [ (item,item in aList) for item in someQuerySet ]\n\nNow you have a simple two-element list that you can display\n{% for item, flag in someList %}\n <tr><td class=\"{{flag}}\">{{item}}</td></tr>\n{% endfor %}\n\n", "Not directly, there is no if x in iterable template tag included.\nThis is not typically something needed inside the templates themselves. Without more context about the surrounding problem a good answer cannot be given. We can guess and say that you want to either pass a nested list like the above comment, or you really just need to do more calculation in the view and pass a single list (testing for empty if you don't want it to do anything).\nHope this helps.\n" ]
[ 2, 1 ]
[]
[]
[ "django", "django_templates", "python" ]
stackoverflow_0001106849_django_django_templates_python.txt
Q: Find functions explicitly defined in a module (python) Ok I know you can use the dir() method to list everything in a module, but is there any way to see only the functions that are defined in that module? For example, assume my module looks like this: from datetime import date, datetime def test(): return "This is a real method" Even if i use inspect() to filter out the builtins, I'm still left with anything that was imported. E.g I'll see: ['date', 'datetime', 'test'] Is there any way to exclude imports? Or another way to find out what's defined in a module? A: Are you looking for something like this? import sys, inspect def is_mod_function(mod, func): return inspect.isfunction(func) and inspect.getmodule(func) == mod def list_functions(mod): return [func.__name__ for func in mod.__dict__.itervalues() if is_mod_function(mod, func)] print 'functions in current module:\n', list_functions(sys.modules[__name__]) print 'functions in inspect module:\n', list_functions(inspect) EDIT: Changed variable names from 'meth' to 'func' to avoid confusion (we're dealing with functions, not methods, here). A: How about the following: grep ^def my_module.py A: You can check __module__ attribute of the function in question. I say "function" because a method belongs to a class usually ;-). BTW, a class actually also has __module__ attribute. A: Every class in python has a __module__ attribute. You can use its value to perform filtering. Take a look at example 6.14 in dive into python A: the python inspect module is probably what you're looking for here. import inspect if inspect.ismethod(methodInQuestion): pass # It's a method
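A compact sketch of the __module__-based filtering the answers describe; the json example output is indicative only:

import inspect

def own_functions(mod):
    """Names of functions defined in mod itself, not merely imported into it."""
    return sorted(name for name, obj in vars(mod).items()
                  if inspect.isfunction(obj) and obj.__module__ == mod.__name__)

import json
print(own_functions(json))   # e.g. ['dump', 'dumps', 'load', 'loads']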
Find functions explicitly defined in a module (python)
Ok I know you can use the dir() method to list everything in a module, but is there any way to see only the functions that are defined in that module? For example, assume my module looks like this: from datetime import date, datetime def test(): return "This is a real method" Even if i use inspect() to filter out the builtins, I'm still left with anything that was imported. E.g I'll see: ['date', 'datetime', 'test'] Is there any way to exclude imports? Or another way to find out what's defined in a module?
[ "Are you looking for something like this?\nimport sys, inspect\n\ndef is_mod_function(mod, func):\n return inspect.isfunction(func) and inspect.getmodule(func) == mod\n\ndef list_functions(mod):\n return [func.__name__ for func in mod.__dict__.itervalues() \n if is_mod_function(mod, func)]\n\n\nprint 'functions in current module:\\n', list_functions(sys.modules[__name__])\nprint 'functions in inspect module:\\n', list_functions(inspect)\n\nEDIT: Changed variable names from 'meth' to 'func' to avoid confusion (we're dealing with functions, not methods, here).\n", "How about the following:\ngrep ^def my_module.py\n\n", "You can check __module__ attribute of the function in question. I say \"function\" because a method belongs to a class usually ;-).\nBTW, a class actually also has __module__ attribute.\n", "Every class in python has a __module__ attribute. You can use its value to perform filtering. Take a look at example 6.14 in dive into python\n", "the python inspect module is probably what you're looking for here.\nimport inspect\nif inspect.ismethod(methodInQuestion):\n pass # It's a method\n\n" ]
[ 30, 4, 2, 1, 0 ]
[]
[]
[ "introspection", "python" ]
stackoverflow_0001106840_introspection_python.txt
Q: What should I call this function composition? What should 'foo' be called, given the following? x.items is a set, y.values is a set. function a(key) returns an ordered list of x.items function b(x.item) returns a single y.value Define foo(a, b), which returns a function, d, such that d(key) returns a list of y.values defined by: map(b, a(key)). This feels like a fairly common and generic function composition but I don't know what to call it. A: function a(key) returns an ordered list of x.items function b(x.item) returns a single y.value Except for the ordering, a() is in practice a filter, i.e. it "filters" or "selects" items from x.items according to a key. b() is a normal map, or function. Thus, I would choose for 'foo' the name "composeFilterWithMap", or "composeSelectorWithMap" or a similar name. A: I would call it a function compositor or something like that. A: I would call that function permuted_values What you are doing is equivalent to iterating over a hash map using a permutation based on your key. A: The synergy engine. A: Here are example names for a, b, and foo that might help (I don't like these, but they're sort of like what I'm getting at): items_by_key(key) value_by_item(item) values_by_key(items_by_key, value_by_item)
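Whatever name wins, the composition itself is tiny; a runnable sketch with toy stand-ins for a() and b():

def values_by_key(items_for_key, value_of_item):
    """Return d such that d(key) == [value_of_item(x) for x in items_for_key(key)]."""
    def d(key):
        return [value_of_item(item) for item in items_for_key(key)]
    return d

index = {"k": [1, 2, 3]}                                  # toy data
d = values_by_key(lambda k: index[k], lambda item: item * item)
print(d("k"))   # [1, 4, 9]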
What should I call this function composition?
What should 'foo' be called, given the following? x.items is a set, y.values is a set. function a(key) returns an ordered list of x.items function b(x.item) returns a single y.value Define foo(a, b), which returns a function, d, such that d(key) returns a list of y.values defined by: map(b, a(key)). This feels like a fairly common and generic function composition but I don't know what to call it.
[ " function a(key) returns an ordered list of x.items\n function b(x.item) returns a single y.value\n\nExcept for the ordering, a() is in practice a filter, i.e. it \"filters\" or \"selects\" items from x.items according to a key. b() is a normal map, or function. Thus, I would choose for 'foo' the name \"composeFilterWithMap\", or \"composeSelectorWithMap\" or a similar name. \n", "I would call it function compositor or something like that\n", "I would call that function permuted_values\nWhat you are doing is equivalent to iterating over a hash map using a permutation based on your key.\n", "The synergy engine.\n", "Here's are example names for a, b, and foo that might help (I don't like these, but they're sort of like what I'm getting at):\n\nitems_by_key(key)\n\nvalue_by_item(item)\n\nvalues_by_key(items_by_key, value_by_item)\n\n" ]
[ 2, 1, 1, 0, 0 ]
[]
[]
[ "functional_programming", "math", "python" ]
stackoverflow_0001058273_functional_programming_math_python.txt
Q: How to exclude U+2028 from line separators in Python when reading file? I have a file in UTF-8, where some lines contain the U+2028 Line Separator character (http://www.fileformat.info/info/unicode/char/2028/index.htm). I don't want it to be treated as a line break when I read lines from the file. Is there a way to exclude it from separators when I iterate over the file or use readlines()? (Besides reading the entire file into a string and then splitting by \n.) Thank you! A: I couldn't reproduce that behavior but here's a naive solution that just merges readline results until they don't end with U+2028. #!/usr/bin/env python from __future__ import with_statement def my_readlines(f): buf = u"" for line in f.readlines(): uline = line.decode('utf8') buf += uline if uline[-1] != u'\u2028': yield buf buf = u"" if buf: yield buf with open("in.txt", "rb") as fin: for l in my_readlines(fin): print l A: I can't duplicate this behaviour in python 2.5, 2.6 or 3.0 on mac os x - U+2028 is always treated as non-endline. Could you go into more detail about where you see this error? That said, here is a subclass of the "file" class that might do what you want: #/usr/bin/python # -*- coding: utf-8 -*- class MyFile (file): def __init__(self, *arg, **kwarg): file.__init__(self, *arg, **kwarg) self.EOF = False def next(self, catchEOF = False): if self.EOF: raise StopIteration("End of file") try: nextLine= file.next(self) except StopIteration: self.EOF = True if not catchEOF: raise return "" if nextLine.decode("utf8")[-1] == u'\u2028': return nextLine+self.next(catchEOF = True) else: return nextLine A = MyFile("someUnicode.txt") for line in A: print line.strip("\n").decode("utf8") A: Thanks to everyone for answering. I think I know why you might not have been able to replicate this. I just realized that it happens if I decode the file when opening, as in: f = codecs.open(filename, encoding='utf-8') for line in f: print line The lines are not separated on u2028 if I open the file first and then decode individual lines: f = open(filename) for line in f: print line.decode("utf8") (I'm using Python 2.6 on Windows. The file was originally UTF16LE and then it was converted into UTF8). This is very interesting, I guess I won't be using codecs.open much from now on :-). A: If you use Python 3.0 (note that I don't, so I can't test), according to the documentation you can pass an optional newline parameter to open to specify which line separator to use. However, the documentation doesn't mention U+2028 at all (it only mentions \r, \n, and \r\n as line separators), so it's actually a surprise to me that this even occurs (although I can confirm this even with Python 2.6). A: The codecs module is doing the RIGHT thing. U+2028 is named "LINE SEPARATOR" with the comment "may be used to represent this semantic unambiguously". So treating it as a line separator is sensible. Presumably the creator would not have put the U+2028 characters there without good reason ... does the file have u"\n" as well? Why do you want lines not to be split on U+2028?
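A sketch of the workaround the asker converged on: read the file in binary so only \n splits lines, then decode each line yourself (the file name is a stand-in):

def utf8_lines(path):
    with open(path, "rb") as f:
        for raw in f:                  # binary iteration splits on \n bytes only
            yield raw.decode("utf-8")  # U+2028 survives inside the line

for line in utf8_lines("in.txt"):
    print(repr(line))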
How to exclude U+2028 from line separators in Python when reading file?
I have a file in UTF-8, where some lines contain the U+2028 Line Separator character (http://www.fileformat.info/info/unicode/char/2028/index.htm). I don't want it to be treated as a line break when I read lines from the file. Is there a way to exclude it from separators when I iterate over the file or use readlines()? (Besides reading the entire file into a string and then splitting by \n.) Thank you!
[ "I couldn't reproduce that behavior but here's a naive solution that just merges readline results until they don't end with U+2028.\n#!/usr/bin/env python\n\nfrom __future__ import with_statement\n\ndef my_readlines(f):\n buf = u\"\"\n for line in f.readlines():\n uline = line.decode('utf8')\n buf += uline\n if uline[-1] != u'\\u2028':\n yield buf\n buf = u\"\"\n if buf:\n yield buf\n\nwith open(\"in.txt\", \"rb\") as fin:\n for l in my_readlines(fin):\n print l\n\n", "I can't duplicate this behaviour in python 2.5, 2.6 or 3.0 on mac os x - U+2028 is always treated as non-endline. Could you go into more detail about where you see this error?\nThat said, here is a subclass of the \"file\" class that might do what you want:\n#/usr/bin/python\n# -*- coding: utf-8 -*-\nclass MyFile (file):\n def __init__(self, *arg, **kwarg):\n file.__init__(self, *arg, **kwarg)\n self.EOF = False\n def next(self, catchEOF = False):\n if self.EOF:\n raise StopIteration(\"End of file\")\n try:\n nextLine= file.next(self)\n except StopIteration:\n self.EOF = True\n if not catchEOF:\n raise\n return \"\"\n if nextLine.decode(\"utf8\")[-1] == u'\\u2028':\n return nextLine+self.next(catchEOF = True)\n else:\n return nextLine\n\nA = MyFile(\"someUnicode.txt\")\nfor line in A:\n print line.strip(\"\\n\").decode(\"utf8\")\n\n", "Thanks to everyone for answering. \nI think I know why you might not have been able to replicate this.I just realized that it happens if I decode the file when opening, as in:\nf = codecs.open(filename, encoding='utf-8')\nfor line in f:\n print line\n\nThe lines are not separated on u2028, if I open the file first and then decode individual lines:\nf = open(filename)\nfor line in f:\n print line.decode(\"utf8\")\n\n(I'm using Python 2.6 on Windows. The file was originally UTF16LE and then it was converted into UTF8).\nThis is very interesting, I guess I won't be using codecs.open much from now on :-).\n", "If you use Python 3.0 (note that I don't, so I can't test), according to the documentation you can pass an optional newline parameter to open to specifify which line seperator to use. However, the documentation doesn't mention U+2028 at all (it only mentions \\r, \\n, and \\r\\n as line seperators), so it's actually a suprise to me that this even occurs (although I can confirm this even with Python 2.6).\n", "The codecs module is doing the RIGHT thing. U+2028 is named \"LINE SEPARATOR\" with the comment \"may be used to represent this semantic unambiguously\". So treating it as a line separator is sensible.\nPresumably the creator would not have put the U+2028 characters there without good reason ... does the file have u\"\\n\" as well? Why do you want lines not to be split on U+2028?\n" ]
[ 2, 2, 1, 0, 0 ]
[]
[]
[ "python", "readline", "separator", "utf_8" ]
stackoverflow_0001105106_python_readline_separator_utf_8.txt
Q: improving Boyer-Moore string search I've been playing around with the Boyer-Moore string search algorithm and starting with a base code set from Shriphani Palakodety I created 2 additional versions (v2 and v3) - each making some modifications such as removing len() function from the loop and then refactoring the while/if conditions. From v1 to v2 I see about a 10%-15% improvement and from v1 to v3 a 25%-30% improvement (significant). My question is: does anyone have any additional mods that would improve performance even more (if you can submit as a v4) - keeping the base 'algorithm' true to Boyer-Moore. #!/usr/bin/env python import time bcs = {} #the table def goodSuffixShift(key): for i in range(len(key)-1, -1, -1): if key[i] not in bcs.keys(): bcs[key[i]] = len(key)-i-1 #---------------------- v1 ---------------------- def searchv1(text, key): """base from Shriphani Palakodety fixed for single char""" i = len(key)-1 index = len(key) -1 j = i while True: if i < 0: return j + 1 elif j > len(text): return "not found" elif text[j] != key[i] and text[j] not in bcs.keys(): j += len(key) i = index elif text[j] != key[i] and text[j] in bcs.keys(): j += bcs[text[j]] i = index else: j -= 1 i -= 1 #---------------------- v2 ---------------------- def searchv2(text, key): """removed string len functions from loop""" len_text = len(text) len_key = len(key) i = len_key-1 index = len_key -1 j = i while True: if i < 0: return j + 1 elif j > len_text: return "not found" elif text[j] != key[i] and text[j] not in bcs.keys(): j += len_key i = index elif text[j] != key[i] and text[j] in bcs.keys(): j += bcs[text[j]] i = index else: j -= 1 i -= 1 #---------------------- v3 ---------------------- def searchv3(text, key): """from v2 plus modified 3rd if condition breaking down the comparison for efficiency, modified the while loop to include the first if condition (opposite of it) """ len_text = len(text) len_key = len(key) i = len_key-1 index = len_key -1 j = i while i >= 0 and j <= len_text: if text[j] != key[i]: if text[j] not in bcs.keys(): j += len_key i = index else: j += bcs[text[j]] i = index else: j -= 1 i -= 1 if j > len_text: return "not found" else: return j + 1 key_list = ["POWER", "HOUSE", "COMP", "SCIENCE", "SHRIPHANI", "BRUAH", "A", "H"] text = "SHRIPHANI IS A COMPUTER SCIENCE POWERHOUSE" t1 = time.clock() for key in key_list: goodSuffixShift(key) #print searchv1(text, key) searchv1(text, key) bcs = {} t2 = time.clock() print('v1 took %0.5f ms' % ((t2-t1)*1000.0)) t1 = time.clock() for key in key_list: goodSuffixShift(key) #print searchv2(text, key) searchv2(text, key) bcs = {} t2 = time.clock() print('v2 took %0.5f ms' % ((t2-t1)*1000.0)) t1 = time.clock() for key in key_list: goodSuffixShift(key) #print searchv3(text, key) searchv3(text, key) bcs = {} t2 = time.clock() print('v3 took %0.5f ms' % ((t2-t1)*1000.0))
Then replace: if text[j] != key[i]: if text[j] not in bcs.keys(): j += len_key i = index else: j += bcs[text[j]] i = index with: if text[j] != key[i]: j += bcsget(text[j], len_key) i = index Update 2 -- back to basics, like getting the code correct before you optimise Version 1 has some bugs which you have carried forward into versions 2 and 3. Some suggestions: Change the not-found response from "not found" to -1. This makes it compatible with text.find(key), which you can use to check your results. Get some more text values e.g. "R" * 20, "X" * 20, and "XXXSCIENCEYYY" for use with your existing key values. Lash up a test harness, something like this: func_list = [searchv1, searchv2, searchv3] def test(): for text in text_list: print '==== text is', repr(text) for func in func_list: for key in key_list: try: result = func(text, key) except Exception, e: print "EXCEPTION: %r expected:%d func:%s key:%r" % (e, expected, func.__name__, key) continue expected = text.find(key) if result != expected: print "ERROR actual:%d expected:%d func:%s key:%r" % (result, expected, func.__name__, key) Run that, fix the errors in v1, carry those fixes forward, run the tests again until they're all OK. Then you can tidy up your timing harness along the same lines, and see what the performance is. Then you can report back here, and I'll give you my idea of what a searchv4 function should look like ;-)
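In that spirit, here is one possible searchv4 -- a Boyer-Moore-Horspool sketch (not the full two-table Boyer-Moore) that builds the bad-character table locally, uses dict.get with a default instead of membership tests, and returns -1 like str.find:

def searchv4(text, key):
    len_text, len_key = len(text), len(key)
    if len_key == 0 or len_key > len_text:
        return -1
    shift = {}
    for i in range(len_key - 1):        # last character excluded, as usual
        shift[key[i]] = len_key - 1 - i
    get = shift.get
    j = len_key - 1                     # text index aligned with key's last char
    while j < len_text:
        i, k = len_key - 1, j
        while i >= 0 and text[k] == key[i]:
            i -= 1
            k -= 1
        if i < 0:
            return k + 1                # full match
        j += get(text[j], len_key)
    return -1

assert searchv4("SHRIPHANI IS A COMPUTER SCIENCE POWERHOUSE", "POWER") == 32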
improving Boyer-Moore string search
I've been playing around with the Boyer-Moore string search algorithm and starting with a base code set from Shriphani Palakodety I created 2 additional versions (v2 and v3) - each making some modifications such as removing len() function from the loop and then refactoring the while/if conditions. From v1 to v2 I see about a 10%-15% improvement and from v1 to v3 a 25%-30% improvement (significant). My question is: does anyone have any additional mods that would improve performance even more (if you can submit as a v4) - keeping the base 'algorithm' true to Boyer-Moore. #!/usr/bin/env python import time bcs = {} #the table def goodSuffixShift(key): for i in range(len(key)-1, -1, -1): if key[i] not in bcs.keys(): bcs[key[i]] = len(key)-i-1 #---------------------- v1 ---------------------- def searchv1(text, key): """base from Shriphani Palakodety fixed for single char""" i = len(key)-1 index = len(key) -1 j = i while True: if i < 0: return j + 1 elif j > len(text): return "not found" elif text[j] != key[i] and text[j] not in bcs.keys(): j += len(key) i = index elif text[j] != key[i] and text[j] in bcs.keys(): j += bcs[text[j]] i = index else: j -= 1 i -= 1 #---------------------- v2 ---------------------- def searchv2(text, key): """removed string len functions from loop""" len_text = len(text) len_key = len(key) i = len_key-1 index = len_key -1 j = i while True: if i < 0: return j + 1 elif j > len_text: return "not found" elif text[j] != key[i] and text[j] not in bcs.keys(): j += len_key i = index elif text[j] != key[i] and text[j] in bcs.keys(): j += bcs[text[j]] i = index else: j -= 1 i -= 1 #---------------------- v3 ---------------------- def searchv3(text, key): """from v2 plus modified 3rd if condition breaking down the comparison for efficiency, modified the while loop to include the first if condition (opposite of it) """ len_text = len(text) len_key = len(key) i = len_key-1 index = len_key -1 j = i while i >= 0 and j <= len_text: if text[j] != key[i]: if text[j] not in bcs.keys(): j += len_key i = index else: j += bcs[text[j]] i = index else: j -= 1 i -= 1 if j > len_text: return "not found" else: return j + 1 key_list = ["POWER", "HOUSE", "COMP", "SCIENCE", "SHRIPHANI", "BRUAH", "A", "H"] text = "SHRIPHANI IS A COMPUTER SCIENCE POWERHOUSE" t1 = time.clock() for key in key_list: goodSuffixShift(key) #print searchv1(text, key) searchv1(text, key) bcs = {} t2 = time.clock() print('v1 took %0.5f ms' % ((t2-t1)*1000.0)) t1 = time.clock() for key in key_list: goodSuffixShift(key) #print searchv2(text, key) searchv2(text, key) bcs = {} t2 = time.clock() print('v2 took %0.5f ms' % ((t2-t1)*1000.0)) t1 = time.clock() for key in key_list: goodSuffixShift(key) #print searchv3(text, key) searchv3(text, key) bcs = {} t2 = time.clock() print('v3 took %0.5f ms' % ((t2-t1)*1000.0))
[ "Using \"in bcs.keys()\" is creating a list and then doing an O(N) search of the list -- just use \"in bcs\". \nDo the goodSuffixShift(key) thing inside the search function. Two benefits: the caller has only one API to use, and you avoid having bcs as a global (horrid ** 2). \nYour indentation is incorrect in several places.\nUpdate \nThis is not the Boyer-Moore algorithm (which uses TWO lookup tables). It looks more like the Boyer-Moore-Horspool algorithm, which uses only the first BM table.\nA probable speedup: add the line 'bcsget = bcs.get' after setting up the bcs dict. Then replace:\nif text[j] != key[i]:\n if text[j] not in bcs.keys():\n j += len_key\n i = index\n else:\n j += bcs[text[j]]\n i = index\n\nwith:\nif text[j] != key[i]:\n j += bcsget(text[j], len_key)\n i = index\n\nUpdate 2 -- back to basics, like getting the code correct before you optimise \nVersion 1 has some bugs which you have carried forward into versions 2 and 3. Some suggestions: \nChange the not-found response from \"not found\" to -1. This makes it compatible with text.find(key), which you can use to check your results.\nGet some more text values e.g. \"R\" * 20, \"X\" * 20, and \"XXXSCIENCEYYY\" for use with your existing key values.\nLash up a test harness, something like this:\nfunc_list = [searchv1, searchv2, searchv3]\ndef test():\n for text in text_list: \n print '==== text is', repr(text)\n for func in func_list:\n for key in key_list:\n try:\n result = func(text, key)\n except Exception, e:\n print \"EXCEPTION: %r expected:%d func:%s key:%r\" % (e, expected, func.__name__, key)\n continue\n expected = text.find(key)\n if result != expected:\n print \"ERROR actual:%d expected:%d func:%s key:%r\" % (result, expected, func.__name__, key)\n\nRun that, fix the errors in v1, carry those fixes forward, run the tests again until they're all OK. Then you can tidy up your timing harness along the same lines, and see what the performance is. Then you can report back here, and I'll give you my idea of what a searchv4 function should look like ;-)\n" ]
[ 4 ]
[]
[]
[ "performance", "python" ]
stackoverflow_0001106112_performance_python.txt
Q: Python create function in a loop capturing the loop variable What's going on here? I'm trying to create a list of functions: def f(a,b): return a*b funcs = [] for i in range(0,10): funcs.append(lambda x:f(i,x)) This isn't doing what I expect. I would expect the list to act like this: funcs[3](3) = 9 funcs[0](5) = 0 But all the functions in the list seem to be identical, and be setting the fixed value to be 9: funcs[3](3) = 27 funcs[3](1) = 9 funcs[2](6) = 54 Any ideas? A: lambdas in python are closures.... the arguments you give it aren't going to be evaluated until the lambda is evaluated. At that time, i=9 regardless, because your iteration is finished. The behavior you're looking for can be achieved with functools.partial import functools def f(a,b): return a*b funcs = [] for i in range(0,10): funcs.append(functools.partial(f,i)) A: Yep, the usual "scoping problem" (actually a binding-later-than-you want problem, but it's often called by that name). You've already gotten the two best (because simplest) answers -- the "fake default" i=i solution, and functools.partial, so I'm only giving the third one of the classic three, the "factory lambda": for i in range(0,10): funcs.append((lambda i: lambda x: f(i, x))(i)) Personally I'd go with i=i if there's no risk of the functions in funcs being accidentally called with 2 parameters instead of just 1, but the factory function approach is worth considering when you need something a little bit richer than just pre-binding one arg. A: There's only one i which is bound to each lambda, contrary to what you think. This is a common mistake. One way to get what you want is: for i in range(0,10): funcs.append(lambda x, i=i: f(i, x)) Now you're creating a default parameter i in each lambda closure and binding to it the current value of the looping variable i. A: All the lambdas end up being bound to the last one. See this question for a longer answer: How do I create a list of Python lambdas (in a list comprehension/for loop)? A: Considering the final value of i == 9 Like any good python function, it's going to use the value of the variable in the scope it was defined. Perhaps lambda: varname (being that it is a language construct) binds to the name, not the value, and evaluates that name at runtime? Similar to: i = 9 def foo(): print i i = 10 foo() I'd be quite interested in finding out if my answer is correct
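A side-by-side sketch of the late-binding pitfall and the early-bound default-argument fix:

def f(a, b):
    return a * b

late  = [lambda x: f(i, x) for i in range(4)]        # all share the final i == 3
early = [lambda x, i=i: f(i, x) for i in range(4)]   # i captured per lambda

print([g(10) for g in late])    # [30, 30, 30, 30]
print([g(10) for g in early])   # [0, 10, 20, 30]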
Python create function in a loop capturing the loop variable
What's going on here? I'm trying to create a list of functions: def f(a,b): return a*b funcs = [] for i in range(0,10): funcs.append(lambda x:f(i,x)) This isn't doing what I expect. I would expect the list to act like this: funcs[3](3) = 9 funcs[0](5) = 0 But all the functions in the list seem to be identical, and be setting the fixed value to be 9: funcs[3](3) = 27 funcs[3](1) = 9 funcs[2](6) = 54 Any ideas?
[ "lambdas in python are closures.... the arguments you give it aren't going to be evaluated until the lambda is evaluated. At that time, i=9 regardless, because your iteration is finished.\nThe behavior you're looking for can be achieved with functools.partial\nimport functools\n\ndef f(a,b):\n return a*b\n\nfuncs = []\n\nfor i in range(0,10):\n funcs.append(functools.partial(f,i))\n\n", "Yep, the usual \"scoping problem\" (actually a binding-later-than-you want problem, but it's often called by that name). You've already gotten the two best (because simplest) answers -- the \"fake default\" i=i solution, and functools.partial, so I'm only giving the third one of the classic three, the \"factory lambda\":\nfor i in range(0,10):\n funcs.append((lambda i: lambda x: f(i, x))(i))\n\nPersonally I'd go with i=i if there's no risk of the functions in funcs being accidentally called with 2 parameters instead of just 1, but the factory function approach is worth considering when you need something a little bit richer than just pre-binding one arg.\n", "There's only one i which is bound to each lambda, contrary to what you think. This is a common mistake. \nOne way to get what you want is:\nfor i in range(0,10):\n funcs.append(lambda x, i=i: f(i, x))\n\nNow you're creating a default parameter i in each lambda closure and binding to it the current value of the looping variable i.\n", "All the lambdas end up being bound to the last one. See this question for a longer answer:\nHow do I create a list of Python lambdas (in a list comprehension/for loop)?\n", "Considering the final value of i == 9\nLike any good python function, it's going to use the value of the variable in the scope it was defined. Perhaps lambda: varname (being that it is a language construct) binds to the name, not the value, and evaluates that name at runtime?\nSimilar to:\ni = 9\ndef foo():\n print i\n\ni = 10\nfoo()\n\nI'd be quite interested in finding out of my answer is correct\n" ]
[ 20, 15, 10, 2, 2 ]
[]
[]
[ "lambda", "python" ]
stackoverflow_0001107210_lambda_python.txt
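A minimal runnable consolidation of the three fixes discussed in the thread above (default-argument binding, functools.partial, and the factory lambda); this is an illustrative sketch, assuming Python 2.x to match the print syntax used throughout:

import functools

def f(a, b):
    return a * b

# Bind i at definition time via a default argument.
by_default = [lambda x, i=i: f(i, x) for i in range(10)]
# Pre-bind the first argument with functools.partial.
by_partial = [functools.partial(f, i) for i in range(10)]
# Create a fresh scope per iteration with a factory lambda.
by_factory = [(lambda i: lambda x: f(i, x))(i) for i in range(10)]

assert by_default[3](3) == by_partial[3](3) == by_factory[3](3) == 9
assert by_default[0](5) == 0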
Q: hex to string formatting conversion in python I used to generate a random string in the following way (now I've switched to this method). key = '%016x' % random.getrandbits(128) The key generated this way is most often a 32 character string, but once I got 31 chars. This is what I don't get: why it's 32 chars, not 16? Doesn't one hex digit take one character to print? So if I ask for %016x - shouldn't one expect sixteen chars with possible leading zeroes? Why is the string length not always the same? Test case import random import collections stats = collections.defaultdict(int) for i in range(1000000): key = '%016x' % random.getrandbits(128) length = len(key) stats[length] += 1 for key in stats: print key, ' ', stats[key] Prints: 32 937911 27 1 28 9 29 221 30 3735 31 58123 A: Yes, but the format you're using doesn't truncate -- you generate 128 random bits, which require (usually) 32 hex digits to show, and the %016 means AT LEAST 16 hex digits, but doesn't just throw away the extra ones you need to show all of that 128-bit number. Why not generate just 64 random bits if that's what you actually need? Less work for the random generator AND no formatting problems. To satisfy your side curiosity, the length is occasionally 31 digits because 1 time in 16 the top 4 bits will all be 0; actually 1 time in 256 all the top 8 bits will be 0 so you'll get only 30 digits, etc. You've only asked for 16 digits, so the formatting will give the least number that's >= 16 and doesn't require the truncation you have not asked for. A: Each hex character from 0 to F contains 4 bits of information, or half a byte. 128 bits is 16 bytes, and since it takes two hex characters to print a byte you get 32 characters. Your format string should thus be '%032x' which will always generate a 32-character string, never shorter. jkugelman$ cat rand.py #!/usr/bin/env python import random import collections stats = collections.defaultdict(int) for i in range(1000000): key = '%032x' % random.getrandbits(128) length = len(key) stats[length] += 1 for key in stats: print key, ' ', stats[key] jkugelman$ python rand.py 32 1000000
hex to string formatting conversion in python
I used to generate a random string in the following way (now I've switched to this method). key = '%016x' % random.getrandbits(128) The key generated this way is most often a 32 character string, but once I got 31 chars. This is what I don't get: why it's 32 chars, not 16? Doesn't one hex digit take one character to print? So if I ask for %016x - shouldn't one expect sixteen chars with possible leading zeroes? Why is the string length not always the same? Test case import random import collections stats = collections.defaultdict(int) for i in range(1000000): key = '%016x' % random.getrandbits(128) length = len(key) stats[length] += 1 for key in stats: print key, ' ', stats[key] Prints: 32 937911 27 1 28 9 29 221 30 3735 31 58123
[ "Yes, but the format you're using doesn't truncate -- you generate 128 random bits, which require (usually) 32 hex digits to show, and the %016 means AT LEAST 16 hex digits, but doesn't just throw away the extra ones you need to show all of that 128-bit number. Why not generate just 64 random bits if that's what you actually need? Less work for the random generator AND no formatting problems.\nTo satisfy your side curiosity, the length is occasionally 31 digits because 1 time in 16 the top 4 bits will all be 0; actually 1 time in 256 all the top 8 bits will be 0 so you'll get only 30 digits, etc. You've only asked for 16 digits, so the formatting will give the least number that's >= 16 and doesn't require the truncation you have not asked for.\n", "Each hex character from 0 to F contains 4 bits of information, or half a byte. 128 bits is 16 bytes, and since it takes two hex characters to print a byte you get 32 characters. Your format string should thus be '%032x' which will always generate a 32-character string, never shorter.\njkugelman$ cat rand.py\n#!/usr/bin/env python\n\nimport random\nimport collections\nstats = collections.defaultdict(int)\nfor i in range(1000000):\n key = '%032x' % random.getrandbits(128)\n length = len(key)\n stats[length] += 1\n\nfor key in stats:\n print key, ' ', stats[key]\njkugelman$ python rand.py\n32 1000000\n\n" ]
[ 6, 3 ]
[]
[]
[ "python", "string" ]
stackoverflow_0001107331_python_string.txt
Q: Assigning an IronPython list to a .NET array I have a list comprehension operating on elements of a .NET array like obj.arr = [f(x) for x in obj.arr] However the assignment back to obj.arr fails. Is it possible to convert a list to a .NET array in IronPython? A: Try this: obj.arr = Array[T]([f(x) for x in obj.arr]) replacing T with the type of the array elements. Alternatively: obj.arr = tuple([f(x) for x in obj.arr]) A: Arrays have to be typed as far as I know. This works for me: num_list = [n for n in range(10)] from System import Array num_arr = Array[int](num_list) Similarly for strings and other types.
Assigning an IronPython list to a .NET array
I have a list comprehension operating on elements of a .NET array like obj.arr = [f(x) for x in obj.arr] However the assignment back to obj.arr fails. Is it possible to convert a list to a .NET array in IronPython?
[ "Try this:\nobj.arr = Array[T]([f(x) for x in obj.arr])\n\nreplacing T with the type of the array elements.\nAlternatively:\nobj.arr = tuple([f(x) for x in obj.arr])\n\n", "Arrays have to be typed as far as I know. This works for me:\nnum_list = [n for n in range(10)]\n\nfrom System import Array\nnum_arr = Array[int](num_list)\n\nSimilarly for strings and other types. \n" ]
[ 10, 4 ]
[]
[]
[ "ironpython", "python" ]
stackoverflow_0001107789_ironpython_python.txt
Q: Lengthy single line strings in Python without going over maximum line length How can I break a long one-liner string in my code and keep the string indented with the rest of the code? PEP 8 doesn't have any example for this case. Correct output but strangely indented: if True: print "long test long test long test long test long \ test long test long test long test long test long test" >>> long test long test long test long test long test long test long test long test long test long test Bad output, but looks better in code: if True: print "long test long test long test long test long \ test long test long test long test long test long test" >>> long test long test long test long test long test long test long test long test long test long test Wow, lots of fast answers. Thanks! A: Adjacent strings are concatenated at compile time: if True: print ("this is the first line of a very long string" " this is the second line") Output: this is the first line of a very long string this is the second line A: if True: print "long test long test long test long test long"\ "test long test long test long test long test long test" A: You can use a trailing backslash to join separate strings like this: if True: print "long test long test long test long test long " \ "test long test long test long test long test long test" A: Why isn't anyone recommending triple quotes? print """ blah blah blah .............."""
Lengthy single line strings in Python without going over maximum line length
How can I break a long one-liner string in my code and keep the string indented with the rest of the code? PEP 8 doesn't have any example for this case. Correct output but strangely indented: if True: print "long test long test long test long test long \ test long test long test long test long test long test" >>> long test long test long test long test long test long test long test long test long test long test Bad output, but looks better in code: if True: print "long test long test long test long test long \ test long test long test long test long test long test" >>> long test long test long test long test long test long test long test long test long test long test Wow, lots of fast answers. Thanks!
[ "Adjacent strings are concatenated at compile time:\nif True:\n print (\"this is the first line of a very long string\"\n \" this is the second line\")\n\nOutput:\nthis is the first line of a very long string this is the second line\n\n", "if True:\n print \"long test long test long test long test long\"\\\n \"test long test long test long test long test long test\"\n\n", "You can use a trailing backslash to join separate strings like this:\nif True:\n print \"long test long test long test long test long \" \\\n \"test long test long test long test long test long test\"\n\n", "Why isn't anyone recommending triple quotes?\nprint \"\"\" blah blah\n blah ..............\"\"\"\n\n" ]
[ 30, 7, 2, 0 ]
[ "if True:\n print \"long test long test long test \"+\n \"long test long test long test \"+\n \"long test long test long test \"\n\nAnd so on.\n" ]
[ -6 ]
[ "python", "string" ]
stackoverflow_0001104762_python_string.txt
Q: Noob components design question Updated question, see below I'm starting a new project and I would like to experiment with component-based architecture (I chose PyProtocols). It's a little program to display and interact with realtime graphics. I started by designing the user input components: IInputDevice - e.g. a mouse, keyboard, etc... An InputDevice may have one or more output channels: IOutput - an output channel containing a single value (e.g. the value of a MIDI slider) ISequenceOutput - an output channel containing a sequence of values (e.g. 2 integers representing mouse position) IDictOutput - an output channel containing named values (e.g. the state of each key of the keyboard, indexed by keyboard symbols) Now I would like to define interfaces to filter those outputs (smooth, jitter, invert, etc...). My first approach was to create an InputFilter interface, that had different filter methods for each kind of output channel it was connected to... But the introduction in the PyProtocols documentation clearly says that the whole interface and adapters thing is about avoiding type checking! So my guess is that my InputFilter interfaces should look like this: IInputFilter - filters IOutput ISequenceInputFilter - filters ISequenceOutput IDictInputFilter - filters IDictOutput Then I could have a connect() method in the I*Output interfaces, that could magically adapt my filters and use the one appropriate for the type of output. I tried to implement that, and it kind of works: class InputFilter(object): """ Basic InputFilter implementation. """ advise( instancesProvide=[IInputFilter], ) def __init__(self): self.parameters = {} def connect(self, src): self.src = src def read(self): return self.src.read() class InvertInputFilter(InputFilter): """ A filter inverting single values. """ def read(self): return -self.src.read() class InvertSequenceInputFilter(InputFilter): """ A filter inverting sequences of values. """ advise( instancesProvide=[ISequenceInputFilter], asAdapterForProtocols=[IInputFilter], ) def __init__(self, ob): self.ob = ob def read(self): res = [] for value in self.src.read(): res.append(-value) return res Now I can adapt my filters to the type of output: filter = InvertInputFilter() single_filter = IInputFilter(filter) # noop sequence_filter = ISequenceInputFilter(filter) # creates an InvertSequenceInputFilter instance single_filter and sequence_filter have the correct behaviors and produce single and sequence data types. Now if I define a new InputFilter type on the same model, I get errors like this: TypeError: ('Ambiguous adapter choice', <class 'InvertSequenceInputFilter'>, <class 'SomeOtherSequenceInputFilter'>, 1, 1) I must be doing something terribly wrong, is my design even correct? Or maybe am I missing the point on how to implement my InputFilterS? Update 2 I understand I was expecting a little too much magic here, adapters don't type check the objects they are adapting and just look at the interface they provide, which now sounds normal to me (remember I'm new to these concepts!). So I came up with a new design (stripped to the bare minimum and omitted the dict interfaces): class IInputFilter(Interface): def read(): pass def connect(src): pass class ISingleInputFilter(Interface): def read_single(): pass class ISequenceInputFilter(Interface): def read_sequence(): pass So IInputFilter is now a sort of generic component, the one that is actually used, ISingleInputFilter and ISequenceInputFilter provide the specialized implementations. Now I can write adapters from the specialized to the generic interfaces: class SingleInputFilterAsInputFilter(object): advise( instancesProvide=[IInputFilter], asAdapterForProtocols=[ISingleInputFilter], ) def __init__(self, ob): self.read = ob.read_single class SequenceInputFilterAsInputFilter(object): advise( instancesProvide=[IInputFilter], asAdapterForProtocols=[ISequenceInputFilter], ) def __init__(self, ob): self.read = ob.read_sequence Now I write my InvertInputFilter like this: class InvertInputFilter(object): advise( instancesProvide=[ ISingleInputFilter, ISequenceInputFilter ] ) def read_single(self): # Return single value inverted def read_sequence(self): # Return sequence of inverted values And to use it with the various output types I would do: filter = InvertInputFilter() single_filter = SingleInputFilterAsInputFilter(filter) sequence_filter = SequenceInputFilterAsInputFilter(filter) But, again, this fails miserably with the same kind of error, and this time it's triggered directly by the InvertInputFilter definition: TypeError: ('Ambiguous adapter choice', <class 'SingleInputFilterAsInputFilter'>, <class 'SequenceInputFilterAsInputFilter'>, 2, 2) (the error disappears as soon as I put exactly one interface in the class' instancesProvide clause) Update 3 After some discussion on the PEAK mailing list, it seems that this last error is due to a design flaw in PyProtocols, that does some extra checks at declaration time. I rewrote everything with zope.interface and it works perfectly. A: I haven't used PyProtocols, only the Zope Component Architecture, but they are similar enough for these principles to be the same. Your error is that you have two adapters that can adapt the same thing. You have both an averaging filter and an inversion filter. When you then ask for the filter, both are found, and you get the "ambiguous adapter" error. You can handle this by having different interfaces for averaging filters and inverting filters, but it's getting silly. In the Zope component architecture you would typically handle this case with named adapters. Each adapter gets a name, by default ''. In this case you would give the adapter names like "averaging" and "inverting", and you'd look them up with that name, so you know if you get the averaging or the inverting filter. For the more general question, if the design makes sense or not, it's hard to tell. You having three different kinds of outputs and three different kinds of filters doesn't seem like a good idea. Perhaps you could make the sequence and dict outputs into composites of the single value output, so that each output value gets its own object, so it can be filtered independently. That would make more sense to me.
Noob components design question
Updated question, see below I'm starting a new project and I would like to experiment with component-based architecture (I chose PyProtocols). It's a little program to display and interact with realtime graphics. I started by designing the user input components: IInputDevice - e.g. a mouse, keyboard, etc... An InputDevice may have one or more output channels: IOutput - an output channel containing a single value (e.g. the value of a MIDI slider) ISequenceOutput - an output channel containing a sequence of values (e.g. 2 integers representing mouse position) IDictOutput - an output channel containing named values (e.g. the state of each key of the keyboard, indexed by keyboard symbols) Now I would like to define interfaces to filter those outputs (smooth, jitter, invert, etc...). My first approach was to create an InputFilter interface, that had different filter methods for each kind of output channel it was connected to... But the introduction in the PyProtocols documentation clearly says that the whole interface and adapters thing is about avoiding type checking! So my guess is that my InputFilter interfaces should look like this: IInputFilter - filters IOutput ISequenceInputFilter - filters ISequenceOutput IDictInputFilter - filters IDictOutput Then I could have a connect() method in the I*Output interfaces, that could magically adapt my filters and use the one appropriate for the type of output. I tried to implement that, and it kind of works: class InputFilter(object): """ Basic InputFilter implementation. """ advise( instancesProvide=[IInputFilter], ) def __init__(self): self.parameters = {} def connect(self, src): self.src = src def read(self): return self.src.read() class InvertInputFilter(InputFilter): """ A filter inverting single values. """ def read(self): return -self.src.read() class InvertSequenceInputFilter(InputFilter): """ A filter inverting sequences of values. """ advise( instancesProvide=[ISequenceInputFilter], asAdapterForProtocols=[IInputFilter], ) def __init__(self, ob): self.ob = ob def read(self): res = [] for value in self.src.read(): res.append(-value) return res Now I can adapt my filters to the type of output: filter = InvertInputFilter() single_filter = IInputFilter(filter) # noop sequence_filter = ISequenceInputFilter(filter) # creates an InvertSequenceInputFilter instance single_filter and sequence_filter have the correct behaviors and produce single and sequence data types. Now if I define a new InputFilter type on the same model, I get errors like this: TypeError: ('Ambiguous adapter choice', <class 'InvertSequenceInputFilter'>, <class 'SomeOtherSequenceInputFilter'>, 1, 1) I must be doing something terribly wrong, is my design even correct? Or maybe am I missing the point on how to implement my InputFilterS? Update 2 I understand I was expecting a little too much magic here, adapters don't type check the objects they are adapting and just look at the interface they provide, which now sounds normal to me (remember I'm new to these concepts!). So I came up with a new design (stripped to the bare minimum and omitted the dict interfaces): class IInputFilter(Interface): def read(): pass def connect(src): pass class ISingleInputFilter(Interface): def read_single(): pass class ISequenceInputFilter(Interface): def read_sequence(): pass So IInputFilter is now a sort of generic component, the one that is actually used, ISingleInputFilter and ISequenceInputFilter provide the specialized implementations. Now I can write adapters from the specialized to the generic interfaces: class SingleInputFilterAsInputFilter(object): advise( instancesProvide=[IInputFilter], asAdapterForProtocols=[ISingleInputFilter], ) def __init__(self, ob): self.read = ob.read_single class SequenceInputFilterAsInputFilter(object): advise( instancesProvide=[IInputFilter], asAdapterForProtocols=[ISequenceInputFilter], ) def __init__(self, ob): self.read = ob.read_sequence Now I write my InvertInputFilter like this: class InvertInputFilter(object): advise( instancesProvide=[ ISingleInputFilter, ISequenceInputFilter ] ) def read_single(self): # Return single value inverted def read_sequence(self): # Return sequence of inverted values And to use it with the various output types I would do: filter = InvertInputFilter() single_filter = SingleInputFilterAsInputFilter(filter) sequence_filter = SequenceInputFilterAsInputFilter(filter) But, again, this fails miserably with the same kind of error, and this time it's triggered directly by the InvertInputFilter definition: TypeError: ('Ambiguous adapter choice', <class 'SingleInputFilterAsInputFilter'>, <class 'SequenceInputFilterAsInputFilter'>, 2, 2) (the error disappears as soon as I put exactly one interface in the class' instancesProvide clause) Update 3 After some discussion on the PEAK mailing list, it seems that this last error is due to a design flaw in PyProtocols, that does some extra checks at declaration time. I rewrote everything with zope.interface and it works perfectly.
[ "I haven't used PyProtocols, only the Zope Component Architecture, but they are similar enough for these principles to be the same.\nYour error is that you have two adapters that can adapt the same thing. You have both an averaging filter and an inversion filter. When you then ask for the filter, both are found, and you get the \"ambiguous adapter\" error.\nYou can handle this by having different interfaces for averaging filters and inverting filters, but it's getting silly. In the Zope component architecture you would typically handle this case with named adapters. Each adapter gets a name, by default ''. In this case you would give the adapter names like \"averaging\" and \"inverting\", and you'd look them up with that name, so you know if you get the averaging or the inverting filter.\nFor the more general question, if the design makes sense or not, it's hard to tell. You having three different kinds of outputs and three different kinds of filters doesn't seem like a good idea. Perhaps you could make the sequence and dict outputs into composites of the single value output, so that each output value gets its own object, so it can be filtered independently. That would make more sense to me.\n" ]
[ 1 ]
[]
[]
[ "interface", "protocols", "python" ]
stackoverflow_0001107368_interface_protocols_python.txt
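A rough sketch of the named-adapter approach described in the answer above, using zope.interface/zope.component; all interface and class names here are illustrative stand-ins, not taken from the original project:

from zope.interface import Interface, implements
from zope.component import adapts, provideAdapter, getAdapter

class ISource(Interface):
    def read():
        """Return the raw input value."""

class IInputFilter(Interface):
    def read():
        """Return the filtered value."""

class InvertFilter(object):
    implements(IInputFilter)
    adapts(ISource)
    def __init__(self, src):
        self.src = src
    def read(self):
        return -self.src.read()

class AverageFilter(object):
    implements(IInputFilter)
    adapts(ISource)
    def __init__(self, src):
        self.src = src
        self.last = 0.0
    def read(self):
        self.last = (self.last + self.src.read()) / 2.0
        return self.last

# Both adapters adapt the same interface; the name disambiguates them.
provideAdapter(InvertFilter, name='inverting')
provideAdapter(AverageFilter, name='averaging')

class MouseX(object):
    implements(ISource)
    def read(self):
        return 42

source = MouseX()
inverter = getAdapter(source, IInputFilter, name='inverting')  # no ambiguity
print inverter.read()  # -42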
Q: Do I have any obligations if I upload an egg to the CheeseShop? Suppose I'd like to upload some eggs on the Cheese Shop. Do I have any obligation? Am I required to provide a license? Am I required to provide tests? Will I have any obligations to the users of this egg (if any)? I haven't really released anything as open source until now, and I'd like to know the process. A: You have an obligation to register the package with a useful description. Nothing is more frustrating than finding a Package that may be good, but you don't know, because there is no description. Typical example of Lazy developer: http://pypi.python.org/pypi/gevent/0.9.1 Better: http://pypi.python.org/pypi/itty/0.6.0 Fantastic (even a changelog!): http://pypi.python.org/pypi/jarn.mkrelease/2.0b2 On CheeseShop you can also choose to just register the package, but not upload the code. Instead you can provide your own downloading URL. DO NOT DO THAT! That means that your software becomes unavailable when cheeseshop is down or when your server is down. That means that if you want to install a system that uses your software, the chances that it will fail because a server is down somewhere double. And with a big system, when you have five different servers involved... Always upload the package to the CheeseShop as well as registering it! You also have the obligation not to remove the egg (except under exceptional circumstances) as people who start to depend on a specific version of your software will fail if you remove that version. If you don't want to support the software anymore, upload a new version, with a big fat "THIS IS NO LONGER SUPPORTED SOFTWARE" or something, on top of the description. And don't upload development versions, like "0.1dev-r73183". And although you may not have an "obligation" to license your software, you kinda have to, or the uploading gets pointless. If you are unsure, go with GPL. That's it as far as I'm concerned. Sorry about the ranting. ;-) A: See CheeseShopTutorial and Writing the Setup Script. A: You will need to license the code. Despite what some people may think, the authors of content actually need to grant the license on their own. The Cheese Shop can't grant a license to other people to use the content until you've granted it as the copyright owner.
Do I have any obligations if I upload an egg to the CheeseShop?
Suppose I'd like to upload some eggs on the Cheese Shop. Do I have any obligation? Am I required to provide a license? Am I required to provide tests? Will I have any obligations to the users of this egg (if any)? I haven't really released anything as open source until now, and I'd like to know the process.
[ "\nYou have an obligation to register the package with a useful description. Nothing is more frustrating than finding a Package that may be good, but you don't know, because there is no description.\nTypical example of Lazy developer: http://pypi.python.org/pypi/gevent/0.9.1\nBetter: http://pypi.python.org/pypi/itty/0.6.0\nFantastic (even a changelog!): http://pypi.python.org/pypi/jarn.mkrelease/2.0b2\nOn CheeseShop you can also choose to just register the package, but not upload the code. Instead you can provide your own downloading URL. DO NOT DO THAT! That means that your software becomes unavailable when cheeseshop is down or when your server is down. That means that if you want to install a system that uses your software, the chances that it will fail because a server is down somewhere double. And with a big system, when you have five different servers involved... Always upload the package to the CheeseShop as well as registering it!\nYou also have the obligation not to remove the egg (except under exceptional circumstances) as people who start to depend on a specific version of your software will fail if you remove that version.\nIf you don't want to support the software anymore, upload a new version, with a big fat \"THIS IS NO LONGER SUPPORTED SOFTWARE\" or something, on top of the description.\nAnd don't upload development versions, like \"0.1dev-r73183\".\nAnd although you may not have an \"obligation\" to license your software, you kinda have to, or the uploading gets pointless. If you are unsure, go with GPL.\n\nThat's it as far as I'm concerned. Sorry about the ranting. ;-)\n", "See CheeseShopTutorial and Writing the Setup Script.\n", "You will need to license the code. Despite what some people may think, the authors of content actually need to grant the license on their own. The Cheese Shop can't grant a license to other people to use the content until you've granted it as the copyright owner.\n" ]
[ 9, 4, 3 ]
[]
[]
[ "egg", "pypi", "python" ]
stackoverflow_0001106759_egg_pypi_python.txt
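To tie the description and license advice above to the setup script the second answer points at, here is a minimal hypothetical setup.py; every value shown is a placeholder, not a real package:

from setuptools import setup, find_packages

setup(
    name='myeggproject',  # placeholder
    version='0.1',
    description='One-line summary shown in the PyPI listing',
    long_description=open('README.txt').read(),  # becomes the package page
    license='GPL',  # state the license explicitly
    author='Your Name',
    author_email='you@example.com',
    url='http://example.com/myeggproject',
    packages=find_packages(),
)

Registering and uploading would then be the standard distutils/setuptools commands, e.g. python setup.py register sdist upload.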
Q: tools to aid in browsing/following (large) python projects' source code A specific example: becoming familiar with django's project source code (core, contrib, utils, etc.). Example of a useful tool: ctags - it allows you to "jump" to the file+location where a function/method is defined. Wondering about other tools that developers use (example: is there a tool that given a function x(), lists the functions that call x() and that are called by x()?). Thanks. Edit: added an answer with an aggregate of tools mentioned so far in other answers A: The following is an aggregate of tools mentioned in other answers... cscope http://cscope.sourceforge.net/ wikipedia entry: http://en.wikipedia.org/wiki/Cscope cscope is a console mode or text-based graphical interface ... It is often used on very large projects to find source code, functions, declarations, definitions and regular expressions given a text string. pycscope http://pypi.python.org/pypi/pycscope/ generates a cscope index of Python source trees ctags and exuberant ctags http://ctags.sourceforge.net/ http://ctags.sourceforge.net/ctags.html wikipedia entry: http://en.wikipedia.org/wiki/Ctags Ctags is a program that generates an index (or tag) file of names found in source and header files of various programming languages. Depending on the language, functions, variables, class members, macros and so on may be indexed. These tags allow definitions to be quickly and easily located by a text editor or other utility. Eclipse: http://www.eclipse.org/ wikipedia entry: http://en.wikipedia.org/wiki/Eclipse_%28software%29 Eclipse is a multi-language software development platform comprising an IDE and a plug-in system to extend it. It is written primarily in Java and can be used to develop applications in Java and, by means of the various plug-ins, in other languages as well, including C, C++, COBOL, Python, Perl, PHP, and others. PyDev http://pydev.sourceforge.net/ "Pydev is a plugin that enables users to use Eclipse for Python and Jython development -- making Eclipse a first class Python IDE" Komodo Edit http://www.activestate.com/komodo_edit/ wikipedia entry: http://en.wikipedia.org/wiki/ActiveState_Komodo Komodo Edit is a free text editor for dynamic programming languages introduced in January 2007. With the release of version 4.3, Komodo Edit is built on top of the Open Komodo project. It was developed for programmers who need a multi-language editor with broad functionality, but not the features of an IDE, like debugging, DOM viewer, interactive shells, and source code control integration. Prashanth's call graph (visualization) tool http://blog.prashanthellina.com/2007/11/14/generating-call-graphs-for-understanding-and-refactoring-python-code/ Just thought I'd share a link to an interesting small fun script I've found a long time ago, that draws a graph of function calls. It works only for simple cases, so "as is" it's more fun than useful. rope/ropemacs http://rope.sourceforge.net/ropemacs.html Ropemacs is a plugin for performing python refactorings in emacs. It uses the rope library and pymacs. http://www.enigmacurry.com/2008/05/09/emacs-as-a-powerful-python-ide/ Wing IDE http://www.wingware.com/ Wing IDE has goto-definition, find uses, a source browser, refactoring, and other code intelligence features that should help. Another good way to understand unfamiliar Python code is to set a breakpoint, run to it in the debugger, and then go up and down the stack. In Wing Professional you can also use the Debug Probe to interact with and try out things in the debug runtime state (it's a Python shell that runs in the context of the current debug stack frame). A: You can maybe try cscope! Wikipedia says that cscope is often used to search content within C or C++ files, but it can be used to search for content in other languages such as Java, Python, PHP and Perl.[citation needed] And you can also dig in this project. A: I think Komodo Edit and PyDev allow you to jump to python function defs. A: Many (or even most, I should say) IDEs help you in this by enabling you to go to variable and function definitions, often by just Ctrl+click, or showing you class overviews where you can see all methods and attributes a class has including those inherited, and letting you go to their definition, etc, etc, etc. I can't recommend such a tool highly enough, it's very time-saving for development. I personally use WingIDE, which is excellent and has all these features, but you should also check out KomodoEdit and Eclipse+PyDev. There may be more that I don't know of, and it's fully possible that vim and emacs have some sort of plugins for this. A: is there a tool that given a function x(), lists the functions that call x() and that are called by x()? Just thought I'd share a link to an interesting small fun script I've found a long time ago, that draws a graph of function calls. It works only for simple cases, so "as is" it's more fun than useful. For normal Python development personally I use GNU Emacs with rope/ropemacs (found a video showing the features) and sometimes Eclipse with PyDev. A: This is subjective so I think it should probably be a community wiki. That said, the best thing you can probably do to make browsing large projects easier is to be familiar with hotkeys provided in your favourite IDE. Using the keyboard to browse through large source code is much easier than manually scrolling through text, highlighting text and fumbling through an IDE with a mouse. A: Document it as you go. Leave trails, improve the structure, and keep notes. By the time you've found your way around the entire codebase, you'll have a good map. A: I like Eclipse and the PyDev plugin. This combination has been very useful to me. A: You should notice that cscope targets only the UNIX and Linux OSs.
tools to aid in browsing/following (large) python projects' source code
A specific example: becoming familiar with django's project source code (core, contrib, utils, etc.). Example of a useful tool: ctags - it allows you to "jump" to the file+location where a function/method is defined. Wondering about other tools that developers use (example: is there a tool that given a function x(), lists the functions that call x() and that are called by x()?). Thanks. Edit: added an answer with an aggregate of tools mentioned so far in other answers
[ "The following is an aggregate of tools mentioned in other answers...\ncscope\nhttp://cscope.sourceforge.net/\nwikipedia entry: http://en.wikipedia.org/wiki/Cscope\ncscope is a console mode or text-based graphical interface ... It is often used on very large projects to find source code, functions, declarations, definitions and regular expressions given a text string.\npycscope\nhttp://pypi.python.org/pypi/pycscope/\ngenerates a cscope index of Python source trees\nctags and exuberant ctags\nhttp://ctags.sourceforge.net/\nhttp://ctags.sourceforge.net/ctags.html\nwikipedia entry: http://en.wikipedia.org/wiki/Ctags\nCtags is a program that generates an index (or tag) file of names found in source and header files of various programming languages. Depending on the language, functions, variables, class members, macros and so on may be indexed. These tags allow definitions to be quickly and easily located by a text editor or other utility. \nEclipse:\nhttp://www.eclipse.org/\nwikipedia entry: http://en.wikipedia.org/wiki/Eclipse_%28software%29\nEclipse is a multi-language software development platform comprising an IDE and a plug-in system to extend it. It is written primarily in Java and can be used to develop applications in Java and, by means of the various plug-ins, in other languages as well, including C, C++, COBOL, Python, Perl, PHP, and others.\nPyDev\nhttp://pydev.sourceforge.net/\n\"Pydev is a plugin that enables users to use Eclipse for Python and Jython development -- making Eclipse a first class Python IDE\"\nKomodo Edit\nhttp://www.activestate.com/komodo_edit/\nwikipedia entry: http://en.wikipedia.org/wiki/ActiveState_Komodo\nKomodo Edit is a free text editor for dynamic programming languages introduced in January 2007. With the release of version 4.3, Komodo Edit is built on top of the Open Komodo project.\nIt was developed for programmers who need a multi-language editor with broad functionality, but not the features of an IDE, like debugging, DOM viewer, interactive shells, and source code control integration.\nPrashanth's call graph (visualization) tool\nhttp://blog.prashanthellina.com/2007/11/14/generating-call-graphs-for-understanding-and-refactoring-python-code/\nJust thought I'd share a link to an interesting small fun script I've found a long time ago, that draws a graph of function calls. It works only for simple cases, so \"as is\" it's more fun than useful.\nrope/ropemacs\nhttp://rope.sourceforge.net/ropemacs.html\nRopemacs is a plugin for performing python refactorings in emacs. It uses the rope library and pymacs.\nhttp://www.enigmacurry.com/2008/05/09/emacs-as-a-powerful-python-ide/\nWing IDE\nhttp://www.wingware.com/\nWing IDE has goto-definition, find uses, a source browser, refactoring, and other code intelligence features that should help. Another good way to understand unfamiliar Python code is to set a breakpoint, run to it in the debugger, and then go up and down the stack. In Wing Professional you can also use the Debug Probe to interact with and try out things in the debug runtime state (it's a Python shell that runs in the context of the current debug stack frame).\n", "You can maybe try cscope!\nWikipedia says that\n\ncscope is often used to search content within C or C++ files, but it can be used to search for content in other languages such as Java, Python, PHP and Perl.[citation needed]\n\nAnd you can also dig in this project.\n", "I think Komodo Edit and PyDev allow you to jump to python function defs.\n", "Many (or even most, I should say) IDEs help you in this by enabling you to go to variable and function definitions, often by just Ctrl+click, or showing you class overviews where you can see all methods and attributes a class has including those inherited, and letting you go to their definition, etc, etc, etc. I can't recommend such a tool highly enough, it's very time-saving for development.\nI personally use WingIDE, which is excellent and has all these features, but you should also check out KomodoEdit and Eclipse+PyDev. There may be more that I don't know of, and it's fully possible that vim and emacs have some sort of plugins for this.\n", "\nis there a tool that given a function x(), lists the functions that call x() and that are called by x()?\n\nJust thought I'd share a link to an interesting small fun script I've found a long time ago, that draws a graph of function calls. It works only for simple cases, so \"as is\" it's more fun than useful.\nFor normal Python development personally I use GNU Emacs with rope/ropemacs (found a video showing the features) and sometimes Eclipse with PyDev.\n", "This is subjective so I think it should probably be a community wiki. That said, the best thing you can probably do to make browsing large projects easier is to be familiar with hotkeys provided in your favourite IDE. Using the keyboard to browse through large source code is much easier than manually scrolling through text, highlighting text and fumbling through an IDE with a mouse. \n", "Document it as you go. Leave trails, improve the structure, and keep notes. By the time you've found your way around the entire codebase, you'll have a good map.\n", "I like Eclipse and the PyDev plugin. This combination has been very useful to me.\n", "You should notice that cscope targets only the UNIX and Linux OSs.\n" ]
[ 10, 1, 1, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "django", "ide", "python" ]
stackoverflow_0001077273_django_ide_python.txt
Q: Datastore access optimization I'm writing a small program to record reading progress, the data models are simple: class BookState(db.Model): isbn = db.StringProperty() title = db.StringProperty(required=True) pages = db.IntegerProperty(required=True) img = db.StringProperty() class UpdatePoint(db.Model): book = db.ReferenceProperty(BookState) date = db.DateProperty(required=True) page = db.IntegerProperty(required=True) The UpdatePoint class records how many pages the user has read on the corresponding date. Now I want to draw a chart from the data stored in App Engine database, the function looks like this: book = db.get(bookkey) ups = book.updatepoint_set ups.order('date') for (i, up) in enumerate(ups): if i == 0: continue # code begin days = (up.date - ups[i-1].date).days pages = up.page - ups[i-1].page # code end # blah blah I find that for a book with about 40 update points, it will cost more than 4 seconds to run the code. And after timing I find the commented code snippet seems to be the root of poor performance. Each loop costs about 0.08 seconds or more. It seems UpdatePoint is fetched in a lazy way that it won't be loaded until it is needed. I want to know whether there is any better solution to accelerate the data access like fetching the data in a bunch. Many thanks for your reply. A: It seems I used the Query class in the wrong way. I need to call ups.fetch() first to get the data. Now the code is a lot faster than before: book = db.get(bookkey) q = book.updatepoint_set q.order('date') ups = q.fetch(50) A: From the look of the code, it appears that your slowdown is because it's in the loop and has to kinda pop out to find the object you want. Have you tried something like i = 0 for up in ups: if i != 0: days = (up.date - previous.date).days pages = up.page - previous.page i += 1 previous = up
Datastore access optimization
I'm writing a small program to record reading progress, the data models are simple: class BookState(db.Model): isbn = db.StringProperty() title = db.StringProperty(required=True) pages = db.IntegerProperty(required=True) img = db.StringProperty() class UpdatePoint(db.Model): book = db.ReferenceProperty(BookState) date = db.DateProperty(required=True) page = db.IntegerProperty(required=True) The UpdatePoint class records how many pages the user has read on the corresponding date. Now I want to draw a chart from the data stored in App Engine database, the function looks like this: book = db.get(bookkey) ups = book.updatepoint_set ups.order('date') for (i, up) in enumerate(ups): if i == 0: continue # code begin days = (up.date - ups[i-1].date).days pages = up.page - ups[i-1].page # code end # blah blah I find that for a book with about 40 update points, it will cost more than 4 seconds to run the code. And after timing I find the commented code snippet seems to be the root of poor performance. Each loop costs about 0.08 seconds or more. It seems UpdatePoint is fetched in a lazy way that it won't be loaded until it is needed. I want to know whether there is any better solution to accelerate the data access like fetching the data in a bunch. Many thanks for your reply.
[ "It seems I used the Query class in the wrong way. I need to call ups.fetch() first to get the data. Now the code is a lot faster than before:\nbook = db.get(bookkey)\nq = book.updatepoint_set\nq.order('date')\nups = q.fetch(50)\n\n", "From the look of the code, it appears that your slowdown is because it's in the loop and has to kinda pop out to find the object you want. Have you tried something like\ni = 0\nfor up in ups:\n if i != 0:\n days = (up.date - previous.date).days\n pages = up.page - previous.page\n i += 1\n previous = up\n\n" ]
[ 3, 0 ]
[]
[]
[ "database", "google_app_engine", "python" ]
stackoverflow_0001108072_database_google_app_engine_python.txt
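Combining the two answers above: once the points are fetched into a real list, the consecutive deltas can be computed with a plain pairwise zip instead of indexing back into the query. A sketch, assuming at most 50 update points per book as in the accepted answer:

book = db.get(bookkey)
q = book.updatepoint_set
q.order('date')
ups = q.fetch(50)  # one datastore round trip instead of one per index access

for prev, cur in zip(ups, ups[1:]):
    days = (cur.date - prev.date).days
    pages = cur.page - prev.page
    # blah blah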
Q: How to include a python .egg library that is in a subdirectory (relative location)? How do you import python .egg files that are stored in a location relative to the .py code? For example, My Application/ My Application/library1.egg My Application/libs/library2.egg My Application/test.py How do you import and use library1 and library2 from within test.py, while leaving the .egg libraries in-place? A: An .egg is just a .zip file that acts like a directory from which you can import stuff. You can use the PYTHONPATH variable to add the .egg to your path, or append a directory to sys.path. Another option is to use a .pth file pointing to the eggs. For more info see A Small Introduction to Python eggs, Python Eggs and All about eggs. For example, if your library1.egg contains a package named foo, and you add library1.egg to PYTHONPATH, you can simply import foo If you can't set PYTHONPATH, you can write: import sys sys.path.append("library1.egg") import foo A: You can include each egg on the sys.path, or create a .pth file that mentions each egg. If you have many eggs that you need in your system I'd recommend using something like buildout, that will make the setup easily replicable. It will handle the eggs for you. http://pypi.python.org/pypi/zc.buildout/
How to include a python .egg library that is in a subdirectory (relative location)?
How do you import python .egg files that are stored in a location relative to the .py code? For example, My Application/ My Application/library1.egg My Application/libs/library2.egg My Application/test.py How do you import and use library1 and library2 from within test.py, while leaving the .egg libraries in-place?
[ "An .egg is just a .zip file that acts like a directory from which you can import stuff.\nYou can use the PYTHONPATH variable to add the .egg to your path, or append a directory to \nsys.path. Another option is to use a .pth file pointing to the eggs.\nFor more info see A Small Introduction to Python eggs, Python Eggs and All about eggs.\nFor example, if your library1.egg contains a package named foo, and you add library1.egg to PYTHONPATH, you can simply import foo\nIf you can't set PYTHONPATH, you can write:\nimport sys\nsys.path.append(\"library1.egg\")\nimport foo\n\n", "You can include each egg on the sys.path, or create a .pth file that mentions each egg.\nIf you have many eggs that you need in your system I'd recommend using something like buildout, that will make the setup easily replicable. It will handle the eggs for you.\nhttp://pypi.python.org/pypi/zc.buildout/\n" ]
[ 28, 2 ]
[]
[]
[ "egg", "python" ]
stackoverflow_0001108384_egg_python.txt
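Since the question is specifically about a relative layout, the sys.path approach from the first answer can be anchored to the script's own directory so imports work regardless of the current working directory; foo is a stand-in for whatever package each egg actually provides:

import os
import sys

# Resolve the eggs relative to this file, not the current working directory.
here = os.path.dirname(os.path.abspath(__file__))
sys.path.append(os.path.join(here, 'library1.egg'))
sys.path.append(os.path.join(here, 'libs', 'library2.egg'))

import foo  # placeholder for a package inside one of the eggs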
Q: Looking for testing/QA idea for Python Web Application Project I have had the 'luck' of developing and enhancing a legacy python web application for almost 2 years. The major contribution I consider I made is the introduction of the use of unit tests, nosetests, pychecker and a CI server. Yes, that's right, there are still projects out there that have not a single unit test (To be fair, it has a few doctests, but they are broken). Nonetheless, progress is slow, because literally the coverage is limited by how many unit tests you can afford to write. From time to time embarrassing mistakes still occur, and it does not look good on management reports. (e.g. even pychecker cannot catch certain "missing attribute" situations, and the program just blows up at run time) I just want to know if anyone has any suggestions about what additional things I can do to improve the QA. The application uses WebWare 0.8.1, but I have experimentally ported it to cherrypy, so I can potentially take advantage of WSGI to conduct integration tests. Mixed language development and/or hiring an additional tester are also options I am considering. Nothing is too wild, as long as it works. A: Feathers' great book is the first resource I always recommend to anybody in your situation (wish I had it in hand before I faced it my first four times or so!-) -- not Python specific but a lot of VERY useful general-purpose pieces of advice. Another technique I've been happy with is fuzz testing -- low-effort, great returns in terms of catching sundry bugs and vulnerabilities; check it out! Last but not least, if you do have the headcount & budget to hire one more engineer, please do, but make sure he or she is a "software engineer in testing", NOT a warm body banging at the keyboard or mouse for manual "testing" -- somebody who's rarin' to write and integrate all sorts of automated testing approaches as opposed to spending their days endlessly repeating (if they're lucky) the same manual testing sequences!!! I'm not sure what you think mixed language dev't will buy you in terms of QA. WSGI OTOH will give you nice bottlenecks/hooks to exploit in your forthcoming integration-test infrastructure -- it's good for that (AND for sundry other things too;-). A: Automated testing seems to be a very interesting approach. If you are developing a web app, you may be interested in WebDriver http://code.google.com/p/webdriver/ A: Since it is a web app, I'm wondering whether browser-based testing would make sense for you. If so, check out Selenium, an open-source suite of test tools. Here are some items that might be interesting to you: automatically starts and stops browser instances on major platforms (linux, win32, macos) tests by emulating user actions on web pages (clicking, typing), Javascript based uses assertions for behavioral results (new web page loaded, containing text, ...) can record interactive tests in firefox can be driven by Python test scripts, using a simple communication API and running against a coordination server (Selenium RC). can run multiple browsers on the same machine or multiple machines It has a learning curve, but particularly the Selenium RC server architecture is very helpful in conducting automated browser tests. A: Have a look at Twill, it's a headless web browser written in Python, specifically for automated testing. It can record and replay actions, and it can also hook directly into a WSGI stack. A: Few things help as much as testing. These two quotes are really important. "how many unit tests you can afford to write." "From time to time embarrassing mistakes still occur," If mistakes occur, you haven't written enough tests. If you're still having mistakes, then you can afford to write more unit tests. It's that simple. Each embarrassing mistake is a direct result of not writing enough unit tests. Each management report that describes an embarrassing mistake should also describe what testing is required to prevent that mistake from ever happening again. A unit test is a permanent prevention of further problems.
Looking for testing/QA idea for Python Web Application Project
I have had the 'luck' of developing and enhancing a legacy python web application for almost 2 years. The major contribution I consider I made is the introduction of the use of unit tests, nosetests, pychecker and a CI server. Yes, that's right, there are still projects out there that have not a single unit test (To be fair, it has a few doctests, but they are broken). Nonetheless, progress is slow, because literally the coverage is limited by how many unit tests you can afford to write. From time to time embarrassing mistakes still occur, and it does not look good on management reports. (e.g. even pychecker cannot catch certain "missing attribute" situations, and the program just blows up at run time) I just want to know if anyone has any suggestions about what additional things I can do to improve the QA. The application uses WebWare 0.8.1, but I have experimentally ported it to cherrypy, so I can potentially take advantage of WSGI to conduct integration tests. Mixed language development and/or hiring an additional tester are also options I am considering. Nothing is too wild, as long as it works.
[ "Feathers' great book is the first resource I always recommend to anybody in your situation (wish I had it in hand before I faced it my first four times or so!-) -- not Python specific but a lot of VERY useful general-purpose pieces of advice.\nAnother technique I've been happy with is fuzz testing -- low-effort, great returns in terms of catching sundry bugs and vulnerabilities; check it out!\nLast but not least, if you do have the headcount & budget to hire one more engineer, please do, but make sure he or she is a \"software engineer in testing\", NOT a warm body banging at the keyboard or mouse for manual \"testing\" -- somebody who's rarin' to write and integrate all sorts of automated testing approaches as opposed to spending their days endlessly repeating (if they're lucky) the same manual testing sequences!!!\nI'm not sure what you think mixed language dev't will buy you in terms of QA. WSGI OTOH will give you nice bottlenecks/hooks to exploit in your forthcoming integration-test infrastructure -- it's good for that (AND for sundry other things too;-).\n", "Automated testing seems to be a very interesting approach. If you are developing a web app, you may be interested in WebDriver http://code.google.com/p/webdriver/\n", "Since it is a web app, I'm wondering whether browser-based testing would make sense for you. If so, check out Selenium, an open-source suite of test tools. Here are some items that might be interesting to you:\n\nautomatically starts and stops browser instances on major platforms (linux, win32, macos)\ntests by emulating user actions on web pages (clicking, typing), Javascript based\nuses assertions for behavioral results (new web page loaded, containing text, ...)\ncan record interactive tests in firefox\ncan be driven by Python test scripts, using a simple communication API and running against a coordination server (Selenium RC).\ncan run multiple browsers on the same machine or multiple machines\n\nIt has a learning curve, but particularly the Selenium RC server architecture is very helpful in conducting automated browser tests.\n", "Have a look at Twill, it's a headless web browser written in Python, specifically for automated testing. It can record and replay actions, and it can also hook directly into a WSGI stack.\n", "Few things help as much as testing.\nThese two quotes are really important.\n\n\"how many unit tests you can afford to write.\"\n\"From time to time embarrassing mistakes still occur,\"\n\nIf mistakes occur, you haven't written enough tests. If you're still having mistakes, then you can afford to write more unit tests. It's that simple. \nEach embarrassing mistake is a direct result of not writing enough unit tests. \nEach management report that describes an embarrassing mistake should also describe what testing is required to prevent that mistake from ever happening again. \nA unit test is a permanent prevention of further problems.\n" ]
[ 2, 1, 1, 0, 0 ]
[]
[]
[ "integration_testing", "python", "testing" ]
stackoverflow_0001107858_integration_testing_python_testing.txt
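On the WSGI point in the first answer: once the application is ported to cherrypy/WSGI, the application callable can be exercised directly in a unit test with a synthetic environ, no HTTP server or browser required. A bare-bones sketch using only the standard library; application is a placeholder for the real WSGI entry point:

import unittest
from wsgiref.util import setup_testing_defaults

def call_app(app, path='/'):
    environ = {'PATH_INFO': path}
    setup_testing_defaults(environ)  # fills in the rest of a minimal environ
    captured = {}
    def start_response(status, headers):
        captured['status'] = status
    body = ''.join(app(environ, start_response))
    return captured['status'], body

class SmokeTest(unittest.TestCase):
    def test_front_page(self):
        status, body = call_app(application, '/')  # application: your WSGI app
        self.assertTrue(status.startswith('200'))

if __name__ == '__main__':
    unittest.main()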
Q: python long running daemon job processor I want to write a long running process (linux daemon) that serves two purposes: responds to REST web requests executes jobs which can be scheduled I originally had it working as a simple program that would run through runs and do the updates which I then cron’d, but now I have the added REST requirement, and would also like to change the frequency of some jobs, but not others (let’s say all jobs have different frequencies). I have 0 experience writing long running processes, especially ones that do things on their own, rather than responding to requests. My basic plan is to run the REST part in a separate thread/process, and figured I’d run the jobs part separately. I’m wondering if there exist any patterns, specifically python, (I’ve looked and haven’t really found any examples of what I want to do) or if anyone has any suggestions on where to begin with transitioning my project to meet these new requirements. I’ve seen a few projects that touch on scheduling, but I’m really looking for real world user experience / suggestions here. What works / doesn’t work for you? A: If the REST server and the scheduled jobs have nothing in common, do two separate implementations, the REST server and the jobs stuff, and run them as separate processes. As mentioned previously, look into existing schedulers for the jobs stuff. I don't know if Twisted would be an alternative, but you might want to check this platform. If, OTOH, the REST interface invokes the same functionality as the scheduled jobs do, you should try to look at them as two interfaces to the same functionality, e.g. like this: Write the actual jobs as programs the REST server can fork and run. Have a separate scheduler that handles the timing of the jobs. If a job is due to run, let the scheduler issue a corresponding REST request to the local server. This way the scheduler only handles job descriptions, but has no knowledge of how they are implemented. It's a common trait for long-running, high-availability processes to have an additional "supervisor" process that just checks the necessary daemons are up and running, and restarts them as necessary. A: One option is to simply choose a lightweight WSGI server from this list: http://wsgi.org/wsgi/Servers and let it do the work of a long-running process that serves requests. (I would recommend Spawning.) Your code can concentrate on the REST API and handling requests through the well defined WSGI interface, and scheduling jobs. There are at least a couple of scheduling libraries you could use, but I don't know much about them: http://sourceforge.net/projects/pycron/ http://code.google.com/p/scheduler-py/ A: Here's what we did. Wrote a simple, pure-wsgi web application to respond to REST requests. Start jobs Report status of jobs Extended the built-in wsgiref server to use the select module to check for incoming requests. Activity on the socket is an ordinary REST request, we let the wsgiref handle this. It will -- eventually -- call our WSGI applications to respond to status and submit requests. Timeout means that we have to do two things: Check all children that are running to see if they're done. Update their status, etc. Check a crontab-like schedule to see if there's any scheduled work to do. This is a SQLite database that this server maintains. A: I usually use cron for scheduling. As for REST you can use one of the many, many web frameworks out there. But just running SimpleHTTPServer should be enough. You can schedule the REST service startup with cron @reboot @reboot (cd /path/to/my/app && nohup python myserver.py&) A: The usual design pattern for a scheduler would be: Maintain a list of scheduled jobs, sorted by next-run-time (as Date-Time value); When woken up, compare the first job in the list with the current time. If it's due or overdue, remove it from the list and run it. Continue working your way through the list this way until the first job is not due yet, then go to sleep for (next_job_due_date - current_time); When a job finishes running, re-schedule it if appropriate; After adding a job to the schedule, wake up the scheduler process. Tweak as appropriate for your situation (eg. sometimes you might want to re-schedule jobs to run again at the point that they start running rather than finish).
python long running daemon job processor
I want to write a long running process (linux daemon) that serves two purposes: responds to REST web requests executes jobs which can be scheduled I originally had it working as a simple program that would run through runs and do the updates which I then cron’d, but now I have the added REST requirement, and would also like to change the frequency of some jobs, but not others (let’s say all jobs have different frequencies). I have 0 experience writing long running processes, especially ones that do things on their own, rather than responding to requests. My basic plan is to run the REST part in a separate thread/process, and figured I’d run the jobs part separately. I’m wondering if there exist any patterns, specifically in Python (I’ve looked and haven’t really found any examples of what I want to do), or if anyone has any suggestions on where to begin with transitioning my project to meet these new requirements. I’ve seen a few projects that touch on scheduling, but I’m really looking for real world user experience / suggestions here. What works / doesn’t work for you?
[ "\nIf the REST server and the scheduled jobs have nothing in common, do two separate implementations, the REST server and the jobs stuff, and run them as separate processes.\nAs mentioned previously, look into existing schedulers for the jobs stuff. I don't know if Twisted would be an alternative, but you might want to check this platform.\nIf, OTOH, the REST interface invokes the same functionality as the scheduled jobs do, you should try to look at them as two interfaces to the same functionality, e.g. like this:\n\nWrite the actual jobs as programs the REST server can fork and run.\nHave a separate scheduler that handles the timing of the jobs.\nIf a job is due to run, let the scheduler issue a corresponding REST request to the local server.\nThis way the scheduler only handles job descriptions, but has no own knowledge how they are implemented.\n\nIt's a common trait for long-running, high-availability processes to have an additional \"supervisor\" process that just checks the necessary demons are up and running, and restarts them as necessary.\n\n", "One option is to simply choose a lightweight WSGI server from this list:\n\nhttp://wsgi.org/wsgi/Servers\n\nand let it do the work of a long-running process that serves requests. (I would recommend Spawning.) Your code can concentrate on the REST API and handling requests through the well defined WSGI interface, and scheduling jobs. \nThere are at least a couple of scheduling libraries you could use, but I don't know much about them:\n\nhttp://sourceforge.net/projects/pycron/\nhttp://code.google.com/p/scheduler-py/\n\n", "Here's what we did.\n\nWrote a simple, pure-wsgi web application to respond to REST requests.\n\nStart jobs\nReport status of jobs\n\nExtended the built-in wsgiref server to use the select module to check for incoming requests.\n\nActivity on the socket is ordinary REST request, we let the wsgiref handle this.\nIt will -- eventually -- call our WSGI applications to respond to status and\nsubmit requests.\nTimeout means that we have to do two things:\n\nCheck all children that are running to see if they're done. Update their status, etc.\nCheck a crontab-like schedule to see if there's any scheduled work to do. This is a SQLite database that this server maintains.\n\n\n\n", "I usually use cron for scheduling. As for REST you can use one of the many, many web frameworks out there. But just running SimpleHTTPServer should be enough.\nYou can schedule the REST service startup with cron @reboot\n@reboot (cd /path/to/my/app && nohup python myserver.py&)\n\n", "The usual design pattern for a scheduler would be:\n\nMaintain a list of scheduled jobs, sorted by next-run-time (as Date-Time value);\nWhen woken up, compare the first job in the list with the current time. If it's due or overdue, remove it from the list and run it. Continue working your way through the list this way until the first job is not due yet, then go to sleep for (next_job_due_date - current_time);\nWhen a job finishes running, re-schedule it if appropriate;\nAfter adding a job to the schedule, wake up the scheduler process.\n\nTweak as appropriate for your situation (eg. sometimes you might want to re-schedule jobs to run again at the point that they start running rather than finish).\n" ]
[ 3, 1, 1, 0, 0 ]
[]
[]
[ "long_running_processes", "python", "scheduling", "web_services" ]
stackoverflow_0001107826_long_running_processes_python_scheduling_web_services.txt
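A minimal sketch of the scheduler pattern described in the last answer, using only the Python standard library; the Scheduler class, the (due_time, interval, func) job layout, and the fixed-interval rescheduling are illustrative assumptions rather than anything from the original answers.

import heapq
import threading
import time

class Scheduler(object):
    """Keep jobs in a heap ordered by next-run-time, as the answer suggests."""
    def __init__(self):
        self._jobs = []                   # heap of (due_time, interval, func)
        self._wakeup = threading.Event()  # set whenever the schedule changes

    def add(self, func, interval, delay=0.0):
        heapq.heappush(self._jobs, (time.time() + delay, interval, func))
        self._wakeup.set()                # wake the loop so it recomputes its sleep

    def run(self):
        while True:
            if not self._jobs:
                self._wakeup.wait()
                self._wakeup.clear()
                continue
            due, interval, func = self._jobs[0]
            delay = due - time.time()
            if delay > 0:
                # sleep until the first job is due, or until a new job arrives
                self._wakeup.wait(delay)
                self._wakeup.clear()
                continue
            heapq.heappop(self._jobs)
            func()                        # run the job (or fork/spawn it)
            # re-schedule relative to the start time, per the answer's closing note
            heapq.heappush(self._jobs, (due + interval, interval, func))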
Q: Manually logging out a user, after a site update in Django I have a website, which will be frequently updated. Sometimes changes happen to User specific models and are linked to sessions. After I update my site, I want the user to log out and log back in. So I would log out the user right then. If he logs back in, he will see the latest updates to the site. How do I do it? A: You could just reset your session table. This would log out every user. Of course, depending on what you're doing with sessions, it could have other implications (like emptying a shopping cart, for example). python manage.py reset sessions Or in raw SQL: DELETE FROM django_session
Manually logging out a user, after a site update in Django
I have a website, which will be frequently updated. Sometimes changes happen to User specific models and are linked to sessions. After I update my site, I want the user to log out and log back in. So I would log out the user right then. If he logs back in, he will see the latest updates to the site. How do I do it?
[ "You could just reset your session table. This would logout every user. Of course, depending on what your doing with sessions, it could have other implications (like emptying a shopping cart, for example).\npython manage.py reset sessions\n\nOr in raw SQL:\nDELETE FROM django_sessions\n\n" ]
[ 10 ]
[ "See this: http://docs.djangoproject.com/en/dev/topics/auth/#how-to-log-a-user-out\nThat seems to cover it.\n" ]
[ -1 ]
[ "authentication", "django", "logout", "python" ]
stackoverflow_0001107598_authentication_django_logout_python.txt
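If you would rather stay in Python than run raw SQL or the management command, the database-backed session engine exposes its rows through a normal model; a sketch of the same reset, assuming the default django.contrib.sessions backend:

from django.contrib.sessions.models import Session

# With database-backed sessions, deleting every row invalidates every
# session, forcing all users to log in again (and discarding any data
# stored in those sessions, e.g. shopping carts).
Session.objects.all().delete()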
Q: Twisted sometimes throws (seemingly incomplete) 'maximum recursion depth exceeded' RuntimeError Because the Twisted getPage function doesn't give me access to headers, I had to write my own getPageWithHeaders function. def getPageWithHeaders(contextFactory=None, *args, **kwargs): try: return _makeGetterFactory(url, HTTPClientFactory, contextFactory=contextFactory, *args, **kwargs) except: traceback.print_exc() This is exactly the same as the normal getPage function, except that I added the try/except block and return the factory object instead of returning the factory.deferred For some reason, I sometimes get a maximum recursion depth exceeded error here. It happens consistently a few times out of 700, usually on different sites each time. Can anyone shed any light on this? I'm not clear why or how this could be happening, and the Twisted codebase is large enough that I don't even know where to look. EDIT: Here's the traceback I get, which seems bizarrely incomplete: Traceback (most recent call last): File "C:\keep-alive\utility\background.py", line 70, in getPageWithHeaders factory = _makeGetterFactory(url, HTTPClientFactory, timeout=60 , contextFactory=context, *args, **kwargs) File "c:\Python26\lib\site-packages\twisted\web\client.py", line 449, in _makeGetterFactory factory = factoryFactory(url, *args, **kwargs) File "c:\Python26\lib\site-packages\twisted\web\client.py", line 248, in __init__ self.headers = InsensitiveDict(headers) RuntimeError: maximum recursion depth exceeded This is the entire traceback, which clearly isn't long enough to have exceeded our max recursion depth. Is there something else I need to do in order to get the full stack? I've never had this problem before; typically when I do something like def f(): return f() try: f() except: traceback.print_exc() then I get the kind of "maximum recursion depth exceeded" stack that you'd expect, with a ton of references to f() A: The specific traceback that you're looking at is a bit mystifying. You could try traceback.print_stack rather than traceback.print_exc to get a look at the entire stack above the problematic code, rather than just the stack going back to where the exception is caught. Without seeing more of your traceback I can't be certain, but you may be running into the problem where Deferreds will raise a recursion limit exception if you chain too many of them together. If you turn on Deferred debugging (from twisted.internet.defer import setDebugging; setDebugging(True)) you may get more useful tracebacks in some cases, but please be aware that this may also slow down your server quite a bit. A: You should look at the traceback you're getting together with the exception -- that will tell you what function(s) is/are recursing too deeply, "below" _makeGetterFactory. Most likely you'll find that your own getPageWithHeaders is involved in the recursion, exactly because instead of properly returning a deferred it tries to return a factory that's not ready yet. What happens if you do go back to returning the deferred?
Twisted sometimes throws (seemingly incomplete) 'maximum recursion depth exceeded' RuntimeError
Because the Twisted getPage function doesn't give me access to headers, I had to write my own getPageWithHeaders function. def getPageWithHeaders(contextFactory=None, *args, **kwargs): try: return _makeGetterFactory(url, HTTPClientFactory, contextFactory=contextFactory, *args, **kwargs) except: traceback.print_exc() This is exactly the same as the normal getPage function, except that I added the try/except block and return the factory object instead of returning the factory.deferred For some reason, I sometimes get a maximum recursion depth exceeded error here. It happens consistently a few times out of 700, usually on different sites each time. Can anyone shed any light on this? I'm not clear why or how this could be happening, and the Twisted codebase is large enough that I don't even know where to look. EDIT: Here's the traceback I get, which seems bizarrely incomplete: Traceback (most recent call last): File "C:\keep-alive\utility\background.py", line 70, in getPageWithHeaders factory = _makeGetterFactory(url, HTTPClientFactory, timeout=60 , contextFactory=context, *args, **kwargs) File "c:\Python26\lib\site-packages\twisted\web\client.py", line 449, in _makeGetterFactory factory = factoryFactory(url, *args, **kwargs) File "c:\Python26\lib\site-packages\twisted\web\client.py", line 248, in __init__ self.headers = InsensitiveDict(headers) RuntimeError: maximum recursion depth exceeded This is the entire traceback, which clearly isn't long enough to have exceeded our max recursion depth. Is there something else I need to do in order to get the full stack? I've never had this problem before; typically when I do something like def f(): return f() try: f() except: traceback.print_exc() then I get the kind of "maximum recursion depth exceeded" stack that you'd expect, with a ton of references to f()
[ "The specific traceback that you're looking at is a bit mystifying. You could try traceback.print_stack rather than traceback.print_exc to get a look at the entire stack above the problematic code, rather than just the stack going back to where the exception is caught.\nWithout seeing more of your traceback I can't be certain, but you may be running into the problem where Deferreds will raise a recursion limit exception if you chain too many of them together.\nIf you turn on Deferred debugging (from twisted.internet.defer import setDebugging; setDebugging(True)) you may get more useful tracebacks in some cases, but please be aware that this may also slow down your server quite a bit.\n", "You should look at the traceback you're getting together with the exception -- that will tell you what function(s) is/are recursing too deeply, \"below\" _makeGetterFactory. Most likely you'll find that your own getPageWithHeaders is involved in the recursion, exactly because instead of properly returning a deferred it tries to return a factory that's not ready yet. What happens if you do go back to returning the deferred?\n" ]
[ 2, 1 ]
[ "The URL opener is likely following an un-ending series of 301 or 302 redirects.\n" ]
[ -1 ]
[ "python", "twisted" ]
stackoverflow_0001104587_python_twisted.txt
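Following the last answer, one way to keep the deferred semantics and still reach the headers is to return the deferred together with the factory, since twisted.web.client.HTTPClientFactory stores the response headers on itself once the request completes; a sketch (parameter handling simplified from the question's version):

from twisted.web.client import HTTPClientFactory, _makeGetterFactory

def getPageWithHeaders(url, contextFactory=None, *args, **kwargs):
    factory = _makeGetterFactory(url, HTTPClientFactory,
                                 contextFactory=contextFactory,
                                 *args, **kwargs)
    # Callers chain on the deferred as with getPage; the factory's
    # response_headers attribute is populated when the request finishes.
    return factory.deferred, factory

def gotPage(body, factory):
    print factory.response_headers
    return body

d, f = getPageWithHeaders('http://example.com/')
d.addCallback(gotPage, f)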
Q: What's a good way to render outlined fonts? I'm writing a game in Python with pygame and need to render text onto the screen. I want to render this text in one colour with an outline, so that I don't have to worry about what sort of background the text is being displayed over. pygame.font doesn't seem to offer support for doing this sort of thing directly, and I'm wondering if anyone has any good solutions for achieving this? A: A quick and dirty way would be to render your text multiple times with the outline color, shifted by small amounts on a circle around the text position: 1 8 | 2 \ | / \|/ 7----*----3 /|\ / | \ 6 | 4 5 Edit: Doh, you've been faster! I won't delete my answer though, this ASCII art is just too good and deserves to live! Edit 2: As OregonGhost mentioned, you may need more or fewer steps for the outline rendering, depending on your outline width. A: I can give you a quick and bad solution: print the text 8 times, to surround it, plus one more time for the inner text, like this UUU UIU UUU U for outer color and I for the inner color.
What's a good way to render outlined fonts?
I'm writing a game in Python with pygame and need to render text onto the screen. I want to render this text in one colour with an outline, so that I don't have to worry about what sort of background the text is being displayed over. pygame.font doesn't seem to offer support for doing this sort of thing directly, and I'm wondering if anyone has any good solutions for achieving this?
[ "A quick and dirty way would be to render your text multiple times with the outline color, shifted by small amounts on a circle around the text position:\n\n 1\n 8 | 2\n \\ | /\n \\|/\n 7----*----3\n /|\\\n / | \\ \n 6 | 4\n 5\n\nEdit: Doh you've been faster ! I wont delete my answer though, this ASCII art is just too good and deserves to live !\nEdit 2: As OregonGhost mentioned, you may need more or fewer steps for the outline rendering, depending on your outline width.\n", "I can give you a quick and bad solution:\nprint the text 8 times, to surround it, plus one more time for the inner text, like this\nUUU\nUIU\nUUU\n\nU for outer color and I for the inner color.\n" ]
[ 4, 3 ]
[]
[]
[ "fonts", "pygame", "python" ]
stackoverflow_0001109498_fonts_pygame_python.txt
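A sketch of the eight-shifts idea in pygame, for a one-pixel outline (thicker outlines need more offsets, as noted above); the function name and colour defaults are arbitrary:

import pygame

def render_outlined(font, text, inner=(255, 255, 255), outline=(0, 0, 0)):
    # Blit the outline-coloured text at the 8 surrounding offsets,
    # then the inner-coloured text on top -- the U/I scheme above.
    shadow = font.render(text, True, outline)
    w, h = shadow.get_size()
    surface = pygame.Surface((w + 2, h + 2), pygame.SRCALPHA)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx or dy:
                surface.blit(shadow, (1 + dx, 1 + dy))
    surface.blit(font.render(text, True, inner), (1, 1))
    return surface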
Q: How can I tell if my script is being run from a cronjob or from the command line? I have a script and its display shows upload progress by writing to the same console line. When the script is run from a cron job, rather than writing to a single line, I get many lines: *** E0710091001.DAT *** [0.67%] *** E0710091001.DAT *** [1.33%] *** E0710091001.DAT *** [2.00%] *** E0710091001.DAT *** [2.66%] *** E0710091001.DAT *** [3.33%] *** E0710091001.DAT *** [3.99%] *** E0710091001.DAT *** [4.66%] *** E0710091001.DAT *** [5.32%] *** E0710091001.DAT *** [5.99%] *** E0710091001.DAT *** [6.65%] *** E0710091001.DAT *** [7.32%] *** E0710091001.DAT *** [7.98%] *** E0710091001.DAT *** [8.65%] *** E0710091001.DAT *** [9.32%] *** E0710091001.DAT *** [9.98%] *** E0710091001.DAT *** [10.65%] *** E0710091001.DAT *** [11.31%] *** E0710091001.DAT *** [11.98%] *** E0710091001.DAT *** [12.64%] *** E0710091001.DAT *** [13.31%] *** E0710091001.DAT *** [13.97%] *** E0710091001.DAT *** [14.64%] *** E0710091001.DAT *** [15.30%] *** E0710091001.DAT *** [15.97%] *** E0710091001.DAT *** [16.63%] *** E0710091001.DAT *** [17.30%] *** E0710091001.DAT *** [17.97%] *** E0710091001.DAT *** [18.63%] I just want to know if I can tell from inside the script if it's being called from cron, and if so, I won't display this output. A: You could create a flag (possibly undocumented) that your cron job would pass to the utility to structure the output. A: I'd check sys.stderr.isatty() -- this way you avoid useless "decoration" output to stderr whenever it wouldn't be immediately perceptible by the user anyway. A: See code below. Replace my print statements with what you want to show. import sys if sys.stdout.isatty(): print "Running from command line" else: print "Running from cron" A: You want to check if you're on a terminal or not. See this stack overflow question: How to detect if my shell script is running through a pipe? A: An easy way would be to have the script take an argument that means to suppress that output, and supply that argument when you call it from cron.
How can I tell if my script is being run from a cronjob or from the command line?
I have a script and its display shows upload progress by writing to the same console line. When the script is run from a cron job, rather than writing to a single line, I get many lines: *** E0710091001.DAT *** [0.67%] *** E0710091001.DAT *** [1.33%] *** E0710091001.DAT *** [2.00%] *** E0710091001.DAT *** [2.66%] *** E0710091001.DAT *** [3.33%] *** E0710091001.DAT *** [3.99%] *** E0710091001.DAT *** [4.66%] *** E0710091001.DAT *** [5.32%] *** E0710091001.DAT *** [5.99%] *** E0710091001.DAT *** [6.65%] *** E0710091001.DAT *** [7.32%] *** E0710091001.DAT *** [7.98%] *** E0710091001.DAT *** [8.65%] *** E0710091001.DAT *** [9.32%] *** E0710091001.DAT *** [9.98%] *** E0710091001.DAT *** [10.65%] *** E0710091001.DAT *** [11.31%] *** E0710091001.DAT *** [11.98%] *** E0710091001.DAT *** [12.64%] *** E0710091001.DAT *** [13.31%] *** E0710091001.DAT *** [13.97%] *** E0710091001.DAT *** [14.64%] *** E0710091001.DAT *** [15.30%] *** E0710091001.DAT *** [15.97%] *** E0710091001.DAT *** [16.63%] *** E0710091001.DAT *** [17.30%] *** E0710091001.DAT *** [17.97%] *** E0710091001.DAT *** [18.63%] I just want to know if I can tell from inside the script if it's being called from cron, and if so, I won't display this output.
[ "you could create a flag. Possibly undocumented that your cron job would pass to the utility to structure the output.\n", "I'd check sys.stderr.isatty() -- this way you avoid useless \"decoration\" output to stderr whenever it wouldn't be immediately perceptible by the user anyway.\n", "See code below. Replace my print statements with what you want to show.\nimport sys\nif sys.stdout.isatty():\n print \"Running from command line\"\nelse:\n print \"Running from cron\"\n\n", "You want to check if you're on a terminal or not. See this stack overflow question:\nHow to detect if my shell script is running through a pipe?\n", "An easy way would be to have the script take an argument that means to suppress that output, and supply that argument when you call it from cron.\n" ]
[ 10, 9, 8, 6, 2 ]
[]
[]
[ "cron", "python" ]
stackoverflow_0001110203_cron_python.txt
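Combining the isatty() answers with the question's progress display: check for a terminal once, then either rewrite a single console line with a carriage return or stay quiet; a sketch, with the format string mimicking the question's output:

import sys

INTERACTIVE = sys.stdout.isatty()   # False under cron (no controlling terminal)

def report_progress(filename, percent):
    if INTERACTIVE:
        # \r returns to the start of the line so it is rewritten in place
        sys.stdout.write("\r*** %s *** [%.2f%%]" % (filename, percent))
        sys.stdout.flush()
    # else: running under cron -- suppress (or write to a log file instead)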
Q: Python clock function on FreeBSD While testing Python's time.clock() function on FreeBSD I've noticed it always returns the same value, around 0.156. The time.time() function works properly but I need something with a slightly higher resolution. Does anyone know the C function it's bound to, and is there an alternative high resolution timer? I'm not profiling so the TimeIt module is not really appropriate here. A: Python's time.clock calls C function clock(3) -- man clock should confirm that it's supposed to work on BSD, so I don't know why it's not working for you. Maybe you can try working around this apparent bug in your Python port by using ctypes to call the clock function from the system C library directly (if you have said library as a .so/.dynlib/.dll or whatever dynamic shared libraries are called on FreeBSD)? time.time is supposed to be very high resolution, BTW, as internally it calls gettimeofday (well, in a properly built Python, anyway) -- what resolution do you observe for it on your system? Edit: here's wat.c, a BSD-specific extension (tested on my Mac only -- sorry but I have no other BSD flavor at hand right now) to work around this apparent FreeBSD port problem: #include "Python.h" #include <sys/time.h> static PyObject * wat_time(PyObject *self, PyObject *args) { struct timeval t; if (gettimeofday(&t, (struct timezone *)NULL) == 0) { double result = (double)t.tv_sec + t.tv_usec*0.000001; return PyFloat_FromDouble(result); } return PyErr_SetFromErrno(PyExc_OSError); } static PyMethodDef wat_methods[] = { {"time", wat_time, METH_VARARGS, PyDoc_STR("time() -> microseconds since epoch")}, {NULL, NULL} /* sentinel */ }; PyDoc_STRVAR(wat_module_doc, "Workaround for time.time issues on FreeBsd."); PyMODINIT_FUNC initwat(void) { Py_InitModule3("wat", wat_methods, wat_module_doc); } And here's the setup.py to put in the same directory: from distutils.core import setup, Extension setup (name = "wat", version = "0.1", maintainer = "Alex Martelli", maintainer_email = "[email protected]", url = "http://www.aleax.it/wat.zip", description = "WorkAround for Time in FreeBSD", ext_modules = [Extension('wat', sources=['wat.c'])], ) The URL is correct, so you can also get these two files zipped up here. To build & install this extension, python setup.py install (if you have permission to write in your Python's installation) or python setup.py build_ext -i to write wat.so in the very directory in which you put the sources (and then manually move it wherever you prefer to have it, but first try it out e.g. with python -c'import wat; print repr(wat.time())' in the same directory in which you've built it). Please let me know how it works on FreeBSD (or any other Unix flavor with gettimeofday!-) -- if the C compiler complains about gettimeofday, you may be on a system which doesn't want to see its second argument, try without it!-). A: time.clock() returns CPU time on UNIX systems, and wallclock time since program start on Windows. This is a very unfortunate asymmetry, in my opinion. You can find the definition for time.time() in the Python sources here (link to Google Code Search). It seems to use the highest-resolution timer available, which according to a quick Googling is gettimeofday() on FreeBSD as well, and that should be in the microsecond accuracy class. However, if you really need more accuracy, you could look into writing your own C module for really high-resolution timing (something that might just return the current microsecond count, maybe!). 
Pyrex makes Python extension writing very effortless, and SWIG is the other common choice. (Though really, if you want to shave as many microseconds as possible off your timer accuracy, just write it as a pure C Python extension yourself.) Ctypes is also an option, but probably rather slow. Best of luck! A: time.clock() returns the processor time. That is, how much time the current process has used on the processor. So if you have a Python script called "clock.py" that does import time;print time.clock() it will indeed print almost exactly the same value each time you run it, as a new process is started each time. Here is a Python console log that might explain it to you: >>> import time >>> time.clock() 0.11 >>> time.clock() 0.11 >>> time.clock() 0.11 >>> for x in xrange(100000000): pass ... >>> time.clock() 7.7800000000000002 >>> time.clock() 7.7800000000000002 >>> time.clock() 7.7800000000000002 I hope this clarifies things. A: time.clock() is implemented to return a double value resulting from ((double)clock()) / CLOCKS_PER_SEC Why do you think time.time() has bad resolution? It uses gettimeofday, which in turn reads the hardware clock, which has very good resolution.
Python clock function on FreeBSD
While testing Python's time.clock() function on FreeBSD I've noticed it always returns the same value, around 0.156. The time.time() function works properly but I need something with a slightly higher resolution. Does anyone know the C function it's bound to, and is there an alternative high resolution timer? I'm not profiling so the TimeIt module is not really appropriate here.
[ "Python's time.clock calls C function clock(3) -- man clock should confirm that it's supposed to work on BSD, so I don't know why it's not working for you. Maybe you can try working around this apparent bug in your Python port by using ctypes to call the clock function from the system C library directly (if you have said library as a .so/.dynlib/.dll or whatever dynamic shared libraries are called on FreeBSD)?\ntime.time is supposed to be very high resolution, BTW, as internally it calls gettimeofday (well, in a properly built Python, anyway) -- what resolution do you observe for it on your system?\nEdit: here's wat.c, a BSD-specific extension (tested on my Mac only -- sorry but I have no other BSD flavor at hand right know) to work around this apparent FreeBSD port problem:\n#include \"Python.h\"\n#include <sys/time.h>\n\nstatic PyObject *\nwat_time(PyObject *self, PyObject *args)\n{\n struct timeval t;\n if (gettimeofday(&t, (struct timezone *)NULL) == 0) {\n double result = (double)t.tv_sec + t.tv_usec*0.000001;\n return PyFloat_FromDouble(result);\n }\n return PyErr_SetFromErrno(PyExc_OSError);\n}\n\nstatic PyMethodDef wat_methods[] = {\n {\"time\", wat_time, METH_VARARGS,\n PyDoc_STR(\"time() -> microseconds since epoch\")},\n {NULL, NULL} /* sentinel */\n};\n\nPyDoc_STRVAR(wat_module_doc,\n\"Workaround for time.time issues on FreeBsd.\");\n\nPyMODINIT_FUNC\ninitwat(void)\n{\n Py_InitModule3(\"wat\", wat_methods, wat_module_doc);\n}\n\nAnd here's the setup.py to put in the same directory:\nfrom distutils.core import setup, Extension\n\nsetup (name = \"wat\",\n version = \"0.1\",\n maintainer = \"Alex Martelli\",\n maintainer_email = \"[email protected]\",\n url = \"http://www.aleax.it/wat.zip\",\n description = \"WorkAround for Time in FreeBSD\",\n ext_modules = [Extension('wat', sources=['wat.c'])],\n)\n\nThe URL is correct, so you can also get these two files zipped up here.\nTo build & install this extension, python setup.py install (if you have permission to write in your Python's installation) or python setup.py build_ext -i to write wat.so in the very directory in which you put the sources (and then manually move it wherever you prefer to have it, but first try it out e.g. with python -c'import wat; print repr(wat.time())' in the same directory in which you've built it).\nPlease let me know how it works on FreeBSD (or any other Unix flavor with gettimeofday!-) -- if the C compiler complains about gettimeofday, you may be on a system which doesn't want to see its second argument, try without it!-).\n", "time.clock() returns CPU time on UNIX systems, and wallclock time since program start on Windows. This is a very unfortunate insymmetry, in my opinion.\nYou can find the definition for time.time() in the Python sources here (link to Google Code Search). It seems to use the highest-resolution timer available, which according to a quick Googling is gettimeofday() on FreeBSD as well, and that should be in the microsecond accuracy class.\nHowever, if you really need more accuracy, you could look into writing your own C module for really high-resolution timing (something that might just return the current microsecond count, maybe!). Pyrex makes Python extension writing very effortless, and SWIG is the other common choice. (Though really, if you want to shave as many microseconds off your timer accuracy, just write it as a pure C Python extension yourself.) Ctypes is also an option, but probably rather slow.\nBest of luck!\n", "time.clock() returns the processor time. 
That is, how much time the current process has used on the processor. So if you have a Python script called \"clock.py\", that does import time;print time.clock() it will indeed print about exactly the same each time you run it, as a new process is started each time.\nHere is a python console log that might explain it to you:\n>>> import time\n>>> time.clock()\n0.11\n>>> time.clock()\n0.11\n>>> time.clock()\n0.11\n>>> for x in xrange(100000000): pass\n... \n>>> time.clock()\n7.7800000000000002\n>>> time.clock()\n7.7800000000000002\n>>> time.clock()\n7.7800000000000002\n\nI hope this clarifies things.\n", "time.clock() is implemented to return a double value resulting from\n ((double)clock()) / CLOCKS_PER_SEC\n\nWhy do you think time.time() has bad resolution? It uses gettimeofday, which in turn reads the hardware clock, which has very good resolution.\n" ]
[ 3, 1, 1, 0 ]
[]
[]
[ "bsd", "freebsd", "gettimeofday", "python", "timer" ]
stackoverflow_0001110063_bsd_freebsd_gettimeofday_python_timer.txt
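The ctypes route suggested in the first answer might look like the sketch below, assuming the platform C library exposes gettimeofday(2) and that ctypes.util.find_library can locate it:

import ctypes
import ctypes.util

class timeval(ctypes.Structure):
    _fields_ = [("tv_sec", ctypes.c_long), ("tv_usec", ctypes.c_long)]

libc = ctypes.CDLL(ctypes.util.find_library("c"))

def hires_time():
    """Seconds since the epoch as a float, straight from gettimeofday()."""
    t = timeval()
    if libc.gettimeofday(ctypes.byref(t), None) != 0:
        raise OSError("gettimeofday failed")
    return t.tv_sec + t.tv_usec * 1e-6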
Q: How can I check for a blank image in Qt or PyQt? I have generated a collection of images. Some of them are blank as in their background is white. I have access to the QImage object of each of the images. Is there a Qt way to check for blank images? If not, can someone recommend the best way to do it in Python? A: I don't know about Qt, but there is an easy and efficient way to do it in PIL Using the getextrema method, example: im = Image.open('image.png') bands = im.split() isBlank = all(band.getextrema() == (255, 255) for band in bands) From the documentation: im.getextrema() => 2-tuple Returns a 2-tuple containing the minimum and maximum values of the image. In the current version of PIL, this is only applicable to single-band images. A: Well, I would count the colors in the image. If there is only one, then the image is blank. I do not know enough Python or qt to write code for this but I am sure there is a library that can tell you how many colors there are in an image (I am going to look into using ImageMagick for this right after I post this). Update: Here is the Perl code (apologies) to do this using Image::Magick. You should be able to convert it to Python using the Python bindings. Clearly, this only works for palette based images. #!/usr/bin/perl use strict; use warnings; use Image::Magick; die "Call with image file name\n" unless @ARGV == 1; my ($file) = @ARGV; my $image = Image::Magick->new; my $result = $image->Read( $file ); die "$result" if "$result"; my $colors = $image->Get('colors'); my %unique_colors; for ( my $i = 0; $i < $colors; ++$i ) { $unique_colors{ $image->Get("colormap[$i]") } = undef; } print "'$file' is blank\n" if keys %unique_colors == 1; __END__
How can I check for a blank image in Qt or PyQt?
I have generated a collection of images. Some of them are blank as in their background is white. I have access to the QImage object of each of the images. Is there a Qt way to check for blank images? If not, can someone recommend the best way to do it in Python?
[ "I don't know about Qt, but there is an easy and efficient way to do it in PIL\nUsing the getextrema method, example:\nim = Image.open('image.png')\nbands = im.split()\nisBlank = all(band.getextrema() == (255, 255) for band in bands)\n\nFrom the documentation:\n\nim.getextrema() => 2-tuple\nReturns a 2-tuple containing the\n minimum and maximum values of the\n image. In the current version of PIL,\n this is only applicable to single-band\n images.\n\n", "Well, I would count the colors in the image. If there is only one, then the image is blank. I do not know enough Python or qt to write code for this but I am sure there is a library that can tell you how many colors there are in an image (I am going to look into using ImageMagick for this right after I post this).\nUpdate: Here is the Perl code (apologies) to do this using Image::Magick. You should be able to convert it to Python using the Python bindings.\nClearly, this only works for palette based images.\n#!/usr/bin/perl\n\nuse strict;\nuse warnings;\n\nuse Image::Magick;\n\ndie \"Call with image file name\\n\" unless @ARGV == 1;\nmy ($file) = @ARGV;\n\nmy $image = Image::Magick->new;\n\nmy $result = $image->Read( $file );\ndie \"$result\" if \"$result\";\n\nmy $colors = $image->Get('colors');\n\nmy %unique_colors;\n\nfor ( my $i = 0; $i < $colors; ++$i ) {\n $unique_colors{ $image->Get(\"colormap[$i]\") } = undef;\n}\n\nprint \"'$file' is blank\\n\" if keys %unique_colors == 1;\n\n__END__\n\n" ]
[ 5, 1 ]
[]
[]
[ "pyqt4", "python", "qt4" ]
stackoverflow_0001110403_pyqt4_python_qt4.txt
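If you want to stay inside Qt rather than pull in PIL, a straightforward (if unoptimized) check is to scan the QImage for any non-white pixel; a PyQt4 sketch, assuming "blank" means uniformly white and the image is opaque:

from PyQt4.QtGui import qRgb

def is_blank(image, blank=qRgb(255, 255, 255)):
    # QImage.pixel() returns the pixel as a QRgb value; stop at the
    # first pixel that differs from the background colour.
    for y in range(image.height()):
        for x in range(image.width()):
            if image.pixel(x, y) != blank:
                return False
    return True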
Q: python sqlalchemy performance? Hi, I made an ICAPServer (similar to an HTTP server) for which performance is very important. The DB module is SQLAlchemy. I then ran a test on the performance of SQLAlchemy; as a result, I found that it takes about 30ms for SQLAlchemy to write <50kb of data to the DB (Oracle). I don't know if that result is normal or if I did something wrong. But, right or wrong, it seems the bottleneck comes from the DB part. How can I improve the performance of SQLAlchemy? Or is it up to the DBA to improve Oracle? BTW, the ICAPServer and Oracle are on the same PC, and I used SQLAlchemy in the basic way. A: You should first measure where your bottleneck is, for example using the profile module. Then optimize the slowest part of the system, if you have the possibility to. A: You can only push SQLAlchemy so far as a programmer. I would agree with you that the rest of the performance is up to your DBA, including creating proper indexes on tables, etc. A: I had some issues with sqlalchemy's performance as well - I think you should first figure out in which ways you are using it ... they recommend that for big data sets it is better to use the SQL expression language. Either way, try to optimize the sqlalchemy code and have the Oracle database optimized as well, so you can better figure out what's wrong. Also, do some tests on the database.
python sqlalchemy performance?
Hi, I made an ICAPServer (similar to an HTTP server) for which performance is very important. The DB module is SQLAlchemy. I then ran a test on the performance of SQLAlchemy; as a result, I found that it takes about 30ms for SQLAlchemy to write <50kb of data to the DB (Oracle). I don't know if that result is normal or if I did something wrong. But, right or wrong, it seems the bottleneck comes from the DB part. How can I improve the performance of SQLAlchemy? Or is it up to the DBA to improve Oracle? BTW, the ICAPServer and Oracle are on the same PC, and I used SQLAlchemy in the basic way.
[ "You should first measure where your bottleneck is, for example using the profile module.\nThen optimize, if you have the possibility to, the slowest part of the system.\n", "You can only push SQLAlchemy so far as a programmer. I would agree with you that the rest of the performance is up to your DBA, including creating proper indexes on tables, etc.\n", "I had some issues with sqlalchemy's performance as well - I think you should first figure out in which ways you are using it ... they recommend that for big data sets is better to use the sql expression language. Either ways try and optimize the sqlalchemy code and have the Oracle database optimized as well, so you can better figure out what's wrong.\nAlso, do some tests on the database.\n" ]
[ 5, 1, 1 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0001110805_python_sqlalchemy.txt
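Acting on the first answer, the standard profiler will show whether the 30ms goes into SQLAlchemy's Python layer or into the Oracle round-trip; a sketch, where write_record stands in for your own insert code (a hypothetical name):

import cProfile
import pstats

cProfile.run('write_record(session, data)', 'insert.prof')
stats = pstats.Stats('insert.prof')
stats.sort_stats('cumulative').print_stats(20)  # top 20 calls by cumulative time

# Setting echo=True on create_engine() also logs every generated SQL
# statement, which helps separate ORM overhead from database time.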
Q: @staticmethod gives SyntaxError: invalid syntax I have been using a Python script for a long while and all of a sudden it gives me: File "youtube-dl.py", line 103 @staticmethod ^ SyntaxError: invalid syntax If you want to see the script, it's right here: http://bitbucket.org/rg3/youtube-dl/raw/2009.06.29/youtube-dl What could be the reason? Update I am using Python version 2.3.4. A: You might be using an old Python version that didn't support decorators yet.
@staticmethod gives SyntaxError: invalid syntax
I have been using a Python script for a long while and all of a sudden it gives me: File "youtube-dl.py", line 103 @staticmethod ^ SyntaxError: invalid syntax If you want to see the script, it's right here: http://bitbucket.org/rg3/youtube-dl/raw/2009.06.29/youtube-dl What could be the reason? Update I am using Python version 2.3.4.
[ "You might be using an old Python version that didn't support decorators yet.\n" ]
[ 5 ]
[]
[]
[ "python" ]
stackoverflow_0001111227_python.txt
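Decorator syntax only arrived in Python 2.4, so under the 2.3.4 interpreter mentioned in the update the @ line is a syntax error. On older versions the same effect is spelled by rebinding the name; a sketch with illustrative class and method names:

class Downloader:
    def suitable(url):
        # no self argument: this behaves as a static method
        return url.startswith("http://")
    suitable = staticmethod(suitable)  # pre-2.4 spelling of @staticmethod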
Q: Why does Django's signal handling use weak references for callbacks by default? The Django docs say this on the subject: Note also that Django stores signal handlers as weak references by default, so if your handler is a local function, it may be garbage collected. To prevent this, pass weak=False when you call the signal’s connect(). I haven't been able to find any justification for why this is the default, and I don't understand why you would ever want a signal that you explicitly registered to implicitly disappear. So what is the use-case for weak references here? And why is it the default? I realize it probably doesn't matter either way in 99% of cases, but clearly there's something I don't understand here, and I want to know if there's any "gotchas" lurking that might bite me someday. A: Signal handlers are stored as weak references so that the objects they reference can still be garbage collected (for example, after explicit deletion of the signal handler) instead of being kept alive just because a signal connection is still flying around. A: Bound methods keep a reference to the object they belong to (otherwise, they cannot fill self, cf. the Python documentation). Consider the following code: import gc class SomeLargeObject(object): def on_foo(self): pass slo = SomeLargeObject() callbacks = [slo.on_foo] print [o for o in gc.get_objects() if isinstance(o, SomeLargeObject)] del slo print [o for o in gc.get_objects() if isinstance(o, SomeLargeObject)] callbacks = [] print [o for o in gc.get_objects() if isinstance(o, SomeLargeObject)] The output: [<__main__.SomeLargeObject object at 0x15001d0>] [<__main__.SomeLargeObject object at 0x15001d0>] [] One important thing to know when keeping weakrefs on callbacks is that you cannot weakref bound methods directly, because they are always created on the fly: >>> class SomeLargeObject(object): ... def on_foo(self): pass >>> import weakref >>> def report(o): ... print "about to collect" >>> slo = SomeLargeObject() >>> #second argument: function that is called when weakref'ed object is finalized >>> weakref.proxy(slo.on_foo, report) about to collect <weakproxy at 0x7f9abd3be208 to NoneType at 0x72ecc0>
Why does Django's signal handling use weak references for callbacks by default?
The Django docs say this on the subject: Note also that Django stores signal handlers as weak references by default, so if your handler is a local function, it may be garbage collected. To prevent this, pass weak=False when you call the signal’s connect(). I haven't been able to find any justification for why this is the default, and I don't understand why you would ever want a signal that you explicitly registered to implicitly disappear. So what is the use-case for weak references here? And why is it the default? I realize it probably doesn't matter either way in 99% of cases, but clearly there's something I don't understand here, and I want to know if there's any "gotchas" lurking that might bite me someday.
[ "Signals handlers are stored as weak references to avoid the object they reference from not being garbage collected (for example after explicit deletion of the signal handler), just because a signal is still flying around.\n", "Bound methods keep a reference to the object they belong to (otherwise, they cannot fill self, cf. the Python documentation). Consider the following code:\nimport gc\nclass SomeLargeObject(object):\n def on_foo(self): pass\n\nslo = SomeLargeObject()\ncallbacks = [slo.on_foo]\n\nprint [o for o in gc.get_objects() if isinstance(o, SomeLargeObject)]\ndel slo\nprint [o for o in gc.get_objects() if isinstance(o, SomeLargeObject)]\ncallbacks = []\nprint [o for o in gc.get_objects() if isinstance(o, SomeLargeObject)]\n\nThe output:\n[<__main__.SomeLargeObject object at 0x15001d0>]\n[<__main__.SomeLargeObject object at 0x15001d0>]\n[]\n\nOne important thing to know when keeping weakrefs on callbacks is that you cannot weakref bound methods directly, because they are always created on the fly:\n>>> class SomeLargeObject(object):\n... def on_foo(self): pass\n>>> import weakref\n>>> def report(o):\n... print \"about to collect\"\n>>> slo = SomeLargeObject()\n>>> #second argument: function that is called when weakref'ed object is finalized\n>>> weakref.proxy(slo.on_foo, report)\nabout to collect\n<weakproxy at 0x7f9abd3be208 to NoneType at 0x72ecc0>\n\n" ]
[ 14, 6 ]
[]
[]
[ "django", "django_signals", "garbage_collection", "python", "weak_references" ]
stackoverflow_0001110668_django_django_signals_garbage_collection_python_weak_references.txt
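Tying the two answers back to Django: a handler defined in a local scope is exactly the case the quoted docs warn about, and weak=False is the fix; a sketch, with the signal choice and handler name invented for illustration:

from django.db.models.signals import post_save

def register_handlers():
    def on_save(sender, **kwargs):
        print "saved:", sender
    # on_save is local, so under the default weak=True the dispatcher only
    # holds a weak reference; once register_handlers() returns, the function
    # can be garbage collected and the handler silently stops firing.
    # weak=False makes the dispatcher keep a normal (strong) reference.
    post_save.connect(on_save, weak=False)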
Q: Is there anything like HTTP::Recorder for Python? I really like Perl's HTTP::Recorder. Is there something like it for Python? A: I'm aware of Scotch and FunkLoad, but I don't know how they compare with HTTP::Recorder. See the following links for more details: http://darcs.idyll.org/~t/projects/scotch/doc/ also see the subsection "Other Python Recorders and Proxies" http://funkload.nuxeo.org/#test-recorder
Is there anything like HTTP::Recorder for Python?
I really like Perl's HTTP::Recorder. Is there something like it for Python?
[ "I'm aware of Scotch and FunkLoad, but I don't know how they compare with HTTP::Recorder. See the following links for more details:\n\nhttp://darcs.idyll.org/~t/projects/scotch/doc/\n\nalso see the subsection \"Other Python Recorders and Proxies\"\n\nhttp://funkload.nuxeo.org/#test-recorder\n\n" ]
[ 4 ]
[]
[]
[ "http", "proxy", "python", "record" ]
stackoverflow_0001111356_http_proxy_python_record.txt
Q: How to build an Ecommerce Shopping Cart in Django? Is there a book or a tutorial which teaches how to build a shopping cart with Django or any other Python framework? A: There's a book coming out that talks about just that. See here: http://www.apress.com/book/view/9781430225355 Edit: The above link is dead, so here's a working link for the book: https://play.google.com/store/books/details?id=LwO1GzMN_QsC It's called Beginning Django E-Commerce by James McGaw. A: Satchmo project is a known open source shopping cart. http://www.satchmoproject.com/ A: Ingredients: one cup PayPal (or substitute with another equivalent payment system) few cups html add css to taste add django if desired Cooking: Mix well. Bake for 1-2 months. Release as open source :-)
How to build an Ecommerce Shopping Cart in Django?
Is there a book or a tutorial which teaches how to build a shopping cart with Django or any other Python framework?
[ "There's a book coming out that talks about just that. See here:\nhttp://www.apress.com/book/view/9781430225355\nEdit: The above link is dead, so here's a working link for the book: https://play.google.com/store/books/details?id=LwO1GzMN_QsC\nIt's called Beginning Django E-Commerce by James McGaw.\n", "Satchmo project is a known open source shopping cart.\nhttp://www.satchmoproject.com/\n", "Ingredients:\n\none cup PayPal (or subsitute with other equivalent payment system)\nfew cups html\nadd css to taste\nadd django if desired\n\nCooking:\n\nMix well.\nBake for 1-2 month.\n\nRelease as open source :-) \n" ]
[ 7, 5, 3 ]
[]
[]
[ "django", "python", "shopping_cart" ]
stackoverflow_0001111173_django_python_shopping_cart.txt
Q: Separation of ORM and validation I use Django and I wonder where model validation should go. There are at least two variants: Validate in the model's save method and raise IntegrityError or another exception if business rules are violated Validate data using forms and built-in clean_* facilities From one point of view, the answer is obvious: one should use form-based validation. It is because ORM is ORM and validation is a completely different concept. Take a look at CharField: forms.CharField allows min_length specification, but models.CharField does not. OK, cool, but what the hell are all those validation features doing in django.db.models? I can specify that CharField can't be blank, and I can use EmailField, FileField, and SlugField, whose validation is performed here, in Python, not in the RDBMS. Furthermore, there is URLField, which checks the existence of the URL, involving some really complex logic. On the other hand, if I have an entity I want to guarantee that it will not be saved in an inconsistent state, whether it came from a form or was modified/created by some internal algorithms. I have a model with a name field; I expect it to be longer than one character. I also have min_age and max_age fields; it makes little sense if min_age > max_age. So should I check such conditions in the save method? What are the best practices of model validation? A: I am not sure if this is best practise but what I do is that I tend to validate both client side and server side before pushing the data to the database. I know it requires a lot more effort but this can be done by setting some values before use and then maintaining them. You could also try pushing in size constraints with **kwargs into a validation function that is called before the put() call. A: Your two options are two different things. Form-based validation can be regarded as syntactic validation + converting HTTP request parameters from text to Python types. Model-based validation can be regarded as semantic validation, sometimes using context not available at the HTTP/form layer. And of course there is a third layer at the DB where constraints are enforced, and they may not be checkable anywhere else because of concurrent requests updating the database (e.g. uniqueness constraints, optimistic locking). A: There's an ongoing Google Summer of Code project that aims to bring validation to the Django model layer. You can read more about it in this presentation from the GSoC student (Honza Kral). There's also a github repository with the preliminary code. Until that code finds its way into a Django release, one recommended approach is to use ModelForms to validate data, even if the source isn't a form. It's described in this blog entry from one of the Django core devs. A: "but what the hell are all those validation features doing in django.db.models?" One word: Legacy. Early versions of Django had less robust forms and the validation was scattered. "So should I check such conditions in the save method?" No, you should use a form for all validation. "What are the best practices of model validation?" Use a form for all validation. "whether it came from a form or was modified/created by some internal algorithms" What? If your algorithms suffer from psychotic episodes or your programmers are sociopaths, then -- perhaps -- you have to validate internally-generated data. Otherwise, internally-generated data is -- by definition -- valid. Only user data can be invalid. If you don't trust your software, what's the point of writing it? 
Are your unit tests broken? A: DB/Model validation The data stored in the database must always be in a certain form/state. For example: required first name, last name, foreign key, unique constraint. This is where the logic of your app resides. No matter where you think the data comes from - it should be "validated" here and an exception raised if the requirements are not met. Form validation Data being entered should look right. It is OK if this data is entered differently through some other means (through the admin or API calls). Examples: length of person's name, proper capitalization of the sentence... Example1: Object has a StartDate and an EndDate. StartDate must always be before EndDate. Where do you validate this? In the model of course! Consider a case where you might be importing data from some other system - you don't want this to go through. Example2: Password confirmation. You have a field for storing the password in the db. However you display two fields: password1 and password2 on your form. The form, and only the form, is responsible for comparing those two fields to see that they are the same. After the form is valid you can safely store the password1 field into the db as the password.
Separation of ORM and validation
I use Django and I wonder where model validation should go. There are at least two variants: Validate in the model's save method and raise IntegrityError or another exception if business rules are violated Validate data using forms and built-in clean_* facilities From one point of view, the answer is obvious: one should use form-based validation. It is because ORM is ORM and validation is a completely different concept. Take a look at CharField: forms.CharField allows min_length specification, but models.CharField does not. OK, cool, but what the hell are all those validation features doing in django.db.models? I can specify that CharField can't be blank, and I can use EmailField, FileField, and SlugField, whose validation is performed here, in Python, not in the RDBMS. Furthermore, there is URLField, which checks the existence of the URL, involving some really complex logic. On the other hand, if I have an entity I want to guarantee that it will not be saved in an inconsistent state, whether it came from a form or was modified/created by some internal algorithms. I have a model with a name field; I expect it to be longer than one character. I also have min_age and max_age fields; it makes little sense if min_age > max_age. So should I check such conditions in the save method? What are the best practices of model validation?
[ "I am not sure if this is best practise but what I do is that I tend to validate both client side and server side before pushing the data to the database. I know it requires a lot more effort but this can be done by setting some values before use and then maintaining them.\nYou could also try push in size contraints with **kwargs into a validation function that is called before the put() call.\n", "Your two options are two different things.\n\nForm-based validation can be regarded as syntactic validation + convert HTTP request parameters from text to Python types.\nModel-based validation can be regarded as semantic validation, sometimes using context not available at the HTTP/form layer.\n\nAnd of course there is a third layer at the DB where constraints are enforced, and may not be checkable anywhere else because of concurrent requests updating the database (e.g. uniqueness constraints, optimistic locking).\n", "There's an ongoing Google Summer of Code project that aims to bring validation to the Django model layer. You can read more about it in this presentation from the GSoC student (Honza Kral). There's also a github repository with the preliminary code.\nUntil that code finds its way into a Django release, one recommended approach is to use ModelForms to validate data, even if the source isn't a form. It's described in this blog entry from one of the Django core devs.\n", "\"but what the hell all that validation features are doing in django.db.models? \"\nOne word: Legacy. Early versions of Django had less robust forms and the validation was scattered.\n\"So should I check such conditions in save method?\"\nNo, you should use a form for all validation.\n\"What are the best practices of model validation?\"*\nUse a form for all validation.\n\"whether it came from a form or was modified/created by some internal algorithms\"\nWhat? If your algorithms suffer from psychotic episodes or your programmers are sociopaths, then -- perhaps -- you have to validate internally-generated data. \nOtherwise, internally-generated data is -- by definition -- valid. Only user data can be invalid. If you don't trust your software, what's the point of writing it? Are your unit tests broken?\n", "DB/Model validation\nThe data store in database must always be in a certain form/state. For example: required first name, last name, foreign key, unique constraint. This is where the logic of you app resides. No matter where you think the data comes from - it should be \"validated\" here and an exception raised if the requirements are not met.\nForm validation\nData being entered should look right. It is ok if this data is entered differently through some other means (through admin or api calls).\nExamples: length of person's name, proper capitalization of the sentence...\nExample1: Object has a StartDate and an EndDate. StartDate must always be before EndDate. Where do you validate this? In the model of course! Consider a case when you might be importing data from some other system - you don't want this to go through.\nExample2: Password confirmation. You have a field for storing the password in the db. However you display two fields: password1 and password2 on your form. The form, and only the form, is responsible for comparing those two fields to see that they are the same. After form is valid you can safely store the password1 field into the db as the password.\n" ]
[ 0, 0, 0, 0, 0 ]
[]
[]
[ "architecture", "django", "django_models", "python", "validation" ]
stackoverflow_0001108967_architecture_django_django_models_python_validation.txt
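For the name/min_age/max_age invariants from the question, and until the model-validation work described above lands, one pragmatic spot is an overridden save(); a sketch (field names from the question, model name and exception choice arbitrary):

from django.db import models

class Person(models.Model):
    name = models.CharField(max_length=100)
    min_age = models.PositiveIntegerField()
    max_age = models.PositiveIntegerField()

    def save(self, *args, **kwargs):
        # Enforce the invariants no matter where the data came from --
        # a form, the admin, or an internal algorithm.
        if len(self.name) < 2:
            raise ValueError("name must be longer than one character")
        if self.min_age > self.max_age:
            raise ValueError("min_age must not exceed max_age")
        super(Person, self).save(*args, **kwargs)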
Q: Performance of list(...).insert(...) I thought about the following question about computer architecture. Suppose I do in Python from bisect import bisect index = bisect(x, a) # O(log n) (also, shouldn't it be a standard list function?) x.insert(index, a) # O(1) + memcpy() which takes log n, plus, if I correctly understand it, a memory copy operation for x[index:]. Now I read recently that the bottleneck is usually in the communication between the processor and the memory, so the memory copy could be done by RAM quite fast. Is that how it works? A: Python is a language. Multiple implementations exist, and they may have different implementations for lists. So, without looking at the code of an actual implementation, you cannot know for sure how lists are implemented and how they behave under certain circumstances. My bet would be that the references to the objects in a list are stored in contiguous memory (certainly not as a linked list...). If that is indeed so, then insertion using x.insert will cause all elements behind the inserted element to be moved. This may be done efficiently by the hardware, but the complexity would still be O(n). For small lists the bisect operation may take more time than x.insert, even though the former is O(log n) while the latter is O(n). For long lists, however, I'd hazard a guess that x.insert is the bottleneck. In such cases you must consider using a different data structure. A: Use the blist module if you need a list with better insert performance. A: CPython lists are contiguous arrays. Which one of the O(log n) bisect and O(n) insert dominates your performance profile depends on the size of your list and also the constant factors inside the O(). Particularly, the comparison function invoked by bisect can be something expensive depending on the type of objects in the list. If you need to hold potentially large mutable sorted sequences then the linear array underlying Python's list type isn't a good choice. Depending on your requirements heaps, trees or skip-lists might be appropriate.
Performance of list(...).insert(...)
I thought about the following question about computer architecture. Suppose I do in Python from bisect import bisect index = bisect(x, a) # O(log n) (also, shouldn't it be a standard list function?) x.insert(index, a) # O(1) + memcpy() which takes log n, plus, if I correctly understand it, a memory copy operation for x[index:]. Now I read recently that the bottleneck is usually in the communication between the processor and the memory, so the memory copy could be done by RAM quite fast. Is that how it works?
[ "Python is a language. Multiple implementations exist, and they may have different implementations for lists. So, without looking at the code of an actual implementation, you cannot know for sure how lists are implemented and how they behave under certain circumstances.\nMy bet would be that the references to the objects in a list are stored in contiguous memory (certainly not as a linked list...). If that is indeed so, then insertion using x.insert will cause all elements behind the inserted element to be moved. This may be done efficiently by the hardware, but the complexity would still be O(n).\nFor small lists the bisect operation may take more time than x.insert, even though the former is O(log n) while the latter is O(n). For long lists, however, I'd hazard a guess that x.insert is the bottleneck. In such cases you must consider using a different data structure.\n", "Use the blist module if you need a list with better insert performance.\n", "CPython lists are contiguous arrays. Which one of the O(log n) bisect and O(n) insert dominates your performance profile depends on the size of your list and also the constant factors inside the O(). Particularly, the comparison function invoked by bisect can be something expensive depending on the type of objects in the list.\nIf you need to hold potentially large mutable sorted sequences then the linear array underlying Pythons list type isn't a good choice. Depending on your requirements heaps, trees or skip-lists might be appropriate.\n" ]
[ 17, 12, 7 ]
[]
[]
[ "architecture", "list", "memcpy", "memory", "python" ]
stackoverflow_0001110332_architecture_list_memcpy_memory_python.txt
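A minimal runnable sketch of the pattern from the thread above, using only the standard library; bisect.insort combines the search and the insert into a single call (the sample list and values here are made up):

import bisect

x = [1, 3, 4, 7, 9]  # bisect assumes the list is already sorted
a = 5

# Two-step form from the question: O(log n) search, then O(n) insert
index = bisect.bisect(x, a)
x.insert(index, a)

# Equivalent standard-library one-liner
bisect.insort(x, 8)

print(x)  # [1, 3, 4, 5, 7, 8, 9]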
Q: Accessing Clipboard in Python version 3.1 I want to access the clipboard using Python 3.1. I've obviously come across win32clipboard, but it requires pywin32, and on its site I only found download versions for up to Python 2.13 or so. Bottom line: Is there a way to access the clipboard in Python 3.1, or do I have to revert to an old Python version? A: No, you don't need to revert; that would be quite strange. Go to http://sourceforge.net/projects/pywin32/files/ and download the package for Python 3.1 from there.
Accessing Clipboard in Python version 3.1
I want to access the clipboard using Python 3.1. I've obviously come across win32clipboard, but it requires pywin32, and on its site I only found download versions for up to Python 2.13 or so. Bottom line: Is there a way to access the clipboard in Python 3.1, or do I have to revert to an old Python version?
[ "No, you don't need to revert; that would be quite strange. Go to http://sourceforge.net/projects/pywin32/files/ and download the package for Python 3.1 from there.\n" ]
[ 1 ]
[]
[]
[ "clipboard", "python" ]
stackoverflow_0001112057_clipboard_python.txt
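A small sketch of round-tripping text through the Windows clipboard once a pywin32 build matching your interpreter is installed; this is Windows-only, the sample string is made up, and the pywin32 API names used here (SetClipboardText, GetClipboardData) are the usual ones but worth verifying against your installed version:

import win32clipboard  # provided by the pywin32 package

# Write text to the clipboard
win32clipboard.OpenClipboard()
try:
    win32clipboard.EmptyClipboard()
    win32clipboard.SetClipboardText("hello from Python")
finally:
    win32clipboard.CloseClipboard()

# Read it back
win32clipboard.OpenClipboard()
try:
    data = win32clipboard.GetClipboardData(win32clipboard.CF_UNICODETEXT)
finally:
    win32clipboard.CloseClipboard()
print(data)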
Q: Returning Matplotlib image as string I am using matplotlib in a django app and would like to directly return the rendered image. So far I can go plt.savefig(...), then return the location of the image. What I want to do is: return HttpResponse(plt.renderfig(...), mimetype="image/png") Any ideas? A: Django's HttpResponse object supports file-like API and you can pass a file-object to savefig. response = HttpResponse(mimetype="image/png") # create your image as usual, e.g. pylab.plot(...) pylab.savefig(response, format="png") return response Hence, you can return the image directly in the HttpResponse. A: What about cStringIO? import pylab import cStringIO pylab.plot([3,7,2,1]) output = cStringIO.StringIO() pylab.savefig('test.png', dpi=75) pylab.savefig(output, dpi=75) print output.getvalue() == open('test.png', 'rb').read() # True A: There is a recipe in the Matplotlib Cookbook that does exactly this. At its core, it looks like: def simple(request): from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas from matplotlib.figure import Figure fig=Figure() ax=fig.add_subplot(111) ax.plot(range(10), range(10), '-') canvas=FigureCanvas(fig) response=django.http.HttpResponse(content_type='image/png') canvas.print_png(response) return response Put that in your views file, point your URL to it, and you're off and running. Edit: As noted, this is a simplified version of a recipe in the cookbook. However, it looks like there is a difference between calling print_png and savefig, at least in the initial test that I did. Calling fig.savefig(response, format='png') gave an image that was larger and had a white background, while the original canvas.print_png(response) gave a slightly smaller image with a grey background. So, I would replace the last few lines above with: canvas=FigureCanvas(fig) response=django.http.HttpResponse(content_type='image/png') fig.savefig(response, format='png') return response You still need to have the canvas instantiated, though. A: Employ duck typing and pass an object of your own in the disguise of a file object class MyFile(object): def __init__(self): self._data = "" def write(self, data): self._data += data myfile = MyFile() fig.savefig(myfile) print myfile._data In real code you can use myfile = StringIO.StringIO() instead and return the data in the response, e.g. output = StringIO.StringIO() fig.savefig(output) contents = output.getvalue() return HttpResponse(contents, mimetype="image/png")
Returning Matplotlib image as string
I am using matplotlib in a django app and would like to directly return the rendered image. So far I can go plt.savefig(...), then return the location of the image. What I want to do is: return HttpResponse(plt.renderfig(...), mimetype="image/png") Any ideas?
[ "Django's HttpResponse object supports file-like API and you can pass a file-object to savefig.\nresponse = HttpResponse(mimetype=\"image/png\")\n# create your image as usual, e.g. pylab.plot(...)\npylab.savefig(response, format=\"png\")\nreturn response\n\nHence, you can return the image directly in the HttpResponse.\n", "What about cStringIO?\nimport pylab\nimport cStringIO\npylab.plot([3,7,2,1])\noutput = cStringIO.StringIO()\npylab.savefig('test.png', dpi=75)\npylab.savefig(output, dpi=75)\nprint output.getvalue() == open('test.png', 'rb').read() # True\n\n", "There is a recipe in the Matplotlib Cookbook that does exactly this. At its core, it looks like:\ndef simple(request):\n    from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas\n    from matplotlib.figure import Figure\n\n    fig=Figure()\n    ax=fig.add_subplot(111)\n    ax.plot(range(10), range(10), '-')\n    canvas=FigureCanvas(fig)\n    response=django.http.HttpResponse(content_type='image/png')\n    canvas.print_png(response)\n    return response\n\nPut that in your views file, point your URL to it, and you're off and running.\nEdit: As noted, this is a simplified version of a recipe in the cookbook. However, it looks like there is a difference between calling print_png and savefig, at least in the initial test that I did. Calling fig.savefig(response, format='png') gave an image that was larger and had a white background, while the original canvas.print_png(response) gave a slightly smaller image with a grey background. So, I would replace the last few lines above with:\n    canvas=FigureCanvas(fig)\n    response=django.http.HttpResponse(content_type='image/png')\n    fig.savefig(response, format='png')\n    return response\n\nYou still need to have the canvas instantiated, though.\n", "Employ duck typing and pass an object of your own in the disguise of a file object\nclass MyFile(object):\n    def __init__(self):\n        self._data = \"\"\n    def write(self, data):\n        self._data += data\n\nmyfile = MyFile()\nfig.savefig(myfile)\nprint myfile._data\n\nIn real code you can use myfile = StringIO.StringIO() instead and return the data in the response, e.g.\noutput = StringIO.StringIO()\nfig.savefig(output)\ncontents = output.getvalue()\nreturn HttpResponse(contents, mimetype=\"image/png\")\n\n" ]
[ 18, 6, 2, 0 ]
[]
[]
[ "django", "matplotlib", "python" ]
stackoverflow_0001108881_django_matplotlib_python.txt
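A variation on the answers above for newer stacks: capture the PNG bytes with io.BytesIO and hand them to the response yourself (on current Django the keyword is content_type rather than mimetype; the plotted data is arbitrary):

import io
from matplotlib.figure import Figure
from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas

fig = Figure()
ax = fig.add_subplot(111)
ax.plot(range(10), range(10), '-')
FigureCanvas(fig)                 # attach an Agg canvas so the figure can render
buf = io.BytesIO()
fig.savefig(buf, format='png')    # write the PNG into the in-memory buffer
png_bytes = buf.getvalue()

# In a Django view: return HttpResponse(png_bytes, content_type='image/png')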
Q: What are the different possible values for __name__ in a Python script, and what do they mean? Checking to see if __name__ == '__main__' is a common idiom to run some code when the file is being called directly, rather than through a module. In the process of writing a custom command for Django's manage.py, I found myself needing to use code.InteractiveConsole, which gives the effect to the user of a standard python shell. In some test code I was doing, I found that in the script I'm trying to execute, I get that __name__ is __console__, which caused my code (dependent on __main__) to not run. I'm fairly certain that I have some things in my original implementation to change, but it got me wondering as to what different things __name__ could be. I couldn't find any documentation on the possible values, nor what they mean, so that's how I ended up here. A: From the documentation of class code.InteractiveInterpreter([locals]): The optional locals argument specifies the dictionary in which code will be executed; it defaults to a newly created dictionary with key '__name__' set to '__console__' and key '__doc__' set to None. Maybe you can tune the locals argument, setting __name__ to '__main__', or change the test clause from if __name__ == '__main__' to if __name__ in set(["__main__", "__console__"]) Hope it helps. A: __name__ is usually the module name, but it's changed to '__main__' when the module in question is executed directly instead of being imported by another one. I understand that other values can only be set directly by the code you're running.
What are the different possible values for __name__ in a Python script, and what do they mean?
Checking to see if __name__ == '__main__' is a common idiom to run some code when the file is being called directly, rather than through a module. In the process of writing a custom command for Django's manage.py, I found myself needing to use code.InteractiveConsole, which gives the effect to the user of a standard python shell. In some test code I was doing, I found that in the script I'm trying to execute, I get that __name__ is __console__, which caused my code (dependent on __main__) to not run. I'm fairly certain that I have some things in my original implementation to change, but it got me wondering as to what different things __name__ could be. I couldn't find any documentation on the possible values, nor what they mean, so that's how I ended up here.
[ "From the documentation of class code.InteractiveInterpreter([locals]):\nThe optional locals argument specifies the dictionary in which code will be executed; it defaults to a newly created dictionary with key '__name__' set to '__console__' and key '__doc__' set to None.\nMaybe you can tune the locals argument, setting __name__ to '__main__', or change the test clause from \nif __name__ == '__main__'\nto \nif __name__ in set([\"__main__\", \"__console__\"])\n\nHope it helps.\n", "__name__ is usually the module name, but it's changed to '__main__' when the module in question is executed directly instead of being imported by another one.\nI understand that other values can only be set directly by the code you're running.\n" ]
[ 9, 6 ]
[]
[]
[ "python" ]
stackoverflow_0001112198_python.txt
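A tiny demonstration of the values discussed above (the module name namedemo is hypothetical):

# namedemo.py
print(__name__)

# Running "python namedemo.py" prints:      __main__
# Doing "import namedemo" elsewhere prints: namedemo

# Matching the first answer, an InteractiveConsole defaults to '__console__'
# unless you supply your own locals dictionary:
import code
code.InteractiveConsole().push("print(__name__)")                                  # __console__
code.InteractiveConsole(locals={'__name__': '__main__'}).push("print(__name__)")   # __main__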
Q: file won't write in python I'm trying to replace a string in all the files within the current directory. For some reason, my temp file ends up blank. It seems my .write isn't working because the secondfile was declared outside its scope maybe? I'm new to python, so still climbing the learning curve...thanks! Edit: I'm aware my tempfile isn't being copied currently. I'm also aware there are much more efficient ways of doing this. I'm doing it this way for practice. If someone could answer specifically why the .write method fails to work here, that would be great. Thanks! import os import shutil for filename in os.listdir("."): file1 = open(filename,'r') secondfile = open("temp.out",'w') print filename for line in file1: line2 = line.replace('mrddb2.','shpdb2.') line3 = line2.replace('MRDDB2.','SHPDB2.') secondfile.write(line3) print 'file copy in progress' file1.close() secondfile.close() A: Just glancing at the thing, it appears that your problem is with the 'w'. It looks like you keep overwriting, not appending. So you're basically looping through the file(s), and by the end you've only copied the last file to your temp file. You may want to open the file with 'a' instead of 'w'. A: Your code (correctly indented, though I don't think there's a way to indent it so it runs but doesn't work right) actually seems right. Keep in mind, temp.out will be the replaced contents of only the last source file. Could it be that file is just blank? A: Firstly, you have forgotten to copy the temp file back onto the original. Secondly: use sed -i or perl -i instead of python. For instance: perl -i -pe 's/mrddb2/shpdb2/;s/MRDDB2/SHPDB2/' * A: I don't have the exact answer for you, but what might help is to stick some print lines in there in strategic places, like print each line before it was modified, then again after it was modified. Then place another one after the line was modified just before it is written to the file. Then just before you close the new file do a: print secondfile.read() You could also try to limit the results you get if there are too many for debugging purposes. You can limit string output by attaching a subscript modifier to the end, for example: print secondfile.read()[:n] If n = 100 it will limit the output to 100 characters. A: if your code is actually indented as shown in the post, the write is working fine. But if it is failing, the write call may be outside the inner for loop. A: Just to make sure I wasn't really missing something, I tested the code and it worked fine for me. Maybe you could try continue for everything but one specific filename and then check the contents of temp.out after that. import os for filename in os.listdir("."): if filename != 'findme.txt': continue print 'Processing', filename file1 = open(filename,'r') secondfile = open("temp.out",'w') print filename for line in file1: line2 = line.replace('mrddb2.','shpdb2.') line3 = line2.replace('MRDDB2.','SHPDB2.') print 'About to write:', line3 secondfile.write(line3) print 'Done with', filename file1.close() secondfile.close() Also, as others have mentioned, you're just clobbering your temp.out file each time you process a new file. You've also imported shutil without actually doing anything with it. Are you forgetting to copy temp.out back to your original file? A: I noticed sometimes it will not print to file if you don't have a file.close after file.write. For example, this program never actually saves to file, it just makes a blank file (unless you add outfile.close() right after the outfile.write.) outfile=open("ok.txt","w") fc="filecontents" outfile.write(fc.encode("utf-8")) while 1: print "working..." A: @OP, you might also want to try the fileinput module (this way, you don't have to use your own temp file) import fileinput for filename in os.listdir("."): for line in fileinput.FileInput(filename,inplace=1): line = line.strip().replace('mrddb2.','shpdb2.') line = line.strip().replace('MRDDB2.','SHPDB2.') print line set "inplace" to 1 for editing the file in place. Set to 0 for normal print to stdout
file won't write in python
I'm trying to replace a string in all the files within the current directory. For some reason, my temp file ends up blank. It seems my .write isn't working because the secondfile was declared outside its scope maybe? I'm new to python, so still climbing the learning curve...thanks! Edit: I'm aware my tempfile isn't being copied currently. I'm also aware there are much more efficient ways of doing this. I'm doing it this way for practice. If someone could answer specifically why the .write method fails to work here, that would be great. Thanks! import os import shutil for filename in os.listdir("."): file1 = open(filename,'r') secondfile = open("temp.out",'w') print filename for line in file1: line2 = line.replace('mrddb2.','shpdb2.') line3 = line2.replace('MRDDB2.','SHPDB2.') secondfile.write(line3) print 'file copy in progress' file1.close() secondfile.close()
[ "Just glancing at the thing, it appears that your problem is with the 'w'.\nIt looks like you keep overwriting, not appending.\nSo you're basically looping through the file(s), \nand by the end you've only copied the last file to your temp file.\nYou may want to open the file with 'a' instead of 'w'.\n", "Your code (correctly indented, though I don't think there's a way to indent it so it runs but doesn't work right) actually seems right. Keep in mind, temp.out will be the replaced contents of only the last source file. Could it be that file is just blank?\n", "Firstly, \nyou have forgotten to copy the temp file back onto the original.\nSecondly:\nuse sed -i or perl -i instead of python.\nFor instance:\nperl -i -pe 's/mrddb2/shpdb2/;s/MRDDB2/SHPDB2/' *\n\n", "I don't have the exact answer for you, but what might help is to stick some print lines in there in strategic places, like print each line before it was modified, then again after it was modified. Then place another one after the line was modified just before it is written to the file. Then just before you close the new file do a:\nprint secondfile.read()\nYou could also try to limit the results you get if there are too many for debugging purposes. You can limit string output by attaching a subscript modifier to the end, for example:\nprint secondfile.read()[:n]\nIf n = 100 it will limit the output to 100 characters.\n", "if your code is actually indented as shown in the post, the write is working fine. But if it is failing, the write call may be outside the inner for loop. \n", "Just to make sure I wasn't really missing something, I tested the code and it worked fine for me. Maybe you could try continue for everything but one specific filename and then check the contents of temp.out after that.\nimport os\n\nfor filename in os.listdir(\".\"):\n    if filename != 'findme.txt': continue\n    print 'Processing', filename\n    file1 = open(filename,'r')\n    secondfile = open(\"temp.out\",'w')\n    print filename\n    for line in file1:\n        line2 = line.replace('mrddb2.','shpdb2.')\n        line3 = line2.replace('MRDDB2.','SHPDB2.')\n        print 'About to write:', line3\n        secondfile.write(line3)\n    print 'Done with', filename\n    file1.close()\n    secondfile.close()\n\nAlso, as others have mentioned, you're just clobbering your temp.out file each time you process a new file. You've also imported shutil without actually doing anything with it. Are you forgetting to copy temp.out back to your original file?\n", "I noticed sometimes it will not print to file if you don't have a file.close after file.write.\nFor example, this program never actually saves to file, it just makes a blank file (unless you add outfile.close() right after the outfile.write.)\noutfile=open(\"ok.txt\",\"w\")\n\nfc=\"filecontents\"\n\noutfile.write(fc.encode(\"utf-8\"))\n\n\nwhile 1:\n\n    print \"working...\"\n\n", "@OP, you might also want to try the fileinput module (this way, you don't have to use your own temp file)\nimport fileinput\nfor filename in os.listdir(\".\"):\n    for line in fileinput.FileInput(filename,inplace=1):\n        line = line.strip().replace('mrddb2.','shpdb2.')\n        line = line.strip().replace('MRDDB2.','SHPDB2.')\n        print line\n\nset \"inplace\" to 1 for editing the file in place. Set to 0 for normal print to stdout\n" ]
[ 5, 2, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "file_io", "for_loop", "python" ]
stackoverflow_0000984216_file_io_for_loop_python.txt
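For completeness, a corrected sketch of the original loop in the spirit of the answers above: it writes one temp file per input file and then moves it back over the original, which is the step the question's script omitted (written for a modern Python; the os.path.isfile check guards against directories):

import os
import shutil

for filename in os.listdir("."):
    if not os.path.isfile(filename):
        continue  # skip directories and other non-files
    with open(filename) as src, open("temp.out", "w") as dst:
        for line in src:
            dst.write(line.replace('mrddb2.', 'shpdb2.').replace('MRDDB2.', 'SHPDB2.'))
    shutil.move("temp.out", filename)  # copy the result back over the original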
Q: Replace URL with a link using regex in python How do I convert some text to a link? Back in PHP, I used this piece of code that worked well for my purpose: $text = preg_replace("#(^|[\n ])(([\w]+?://[\w\#$%&~.\-;:=,?@\[\]+]*)(/[\w\#$%&~/.\-;:=,?@\[\]+]*)?)#is", "\\1<a href=\"\\2\" target=\"_blank\">\\3</a>", $text); $text = preg_replace("#(^|[\n ])(((www|ftp)\.[\w\#$%&~.\-;:=,?@\[\]+]*)(/[\w\#$%&~/.\-;:=,?@\[\]+]*)?)#is", "\\1<a href=\"http://\\2\" target=\"_blank\">\\3</a>", $text); I tried around in Python, but was unable to get it to work.. It would be very nice if someone could translate this to Python :).. A: The code below is a simple translation to Python. You should confirm that it actually does what you want. For more information, please see the Python Regular Expression HOWTO. import re pat1 = re.compile(r"(^|[\n ])(([\w]+?://[\w\#$%&~.\-;:=,?@\[\]+]*)(/[\w\#$%&~/.\-;:=,?@\[\]+]*)?)", re.IGNORECASE | re.DOTALL) pat2 = re.compile(r"(^|[\n ])(((www|ftp)\.[\w\#$%&~.\-;:=,?@\[\]+]*)(/[\w\#$%&~/.\-;:=,?@\[\]+]*)?)", re.IGNORECASE | re.DOTALL) urlstr = 'http://www.example.com/foo/bar.html' urlstr = pat1.sub(r'\1<a href="\2" target="_blank">\3</a>', urlstr) urlstr = pat2.sub(r'\1<a href="http://\2" target="_blank">\3</a>', urlstr) print urlstr Here's what the output looks like at my end: <a href="http://www.example.com/foo/bar.html" target="_blank">http://www.example.com</a>
Replace URL with a link using regex in python
How do I convert some text to a link? Back in PHP, I used this piece of code that worked well for my purpose: $text = preg_replace("#(^|[\n ])(([\w]+?://[\w\#$%&~.\-;:=,?@\[\]+]*)(/[\w\#$%&~/.\-;:=,?@\[\]+]*)?)#is", "\\1<a href=\"\\2\" target=\"_blank\">\\3</a>", $text); $text = preg_replace("#(^|[\n ])(((www|ftp)\.[\w\#$%&~.\-;:=,?@\[\]+]*)(/[\w\#$%&~/.\-;:=,?@\[\]+]*)?)#is", "\\1<a href=\"http://\\2\" target=\"_blank\">\\3</a>", $text); I tried around in Python, but was unable to get it to work.. It would be very nice if someone could translate this to Python :)..
[ "The code below is a simple translation to Python. You should confirm that it actually does what you want. For more information, please see the Python Regular Expression HOWTO.\nimport re\n\npat1 = re.compile(r\"(^|[\\n ])(([\\w]+?://[\\w\\#$%&~.\\-;:=,?@\\[\\]+]*)(/[\\w\\#$%&~/.\\-;:=,?@\\[\\]+]*)?)\", re.IGNORECASE | re.DOTALL)\n\npat2 = re.compile(r\"(^|[\\n ])(((www|ftp)\\.[\\w\\#$%&~.\\-;:=,?@\\[\\]+]*)(/[\\w\\#$%&~/.\\-;:=,?@\\[\\]+]*)?)\", re.IGNORECASE | re.DOTALL)\n\n\nurlstr = 'http://www.example.com/foo/bar.html'\n\nurlstr = pat1.sub(r'\\1<a href=\"\\2\" target=\"_blank\">\\3</a>', urlstr)\nurlstr = pat2.sub(r'\\1<a href=\"http://\\2\" target=\"_blank\">\\3</a>', urlstr)\n\nprint urlstr\n\nHere's what the output looks like at my end:\n<a href=\"http://www.example.com/foo/bar.html\" target=\"_blank\">http://www.example.com</a>\n\n" ]
[ 7 ]
[]
[]
[ "hyperlink", "python", "regex", "url" ]
stackoverflow_0001112012_hyperlink_python_regex_url.txt
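If the PHP pattern's full character classes are not needed, a deliberately simpler sketch covers the common case; the pattern below is looser than the original and the sample text is made up:

import re

URL_RE = re.compile(r'(https?://[^\s<>"]+)', re.IGNORECASE)

def linkify(text):
    # Wrap each bare http(s) URL in an anchor tag
    return URL_RE.sub(r'<a href="\1" target="_blank">\1</a>', text)

print(linkify('see http://www.example.com/foo for details'))
# see <a href="http://www.example.com/foo" target="_blank">http://www.example.com/foo</a>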
Q: Why is BeautifulSoup throwing this HTMLParseError? I thought BeautifulSoup would be able to handle malformed documents, but when I sent it the source of a page, the following traceback got printed: Traceback (most recent call last): File "mx.py", line 7, in s = BeautifulSoup(content) File "build\bdist.win32\egg\BeautifulSoup.py", line 1499, in __init__ File "build\bdist.win32\egg\BeautifulSoup.py", line 1230, in __init__ File "build\bdist.win32\egg\BeautifulSoup.py", line 1263, in _feed File "C:\Python26\lib\HTMLParser.py", line 108, in feed self.goahead(0) File "C:\Python26\lib\HTMLParser.py", line 150, in goahead k = self.parse_endtag(i) File "C:\Python26\lib\HTMLParser.py", line 314, in parse_endtag self.error("bad end tag: %r" % (rawdata[i:j],)) File "C:\Python26\lib\HTMLParser.py", line 115, in error raise HTMLParseError(message, self.getpos()) HTMLParser.HTMLParseError: bad end tag: u"", at line 258, column 34 Shouldn't it be able to handle this sort of stuff? If it can handle them, how could I do it? If not, is there a module that can handle malformed documents? EDIT: here's an update. I saved the page locally, using Firefox, and I tried to create a soup object from the contents of the file. That's where BeautifulSoup fails. If I try to create a soup object directly from the website, it works. Here's the document that causes trouble for soup. A: Worked fine for me using BeautifulSoup version 3.0.7. The latest is 3.1.0, but there's a note on the BeautifulSoup home page to try 3.0.7a if you're having trouble. I think I ran into a similar problem as yours some time ago and reverted, which fixed the problem; I'd try that. If you want to stick with your current version, I suggest removing the large <script> block at the top, since that is where the error occurs, and since you cannot parse that section with BeautifulSoup anyway. A: In my experience BeautifulSoup isn't that fault tolerant. I had to use it once for a small script and ran into these problems. I think using a regular expression to strip out the tags helped a bit, but I eventually just gave up and moved the script over to Ruby and Nokogiri. A: The problem appears to be the contents = contents.replace(/</g, '&lt;'); in line 258 plus the similar contents = contents.replace(/>/g, '&gt;'); in the next line. I'd just use re.sub to clobber all occurrences of r"replace(/[<>]/" with something innocuous before feeding it to BeautifulSoup ... moving away from BeautifulSoup would be like throwing out the baby with the bathwater IMHO.
Why is BeautifulSoup throwing this HTMLParseError?
I thought BeautifulSoup would be able to handle malformed documents, but when I sent it the source of a page, the following traceback got printed: Traceback (most recent call last): File "mx.py", line 7, in s = BeautifulSoup(content) File "build\bdist.win32\egg\BeautifulSoup.py", line 1499, in __init__ File "build\bdist.win32\egg\BeautifulSoup.py", line 1230, in __init__ File "build\bdist.win32\egg\BeautifulSoup.py", line 1263, in _feed File "C:\Python26\lib\HTMLParser.py", line 108, in feed self.goahead(0) File "C:\Python26\lib\HTMLParser.py", line 150, in goahead k = self.parse_endtag(i) File "C:\Python26\lib\HTMLParser.py", line 314, in parse_endtag self.error("bad end tag: %r" % (rawdata[i:j],)) File "C:\Python26\lib\HTMLParser.py", line 115, in error raise HTMLParseError(message, self.getpos()) HTMLParser.HTMLParseError: bad end tag: u"", at line 258, column 34 Shouldn't it be able to handle this sort of stuff? If it can handle them, how could I do it? If not, is there a module that can handle malformed documents? EDIT: here's an update. I saved the page locally, using Firefox, and I tried to create a soup object from the contents of the file. That's where BeautifulSoup fails. If I try to create a soup object directly from the website, it works. Here's the document that causes trouble for soup.
[ "Worked fine for me using BeautifulSoup version 3.0.7. The latest is 3.1.0, but there's a note on the BeautifulSoup home page to try 3.0.7a if you're having trouble. I think I ran into a similar problem as yours some time ago and reverted, which fixed the problem; I'd try that. \nIf you want to stick with your current version, I suggest removing the large <script> block at the top, since that is where the error occurs, and since you cannot parse that section with BeautifulSoup anyway.\n", "In my experience BeautifulSoup isn't that fault tolerant. I had to use it once for a small script and ran into these problems. I think using a regular expression to strip out the tags helped a bit, but I eventually just gave up and moved the script over to Ruby and Nokogiri.\n", "The problem appears to be the\ncontents = contents.replace(/</g, '&lt;');\n in line 258 plus the similar\ncontents = contents.replace(/>/g, '&gt;');\n in the next line.\nI'd just use re.sub to clobber all occurrences of r\"replace(/[<>]/\" with something innocuous before feeding it to BeautifulSoup ... moving away from BeautifulSoup would be like throwing out the baby with the bathwater IMHO.\n" ]
[ 5, 1, 1 ]
[]
[]
[ "beautifulsoup", "exception", "malformed", "parsing", "python" ]
stackoverflow_0001111656_beautifulsoup_exception_malformed_parsing_python.txt
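A sketch of the preprocessing workaround proposed in the last answer: neutralise the JavaScript replace(/</g, ...) literals before parsing. The substitute token X is arbitrary, and the BeautifulSoup 3.x import style from the question is assumed:

import re
from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3.x

def make_soup(html):
    # Blunt fix: rewrite the regex literals that HTMLParser mistakes for tags
    cleaned = re.sub(r'replace\(/[<>]/', 'replace(/X/', html)
    return BeautifulSoup(cleaned)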
Q: Python Win32 Extensions not Available? I am trying to get a post-commit.bat script running on a Windows Vista Ultimate machine for Trac. I have installed Trac and it's working fine - but when I run this script I get the error: "The Python Win32 extensions for NT (service, event, logging) appear not to be Available." Anyone know why this would occur? A: Have you installed the Python Win32 module?
Python Win32 Extensions not Available?
I am trying to get a post-commit.bat script running on a Windows Vista Ultimate machine for Trac. I have installed Trac and it's working fine - but when I run this script I get the error: "The Python Win32 extensions for NT (service, event, logging) appear not to be Available." Anyone know why this would occur?
[ "Have you installed the Python Win32 module?\n" ]
[ 6 ]
[]
[]
[ "python", "trac" ]
stackoverflow_0001112784_python_trac.txt
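A quick diagnostic that can be dropped into the failing hook to confirm whether the extensions are importable by the interpreter Trac is using (the modules named here are common pywin32 entry points; which ones Trac actually needs is an assumption):

try:
    import win32api, win32event, win32evtlogutil
    print("pywin32 looks available")
except ImportError:
    print("pywin32 is missing for this interpreter")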
Q: List of installed fonts OS X / C I'm trying to programmatically get a list of installed fonts in C or Python. I need to be able to do this on OS X, does anyone know how? A: Python with PyObjC installed (which is the case for Mac OS X 10.5+, so this code will work without having to install anything): import Cocoa manager = Cocoa.NSFontManager.sharedFontManager() font_families = list(manager.availableFontFamilies()) (based on htw's answer) A: Why not use the Terminal? System Fonts: ls -R /System/Library/Fonts | grep ttf User Fonts: ls -R ~/Library/Fonts | grep ttf Mac OS X Default fonts: ls -R /Library/Fonts | grep ttf If you need to run it inside your C program: #include <stdio.h> #include <stdlib.h> int main(void) { printf("System fonts:\n"); system("ls -R /System/Library/Fonts | grep ttf"); printf("Mac OS X Default fonts:\n"); system("ls -R /Library/Fonts | grep ttf"); printf("User fonts:\n"); system("ls -R ~/Library/Fonts | grep ttf"); return 0; } A: Not exactly C, but in Objective-C, you can easily get a list of installed fonts via the Cocoa framework: // This returns an array of NSStrings that gives you each font installed on the system NSArray *fonts = [[NSFontManager sharedFontManager] availableFontFamilies]; // Does the same as the above, but includes each available font style (e.g. you get // Verdana, "Verdana-Bold", "Verdana-BoldItalic", and "Verdana-Italic" for Verdana). NSArray *fonts = [[NSFontManager sharedFontManager] availableFonts]; You can access the Cocoa framework from Python via PyObjC, if you want. In C, I think you can do something similar in Carbon with the ATSUI library, although I'm not entirely sure how to do this, since I haven't worked with fonts in Carbon before. Nevertheless, from browsing the ATSUI docs, I'd recommend looking into the ATSUGetFontIDs and the ATSUGetIndFontName functions. Here's a link to the ATSUI documentation for more information. A: Do you want to write a program to do it, or do you want to use a program to do it? There are many programs that list fonts, xlsfonts comes to mind. A: You can get an array of available fonts using Objective-C and Cocoa. The method you are looking for is NSFontManager's availableFonts. I don't believe there is a standard way to determine what the system fonts are using pure C. However, you can freely mix C and Objective-C, so it really shouldn't be too hard to use this method to do what you'd like.
List of installed fonts OS X / C
I'm trying to programmatically get a list of installed fonts in C or Python. I need to be able to do this on OS X, does anyone know how?
[ "Python with PyObjC installed (which is the case for Mac OS X 10.5+, so this code will work without having to install anything):\nimport Cocoa\nmanager = Cocoa.NSFontManager.sharedFontManager()\nfont_families = list(manager.availableFontFamilies())\n\n(based on htw's answer)\n", "Why not use the Terminal?\nSystem Fonts:\nls -R /System/Library/Fonts | grep ttf\n\nUser Fonts:\nls -R ~/Library/Fonts | grep ttf\n\nMac OS X Default fonts:\nls -R /Library/Fonts | grep ttf\n\nIf you need to run it inside your C program:\n#include <stdio.h>\n#include <stdlib.h>\n\nint main(void)\n{ \n    printf(\"System fonts:\\n\");\n    system(\"ls -R /System/Library/Fonts | grep ttf\");\n    printf(\"Mac OS X Default fonts:\\n\");\n    system(\"ls -R /Library/Fonts | grep ttf\");\n    printf(\"User fonts:\\n\");\n    system(\"ls -R ~/Library/Fonts | grep ttf\");\n    return 0;\n}\n\n", "Not exactly C, but in Objective-C, you can easily get a list of installed fonts via the Cocoa framework:\n// This returns an array of NSStrings that gives you each font installed on the system\nNSArray *fonts = [[NSFontManager sharedFontManager] availableFontFamilies];\n\n// Does the same as the above, but includes each available font style (e.g. you get\n// Verdana, \"Verdana-Bold\", \"Verdana-BoldItalic\", and \"Verdana-Italic\" for Verdana).\nNSArray *fonts = [[NSFontManager sharedFontManager] availableFonts];\n\nYou can access the Cocoa framework from Python via PyObjC, if you want.\nIn C, I think you can do something similar in Carbon with the ATSUI library, although I'm not entirely sure how to do this, since I haven't worked with fonts in Carbon before. Nevertheless, from browsing the ATSUI docs, I'd recommend looking into the ATSUGetFontIDs and the ATSUGetIndFontName functions. Here's a link to the ATSUI documentation for more information.\n", "Do you want to write a program to do it, or do you want to use a program to do it? There are many programs that list fonts, xlsfonts comes to mind.\n", "You can get an array of available fonts using Objective-C and Cocoa. The method you are looking for is NSFontManager's availableFonts.\nI don't believe there is a standard way to determine what the system fonts are using pure C. However, you can freely mix C and Objective-C, so it really shouldn't be too hard to use this method to do what you'd like.\n" ]
[ 14, 9, 4, 1, 0 ]
[]
[]
[ "c", "fonts", "macos", "python" ]
stackoverflow_0001113040_c_fonts_macos_python.txt
Q: Is it really OK to do object closing/disposing in __del__? I have been thinking about how I write classes in Python. More specifically how the constructor is implemented and how the object should be destroyed. I don't want to rely on CPython's reference counting to do object cleanup. This basically tells me I should use with statements to manage my object life times and that I need an explicit close/dispose method (this method could be called from __exit__ if the object is also a context manager). class Foo(object): def __init__(self): pass def close(self): pass Now, if all my objects behave in this way and all my code uses with statements or explicit calls to close() (or dispose()) I don't really see the need for me to put any code in __del__. Should we really use __del__ to dispose of our objects? A: Short answer: No. Long answer: Using __del__ is tricky, mainly because it's not guaranteed to be called. That means you can't do things there that absolutely have to be done. This in turn means that __del__ basically can only be used for cleanups that would happen sooner or later anyway, like cleaning up resources that would be cleaned up when the process exits, so it doesn't matter if __del__ doesn't get called. Of course, these are also generally the same things Python will do for you. So that kinda makes __del__ useless. Also, __del__ gets called when Python garbage collects, and you didn't want to wait for Python's garbage collection, which means you can't use __del__ anyway. So, don't use __del__. Use __enter__/__exit__ instead. FYI: Here is an example of a non-circular situation where the destructor did not get called: class A(object): def __init__(self): print('Constructing A') def __del__(self): print('Destructing A') class B(object): a = A() OK, so it's a class attribute. Evidently that's a special case. But it just goes to show that making sure __del__ gets called isn't straightforward. I'm pretty sure I've seen more non-circular situations where __del__ isn't called. A: Not necessarily. You'll encounter problems when you have cyclic references. Eli Bendersky does a good job of explaining this in his blog post: Safely using destructors in Python A: If you are sure you will not go into cyclic references, then using __del__ in that way is OK: as soon as the reference count goes to zero, the CPython VM will call that method and destroy the object. If you plan to use cyclic references - please think it through very thoroughly, and check if weak references may help; in many cases, cyclic references are a first symptom of bad design. If you have no control over the way your object is going to be used, then using __del__ may not be safe. If you plan to use JPython or IronPython, __del__ is not reliable at all, because final object destruction will happen at garbage collection, and that's something you cannot control. In sum, in my opinion, __del__ is usually perfectly safe and good; however, in many situations it could be better to take a step back and try to look at the problem from a different perspective; a good use of try/except and of with contexts may be a more pythonic solution.
Is it really OK to do object closing/disposing in __del__?
I have been thinking about how I write classes in Python. More specifically how the constructor is implemented and how the object should be destroyed. I don't want to rely on CPython's reference counting to do object cleanup. This basically tells me I should use with statements to manage my object life times and that I need an explicit close/dispose method (this method could be called from __exit__ if the object is also a context manager). class Foo(object): def __init__(self): pass def close(self): pass Now, if all my objects behave in this way and all my code uses with statements or explicit calls to close() (or dispose()) I don't really see the need for me to put any code in __del__. Should we really use __del__ to dispose of our objects?
[ "Short answer: No.\nLong answer: Using __del__ is tricky, mainly because it's not guaranteed to be called. That means you can't do things there that absolutely have to be done. This in turn means that __del__ basically can only be used for cleanups that would happen sooner or later anyway, like cleaning up resources that would be cleaned up when the process exits, so it doesn't matter if __del__ doesn't get called. Of course, these are also generally the same things Python will do for you. So that kinda makes __del__ useless.\nAlso, __del__ gets called when Python garbage collects, and you didn't want to wait for Python's garbage collection, which means you can't use __del__ anyway.\nSo, don't use __del__. Use __enter__/__exit__ instead.\nFYI: Here is an example of a non-circular situation where the destructor did not get called:\nclass A(object):\n    def __init__(self):\n        print('Constructing A')\n\n    def __del__(self):\n        print('Destructing A')\n\nclass B(object):\n    a = A()\n\nOK, so it's a class attribute. Evidently that's a special case. But it just goes to show that making sure __del__ gets called isn't straightforward. I'm pretty sure I've seen more non-circular situations where __del__ isn't called.\n", "Not necessarily. You'll encounter problems when you have cyclic references. Eli Bendersky does a good job of explaining this in his blog post:\n\nSafely using destructors in Python\n\n", "If you are sure you will not go into cyclic references, then using __del__ in that way is OK: as soon as the reference count goes to zero, the CPython VM will call that method and destroy the object. \nIf you plan to use cyclic references - please think it through very thoroughly, and check if weak references may help; in many cases, cyclic references are a first symptom of bad design. \nIf you have no control over the way your object is going to be used, then using __del__ may not be safe. \nIf you plan to use JPython or IronPython, __del__ is not reliable at all, because final object destruction will happen at garbage collection, and that's something you cannot control. \nIn sum, in my opinion, __del__ is usually perfectly safe and good; however, in many situations it could be better to take a step back and try to look at the problem from a different perspective; a good use of try/except and of with contexts may be a more pythonic solution. \n" ]
[ 15, 8, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001111505_python.txt
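A minimal sketch of the __enter__/__exit__ alternative the answers recommend, built on the Foo class from the question:

class Foo(object):
    def close(self):
        print('releasing resources')

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.close()
        return False  # don't swallow exceptions raised inside the with-block

with Foo() as foo:
    pass  # use foo here; close() runs even if this block raises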
Q: Pure Python Tidy-like application/library I'm looking for a pure Python library which works like Tidy. Please kindly advise. Thank you. A: Use ElementTree Tidy HTML Tree Builder.
Pure Python Tidy-like application/library
I'm looking for a pure Python library which works like Tidy. Please kindly advise. Thank you.
[ "Use ElementTree Tidy HTML Tree Builder.\n" ]
[ 2 ]
[]
[]
[ "html", "python", "tidy", "xhtml", "xml" ]
stackoverflow_0001113421_html_python_tidy_xhtml_xml.txt
Q: Where to store secret keys and password in Python I have a small Python program, which uses a Google Maps API secret key. I'm getting ready to check in my code, and I don't want to include the secret key in SVN. In the canonical PHP app you put secret keys, database passwords, and other app specific config in LocalSettings.php. Is there a similar file/location which Python programmers expect to find and modify? A: A user must configure their own secret key. A configuration file is the perfect place to keep this information. You have several choices for configuration files. Use ConfigParser to parse a config file. Use a simple Python module as the configuration file. You can simply execfile to load values from that file. Invent your own configuration file notation and parse that. A: No, there's no standard location - on Windows, it's usually in the directory os.path.join(os.environ['APPDATA'], 'appname') and on Unix it's usually os.path.join(os.environ['HOME'], '.appname'). A: Any path can reference the user's home directory in a cross-platform way by expanding the common ~ (tilde) with os.path.expanduser(), like so: appdir = os.path.join(os.path.expanduser('~'), '.myapp')
Where to store secret keys and password in Python
I have a small Python program, which uses a Google Maps API secret key. I'm getting ready to check in my code, and I don't want to include the secret key in SVN. In the canonical PHP app you put secret keys, database passwords, and other app specific config in LocalSettings.php. Is there a similar file/location which Python programmers expect to find and modify?
[ "A user must configure their own secret key. A configuration file is the perfect place to keep this information.\nYou have several choices for configuration files.\n\nUse ConfigParser to parse a config file.\nUse a simple Python module as the configuration file. You can simply execfile to load values from that file.\nInvent your own configuration file notation and parse that.\n\n", "No, there's no standard location - on Windows, it's usually in the directory os.path.join(os.environ['APPDATA'], 'appname') and on Unix it's usually os.path.join(os.environ['HOME'], '.appname').\n", "Any path can reference the user's home directory in a cross-platform way by expanding the common ~ (tilde) with os.path.expanduser(), like so:\nappdir = os.path.join(os.path.expanduser('~'), '.myapp')\n\n" ]
[ 4, 3, 1 ]
[]
[]
[ "configuration", "google_maps", "python" ]
stackoverflow_0001113479_configuration_google_maps_python.txt
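A hedged sketch combining the ConfigParser suggestion from the first answer with the home-directory expansion from the last one; the file path, section and option names are made up for illustration:

import os
from ConfigParser import ConfigParser  # spelled configparser in Python 3

# ~/.myapp/settings.ini might contain:
#   [google]
#   maps_api_key = YOUR-SECRET-KEY
config = ConfigParser()
config.read(os.path.join(os.path.expanduser('~'), '.myapp', 'settings.ini'))
api_key = config.get('google', 'maps_api_key')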