content: string (length 85 to 101k)
title: string (length 0 to 150)
question: string (length 15 to 48k)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (length 35 to 137)
Q: Python Twisted: restricting access by IP address What would be the best method to restrict access to my XMLRPC server by IP address? I see the class CGIScript in web/twcgi.py has a render method that is accessing the request... but I am not sure how to gain access to this request in my server. I saw an example where someone patched twcgi.py to set environment variables and then in the server access the environment variables... but I figure there has to be a better solution. Thanks. A: When a connection is established, a factory's buildProtocol is called to create a new protocol instance to handle that connection. buildProtocol is passed the address of the peer which established the connection and buildProtocol may return None to have the connection closed immediately. So, for example, you can write a factory like this: from twisted.internet.protocol import ServerFactory class LocalOnlyFactory(ServerFactory): def buildProtocol(self, addr): if addr.host == "127.0.0.1": return ServerFactory.buildProtocol(self, addr) return None And only local connections will be handled (but all connections will still be accepted initially since you must accept them to learn what the peer address is). You can apply this to the factory you're using to serve XML-RPC resources. Just subclass that factory and add logic like this (or you can do a wrapper instead of a subclass). iptables or some other platform firewall is also a good idea for some cases, though. With that approach, your process never even has to see the connection attempt. A: Okay, another answer is to get the peer's IP address from the transport, inside any protocol: d = self.transport.getPeer(); print d.type, d.host, d.port Then use the value to filter it in any way you want. A: I'd use a firewall on Windows, or iptables on Linux.
Python Twisted: restricting access by IP address
What would be the best method to restrict access to my XMLRPC server by IP address? I see the class CGIScript in web/twcgi.py has a render method that is accessing the request... but I am not sure how to gain access to this request in my server. I saw an example where someone patched twcgi.py to set environment variables and then in the server access the environment variables... but I figure there has to be a better solution. Thanks.
[ "When a connection is established, a factory's buildProtocol is called to create a new protocol instance to handle that connection. buildProtocol is passed the address of the peer which established the connection and buildProtocol may return None to have the connection closed immediately.\nSo, for example, you can write a factory like this:\nfrom twisted.internet.protocol import ServerFactory\n\nclass LocalOnlyFactory(ServerFactory):\n    def buildProtocol(self, addr):\n        if addr.host == \"127.0.0.1\":\n            return ServerFactory.buildProtocol(self, addr)\n        return None\n\nAnd only local connections will be handled (but all connections will still be accepted initially since you must accept them to learn what the peer address is).\nYou can apply this to the factory you're using to serve XML-RPC resources. Just subclass that factory and add logic like this (or you can do a wrapper instead of a subclass).\niptables or some other platform firewall is also a good idea for some cases, though. With that approach, your process never even has to see the connection attempt.\n", "Okay, another answer is to get the peer's IP address from the transport, inside any protocol:\nd = self.transport.getPeer(); print d.type, d.host, d.port\nThen use the value to filter it in any way you want.\n", "I'd use a firewall on Windows, or iptables on Linux.\n" ]
[ 5, 2, 0 ]
[]
[]
[ "python", "twisted" ]
stackoverflow_0001273297_python_twisted.txt
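A minimal sketch of how the accepted answer's buildProtocol check could be attached to an XML-RPC service; the port number and the address whitelist are illustrative (not from the thread), and twisted.web.server.Site is used because it is the factory that normally serves XML-RPC resources:

from twisted.web import server, xmlrpc
from twisted.internet import reactor

class Echo(xmlrpc.XMLRPC):
    def xmlrpc_echo(self, x):
        # trivial method so the example is runnable
        return x

class RestrictedSite(server.Site):
    allowed = ("127.0.0.1",)  # illustrative whitelist

    def buildProtocol(self, addr):
        if addr.host in self.allowed:
            return server.Site.buildProtocol(self, addr)
        return None  # returning None drops the connection immediately

reactor.listenTCP(7080, RestrictedSite(Echo()))
reactor.run()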
Q: Simulate multiple IP addresses for testing I need to simulate multiple embedded server devices that are typically used for motor control. In real life, there can be multiple servers on the network and our desktop software acts as a client to all the motor servers simultaneously. We have a half-dozen of these motor control servers on hand for basic testing, but it's getting expensive to test bigger systems with the real hardware. I'd like to build a simulator that can look like many servers on the network to test our client software. How can I build a simulator that will look like it has many IP addresses on the same port without physically having many NICs? For example, the client software will try to connect to servers 192.168.10.1 thru 192.168.10.50 on port 1111. The simulator will accept all of those connections and run simulations as if it were moving physical motors and send back simulated data on those socket connections. Can I use a router to map all of those addresses to a single testing server, or ideally, is there a way to use localhost to 'spoof' those IP addresses? The client software is written in .Net, but Python ideas would be welcomed as well. A: You should set up a virtual network adapter. They are called TAP/TUN devices. If you are using Windows, you can easily set up some dummy addresses with something like this: http://www.ntkernel.com/w&p.php?id=32 Good luck! A: A. Consider using Bonjour (zeroconf) for service discovery B. You can assign one or more IP addresses to the same NIC: On XP, Start -> Control Panel -> Network Connections and select properties on your NIC (usually 'Local Area Connection'). Scroll down to Internet Protocol (TCP/IP), select it and click on [Properties]. If you are using DHCP, you will need to get a static, base IP, from your IT. Otherwise, click on [Advanced] and under 'IP Addresses' click [Add..] Enter the IP information for the additional IP you want to add. Repeat for each additional IP address. C. Consider using VMWare, as you can configure multiple systems and virtual IPs within a single, logical, network of "computers". -- sky A: Normally you just listen on 0.0.0.0. This is an alias for all IP addresses.
Simulate multiple IP addresses for testing
I need to simulate multiple embedded server devices that are typically used for motor control. In real life, there can be multiple servers on the network and our desktop software acts as a client to all the motor servers simultaneously. We have a half-dozen of these motor control servers on hand for basic testing, but it's getting expensive to test bigger systems with the real hardware. I'd like to build a simulator that can look like many servers on the network to test our client software. How can I build a simulator that will look like it has many IP addresses on the same port without physically having many NICs? For example, the client software will try to connect to servers 192.168.10.1 thru 192.168.10.50 on port 1111. The simulator will accept all of those connections and run simulations as if it were moving physical motors and send back simulated data on those socket connections. Can I use a router to map all of those addresses to a single testing server, or ideally, is there a way to use localhost to 'spoof' those IP addresses? The client software is written in .Net, but Python ideas would be welcomed as well.
[ "You should set up a virtual network adapter. They are called TAP/TUN devices. If you are using Windows, you can easily set up some dummy addresses with something like this:\nhttp://www.ntkernel.com/w&p.php?id=32\nGood luck!\n", "A. Consider using Bonjour (zeroconf) for service discovery\nB. You can assign one or more IP addresses to the same NIC:\nOn XP, Start -> Control Panel -> Network Connections and select properties on your NIC (usually 'Local Area Connection').\nScroll down to Internet Protocol (TCP/IP), select it and click on [Properties].\nIf you are using DHCP, you will need to get a static, base IP, from your IT.\n Otherwise, click on [Advanced] and under 'IP Addresses' click [Add..]\n Enter the IP information for the additional IP you want to add.\nRepeat for each additional IP address.\nC. Consider using VMWare, as you can configure multiple systems and virtual IPs within a single, logical, network of \"computers\".\n-- sky\n", "Normally you just listen on 0.0.0.0. This is an alias for all IP addresses.\n" ]
[ 6, 5, 2 ]
[]
[]
[ ".net", "networking", "python", "sockets" ]
stackoverflow_0001308879_.net_networking_python_sockets.txt
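A minimal sketch of the 0.0.0.0 idea from the last answer, assuming the simulated addresses have already been added as aliases on one NIC (as described in answer B); getsockname() on the accepted socket reveals which simulated "server" the client dialed, so one process can impersonate all fifty:

import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 1111))  # one listener covers every aliased address
srv.listen(50)

while True:
    conn, peer = srv.accept()
    local_ip, local_port = conn.getsockname()  # which "server" was contacted
    conn.sendall("simulated motor data from %s\r\n" % local_ip)
    conn.close()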
Q: Running function 5 seconds after pygtk widget is shown How do I run a function 5 seconds after a PyGTK widget is shown? A: You can use glib.timeout_add(interval, callback, ...) to periodically call a function. If the function returns True then it will be called again after the interval; if the function returns False then it will not be called again. Here is a short example of adding a timeout after a widget's show event: import pygtk pygtk.require('2.0') import gtk import glib def timer_cb(): print "5 seconds elapsed." return False def show_cb(widget, data=None): glib.timeout_add(5000, timer_cb) def destroy_cb(widget, data=None): gtk.main_quit() def main(): window = gtk.Window(gtk.WINDOW_TOPLEVEL) window.connect("show", show_cb) window.connect("destroy", destroy_cb) window.show() gtk.main() if __name__ == "__main__": main() A: If the time is not critical to be exact to the tenth of a second, use glib.timeout_add_seconds(5, ..) else as above. timeout_add_seconds allows the system to align timeouts to other events, in the long run reducing CPU wakeups (especially if the timeout is recurring) and saving energy for the planet(!)
Running function 5 seconds after pygtk widget is shown
How do I run a function 5 seconds after a PyGTK widget is shown?
[ "You can use glib.timeout_add(interval, callback, ...) to periodically call a function.\nIf the function returns True then it will be called again after the interval; if the function returns False then it will not be called again.\nHere is a short example of adding a timeout after a widget's show event:\nimport pygtk\npygtk.require('2.0')\nimport gtk\nimport glib\n\ndef timer_cb():\n    print \"5 seconds elapsed.\"\n    return False\n\ndef show_cb(widget, data=None):\n    glib.timeout_add(5000, timer_cb)\n\ndef destroy_cb(widget, data=None):\n    gtk.main_quit()\n\ndef main():\n    window = gtk.Window(gtk.WINDOW_TOPLEVEL)\n\n    window.connect(\"show\", show_cb)\n\n    window.connect(\"destroy\", destroy_cb)\n\n    window.show()\n\n    gtk.main()\n\nif __name__ == \"__main__\":\n    main()\n\n", "If the time is not critical to be exact to the tenth of a second, use\nglib.timeout_add_seconds(5, ..)\n\nelse as above.\ntimeout_add_seconds allows the system to align timeouts to other events, in the long run reducing CPU wakeups (especially if the timeout is recurring) and saving energy for the planet(!)\n" ]
[ 15, 9 ]
[]
[]
[ "function", "pygtk", "python", "time" ]
stackoverflow_0001309006_function_pygtk_python_time.txt
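For completeness, a minimal sketch of the second answer's glib.timeout_add_seconds suggestion, using the same callback shape as the accepted answer (a running main loop such as gtk.main() is still required):

import glib

def timer_cb():
    print "5 seconds elapsed."
    return False  # one-shot: returning False stops the timer

glib.timeout_add_seconds(5, timer_cb)  # coarser scheduling, fewer CPU wakeups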
Q: In GTK, is there an easy way to scale all widgets by an arbitrary amount? I want my widget to look exactly like it does now, except to be smaller. It includes buttons, labels, text, images, etc. Is there any way to just say "scale this to be half the size", and have GTK do all the image processing, widget resizing, etc., necessary? If not, what's the easiest way to accomplish this? A: Changing the theme from the user interface is not something that I recommend, but you can do it if you require it. Using a custom gtkrc may help you to change the font and the way the buttons are drawn, mostly because of the xthickness and ythickness. import gtk file = "/path/to/the/gtkrc" gtk.rc_parse(file) gtk.rc_add_default_file(file) gtk.rc_reparse_all() And the custom gtkrc may look like this: gtk_color_scheme = "fg_color:#ECE9E9;bg_color:#ECE9E9;base_color:#FFFFFF;text_color:#000000;selected_bg_color:#008DD7;selected_fg_color:#FFFFFF;tooltip_bg_color:#000000;tooltip_fg_color:#F5F5B5" style "theme-fixes" { fg[NORMAL] = @fg_color fg[PRELIGHT] = @fg_color fg[SELECTED] = @selected_fg_color fg[ACTIVE] = @fg_color fg[INSENSITIVE] = darker (@bg_color) bg[NORMAL] = @bg_color bg[PRELIGHT] = shade (1.02, @bg_color) bg[SELECTED] = @selected_bg_color bg[INSENSITIVE] = @bg_color bg[ACTIVE] = shade (0.9, @bg_color) base[NORMAL] = @base_color base[PRELIGHT] = shade (0.95, @bg_color) base[ACTIVE] = shade (0.9, @selected_bg_color) base[SELECTED] = @selected_bg_color base[INSENSITIVE] = @bg_color text[NORMAL] = @text_color text[PRELIGHT] = @text_color text[ACTIVE] = @selected_fg_color text[SELECTED] = @selected_fg_color text[INSENSITIVE] = darker (@bg_color) GtkTreeView::odd_row_color = shade (0.929458256, @base_color) GtkTreeView::even_row_color = @base_color GtkTreeView::horizontal-separator = 12 font_name = "Helvetica World 7" } class "*" style "theme-fixes" A: Resolution independence has been worked on by some gtk devs, and here is an update with a very big patch to introduce it into GTK. The patch is however a year old now and it is still unclear how/when/if it is going to be included: (screenshots at the end) http://mail.gnome.org/archives/gtk-devel-list/2008-August/msg00044.html A: There is no built-in way to do this. To do this, you'll have to consider what is "taking up space" in your ui, and how to reduce it. If your UI is mostly text and images, you can use a smaller font size, then scale all images down by an appropriate percentage. The widget sizing will shrink automatically once the text and images that they are displaying shrinks (unless you've done Bad Things like hardcode heights/widths, use GtkFixed, etc). The tricky part will be determining the relationship between font point size and image scale. EDIT: Here's a post about the pygtk syntax to change the font size. A: Having written a 100% scalable gtk app, what I did was limit myself to gdk_draw_line, and gdk_draw_rectangle, which were easy to then scale myself. Text was "done" via gdk_draw_line. (for certain low values of "done.") See: http://wordwarvi.sourceforge.net Not that it helps you any, I'm guessing.
In GTK, is there an easy way to scale all widgets by an arbitrary amount?
I want my widget to look exactly like it does now, except to be smaller. It includes buttons, labels, text, images, etc. Is there any way to just say "scale this to be half the size", and have GTK do all the image processing, widget resizing, etc., necessary? If not, what's the easiest way to accomplish this?
[ "Changing the theme from the user interface is not something that I recommend, but you can do it if you require it. Using a custom gtkrc may help you to change the font and the way the buttons are drawn, mostly because of the xthickness and ythickness.\n import gtk\n file = \"/path/to/the/gtkrc\"\n gtk.rc_parse(file)\n gtk.rc_add_default_file(file)\n gtk.rc_reparse_all()\n\nAnd the custom gtkrc may look like this:\ngtk_color_scheme = \"fg_color:#ECE9E9;bg_color:#ECE9E9;base_color:#FFFFFF;text_color:#000000;selected_bg_color:#008DD7;selected_fg_color:#FFFFFF;tooltip_bg_color:#000000;tooltip_fg_color:#F5F5B5\"\nstyle \"theme-fixes\" {\n\n    fg[NORMAL] = @fg_color\n    fg[PRELIGHT] = @fg_color\n    fg[SELECTED] = @selected_fg_color\n    fg[ACTIVE] = @fg_color\n    fg[INSENSITIVE] = darker (@bg_color)\n\n    bg[NORMAL] = @bg_color\n    bg[PRELIGHT] = shade (1.02, @bg_color)\n    bg[SELECTED] = @selected_bg_color\n    bg[INSENSITIVE] = @bg_color\n    bg[ACTIVE] = shade (0.9, @bg_color)\n\n    base[NORMAL] = @base_color\n    base[PRELIGHT] = shade (0.95, @bg_color)\n    base[ACTIVE] = shade (0.9, @selected_bg_color)\n    base[SELECTED] = @selected_bg_color\n    base[INSENSITIVE] = @bg_color\n\n    text[NORMAL] = @text_color\n    text[PRELIGHT] = @text_color\n    text[ACTIVE] = @selected_fg_color\n    text[SELECTED] = @selected_fg_color\n    text[INSENSITIVE] = darker (@bg_color)\n\n    GtkTreeView::odd_row_color = shade (0.929458256, @base_color)\n    GtkTreeView::even_row_color = @base_color\n    GtkTreeView::horizontal-separator = 12\n\n    font_name = \"Helvetica World 7\"\n }\n class \"*\" style \"theme-fixes\"\n\n", "Resolution independence has been worked on by some gtk devs, and here is an update with a very big patch to introduce it into GTK. The patch is however a year old now and it is still unclear how/when/if it is going to be included: (screenshots at the end)\nhttp://mail.gnome.org/archives/gtk-devel-list/2008-August/msg00044.html\n", "There is no built-in way to do this. To do this, you'll have to consider what is \"taking up space\" in your ui, and how to reduce it.\nIf your UI is mostly text and images, you can use a smaller font size, then scale all images down by an appropriate percentage. The widget sizing will shrink automatically once the text and images that they are displaying shrinks (unless you've done Bad Things like hardcode heights/widths, use GtkFixed, etc).\nThe tricky part will be determining the relationship between font point size and image scale.\nEDIT:\nHere's a post about the pygtk syntax to change the font size.\n", "Having written a 100% scalable gtk app, what I did was limit myself to gdk_draw_line, and gdk_draw_rectangle, which were easy to then scale myself. Text was \"done\" via gdk_draw_line. (for certain low values of \"done.\") See: http://wordwarvi.sourceforge.net\nNot that it helps you any, I'm guessing.\n" ]
[ 2, 2, 1, 1 ]
[]
[]
[ "gtk", "pygtk", "python", "user_interface" ]
stackoverflow_0001269268_gtk_pygtk_python_user_interface.txt
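A minimal sketch of the image-scaling half of the third answer; the file name and the half-size factor are illustrative, and PyGTK's pixbuf API does the resampling:

import gtk

pixbuf = gtk.gdk.pixbuf_new_from_file("image.png")  # illustrative path
w, h = pixbuf.get_width(), pixbuf.get_height()
small = pixbuf.scale_simple(w // 2, h // 2, gtk.gdk.INTERP_BILINEAR)

image = gtk.Image()
image.set_from_pixbuf(small)  # containers shrink to fit the smaller pixbuf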
Q: Does a UDP service have to respond from the connected IP address? Pyzor uses UDP/IP as the communication protocol. We recently switched the public server to a new machine, and started getting reports of many timeouts. I discovered that I could fix the problem if I changed the IP that was queried from eth0:1 to eth0. I can reproduce this problem with a simple example: This is the server code: #! /usr/bin/env python import SocketServer class RequestHandler(SocketServer.DatagramRequestHandler): def handle(self): print self.packet self.wfile.write("Pong") s = SocketServer.UDPServer(("0.0.0.0", 24440), RequestHandler) s.serve_forever() This is the client code (188.40.77.206 is eth0. 188.40.77.236 is the same server, but is eth0:1): >>> import socket >>> s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) >>> s.sendto('ping', 0, ("188.40.77.206", 24440)) 4 >>> s.recvfrom(1024) ('Pong', ('188.40.77.206', 24440)) >>> s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) >>> s.sendto('ping', 0, ("188.40.77.236", 24440)) 4 >>> s.recvfrom(1024) [never gets anything] The server gets the "ping" packet in both cases (and therefore sends the "pong" packet in both cases). Oddly, this does work from some places (i.e. I'll get a response from both IPs). For example, it works from 188.40.37.137 (same network/datacenter, different server), but also from 89.18.189.160 (different datacenter). In those cases, the recvfrom response does have the eth0 IP, rather than the one that was connected to. Is this just a rule of UDP? Is this a problem/limitation with the Python UDPServer class? Is it something I'm doing incorrectly? Is there any way that I can have this work apart from simply connecting to the eth0 IP (or listening on the specific IP rather than 0.0.0.0)? A: I came across this with a TFTP server. My server had two IP addresses facing the same network. Because UDP is connectionless, there can be issues with IP addresses not being set as expected in that situation. The sequence I had was: Client sends the initial packet to the server at a particular IP address Server reads the client's source address from the incoming packet, and sends a response. However, in the response, the server's "source address" is set according to the routing tables, and it gets set to the other IP address. It wasn't possible to control the server's "source" IP address because the OS didn't tell us which IP address the request came in through. The client gets a response from the "other" IP address, and rejects it. The solution in my case was to specifically bind the TFTP server to the IP address that I wanted to listen to, rather than binding to all interfaces. I found some text that may be relevant in a Linux man page for tftpd (TFTP server). Here it is: Unfortunately, on multi-homed systems, it is impossible for tftpd to determine the address on which a packet was received. As a result, tftpd uses two different mechanisms to guess the best source address to use for replies. If the socket that inetd(8) passed to tftpd is bound to a particular address, tftpd uses that address for replies. Otherwise, tftpd uses ‘‘UDP connect’’ to let the kernel choose the reply address based on the destination of the replies and the routing tables. This means that most setups will work transparently, while in cases where the reply address must be fixed, the virtual hosting feature of inetd(8) can be used to ensure that replies go out from the correct address. These considerations are important, because most tftp clients will reject reply packets that appear to come from an unexpected address. See this answer which shows that on Linux it is possible to read the local address for incoming UDP packets, and set it for outgoing packets. It's possible in C; I'm not sure about Python though. A: Is this just a rule of UDP? No. Is this a problem/limitation with the Python UDPServer class? Doubtful. Is it something I'm doing incorrectly? Your program looks correct. There are any number of reasons why the datagram isn't getting to the server. UDP is connectionless so your client just sends the ping off into the ether without knowing if anyone receives it. See if you are allowed to bind to that address. There's a great little program called netcat that works great for low level network access. It's not always available on every system but it's easy to download and compile. nc -l -s 188.40.77.236 -p 24440 -u If you run your client program just like before, you should see "Ping" printed on your terminal. (You can type Pong and send it back to your client. It's kinda fun to play with.) If you get the ping, the networking issues aren't the problem and something is wrong with the Python server program or libraries. If you don't get the ping, you can't make the connection. "Contact your network administrator for assistance." Things to check would include... Firewall problems? Configuration issues with aliased network interfaces. User permission problems.
Does a UDP service have to respond from the connected IP address?
Pyzor uses UDP/IP as the communication protocol. We recently switched the public server to a new machine, and started getting reports of many timeouts. I discovered that I could fix the problem if I changed the IP that was queried from eth0:1 to eth0. I can reproduce this problem with a simple example: This is the server code: #! /usr/bin/env python import SocketServer class RequestHandler(SocketServer.DatagramRequestHandler): def handle(self): print self.packet self.wfile.write("Pong") s = SocketServer.UDPServer(("0.0.0.0", 24440), RequestHandler) s.serve_forever() This is the client code (188.40.77.206 is eth0. 188.40.77.236 is the same server, but is eth0:1): >>> import socket >>> s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) >>> s.sendto('ping', 0, ("188.40.77.206", 24440)) 4 >>> s.recvfrom(1024) ('Pong', ('188.40.77.206', 24440)) >>> s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) >>> s.sendto('ping', 0, ("188.40.77.236", 24440)) 4 >>> s.recvfrom(1024) [never gets anything] The server gets the "ping" packet in both cases (and therefore sends the "pong" packet in both cases). Oddly, this does work from some places (i.e. I'll get a response from both IPs). For example, it works from 188.40.37.137 (same network/datacenter, different server), but also from 89.18.189.160 (different datacenter). In those cases, the recvfrom response does have the eth0 IP, rather than the one that was connected to. Is this just a rule of UDP? Is this a problem/limitation with the Python UDPServer class? Is it something I'm doing incorrectly? Is there any way that I can have this work apart from simply connecting to the eth0 IP (or listening on the specific IP rather than 0.0.0.0)?
[ "I came across this with a TFTP server. My server had two IP addresses facing the same network. Because UDP is connectionless, there can be issues with IP addresses not being set as expected in that situation. The sequence I had was:\n\nClient sends the initial packet to the server at a particular IP address\nServer reads the client's source address from the incoming packet, and sends a response.\n\n\nHowever, in the response, the server's \"source address\" is set according to the routing tables, and it gets set to the other IP address.\nIt wasn't possible to control the server's \"source\" IP address because the OS didn't tell us which IP address the request came in through.\n\nThe client gets a response from the \"other\" IP address, and rejects it.\n\nThe solution in my case was to specifically bind the TFTP server to the IP address that I wanted to listen to, rather than binding to all interfaces.\nI found some text that may be relevant in a Linux man page for tftpd (TFTP server). Here it is:\n Unfortunately, on multi-homed systems, it is impossible for tftpd to\n determine the address on which a packet was received. As a result, tftpd\n uses two different mechanisms to guess the best source address to use for\n replies. If the socket that inetd(8) passed to tftpd is bound to a par‐\n ticular address, tftpd uses that address for replies. Otherwise, tftpd\n uses ‘‘UDP connect’’ to let the kernel choose the reply address based on\n the destination of the replies and the routing tables. This means that\n most setups will work transparently, while in cases where the reply\n address must be fixed, the virtual hosting feature of inetd(8) can be\n used to ensure that replies go out from the correct address. These con‐\n siderations are important, because most tftp clients will reject reply\n packets that appear to come from an unexpected address.\n\nSee this answer which shows that on Linux it is possible to read the local address for incoming UDP packets, and set it for outgoing packets. It's possible in C; I'm not sure about Python though.\n", "\nIs this just a rule of UDP?\n\nNo.\n\nIs this a problem/limitation with the\n Python UDPServer class?\n\nDoubtful.\n\nIs it something I'm doing incorrectly?\n\nYour program looks correct.\nThere are any number of reasons why the datagram isn't getting to the server. UDP is connectionless so your client just sends the ping off into the ether without knowing if anyone receives it. \nSee if you are allowed to bind to that address. There's a great little program called netcat that works great for low level network access. It's not always available on every system but it's easy to download and compile.\nnc -l -s 188.40.77.236 -p 24440 -u\n\nIf you run your client program just like before, you should see \"Ping\" printed on your terminal. (You can type Pong and send it back to your client. It's kinda fun to play with.) If you get the ping, the networking issues aren't the problem and something is wrong with the Python server program or libraries. If you don't get the ping, you can't make the connection. \"Contact your network administrator for assistance.\"\nThings to check would include...\n\nFirewall problems?\nConfiguration issues with aliased network interfaces.\nUser permission problems.\n\n" ]
[ 3, 1 ]
[]
[]
[ "ip", "multihomed", "python", "sockets", "udp" ]
stackoverflow_0001309370_ip_multihomed_python_sockets_udp.txt
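A minimal sketch of the fix suggested by the accepted answer: bind one UDP socket per address instead of 0.0.0.0, so replies always leave from the IP that was queried (the addresses and port come from the question):

import select
import socket

addresses = ["188.40.77.206", "188.40.77.236"]
socks = []
for ip in addresses:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind((ip, 24440))  # replies from this socket carry this source address
    socks.append(s)

while True:
    readable, _, _ = select.select(socks, [], [])
    for s in readable:
        data, peer = s.recvfrom(1024)
        s.sendto("Pong", peer)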
Q: Python String Formatting And String Multiplication Oddity Python is doing string multiplication where I would expect it to do numeric multiplication, and I don't know why. >>> print('%d' % 2 * 4) 2222 >>> print('%d' % (2 * 4)) 8 Even forcing the type to integer does nothing. (I realize this is redundant, but it's an idiot-check for me: >>> print('%d' % int(2) * int(4)) 2222 Obviously I solved my problem (adding the parentheses does it) but what's going on here? If this is just a quirk I have to remember, that's fine, but I'd rather understand the logic behind this. A: You are experiencing operator precedence. In Python % has the same precedence as * so they group left to right. So, print('%d' % 2 * 4) is the same as, print( ('%d' % 2) * 4) Here is the Python operator precedence table. Since it is difficult to remember operator precedence rules, and the rules can be subtle, it is often best to simply use explicit parentheses when chaining multiple operators in an expression. A: Ah, I think I figured it out. Just after I posted the message, of course. It's an order-of-operations thing. The string formatting is being calculated, and the resulting string is being string multiplied against the last operand. When I type: >>> print '%d' % 2 * 4 2222 It turns out to be as if I had specified the precedence this way: >>> print ('%d' % 2) * 4 2222
Python String Formatting And String Multiplication Oddity
Python is doing string multiplication where I would expect it to do numeric multiplication, and I don't know why. >>> print('%d' % 2 * 4) 2222 >>> print('%d' % (2 * 4)) 8 Even forcing the type to integer does nothing. (I realize this is redundant, but it's an idiot-check for me: >>> print('%d' % int(2) * int(4)) 2222 Obviously I solved my problem (adding the parentheses does it) but what's going on here? If this is just a quirk I have to remember, that's fine, but I'd rather understand the logic behind this.
[ "You are experiencing operator precedence.\nIn Python % has the same precedence as * so they group left to right.\nSo,\nprint('%d' % 2 * 4)\n\nis the same as,\nprint( ('%d' % 2) * 4)\n\nHere is the Python operator precedence table.\nSince it is difficult to remember operator precedence rules, and the rules can be subtle, it is often best to simply use explicit parentheses when chaining multiple operators in an expression.\n", "Ah, I think I figured it out. Just after I posted the message, of course. It's an order-of-operations thing. The string formatting is being calculated, and the resulting string is being string multiplied against the last operand.\nWhen I type:\n>>> print '%d' % 2 * 4\n2222\n\nIt turns out to be as if I had specified the precedence this way:\n>>> print ('%d' % 2) * 4\n2222\n\n" ]
[ 12, 2 ]
[]
[]
[ "formatting", "operator_precedence", "python", "string" ]
stackoverflow_0001309737_formatting_operator_precedence_python_string.txt
Q: Is it possible to peek at the data in a urllib2 response? I need to detect character encoding in HTTP responses. To do this I look at the headers, then if it's not set in the content-type header I have to peek at the response and look for a "<meta http-equiv='content-type'>" tag. I'd like to be able to write a function that looks and works something like this: response = urllib2.urlopen("http://www.example.com/") encoding = detect_html_encoding(response) ... page_text = response.read() However, if I do response.read() in my "detect_html_encoding" method, then the subsequent response.read() after the call to my function will fail. Is there an easy way to peek at the response and/or rewind after a read? A: def detectit(response): # try headers &c, then, worst case...: content = response.read() response.read = lambda: content # now detect based on content The trick of course is ensuring that response.read() WILL return the same thing again if needed... that's why we assign that lambda to it if necessary, i.e., if we already needed to extract the content -- that ensures the same content can be extracted again (and again, and again, ...;-). A: If it's in the HTTP headers (not the document itself) you could use response.info() to detect the encoding If you want to parse the HTML, save the response data: page_text = response.read() encoding = detect_html_encoding(response, page_text)
Is it possible to peek at the data in a urllib2 response?
I need to detect character encoding in HTTP responses. To do this I look at the headers, then if it's not set in the content-type header I have to peek at the response and look for a "<meta http-equiv='content-type'>" tag. I'd like to be able to write a function that looks and works something like this: response = urllib2.urlopen("http://www.example.com/") encoding = detect_html_encoding(response) ... page_text = response.read() However, if I do response.read() in my "detect_html_encoding" method, then the subsequent response.read() after the call to my function will fail. Is there an easy way to peek at the response and/or rewind after a read?
[ "def detectit(response):\n # try headers &c, then, worst case...:\n content = response.read()\n response.read = lambda: content\n # now detect based on content\n\nThe trick of course is ensuring that response.read() WILL return the same thing again if needed... that's why we assign that lambda to it if necessary, i.e., if we already needed to extract the content -- that ensures the same content can be extracted again (and again, and again, ...;-).\n", "\nIf it's in the HTTP headers (not the document itself) you could use response.info() to detect the encoding\nIf you want to parse the HTML, save the response data:\npage_text = response.read()\nencoding = detect_html_encoding(response, page_text)\n\n\n" ]
[ 4, 0 ]
[]
[]
[ "encoding", "html", "http", "python", "urllib2" ]
stackoverflow_0001308584_encoding_html_http_python_urllib2.txt
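A minimal sketch of the same idea using a StringIO buffer instead of patching read, which keeps a normal seek/rewind interface (Python 2 names, as in the question; the URL is the question's example):

from StringIO import StringIO
import urllib2

response = urllib2.urlopen("http://www.example.com/")
buf = StringIO(response.read())  # read the body exactly once

head = buf.read(1024)   # peek at the start to hunt for a meta tag
buf.seek(0)             # rewind
page_text = buf.read()  # the full body is still available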
Q: How to set smtplib sending timeout in python 2.4? I'm having problems with smtplib tying up my program when email sending fails, because a timeout is never raised. The server I'm using does not and will never have python greater than 2.4, so I can't make use of the timeout argument to the SMTP constructor in later versions of python. Python 2.4's docs show that the SMTP class does not have the 'timeout' argument: class SMTP([host[, port[, local_hostname]]]) So how do I simulate this functionality? A: import socket socket.setdefaulttimeout(120) will make any socket time out after 2 minutes, unless the specific socket's timeout is changed (and I believe SMTP in Python 2.4 doesn't do the latter). Edit: apparently per OP's comment this breaks TLS, so, plan B...: What about grabbing 2.6's smtplib source file and backporting it as your own module to your 2.4? Shouldn't be TOO hard... and does support timeout cleanly!-) See here...
How to set smtplib sending timeout in python 2.4?
I'm having problems with smtplib tying up my program when email sending fails, because a timeout is never raised. The server I'm using does not and will never have python greater than 2.4, so I can't make use of the timeout argument to the SMTP constructor in later versions of python. Python 2.4's docs show that the SMTP class does not have the 'timeout' argument: class SMTP([host[, port[, local_hostname]]]) So how do I simulate this functionality?
[ "import socket\nsocket.setdefaulttimeout(120)\n\nwill make any socket time out after 2 minutes, unless the specific socket's timeout is changed (and I believe SMTP in Python 2.4 doesn't do the latter).\nEdit: apparently per OP's comment this breaks TLS, so, plan B...:\nWhat about grabbing 2.6's smtplib source file and backporting it as your own module to your 2.4? Shouldn't be TOO hard... and does support timeout cleanly!-)\nSee here...\n" ]
[ 6 ]
[]
[]
[ "python", "python_2.4", "smtplib" ]
stackoverflow_0001309991_python_python_2.4_smtplib.txt
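A minimal sketch of the accepted answer's setdefaulttimeout approach; the hostname and addresses are illustrative, and note the caveat in the answer that this global default reportedly interfered with TLS for the asker:

import socket
import smtplib

socket.setdefaulttimeout(120)  # applies to sockets created after this point

try:
    server = smtplib.SMTP("mail.example.com")  # illustrative host
    server.sendmail("from@example.com", ["to@example.com"],
                    "Subject: test\r\n\r\ntimeout demo")
    server.quit()
except socket.timeout:
    print "SMTP operation timed out"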
Q: How can I use TurboMail 3 together with TurboGears 2 Hi, I want to use TurboMail3 (website) together with a TurboGears 2 (website) project. Which files do I have to modify to include TurboMail into my TurboGears project? Everything I find on the web is for TurboMail2 and TurboGears1. The TurboMail Documentation states that there actually is a TG2 integration but I never found documentation for it. Thanks! A: The integration is currently the same as for Pylons. There is a ticket for a TG2 specific integration which is currently in our bug tracker. If you really want answers for that topic, please ask in the turbomail google group: http://groups.google.com/group/turbomail-devel A: This might help you along: Getting TurboMail to work with TurboGears 2.0
How can I use TurboMail 3 together with TurboGears 2
Hi, I want to use TurboMail3 (website) together with a TurboGears 2 (website) project. Which files do I have to modify to include TurboMail into my TurboGears project? Everything I find on the web is for TurboMail2 and TurboGears1. The TurboMail Documentation states that there actually is a TG2 integration but I never found documentation for it. Thanks!
[ "The integration is currently the same as for Pylons. There is a ticket for a TG2 specific integration which is currently in our bug tracker. If you really want answers for that topic, please ask in the turbomail google group: http://groups.google.com/group/turbomail-devel\n", "This might help you along: Getting TurboMail to work with TurboGears 2.0\n" ]
[ 1, 0 ]
[]
[]
[ "email", "python", "turbogears" ]
stackoverflow_0000598019_email_python_turbogears.txt
Q: oop instantiation pythonic practices I've got the code below, and I was planning on making several classes all within the same "import". I was hoping to instantiate each class and get a return value with the widgets I'm making. This isn't really a PyQt question at all, more of a "good practices" question, as I'll have a class for each widget. Should I make functions that return the widgets that were created, if so how? How do I ensure it is difficult to directly instantiate the class if that is the best method for what I'm after? I'd like to be able to do something like .... tabs = wqTabWidget( ['firstTab', 'Second', 'Last Tab'] ) or (whichever is a better practice) tabs = wqInstance.createTabs( ['firstTab', 'Second', 'Last Tab'] ) Here's my class so far.... from PyQt4 import QtCore as qc from PyQt4 import QtGui as qg class wqTabWidget(qg.QTabWidget): def __init__(self, *args): apply(qg.QTabWidget.__init__,(self, )) tabList = [] tabNames = args[0] for name in tabNames: tabWidget = qg.QWidget() self.addTab(tabWidget, name) tabList.append( { name:tabWidget } ) print 'hi' if __name__ == '__main__': app = qg.QApplication(sys.argv) window = wqTabWidget(['hi', 'there', 'and', 'stuff']) window.show() app.exec_() A: The answer depends on whether the list of tabs can be changed at runtime. If this widget really only supports adding a set of tabs, but never changing or appending new ones, the list of tabs should come from the initializer. Otherwise you should also add a method to do the job. Consider the QLabel widget which can set the label's text in the initializer and through the setText method. Other code idea tips. Your initializer's argument list is a little confusing because you accept an arbitrary number of arguments, but only do something with the first one, and expect it to be a list of strings. A clear list of arguments is important. Your use of apply to call the base class initializer is unnecessary. Change the code to simply qg.QTabWidget.__init__(self) When creating a PyQt widget, I almost always prefer to allow a "parent" argument, even when I know the widget is going to be a toplevel widget. This is what all the built-in PyQt methods do, and feels like good practice to follow. I also can't see the reason to store a list of tabs, with each one being a single element dictionary. I suspect you won't need to keep your own list of tabs and tab names. The QTabWidget can answer all questions about the contents. If I were to bend this example code to my own preferences it would look like this. from PyQt4 import QtCore as qc from PyQt4 import QtGui as qg class wqTabWidget(qg.QTabWidget): def __init__(self, parent, tabNames): qg.QTabWidget.__init__(self, parent) self.createTabs(tabNames) def createTabs(self, tabNames): for name in tabNames: tabWidget = qg.QWidget() self.addTab(tabWidget, name) if __name__ == '__main__': app = qg.QApplication(sys.argv) window = wqTabWidget(None, ['hi', 'there', 'and', 'stuff']) window.show() app.exec_()
oop instantiation pythonic practices
I've got the code below, and I was planning on making several classes all within the same "import". I was hoping to instantiate each class and get a return value with the widgets I'm making. This isn't really a PyQt question at all, more of a "good practices" question, as I'll have a class for each widget. Should I make functions that return the widgets that were created, if so how? How do I ensure it is difficult to directly instantiate the class if that is the best method for what I'm after? I'd like to be able to do something like .... tabs = wqTabWidget( ['firstTab', 'Second', 'Last Tab'] ) or (whichever is a better practice) tabs = wqInstance.createTabs( ['firstTab', 'Second', 'Last Tab'] ) Here's my class so far.... from PyQt4 import QtCore as qc from PyQt4 import QtGui as qg class wqTabWidget(qg.QTabWidget): def __init__(self, *args): apply(qg.QTabWidget.__init__,(self, )) tabList = [] tabNames = args[0] for name in tabNames: tabWidget = qg.QWidget() self.addTab(tabWidget, name) tabList.append( { name:tabWidget } ) print 'hi' if __name__ == '__main__': app = qg.QApplication(sys.argv) window = wqTabWidget(['hi', 'there', 'and', 'stuff']) window.show() app.exec_()
[ "The answer depends on whether the list of tabs can be changed at runtime. If this widget really only supports adding a set of tabs, but never changing or appending new ones, the list of tabs should come from the initializer. Otherwise you should also add a method to do the job. Consider the QLabel widget which can set the label's text in the initializer and through the setText method.\nOther code idea tips.\nYour initializer's argument list is a little confusing because you accept an arbitrary number of arguments, but only do something with the first one, and expect it to be a list of strings. A clear list of arguments is important.\nYour use of apply to call the base class initializer is unnecessary. Change the code to simply qg.QTabWidget.__init__(self)\nWhen creating a PyQt widget, I almost always prefer to allow a \"parent\" argument, even when I know the widget is going to be a toplevel widget. This is what all the built-in PyQt methods do, and feels like good practice to follow.\nI also can't see the reason to store a list of tabs, with each one being a single element dictionary. I suspect you won't need to keep your own list of tabs and tab names. The QTabWidget can answer all questions about the contents.\nIf I were to bend this example code to my own preferences it would look like this.\nfrom PyQt4 import QtCore as qc\nfrom PyQt4 import QtGui as qg\n\n\nclass wqTabWidget(qg.QTabWidget):\n    def __init__(self, parent, tabNames):\n        qg.QTabWidget.__init__(self, parent)\n        self.createTabs(tabNames)\n\n    def createTabs(self, tabNames):\n        for name in tabNames:\n            tabWidget = qg.QWidget()\n            self.addTab(tabWidget, name)\n\n\nif __name__ == '__main__':\n    app = qg.QApplication(sys.argv)\n    window = wqTabWidget(None, ['hi', 'there', 'and', 'stuff'])\n    window.show()\n    app.exec_()\n\n" ]
[ 4 ]
[]
[]
[ "oop", "pyqt", "python" ]
stackoverflow_0001310158_oop_pyqt_python.txt
Q: Refresh QTextEdit in PyQt I'm writing a PyQt app that takes some input in one widget, and then processes some text files. What I've got at the moment is that when the user clicks the "process" button a separate window with a QTextEdit in it pops up, and outputs some logging messages. On Mac OS X this window is refreshed automatically and you can see the process. On Windows, the window reports (Not Responding) and then once all the processing is done, the log output is shown. I'm assuming I need to refresh the window after each write into the log, and I've had a look around at using a timer, etc., but haven't had much luck in getting it working. Below is the source code. It has two files, GUI.py which does all the GUI stuff and MOVtoMXF that does all the processing. GUI.py import os import sys import MOVtoMXF from PyQt4.QtCore import * from PyQt4.QtGui import * class Form(QDialog): def process(self): path = str(self.pathBox.displayText()) if(path == ''): QMessageBox.warning(self, "Empty Path", "You didnt fill something out.") return xmlFile = str(self.xmlFileBox.displayText()) if(xmlFile == ''): QMessageBox.warning(self, "No XML file", "You didnt fill something.") return outFileName = str(self.outfileNameBox.displayText()) if(outFileName == ''): QMessageBox.warning(self, "No Output File", "You didnt do something") return print path + " " + xmlFile + " " + outFileName mov1 = MOVtoMXF.MOVtoMXF(path, xmlFile, outFileName, self.log) self.log.show() rc = mov1.ScanFile() if( rc < 0): print "something happened" #self.done(0) def __init__(self, parent=None): super(Form, self).__init__(parent) self.log = Log() self.pathLabel = QLabel("P2 Path:") self.pathBox = QLineEdit("") self.pathBrowseB = QPushButton("Browse") self.pathLayout = QHBoxLayout() self.pathLayout.addStretch() self.pathLayout.addWidget(self.pathLabel) self.pathLayout.addWidget(self.pathBox) self.pathLayout.addWidget(self.pathBrowseB) self.xmlLabel = QLabel("FCP XML File:") self.xmlFileBox = QLineEdit("") self.xmlFileBrowseB = QPushButton("Browse") self.xmlLayout = QHBoxLayout() self.xmlLayout.addStretch() self.xmlLayout.addWidget(self.xmlLabel) self.xmlLayout.addWidget(self.xmlFileBox) self.xmlLayout.addWidget(self.xmlFileBrowseB) self.outFileLabel = QLabel("Save to:") self.outfileNameBox = QLineEdit("") self.outputFileBrowseB = QPushButton("Browse") self.outputLayout = QHBoxLayout() self.outputLayout.addStretch() self.outputLayout.addWidget(self.outFileLabel) self.outputLayout.addWidget(self.outfileNameBox) self.outputLayout.addWidget(self.outputFileBrowseB) self.exitButton = QPushButton("Exit") self.processButton = QPushButton("Process") self.buttonLayout = QHBoxLayout() #self.buttonLayout.addStretch() self.buttonLayout.addWidget(self.exitButton) self.buttonLayout.addWidget(self.processButton) self.layout = QVBoxLayout() self.layout.addLayout(self.pathLayout) self.layout.addLayout(self.xmlLayout) self.layout.addLayout(self.outputLayout) self.layout.addLayout(self.buttonLayout) self.setLayout(self.layout) self.pathBox.setFocus() self.setWindowTitle("MOVtoMXF") self.connect(self.processButton, SIGNAL("clicked()"), self.process) self.connect(self.exitButton, SIGNAL("clicked()"), self, SLOT("reject()")) self.ConnectButtons() class Log(QTextEdit): def __init__(self, parent=None): super(Log, self).__init__(parent) self.timer = QTimer() self.connect(self.timer, SIGNAL("timeout()"), self.updateText()) self.timer.start(2000) def updateText(self): print "update Called" AND MOVtoMXF.py import os import sys import time import string import FileUtils import shutil import re class MOVtoMXF: #Class to do the MOVtoMXF stuff. def __init__(self, path, xmlFile, outputFile, edit): self.MXFdict = {} self.MOVDict = {} self.path = path self.xmlFile = xmlFile self.outputFile = outputFile self.outputDirectory = outputFile.rsplit('/',1) self.outputDirectory = self.outputDirectory[0] sys.stdout = OutLog( edit, sys.stdout) class OutLog(): def __init__(self, edit, out=None, color=None): """(edit, out=None, color=None) -> can write stdout, stderr to a QTextEdit. edit = QTextEdit out = alternate stream ( can be the original sys.stdout ) color = alternate color (i.e. color stderr a different color) """ self.edit = edit self.out = None self.color = color def write(self, m): if self.color: tc = self.edit.textColor() self.edit.setTextColor(self.color) #self.edit.moveCursor(QtGui.QTextCursor.End) self.edit.insertPlainText( m ) if self.color: self.edit.setTextColor(tc) if self.out: self.out.write(m) self.edit.show() If any other code is needed (I think this is all that is needed) then just let me know. Any help would be great. Mark A: It looks like you are running an external program, capturing its output into a QTextEdit. I didn't see the code of Form.process, but I am guessing on Windows your function waits for the external program to finish, then quickly dumps everything to the QTextEdit. If your interface really is waiting for the other process to finish, then it will hang in the manner you describe. You'll need to look at subprocess or perhaps even popen to get the program's output in a "non-blocking" manner. The key to avoiding "(Not Responding)" is to call QApplication.processEvents a few times every few seconds. The QTimer is not going to help in this case, because if Qt cannot process its events, it cannot call any signal handlers.
Refresh QTextEdit in PyQt
I'm writing a PyQt app that takes some input in one widget, and then processes some text files. What I've got at the moment is that when the user clicks the "process" button a separate window with a QTextEdit in it pops up, and outputs some logging messages. On Mac OS X this window is refreshed automatically and you can see the process. On Windows, the window reports (Not Responding) and then once all the processing is done, the log output is shown. I'm assuming I need to refresh the window after each write into the log, and I've had a look around at using a timer, etc., but haven't had much luck in getting it working. Below is the source code. It has two files, GUI.py which does all the GUI stuff and MOVtoMXF that does all the processing. GUI.py import os import sys import MOVtoMXF from PyQt4.QtCore import * from PyQt4.QtGui import * class Form(QDialog): def process(self): path = str(self.pathBox.displayText()) if(path == ''): QMessageBox.warning(self, "Empty Path", "You didnt fill something out.") return xmlFile = str(self.xmlFileBox.displayText()) if(xmlFile == ''): QMessageBox.warning(self, "No XML file", "You didnt fill something.") return outFileName = str(self.outfileNameBox.displayText()) if(outFileName == ''): QMessageBox.warning(self, "No Output File", "You didnt do something") return print path + " " + xmlFile + " " + outFileName mov1 = MOVtoMXF.MOVtoMXF(path, xmlFile, outFileName, self.log) self.log.show() rc = mov1.ScanFile() if( rc < 0): print "something happened" #self.done(0) def __init__(self, parent=None): super(Form, self).__init__(parent) self.log = Log() self.pathLabel = QLabel("P2 Path:") self.pathBox = QLineEdit("") self.pathBrowseB = QPushButton("Browse") self.pathLayout = QHBoxLayout() self.pathLayout.addStretch() self.pathLayout.addWidget(self.pathLabel) self.pathLayout.addWidget(self.pathBox) self.pathLayout.addWidget(self.pathBrowseB) self.xmlLabel = QLabel("FCP XML File:") self.xmlFileBox = QLineEdit("") self.xmlFileBrowseB = QPushButton("Browse") self.xmlLayout = QHBoxLayout() self.xmlLayout.addStretch() self.xmlLayout.addWidget(self.xmlLabel) self.xmlLayout.addWidget(self.xmlFileBox) self.xmlLayout.addWidget(self.xmlFileBrowseB) self.outFileLabel = QLabel("Save to:") self.outfileNameBox = QLineEdit("") self.outputFileBrowseB = QPushButton("Browse") self.outputLayout = QHBoxLayout() self.outputLayout.addStretch() self.outputLayout.addWidget(self.outFileLabel) self.outputLayout.addWidget(self.outfileNameBox) self.outputLayout.addWidget(self.outputFileBrowseB) self.exitButton = QPushButton("Exit") self.processButton = QPushButton("Process") self.buttonLayout = QHBoxLayout() #self.buttonLayout.addStretch() self.buttonLayout.addWidget(self.exitButton) self.buttonLayout.addWidget(self.processButton) self.layout = QVBoxLayout() self.layout.addLayout(self.pathLayout) self.layout.addLayout(self.xmlLayout) self.layout.addLayout(self.outputLayout) self.layout.addLayout(self.buttonLayout) self.setLayout(self.layout) self.pathBox.setFocus() self.setWindowTitle("MOVtoMXF") self.connect(self.processButton, SIGNAL("clicked()"), self.process) self.connect(self.exitButton, SIGNAL("clicked()"), self, SLOT("reject()")) self.ConnectButtons() class Log(QTextEdit): def __init__(self, parent=None): super(Log, self).__init__(parent) self.timer = QTimer() self.connect(self.timer, SIGNAL("timeout()"), self.updateText()) self.timer.start(2000) def updateText(self): print "update Called" AND MOVtoMXF.py import os import sys import time import string import FileUtils import shutil import re class MOVtoMXF: #Class to do the MOVtoMXF stuff. def __init__(self, path, xmlFile, outputFile, edit): self.MXFdict = {} self.MOVDict = {} self.path = path self.xmlFile = xmlFile self.outputFile = outputFile self.outputDirectory = outputFile.rsplit('/',1) self.outputDirectory = self.outputDirectory[0] sys.stdout = OutLog( edit, sys.stdout) class OutLog(): def __init__(self, edit, out=None, color=None): """(edit, out=None, color=None) -> can write stdout, stderr to a QTextEdit. edit = QTextEdit out = alternate stream ( can be the original sys.stdout ) color = alternate color (i.e. color stderr a different color) """ self.edit = edit self.out = None self.color = color def write(self, m): if self.color: tc = self.edit.textColor() self.edit.setTextColor(self.color) #self.edit.moveCursor(QtGui.QTextCursor.End) self.edit.insertPlainText( m ) if self.color: self.edit.setTextColor(tc) if self.out: self.out.write(m) self.edit.show() If any other code is needed (I think this is all that is needed) then just let me know. Any help would be great. Mark
[ "It looks like you are running an external program, capturing its output into a QTextEdit. I didn't see the code of Form.process, but I am guessing on Windows your function waits for the external program to finish, then quickly dumps everything to the QTextEdit. \nIf your interface really is waiting for the other process to finish, then it will hang in the manner you describe. You'll need to look at subprocess or perhaps even popen to get the program's output in a \"non-blocking\" manner.\nThe key to avoiding \"(Not Responding)\" is to call QApplication.processEvents a few times every few seconds. The QTimer is not going to help in this case, because if Qt cannot process its events, it cannot call any signal handlers.\n" ]
[ 1 ]
[]
[]
[ "pyqt", "python" ]
stackoverflow_0001310142_pyqt_python.txt
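A minimal sketch of the answer's processEvents suggestion applied to the question's OutLog class; only the write method changes, so the log repaints while the blocking work runs in the GUI thread:

from PyQt4 import QtGui

class OutLog:
    def __init__(self, edit, out=None):
        self.edit = edit   # the QTextEdit log window
        self.out = out     # optional passthrough, e.g. the real sys.stdout

    def write(self, m):
        self.edit.insertPlainText(m)
        if self.out:
            self.out.write(m)
        QtGui.QApplication.processEvents()  # let Qt repaint between writes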
Q: What does "built-in method decode" mean in Python when profiling? I'm trying to make my program faster, so I'm profiling it. Right now the top reason is: 566 1.780 0.003 1.780 0.003 (built-in method decode) What is this exactly? I never call 'decode' anywhere in my code. It reads text files, but I don't believe they are unicode-encoded. A: Most likely, this is the decode method of string objects. A: Presumably this is str.decode ... search your source for "decode". If it's not in your code, look at Python library routines that show up in the profile results. It's highly unlikely to be anything to do with cPickle. Care to show us a few more "reasons", preferably with the column headings, to give us a wider view of your problem? Can you explain the connection between "using cPickle" and "some test cases would run faster"? You left the X and Y out of "Is there anything that will do task X faster than resource Y?" ... Update so you were asking about cPickle. What are you using for the (optional) protocol arg of cPickle.dump() and/or cPickle.dumps() ? A: (Answering @Claudiu's latest question, weirdly hidden in a comment...?!-)... To really speed up pickling, try unladen swallow -- most of its ambitious targets are still to come, but it DOES already give at least 20-25% speedup in pickling and unpickling. A: I believe decode is called anytime you are converting unicode strings into ascii strings. I am guessing you have a large amount of unicode data. I'm not sure how the internals of pickle work, but it sounds like that unicode data gets converted to ascii when pickled?
What does "built-in method decode" mean in Python when profiling?
I'm trying to make my program faster, so I'm profiling it. Right now the top reason is: 566 1.780 0.003 1.780 0.003 (built-in method decode) What is this exactly? I never call 'decode' anywhere in my code. It reads text files, but I don't believe they are unicode-encoded.
[ "Most likely, this is the decode method of string objects.\n", "Presumably this is str.decode ... search your source for \"decode\". If it's not in your code, look at Python library routines that show up in the profile results. It's highly unlikely to be anything to do with cPickle. Care to show us a few more \"reasons\", preferably with the column headings, to give us a wider view of your problem?\nCan you explain the connection between \"using cPickle\" and \"some test cases would run faster\"?\nYou left the X and Y out of \"Is there anything that will do task X faster than resource Y?\" ... Update so you were asking about cPickle. What are you using for the (optional) protocol arg of cPickle.dump() and/or cPickle.dumps() ?\n", "(Answering @Claudiu's latest question, weirdly hidden in a comment...?!-)... To really speed up pickling, try unladen swallow -- most of its ambitious targets are still to come, but it DOES already give at least 20-25% speedup in pickling and unpickling.\n", "I believe decode is called anytime you are converting unicode strings into ascii strings. I am guessing you have a large amount of unicode data. I'm not sure how the internals of pickle work, but it sounds like that unicode data gets converted to ascii when pickled?\n" ]
[ 2, 1, 1, 0 ]
[]
[]
[ "optimization", "performance", "pickle", "python", "string" ]
stackoverflow_0001310201_optimization_performance_pickle_python_string.txt
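Since the last answer traces the decode calls to pickling unicode data, a small sketch of the protocol point raised above: cPickle's default protocol 0 is ASCII-based, which is one plausible source of per-string decode work, while protocol 2 is binary (the filename here is hypothetical):

    import cPickle

    data = {u'key': u'some unicode value'}
    f = open('cache.pkl', 'wb')
    cPickle.dump(data, f, 2)   # protocol 2: binary, typically much faster
    f.close()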
Q: What is the best way to create a Python object when you have the class implementation stored in a string? What is the best way to dynamically create a Python object instance when all you have is the Python class saved as a string? For background, I am working in the Google Application Engine environment and I want to be able to load classes dynamically from a string version of the class. problem = "1,2,3,4,5" solvertext1 = """class solver: def solve(self, problemstring): return len(problemstring) """ solvertext2 = """class solver: def solve(self, problemstring): return problemstring[0] """ solver = #The solution code here (solvertext1) answer = solver.solve(problem) #answer should equal 9 solver = #The solution code here (solvertext2) answer = solver.solve(problem) # answer should equal 1 A: Alas, exec is your only choice, but at least do it right to avert disaster: pass an explicit dictionary (with an in clause, of course)! E.g.: >>> class X(object): pass ... >>> x=X() >>> exec 'a=23' in vars(x) >>> x.a 23 this way you KNOW the exec won't pollute general namespaces, and whatever classes are being defined are going to be available as attributes of x. Almost makes exec bearable...!-) A: Use the exec statement to define your class and then instantiate it: exec solvertext1 s = solver() answer = s.solve(problem) A: Simple example: >>> solvertext1 = "def f(problem):\n\treturn len(problem)\n" >>> ex_string = solvertext1 + "\nanswer = f(%s)"%('\"Hello World\"') >>> exec ex_string >>> answer 11
What is the best way to create a Python object when you have the class implementation stored in a string?
What is the best way to dynamically create a Python object instance when all you have is the Python class saved as a string? For background, I am working in the Google Application Engine environment and I want to be able to load classes dynamically from a string version of the class. problem = "1,2,3,4,5" solvertext1 = """class solver: def solve(self, problemstring): return len(problemstring) """ solvertext2 = """class solver: def solve(self, problemstring): return problemstring[0] """ solver = #The solution code here (solvertext1) answer = solver.solve(problem) #answer should equal 9 solver = #The solution code here (solvertext2) answer = solver.solve(problem) # answer should equal 1
[ "Alas, exec is your only choice, but at least do it right to avert disaster: pass an explicit dictionary (with an in clause, of course)! E.g.:\n>>> class X(object): pass\n... \n>>> x=X()\n>>> exec 'a=23' in vars(x)\n>>> x.a\n23\n\nthis way you KNOW the exec won't pollute general namespaces, and whatever classes are being defined are going to be available as attributes of x. Almost makes exec bearable...!-)\n", "Use the exec statement to define your class and then instantiate it:\nexec solvertext1\ns = solver()\nanswer = s.solve(problem)\n\n", "Simple example:\n>>> solvertext1 = \"def f(problem):\\n\\treturn len(problem)\\n\"\n\n>>> ex_string = solvertext1 + \"\\nanswer = f(%s)\"%('\\\"Hello World\\\"')\n\n>>> exec ex_string\n\n>>> answer\n11\n\n" ]
[ 9, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001310254_python.txt
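Putting the first answer's advice together with the question's setup, a sketch of exec-ing the class source into an explicit dictionary so it cannot pollute the module namespace (Python 2 exec statement, matching the rest of this record):

    problem = "1,2,3,4,5"
    solvertext1 = ("class solver:\n"
                   "    def solve(self, problemstring):\n"
                   "        return len(problemstring)\n")

    namespace = {}
    exec solvertext1 in namespace        # definitions land only in our dict
    answer = namespace['solver']().solve(problem)
    print answer                         # 9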
Q: Is it possible to filter on a related item in Django annotations? I have the following 2 models: class Job(models.Model): title = models.CharField(_('title'), max_length=50) description = models.TextField(_('description')) category = models.ForeignKey(JobCategory, related_name='jobs') created_date = models.DateTimeField(auto_now_add=True) class JobCategory(models.Model): title = models.CharField(_('title'), max_length=50) slug = models.SlugField(_('slug')) Here is where I am at with the query thus far: def job_categories(): categories = JobCategory.objects.annotate(num_postings=Count('jobs')) return {'categories': categories} The problem is that I only want to count jobs that were created in the past 30 days. I want to return all categories however, not only those categories that have qualifying jobs. A: Just a guess... but would this work? def job_categories(): thirtydaysago = datetime.datetime.now() - datetime.timedelta(days=30) categories = JobCategory.objects.filter(jobs__created_date__gte=thirtydaysago).annotate(num_postings=Count('jobs')) return {'categories': categories} See "lookups-that-span-relationships" for more details on spanning queries. Hmmm... probably need another query in there to get all categories... A: I decided to approach this differently and chose not to use annotations at all. I added a manager to the Job model that returned only active (30 days or less old) jobs, and created a property on the JobCategory model that queried for the instance's job count. My templatetag simply returned all categories. Here is the relevant code. class JobCategory(models.Model): title = models.CharField(_('title'), max_length=50, help_text=_("Max 50 chars. Required.")) slug = models.SlugField(_('slug'), help_text=_("Only letters, numbers, or hyphens. Required.")) class Meta: verbose_name = _('job category') verbose_name_plural = _('job categories') def __unicode__(self): return self.title def get_absolute_url(self): return reverse('djobs_category_jobs', args=[self.slug]) @property def active_job_count(self): return len(Job.active.filter(category=self)) class ActiveJobManager(models.Manager): def get_query_set(self): return super(ActiveJobManager, self).get_query_set().filter(created_date__gte=datetime.datetime.now() - datetime.timedelta(days=30)) class Job(models.Model): title = models.CharField(_('title'), max_length=50, help_text=_("Max 50 chars. Required.")) description = models.TextField(_('description'), help_text=_("Required.")) category = models.ForeignKey(JobCategory, related_name='jobs') employment_type = models.CharField(_('employment type'), max_length=5, choices=EMPLOYMENT_TYPE_CHOICES, help_text=_("Required.")) employment_level = models.CharField(_('employment level'), max_length=5, choices=EMPLOYMENT_LEVEL_CHOICES, help_text=_("Required.")) employer = models.ForeignKey(Employer) location = models.ForeignKey(Location) contact = models.ForeignKey(Contact) allow_applications = models.BooleanField(_('allow applications')) created_date = models.DateTimeField(auto_now_add=True) objects = models.Manager() active = ActiveJobManager() class Meta: verbose_name = _('job') verbose_name_plural = _('jobs') def __unicode__(self): return '%s at %s' % (self.title, self.employer.name) and the tag... def job_categories(): categories = JobCategory.objects.all() return {'categories': categories}
Is it possible to filter on a related item in Django annotations?
I have the following 2 models: class Job(models.Model): title = models.CharField(_('title'), max_length=50) description = models.TextField(_('description')) category = models.ForeignKey(JobCategory, related_name='jobs') created_date = models.DateTimeField(auto_now_add=True) class JobCategory(models.Model): title = models.CharField(_('title'), max_length=50) slug = models.SlugField(_('slug')) Here is where I am at with the query thus far: def job_categories(): categories = JobCategory.objects.annotate(num_postings=Count('jobs')) return {'categories': categories} The problem is that I only want to count jobs that were created in the past 30 days. I want to return all categories however, not only those categories that have qualifying jobs.
[ "Just a guess... but would this work?\ndef job_categories():\n thritydaysago = datetime.datetime.now() - datetime.timedelta(days=30)\n categories = JobCategory.objects.filter(job__created_date__gte=thritydaysago).annotate(num_postings=Count('jobs'))\n return {'categories': categories}\n\nSee\"lookups-that-span-relationships\" for more details on spanning queries.\nHmmm... probably need another query in there to get all categories...\n", "I decided to approach this differently and chose not to use annotations at all. I added a manager to the Job model that returned only active (30 days or less old) jobs, and created a property on the JobCategory model that queried for the instance's job count. My templatetag simply returned all categories. Here is the relevant code.\nclass JobCategory(models.Model):\n title = models.CharField(_('title'), max_length=50, help_text=_(\"Max 50 chars. Required.\"))\n slug = models.SlugField(_('slug'), help_text=_(\"Only letters, numbers, or hyphens. Required.\"))\n\n class Meta:\n verbose_name = _('job category')\n verbose_name_plural = _('job categories')\n\n def __unicode__(self):\n return self.title\n\n def get_absolute_url(self):\n return reverse('djobs_category_jobs', args=[self.slug])\n\n @property\n def active_job_count(self):\n return len(Job.active.filter(category=self))\n\nclass ActiveJobManager(models.Manager):\n def get_query_set(self):\n return super(ActiveJobManager, self).get_query_set().filter(created_date__gte=datetime.datetime.now() - datetime.timedelta(days=30))\n\nclass Job(models.Model):\n title = models.CharField(_('title'), max_length=50, help_text=_(\"Max 50 chars. Required.\"))\n description = models.TextField(_('description'), help_text=_(\"Required.\"))\n category = models.ForeignKey(JobCategory, related_name='jobs')\n employment_type = models.CharField(_('employment type'), max_length=5, choices=EMPLOYMENT_TYPE_CHOICES, help_text=_(\"Required.\"))\n employment_level = models.CharField(_('employment level'), max_length=5, choices=EMPLOYMENT_LEVEL_CHOICES, help_text=_(\"Required.\"))\n employer = models.ForeignKey(Employer)\n location = models.ForeignKey(Location)\n contact = models.ForeignKey(Contact)\n allow_applications = models.BooleanField(_('allow applications'))\n created_date = models.DateTimeField(auto_now_add=True)\n\n objects = models.Manager()\n active = ActiveJobManager()\n\n class Meta:\n verbose_name = _('job')\n verbose_name_plural = _('jobs')\n\n def __unicode__(self):\n return '%s at %s' % (self.title, self.employer.name)\n\nand the tag...\ndef job_categories():\n categories = JobCategory.objects.all()\n return {'categories': categories}\n\n" ]
[ 4, 1 ]
[]
[]
[ "annotations", "django", "django_queryset", "python" ]
stackoverflow_0001292081_annotations_django_django_queryset_python.txt
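For completeness, a hedged sketch of the annotation route the first answer was reaching for: filtering drops categories with no recent jobs, so one workaround is to fetch the counts separately and merge them in Python (note the reverse lookup is 'jobs' here because of related_name='jobs'):

    import datetime
    from django.db.models import Count

    def job_categories():
        cutoff = datetime.datetime.now() - datetime.timedelta(days=30)
        counts = dict(JobCategory.objects
                      .filter(jobs__created_date__gte=cutoff)
                      .annotate(n=Count('jobs'))
                      .values_list('id', 'n'))
        categories = list(JobCategory.objects.all())
        for category in categories:
            # Categories with no qualifying jobs still appear, with a zero count.
            category.num_postings = counts.get(category.id, 0)
        return {'categories': categories}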
Q: HTML forms not working with python I've created an HTML page with forms, which takes a name and password and passes it to a Python Script which is supposed to print the person's name with a welcome message. However, after i POST the values, i'm just getting the Python code displayed in the browser and not the welcome message. I have stored the html file and python file in the cgi-bin folder under Apache 2.2. If i just run a simple hello world python script in the browser, the "Hello World" message is being displayed. I'm using WinXP, Python 2.5, Apache 2.2. the code that i'm trying to run is the following: #!c:\python25\python.exe import cgi import cgitb; cgitb.enable() form = cgi.FieldStorage() reshtml = """Content-Type: text/html\n <html> <head><title>Security Precaution</title></head> <body> """ print reshtml User = form['UserName'].value Pass = form['PassWord'].value if User == 'Gold' and Pass == 'finger': print '<big><big>Welcome' print 'mr. Goldfinger !</big></big><br>' print '<br>' else: print 'Sorry, incorrect user name or password' print '</body>' print '</html>' The answer to it might be very obvious, but it's completely escaping me. I'm very new to Python so any help would be greatly appreciated. Thanks. A: This i'm just getting the Python code displayed in the browser sounds like CGI handling with Apache and Python is not configured correctly. You can narrow the test case by passing UserName and PassWord as GET parameters: http://example.com/cgi-bin/my-script.py?UserName=Foo&PassWord=bar What happens if you do this? A: You may have to extract the field values like this User = form.getfirst('UserName') Pass = form.getfirst('PassWord') I know, it's strange.
HTML forms not working with python
I've created an HTML page with forms, which takes a name and password and passes it to a Python Script which is supposed to print the person's name with a welcome message. However, after i POST the values, i'm just getting the Python code displayed in the browser and not the welcome message. I have stored the html file and python file in the cgi-bin folder under Apache 2.2. If i just run a simple hello world python script in the browser, the "Hello World" message is being displayed. I'm using WinXP, Python 2.5, Apache 2.2. the code that i'm trying to run is the following: #!c:\python25\python.exe import cgi import cgitb; cgitb.enable() form = cgi.FieldStorage() reshtml = """Content-Type: text/html\n <html> <head><title>Security Precaution</title></head> <body> """ print reshtml User = form['UserName'].value Pass = form['PassWord'].value if User == 'Gold' and Pass == 'finger': print '<big><big>Welcome' print 'mr. Goldfinger !</big></big><br>' print '<br>' else: print 'Sorry, incorrect user name or password' print '</body>' print '</html>' The answer to it might be very obvious, but it's completely escaping me. I'm very new to Python so any help would be greatly appreciated. Thanks.
[ "This\n\ni'm just getting the Python code\n displayed in the browser\n\nsounds like CGI handling with Apache and Python is not configured correctly.\nYou can narrow the test case by passing UserName and PassWord as GET parameters:\nhttp://example.com/cgi-bin/my-script.py?UserName=Foo&PassWord=bar\n\nWhat happens if you do this?\n", "You may have to extract the field values like this\nUser = form.getfirst('UserName')\nPass = form.getfirst('PassWord')\n\nI know, it's strange.\n" ]
[ 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001310443_python.txt
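A minimal, self-contained variant of the script using getfirst as the second answer suggests (it also avoids a KeyError when a field is missing); this still assumes Apache is actually configured to execute .py files as CGI, which is what the first answer suspects is broken:

    #!c:\python25\python.exe
    import cgi
    import cgitb; cgitb.enable()

    print "Content-Type: text/html\n"
    form = cgi.FieldStorage()
    user = form.getfirst('UserName', '')   # '' instead of KeyError if absent
    password = form.getfirst('PassWord', '')
    if user == 'Gold' and password == 'finger':
        print '<p>Welcome, Mr. Goldfinger!</p>'
    else:
        print '<p>Sorry, incorrect user name or password</p>'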
Q: is there similar syntax to php's $$variable in python is there similar syntax to php's $$variable in python? what I am actually trying is to load a model based on a value. for example, if the value is Song, I would like to import Song module. I know I can use if statements or lambada, but something similar to php's $$variable will be much convenient. what I am after is something similar to this. from mypackage.models import [*variable] then in the views def xyz(request): xz = [*variable].objects.all() *variable is a value that is defined in a settings or can come from comandline. it can be any model in the project. A: def load_module_attr (path): modname, attr = path.rsplit ('.', 1) mod = __import__ (modname, {}, {}, [attr]) return getattr (mod, attr) def my_view (request): model_name = "myapp.models.Song" # Get from command line, user, wherever model = load_module_attr (model_name) print model.objects.all() A: I'm pretty sure you want __import__(). Read this: docs.python.org: __import__ This function is invoked by the import statement. It can be replaced (by importing the builtins module and assigning to builtins.__import__) in order to change semantics of the import statement, but nowadays it is usually simpler to use import hooks (see PEP 302). Direct use of __import__() is rare, except in cases where you want to import a module whose name is only known at runtime. A: It seems that you want to load all the potentially matchable modules/models on hand and according to request choose a particular one to use. You can "globals()" which returns dictionary of global level variables, indexable by string. So if you do something like globals()['Song'], it'd give you Song model. This is much like PHP's $$ except that it'll only grab variables of global scope. For local scope you'd have to call locals(). Here's some example code. from models import Song, Lyrics, Composers, BlaBla def xyz(request): try: modelname = get_model_name_somehow(request): model =globals()[modelname] model.objects.all() except KeyError: pass # Model/Module not loaded ... handle it the way you want to A: So you know the general concept, what you are trying to implement is known as "Reflective Programming". You can see examples in several languages in the wikipedia entry here http://en.wikipedia.org/wiki/Reflective_programming
is there similar syntax to php's $$variable in python
is there similar syntax to php's $$variable in python? what I am actually trying is to load a model based on a value. for example, if the value is Song, I would like to import Song module. I know I can use if statements or lambda, but something similar to php's $$variable would be much more convenient. what I am after is something similar to this. from mypackage.models import [*variable] then in the views def xyz(request): xz = [*variable].objects.all() *variable is a value that is defined in a settings or can come from the command line. it can be any model in the project.
[ "def load_module_attr (path):\n modname, attr = path.rsplit ('.', 1)\n mod = __import__ (modname, {}, {}, [attr])\n return getattr (mod, attr)\n\ndef my_view (request):\n model_name = \"myapp.models.Song\" # Get from command line, user, wherever\n model = load_module_attr (model_name)\n print model.objects.all()\n\n", "I'm pretty sure you want __import__().\nRead this: docs.python.org: __import__\n\nThis function is invoked by the import statement. It can be replaced (by importing the builtins module and assigning to builtins.__import__) in order to change semantics of the import statement, but nowadays it is usually simpler to use import hooks (see PEP 302). Direct use of __import__() is rare, except in cases where you want to import a module whose name is only known at runtime.\n\n", "It seems that you want to load all the potentially matchable modules/models on hand and according to request choose a particular one to use. You can \"globals()\" which returns dictionary of global level variables, indexable by string. So if you do something like globals()['Song'], it'd give you Song model. This is much like PHP's $$ except that it'll only grab variables of global scope. For local scope you'd have to call locals(). \nHere's some example code.\nfrom models import Song, Lyrics, Composers, BlaBla\n\ndef xyz(request):\n try:\n modelname = get_model_name_somehow(request):\n model =globals()[modelname]\n model.objects.all()\n except KeyError:\n pass # Model/Module not loaded ... handle it the way you want to \n\n", "So you know the general concept, what you are trying to implement is known as \"Reflective Programming\". \nYou can see examples in several languages in the wikipedia entry here \nhttp://en.wikipedia.org/wiki/Reflective_programming\n" ]
[ 3, 1, 1, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001309366_django_python.txt
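A shorter variant of the first answer's idea for the case in the question, where all candidate models live in one module: import the module once and look the class up with getattr (the model name itself can come from settings or the command line):

    from mypackage import models

    def xyz(request):
        model = getattr(models, 'Song')   # 'Song' would come from settings
        xz = model.objects.all()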
Q: Getting rid of Python console in wxPython under Windows Possible Duplicate: How can I hide the console window in a PyQt app running on Windows? How to get rid of the console that shows up as standard output when running wxPython programs in Windows? A: Not familiar with wxPython, but if you invoke your script with pythonw.exe rather than python.exe, the console window shouldn't appear. I believe saving the script as script.pyw also works. A: Others have already suggested renaming from py to pyw. If you instead refer to Output redirection, pass redirect=True when creating the wx.App class. See for instance http://www.wxpython.org/docs/api/wx.App-class.html The signature of __init__ is __init__(self, redirect=False, filename=None, useBestVisual=False, clearSigInt=True) If you set redirect=True and filename different from None, sys.stdout and sys.stderr will be redirected to filename. Note that on Windows and Mac redirect's default value is True. If redirect==True and filename is None, the output will be printed in a popup window different from your other frames. This can be very useful so that while debugging you can follow what is happening, while not cluttering the user interface with the internals of your app in release mode. A: The easiest way to do this: if __name__ == "__main__": app = wx.App(0) #<--- or False frame = MyFrame('My Frame') frame.Show(True) app.MainLoop() This prints to stdout instead of the wxPython window. A: I don't know wxPython but the solution might be as simple as using pythonw.exe to run the program instead of python.exe.
Getting rid of Python console in wxPython under Windows
Possible Duplicate: How can I hide the console window in a PyQt app running on Windows? How to get rid of the console that shows up as standard output when running wxPython programs in Windows?
[ "Not familiar with wxPython, but if you invoke your script with pythonw.exe rather than python.exe, the console window shouldn't appear. I believe saving the script as script.pyw also works.\n", "Others have already suggested of renaming from py to pyw.\nIf you instead refer to Output redirection pass redirect=True when creating the wx.App class.\nSee for instance http://www.wxpython.org/docs/api/wx.App-class.html\nThe signature of __init__ is \n__init__(self, redirect=False, filename=None, useBestVisual=False, clearSigInt=True) \nIf you set redirect=True and filename different from None, sys.stdout and sys.stderr will be redirect to filename. Note that on Windows and Mac redirect default value is True. If redirect==True and filename is None, the output will be printed in a popup window different from your other frames.\nThis can be very useful so that while debugging you can follow what is happening, while not cluttering the user interface with the internals of your app in release mode.\n", "The easiest way to do this:\nif __name__ == \"__main__\":\n app = wx.App(0) #<--- or False\n frame = MyFrame('My Frame')\n frame.Show(True)\n app.MainLoop()\n\nThis prints to stdout instead of the wxPython window.\n", "I don't know wxPython but the solution might be as simple as using pythonw.exe to run the program instead of python.exe.\n" ]
[ 6, 6, 6, 3 ]
[]
[]
[ "console", "python", "windows", "windows_console", "wxpython" ]
stackoverflow_0001310972_console_python_windows_windows_console_wxpython.txt
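A sketch tying the second answer's point together: pass redirect and a filename when creating the wx.App, so anything printed goes to a log file rather than a console or popup (the log name here is an arbitrary choice):

    import wx

    app = wx.App(redirect=True, filename='app.log')  # stdout/stderr -> app.log
    frame = wx.Frame(None, title='My Frame')
    frame.Show()
    app.MainLoop()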
Q: How nicely does Python 'flow' with HTML as compared to PHP? I'm thinking of switching from using PHP to Python for web applications, but I was wondering if Python is as adept at weaving in and out of HTML as PHP is. Essentially, I find it very easy/intuitive to use <? and ?> to put PHP where I want, and am then free to arrange/organize my HTML however I want. Is it just as easy to do this with Python? Fundamentally, the question is: is working with HTML with Python similar to working with HTML with PHP in terms of ease-of-use? Edit: I guess to help clear up some of the confusion in the comments below, I get the intuition that PHP would be better than Python at organizing the front-end, presentation part of a website, while Python would excel at the back-end part (the actual programming...). The question is - am I wrong and is Python just as good as PHP for front-end? Edit of my Edit: Ah, I'm starting to understand the error of my ways; it seems I have unknowingly picked up some bad habits. I always thought it was okay (read: standard) to, for example, have PHP in pseudo-code do the following: If user has filled out form: print this html else: print this html When in fact I should use an HTML template, with the PHP in a sep. file. And in that scenario, PHP and Python are on an even fighting field and it's probably up to my own programming language tastes. A: You can't easily compare PHP and Python. PHP is a web processing framework that is designed specifically as an Apache plug-in. It includes HTTP protocol handling as well as a programming language. Python is "just" a programming language. There are many Python web frameworks to plug Python into Apache. There's mod_wsgi, CGI's, as well as web application frameworks of varying degrees of sophistication. The "use to put PHP where I want" is not really an appropriate way to judge Python as a language for building web applications. A framework (like Pylons, Django, TurboGears, etc.) separates the presentation (HTML templates) from programming from database access. PHP mixes all three aspects of a web application into a single thing -- the PHP language. If you want to switch from PHP to Python you must do the following. Start with no preconception, no bias, nothing. Start fresh with a tutorial on the framework you've chosen. Do the entire tutorial without comparing anything you're doing to PHP. Start fresh on solving your chosen problem with the framework you've chosen. Build the entire thing without comparing anything you're doing to PHP. Once you've built something using a Python-based web framework -- without comparing anything to PHP -- you can step back and compare and contrast the two things. Folks who ask questions like Python - substr, java and python equivalent of php's foreach($array as $key => $value), what is python equivalent to PHP $_SERVER? are sometimes trying to map their PHP knowledge to Python. Don't Do This. The only way to start using a Python web framework is to start completely fresh. Edit All Python web frameworks have some "presentation logic" capabilities in their template engines. This is a "slippery slope" where you can easily turn a simple template into a mess. Clearly, a simple {% if %} and {% for %} construct are helpful to simplify conditional and repetitive elements of an HTML template. Beyond that, it starts to get murky how much power should be put into the tag language. At one extreme we have PHP/JSP and related technologies where the template engine can (and often does) do everything. This turns into a mess. In the middle are Jinja and Mako where the template engine can do a great deal. At the other end is Django where the template engine does as little as possible to avoid mingling presentation and processing logic. A: Well. First of all, you must know that mixing code with presentation is considered bad practice. Although you can use template engines that let you mix any python code inside the html, like Mako, usually python web libraries and frameworks tend to favor writing logic on a python script file and a separate html template to render the results. That said, python is much easier than PHP. But you can only be productive in python if you're willing to learn its way of programming. If you want PHP, nothing is more PHP than PHP itself, and python takes different approaches, so maybe you're going to be frustrated because things are not like you're used to. However, if you are searching for a new, better way of doing things, python is for you. After reading the basic python tutorial, try some python web framework like pylons and see for yourself. A: If you were to progress onto an MVC web framework, you would find that actually both PHP and Python use HTML in a similar way. The work around the request is done in the controllers, using the model for grabbing data. The results are then posted to a view, which is in effect a template of HTML. You can have a well laid out HTML file as a view and your controller will simply tell it what to populate itself with. In PHP this is often done with the alternative PHP syntax. A: First of all: Escaping out into a programming language from HTML is nice when you do small hacks. For an actual production web application I would never do that. Mixing in HTML into the programming language is impractical. It's better to use some sort of templating language. Indentation is not an issue there in Python more than any other language. But, to answer your actual question: It's the same. There are Python templating languages that allow you to escape out into Python as well, should you want to. I would rather recommend that you don't, but you can. A: Yes, PHP is easy to use in that way. But that's not the recommended way! It's better you use templates. In fact, you can use the same with PHP too.
How nicely does Python 'flow' with HTML as compared to PHP?
I'm thinking of switching from using PHP to Python for web applications, but I was wondering if Python is as adept at weaving in and out of HTML as PHP is. Essentially, I find it very easy/intuitive to use <? and ?> to put PHP where I want, and am then free to arrange/organize my HTML however I want. Is it just as easy to do this with Python? Fundamentally, the question is: is working with HTML with Python similar to working with HTML with PHP in terms of ease-of-use? Edit: I guess to help clear up some of the confusion in the comments below, I get the intuition that PHP would be better than Python at organizing the front-end, presentation part of a website, while Python would excel at the back-end part (the actual programming...). The question is - am I wrong and is Python just as good as PHP for front-end? Edit of my Edit: Ah, I'm starting to understand the error of my ways; it seems I have unknowingly picked up some bad habits. I always thought it was okay (read: standard) to, for example, have PHP in pseudo-code do the following: If user has filled out form: print this html else: print this html When in fact I should use an HTML template, with the PHP in a sep. file. And in that scenario, PHP and Python are on an even fighting field and it's probably up to my own programming language tastes.
[ "You can't easily compare PHP and Python.\nPHP is a web processing framework that is designed specifically as an Apache plug-in. It includes HTTP protocol handling as well as a programming language.\nPython is \"just\" a programming language. There are many Python web frameworks to plug Python into Apache. There's mod_wsgi, CGI's, as well as web application frameworks of varying degrees of sophistication.\nThe \"use to put PHP where I want\" is not really an appropriate way to judge Python as language for building web applications.\nA framework (like Pylons, Django, TurboGears, etc.) separates the presentation (HTML templates) from programming from database access. PHP mixes all three aspects of a web application into a single thing -- the PHP language. \nIf you want to switch from PHP to Python you must do the following.\n\nStart with no preconception, no bias, nothing.\nStart fresh with a tutorial on the framework you've chosen. Do the entire tutorial without comparing anything you're doing to PHP.\nStart fresh on solving your chosen problem with the framework you've chosen. Build the entire thing without comparing anything you're doing to PHP.\n\nOnce you've built something using a Python-based web framework -- without comparing anything to PHP -- you can step back and compare and contrast the two things. \nFolks who ask questions like Python - substr, java and python equivalent of php's foreach($array as $key => $value), what is python equivalent to PHP $_SERVER? are sometimes trying to map their PHP knowledge to Python. Don't Do This.\nThe only way to start using a Python web framework is to start completely fresh.\n\nEdit\nAll Python web frameworks have some \"presentation logic\" capabilities in their template engines. This is a \"slippery slope\" where you can easily turn a simple template into a mess. Clearly, a simple {% if %} and {% for %} construct are helpful to simplify conditional and repetitive elements of an HTML template.\nBeyond that, it starts to get murky how much power should be put into the tag language.\nAt one extreme we have PHP/JSP and related technologies where the template engine can (and often does) do everything. This turns into a mess. Is the middle are Jinja and Mako where the template engine can do a great deal. At the other end is Django where the template engine does as little as possible to avoid mingling presentation and processing logic.\n", "Well. First of all, you must know that mixing code with presentation is considered bad practice. Although you can use template engines that let you mix any python code inside the html, like Mako, usually python web libraries and frameworks tend to favor writing logic on a python script file and a separate html template to render the results.\nThat said, python is much easier than PHP.\nBut you can only be productive in python If you're willing to learn its way of programming. \nIf you want PHP, nothing is more PHP than PHP itself, and python takes different approaches, so maybe you're going to be frustrated because things are not like you're used to.\nHowever, if you are searching for a new, better way of doing things, python is for you.\nAfter reading the basic python tutorial, try some python web framework like pylons and see for yourself.\n", "If you were to progress onto a MVC web framework, you would find that actually both PHP and Python use HTML in a similar way.\nThe work around the request is done in the controllers, using the model for grabbing data. 
The results are then posted to a view, which is in effect a template of HTML.\nYou can have a well layed out HTML file as a view and your controller will simply tell it what to populate itself with.\nIn PHP this is often done with the alternative PHP syntax.\n", "First of all: Escaping out into a programming language from HTML is nice when you do small hacks. For an actual production web application I would never to that.\nMixing in HTML into the programming language is unpractical. It's better to use some sort of templating language. Indentation is not an issue there in Python more than any other language.\nBut, to answer your actual question: It's the same. There are Python templating languages that allow you to escape out into Python as well, should you want to. I would rather recommend that you don't, but you can.\n", "Yes, PHP is easy to use in that way. But that's not the recommended way! It's better you use templates. In fact, you can use the same with PHP too.\n" ]
[ 26, 2, 2, 1, 0 ]
[]
[]
[ "html", "php", "python" ]
stackoverflow_0001311789_html_php_python.txt
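To make the separation the answers keep recommending concrete, a stdlib-only sketch using string.Template: the branch from the question's pseudo-code stays in Python, while the markup lives in one template (real projects would reach for Django templates, Mako or Jinja instead):

    from string import Template

    page = Template('<html><body><p>Hello, $name!</p></body></html>')

    def render(form_filled, name):
        if form_filled:
            return page.substitute(name=name)
        return page.substitute(name='stranger')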
Q: Problem while running python script in java code When I run a python script from the below java code, where an input file is given as an argument to the python script as well as an "-v" option, I get an IOException String pythonScriptPath="\"C:\\Program Files\\bin\\CsvFile.py\""; String Filepath="C:\\Documents and Settings\\user\\Desktop\\arbit.csv"; String[] cmd = new String[4]; cmd[0] = "python"; cmd[1] = pythonScriptPath; cmd[2] = "-v"; cmd[3] = Filepath; Runtime rt = Runtime.getRuntime(); Process pr = rt.exec(cmd); The following is the error: CreateProcess: python "C:\Program Files\bin\CsvFile.py" -v "C:\Documents and Settings\user \Desktop\arbit.csv" error=2 at java.lang.ProcessImpl.create(Native Method) at java.lang.ProcessImpl.<init>(Unknown Source) at java.lang.ProcessImpl.start(Unknown Source) at java.lang.ProcessBuilder.start(Unknown Source) at java.lang.Runtime.exec(Unknown Source) at java.lang.Runtime.exec(Unknown Source) Can somebody please let me know how to solve this exception. Thanking You, Harsha A: error=2 means the Win32 CreateProcess function is returning an error code of 2, or ERROR_FILE_NOT_FOUND. Either it can't find your script, or (more likely, IMO) it can't find python.exe. If it's the latter, make sure your Python installation (possibly C:\Program Files\Python\bin, though I'm not sure) is in your system path. You can change your system path by going into the Control Panel and opening up "System". If you're using Vista or 7, click "Advanced system settings"; if you're using XP or 2000, choose the "Advanced" tab. Hit "Environment Variables", find "Path" or "PATH" under "System variables" and add your Python bin directory to the beginning of the string (it's semicolon-delimited). A: You don't need all the extra quotes. String pythonScriptPath="C:\\Program Files\\bin\\CsvFile.py"; This should work fine. A: Is Python in your path ? I would most likely qualify it with a path to determine precisely which python you're picking up (if any) You don't need the quotes around the Python script argument A: Your variable Filepath does not match what you actually sent it according to your program output. The error lists it as "C:\Documents and Settings\user \Desktop\arbit.csv" with an extraneous space after the user profile name which is the most likely cause for a File Not Found error.
Problem while running python script in java code
When I run a python script from the below java code, where an input file is given as an argument to the python script as well as an "-v" option, I get an IOException String pythonScriptPath="\"C:\\Program Files\\bin\\CsvFile.py\""; String Filepath="C:\\Documents and Settings\\user\\Desktop\\arbit.csv"; String[] cmd = new String[4]; cmd[0] = "python"; cmd[1] = pythonScriptPath; cmd[2] = "-v"; cmd[3] = Filepath; Runtime rt = Runtime.getRuntime(); Process pr = rt.exec(cmd); The following is the error: CreateProcess: python "C:\Program Files\bin\CsvFile.py" -v "C:\Documents and Settings\user \Desktop\arbit.csv" error=2 at java.lang.ProcessImpl.create(Native Method) at java.lang.ProcessImpl.<init>(Unknown Source) at java.lang.ProcessImpl.start(Unknown Source) at java.lang.ProcessBuilder.start(Unknown Source) at java.lang.Runtime.exec(Unknown Source) at java.lang.Runtime.exec(Unknown Source) Can somebody please let me know how to solve this exception. Thanking You, Harsha
[ "error=2 means the Win32 CreateProcess function is returning an error code of 2, or ERROR_FILE_NOT_FOUND. Either it can't find your script, or (more likely, IMO) it can't find python.exe. If it's the latter, make sure your Python installation (possibly C:\\Program Files\\Python\\bin, though I'm not sure) is in your system path.\nYou can change your system path by going into the Control Panel and opening up \"System\". If you're using Vista or 7, click \"Advanced system settings\"; if you're using XP or 2000, choose the \"Advanced\" tab. Hit \"Environment Variables\", find \"Path\" or \"PATH\" under \"System variables\" and add your Python bin directory to the beginning of the string (it's semicolon-delimited).\n", "You don't need all the extra quotes.\nString pythonScriptPath=\"C:\\\\Program Files\\\\bin\\\\CsvFile.py\";\n\nThis should work fine.\n", "\nIs Python in your path ? I would most likely qualify it with a path to determine precisely which python you're picking up (if any)\nYou don't need the quotes around the Python script argument\n\n", "Your variable Filepath does not match what you actually sent it according to your program output. The error lists it as \"C:\\Documents and Settings\\user \\Desktop\\arbit.csv\" with an extraneous space after the user profile name which is the most likely cause for a File Not Found error.\n" ]
[ 2, 0, 0, 0 ]
[]
[]
[ "java", "python" ]
stackoverflow_0001311513_java_python.txt
Q: Help with Python while loop behaviour I have a script that uses a simple while loop to display a progress bar but it doesn't seem to be working as I expected: count = 1 maxrecords = len(international) p = ProgressBar("Blue") t = time while count < maxrecords: print 'Processing %d of %d' % (count, maxrecords) percent = float(count) / float(maxrecords) * 100 p.render(int(percent)) t.sleep(0.5) count += 1 It appears to be looping at "p.render..." and does not go back to "print 'Processing %d of %d...'". UPDATE: My apologies. It appears that ProgressBar.render() removes the output of "print 'Processing..." when it renders the progress bar. The progress bar is from http://nadiana.com/animated-terminal-progress-bar-in-python A: I see you are using the ProgressBar implementation on my website. If you want to print a message you can use the message argument in render p.render(percent, message='Processing %d of %d' % (count, maxrecords)) A: That's not the way to write a loop in Python. maxrecords = len(international) p = ProgressBar("Blue") for count in range(1, maxrecords): print 'Processing %d of %d' % (count, maxrecords) percent = float(count) / float(maxrecords) * 100 p.render(int(percent)) time.sleep(0.5) If you actually want to do something with the record, rather than just render the bar, you would do this: maxrecords = len(international) for count, record in enumerate(international): print 'Processing %d of %d' % (count, maxrecords) percent = float(count) / float(maxrecords) * 100 p.render(int(percent)) process_record(record) # or whatever the function is A: What is the implementation for ProgressBar.render()? I assume that it is outputting terminal control characters that move the cursor so that previous output is overwritten. This can create the false impression that the control flow isn't working as it should be. A: (1) [not part of the problem, but ...] t = time followed much later by t.sleep(0.5) would be a source of annoyance to anyone seeing the bare t and having to read backwards to find what it is. (2) [not part of the problem, but ...] count can never enter the loop with the same value as maxrecords. E.g. if maxrecords is 10, the code in the loop is executed only 9 times. (3) There is nothing in the code that you showed that would support the idea that it is "looping at p.render()" -- unless the render method itself loops if its arg is zero, which will be the case if maxrecords is 17909. Try replacing the p.render(....) temporarily with (say) print "pretend-render: pct =", int(percent)
Help with Python while loop behaviour
I have a script that uses a simple while loop to display a progress bar but it doesn't seem to be working as I expected: count = 1 maxrecords = len(international) p = ProgressBar("Blue") t = time while count < maxrecords: print 'Processing %d of %d' % (count, maxrecords) percent = float(count) / float(maxrecords) * 100 p.render(int(percent)) t.sleep(0.5) count += 1 It appears to be looping at "p.render..." and does not go back to "print 'Processing %d of %d...'". UPDATE: My apologies. It appears that ProgressBar.render() removes the output of "print 'Processing..." when it renders the progress bar. The progress bar is from http://nadiana.com/animated-terminal-progress-bar-in-python
[ "I see you are using the ProgressBar implementation on my website. If you want to print a message you can use the message argument in render\np.render(percent, message='Processing %d of %d' % (count, maxrecords))\n\n", "That's not the way to write a loop in Python.\nmaxrecords = len(international)\np = ProgressBar(\"Blue\")\nfor count in range(1, maxrecords):\n print 'Processing %d of %d' % (count, maxrecords)\n percent = float(count) / float(maxrecords) * 100\n p.render(int(percent))\n time.sleep(0.5)\n\nIf you actually want to do something with the record, rather than just render the bar, you would do this:\nmaxrecords = len(international)\nfor count, record in enumerate(international):\n print 'Processing %d of %d' % (count, maxrecords)\n percent = float(count) / float(maxrecords) * 100\n p.render(int(percent))\n process_record(record) # or whatever the function is\n\n", "What is the implementation for ProgressBar.render()? I assume that it is outputting terminal control characters that move the cursor so that previous output is overwritten. This can create the false impression that the control flow isn't working as it should be.\n", "(1) [not part of the problem, but ...] t = time followed much later by t.sleep(0.5) would be a source of annoyance to anyone seeing the bare t and having to read backwards to find what it is.\n(2) [not part of the problem, but ...] count can never enter the loop with the same value as maxrecords. E.g. if maxrecords is 10, the code in the loop is eexcuted only 9 times.\n(3) There is nothing in the code that you showed that would support the idea that it is \"looping at p.render()\" -- unless the render method itself loops if its arg is zero, which will be the case if maxrecords is 17909. Try replacing the p.render(....) temporarily with (say) \nprint \"pretend-render: pct =\", int(percent)\n" ]
[ 5, 3, 2, 1 ]
[]
[]
[ "python", "while_loop" ]
stackoverflow_0001312421_python_while_loop.txt
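Merging the two top answers into one hedged sketch: iterate with enumerate instead of a hand-rolled counter, and pass the text through render's message argument so the progress bar no longer erases it (this assumes the ProgressBar from the page linked in the question):

    p = ProgressBar('Blue')
    maxrecords = len(international)
    for i, record in enumerate(international):
        count = i + 1
        percent = count * 100 // maxrecords
        p.render(percent, message='Processing %d of %d' % (count, maxrecords))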
Q: display a QMessageBox PyQT when a different combobox /list box item is selected I have a combo box cbLayer and a function do_stuff of the following form: def do_stuff(item_selected_from_cbLayer): new_list = [] # do stuff based on item_selected_from_combobox and put the items in new_list return new_list How can I get a QMessageBox to pop up whenever a different item is selected in the following form: QMessageBox.warning(self, "items: ", do_stuff(cb_selected_item)) A: Write a method or function that contains this code and attach it to the combo box's signal currentIndexChanged: def __init__(self): ... QObject.connect(self.cbLayer, SIGNAL("currentIndexChanged(int)"), self.warn) def warn(self, index): QMessageBox.warning(self, "items: ", do_stuff(cbLayer.itemData(index)) ) def do_stuff(self, item): QMessageBox.warning(self, str(item)) I didn't try this but it should get you started. Otherwise have a look at the PyQt examples.
display a QMessageBox PyQT when a different combobox /list box item is selected
I have a combo box cbLayer and a function do_stuff of the following form: def do_stuff(item_selected_from_cbLayer): new_list = [] # do stuff based on item_selected_from_combobox and put the items in new_list return new_list How can I get a QMessageBox to pop up whenever a different item is selected in the following form: QMessageBox.warning(self, "items: ", do_stuff(cb_selected_item))
[ "Write a method or function that contains this code and attach it to the combo boxes signal currentIndexChanged:\ndef __init__(self):\n ...\n QObject.connect(self.cbLayer, SIGNAL(\"currentIndexChanged(int)\"), self.warn)\n\ndef warn(index):\n QMessageBox.warning(self, \"items: \", do_stuff(cbLayer.itemData(index)) )\n\ndef do_stuff(self, item):\n QMessageBox.warning(self, str(item))\n\nI didn't try this but it should get you started. Otherwise have a look at the PyQt examples.\n" ]
[ 1 ]
[]
[]
[ "pyqt", "python" ]
stackoverflow_0001312598_pyqt_python.txt
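A self-contained variant of the answer's sketch, with the names fully qualified and a title argument supplied to QMessageBox.warning; do_stuff is the asker's function, assumed to return a list of strings:

    from PyQt4 import QtGui
    from PyQt4.QtCore import SIGNAL

    class Form(QtGui.QWidget):
        def __init__(self):
            QtGui.QWidget.__init__(self)
            self.cbLayer = QtGui.QComboBox(self)
            self.cbLayer.addItems(['layer one', 'layer two'])  # hypothetical items
            self.connect(self.cbLayer, SIGNAL('currentIndexChanged(int)'),
                         self.warn)

        def warn(self, index):
            item = str(self.cbLayer.itemText(index))
            QtGui.QMessageBox.warning(self, 'items:',
                                      '\n'.join(do_stuff(item)))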
Q: Python - ambiguity with decorators receiving a single arg I am trying to write a decorator that gets a single arg, i.e @Printer(1) def f(): print 3 So, naively, I tried: class Printer: def __init__(self,num): self.__num=num def __call__(self,func): def wrapped(*args,**kargs): print self.__num return func(*args,**kargs) return wrapped This is ok, but it also works as a decorator receiving no args, i.e @Printer def a(): print 3 How can I prevent that?
Python - ambiguity with decorators receiving a single arg
I am trying to write a decorator that gets a single arg, i.e @Printer(1) def f(): print 3 So, naively, I tried: class Printer: def __init__(self,num): self.__num=num def __call__(self,func): def wrapped(*args,**kargs): print self.__num return func(*args,**kargs) return wrapped This is ok, but it also works as a decorator receiving no args, i.e @Printer def a(): print 3 How can I prevent that?
[ "Well, it's already effectively prevented, in the sense that calling a() doesn't work.\nBut to stop it as the function is defined, I suppose you'd have to change __init__ to check the type of num:\ndef __init__(self,num):\n if callable(num):\n raise TypeError('Printer decorator takes an argument')\n self.__num=num\n\nI don't know if this is really worth the bother, though. It already doesn't work as-is; you're really asking to enforce the types of arguments in a duck-typed language.\n", "Are you sure it works without arguments? If I leave them out I get this error message:\n\nTraceback (most recent call last):\n File \"/tmp/blah.py\", line 28, in ?\n a()\nTypeError: __call__() takes exactly 2 arguments (1 given)\n\nYou could try this alternative definition, though, if the class-based one doesn't work for you.\ndef Printer(num):\n def wrapper(func):\n def wrapped(*args, **kwargs):\n print num\n return func(*args, **kwargs)\n return wrapped\n\n return wrapper\n\n", "The decorator is whatever the expression after @ evaluates to. In the first case, that's an instance of Printer, so what happens is (pretty much) equivalent to\ndecorator = Printer(1) # an instance of Printer, the \"1\" is given to __init__\n\ndef f():\n print 3\nf = decorator(f) # == dec.__call__(f) , so in the end f is \"wrapped\"\n\nIn the second case, that's the class Printer, so you have\ndecorator = Printer # the class\n\ndef a():\n print 3\na = decorator(a) # == Printer(a), so a is an instance of Printer\n\nSo, even though it works (because the constructor of Printer takes one extra argument, just like __call__), it's a totally different thing.\nThe python way of preventing this usually is: Don't do it. Make it clear (e.g. in the docstring) how the decorator works, and then trust that people do the right thing.\nIf you really want the check, Eevee's answer provides a way to catch this mistake (at runtime, of course---it's Python).\n", "I can't think of an ideal answer, but if you force the Printer class to be instantiated with a keyword argument, it can never try to instantiate via the decorator itself, since that only deals with non-keyword arguments:\ndef __init__(self,**kwargs):\n self.__num=kwargs[\"num\"]\n\n...\n@Printer(num=1)\ndef a():\n print 3\n\n" ]
[ 4, 1, 1, 1 ]
[]
[]
[ "arguments", "decorator", "python" ]
stackoverflow_0001312785_arguments_decorator_python.txt
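Pulling the pieces of this record together, a sketch of the asker's class with the callable guard from the first answer folded in (Python 2 print statements, matching the record):

    class Printer(object):
        def __init__(self, num):
            if callable(num):
                raise TypeError('use @Printer(arg), not bare @Printer')
            self.__num = num

        def __call__(self, func):
            def wrapped(*args, **kargs):
                print self.__num
                return func(*args, **kargs)
            return wrapped

    @Printer(1)
    def f():
        print 3

    f()   # prints 1, then 3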
Q: Inserting python tuple in a MySQL database I need to insert a python tuple (of floats) into a MySQL database. In principle I could pickle it and insert it as a string, but then I would only be able to retrieve it through Python. An alternative is to serialize the tuple to XML and store the XML string. What solutions do you think would also be possible, with an eye toward storing other stuff (e.g. a list, or an object). Recovering it from other languages is a plus. A: Make another table and do one-to-many. Don't try to cram a programming language feature into a database as-is if you can avoid it. If you absolutely need to be able to store an object down the line, your options are a bit more limited. YAML is probably the best balance of human-readable and program-readable, and it has some syntax for specifying classes you might be able to use. A: How about JSON ... it's compact, and there are interpreters available for it in most languages. A: I'd look at serializing it to JSON, using the simplejson package, or the built-in json package in python 2.6. It's simple to use in python, importable by practically every other language, and you don't have to make all of the "what tag should I use? what attributes should this have?" decisions that you might in XML.
Inserting python tuple in a MySQL database
I need to insert a python tuple (of floats) into a MySQL database. In principle I could pickle it and insert it as a string, but then I would only be able to retrieve it through Python. An alternative is to serialize the tuple to XML and store the XML string. What solutions do you think would also be possible, with an eye toward storing other stuff (e.g. a list, or an object). Recovering it from other languages is a plus.
[ "Make another table and do one-to-many. Don't try to cram a programming language feature into a database as-is if you can avoid it.\nIf you absolutely need to be able to store an object down the line, your options are a bit more limited. YAML is probably the best balance of human-readable and program-readable, and it has some syntax for specifying classes you might be able to use.\n", "How about JSON ... it's compact, and there are interpreters available for it in most languages.\n", "I'd look at serializing it to JSON, using the simplejson package, or the built-in json package in python 2.6.\nIt's simple to use in python, importable by practically every other language, and you don't have to make all of the \"what tag should I use? what attributes should this have?\" decisions that you might in XML.\n" ]
[ 3, 2, 2 ]
[]
[]
[ "mysql", "python" ]
stackoverflow_0001313000_mysql_python.txt
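A sketch of the JSON route suggested above; one wrinkle worth a comment: JSON has no tuple type, so the value comes back as a list and must be rebuilt (the table and column names are hypothetical):

    import json   # use the simplejson package on Python < 2.6

    point = (1.5, 2.25, 3.0)
    text = json.dumps(point)             # '[1.5, 2.25, 3.0]'
    # e.g. cursor.execute("INSERT INTO points (coords) VALUES (%s)", (text,))
    restored = tuple(json.loads(text))   # JSON round-trips lists, not tuples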
Q: Porting python app to mobile platforms I have a python app running fine on Windows, Linux and Mac which I would like to port to multiple mobile platforms such as Blackberry, Windows Mobile, Palm, Android and iPhone. I have a couple of ideas: port app to platform supporting some kind of Python like Android and Windows Mobile port app to Java to target most platforms right away What would you recommend? A: Here's what we're doing ... Make the app a generic web application/website. Host it on your server and have your server detect the type of browser. If it is a mobile browser, show the small-screen version of your app. Once you get that going, create individual apps for the particular phones/mobile hardware. Those will each have a single web browser control in them. The web browser will have a hardcoded URL which points to your web site. For example, write a java wrapper for Google Android. Write an Objective-C wrapper for Cocoa Touch (iPhone using XCode). Your wrapper for Windows Mobile will be in a .Net Framework app in C# or VB.Net (or IronPython for that matter). Here's how to do it for Android: http://developerlife.com/tutorials/?p=369 Here's how to do it for Windows Mobile: http://msdn.microsoft.com/en-us/library/ms229657.aspx The wrapper can then access the phone's firmware for motion, GPS info, sounds, and so forth. The beauty of this is You can now submit each app to the individual platform's AppStore which is the #1 way to get new customers. You have one set of source and one place to upgrade. When you upgrade in one location, everyone gets it immediately. A: Jython is out of the question, so either go with supported phones (Windows Mobile, Android, Nokia S60), or rewrite in J2ME.
Porting python app to mobile platforms
I have a python app running fine on Windows, Linux and Mac which I would like to port to multiple mobile platforms such as Blackberry, Windows Mobile, Palm, Android and iPhone. I have a couple of ideas: port app to platform supporting some kind of Python like Android and Windows Mobile port app to Java to target most platforms right away What would you recommend?
[ "Here's what we're doing ...\nMake the app a generic web application/website. Host it on your server and have your server detect the type of browser. If it is a mobile browser, show the small-screen version of your app.\nOnce you get that going, create individual apps for the particular phones/mobile hardware. Those will each have a single web browser control in them. The web browser will have a hardcoded URL which points to your web site.\nFor example, write a java wrapper for Google Android. Write an Objective-C wrapper for Cocoa Touch (iPhone using XCode). Your wrapper for Windows Mobile will be in a .Net Framework app in C# or VB.Net (or IronPython for that matter).\nHere's how to do it for Android: http://developerlife.com/tutorials/?p=369\nHere's how to do it for Windows Mobile: http://msdn.microsoft.com/en-us/library/ms229657.aspx\nThe wrapper can then access the phone's firmware for motion, GPS info, sounds, and so forth.\nThe beauty of this is\n\nYou can now submit each app to the\nindividual platform's AppStore which\nis the #1 way to get new customers.\nYou have one set of source and one\nplace to upgrade. When you upgrade\nin one location, everyone gets it\nimmediately.\n\n", "Jython is out of the question, so either go with supported phones (Windows Mobile, Android, Nokia S60), or rewrite in J2ME.\n" ]
[ 4, 1 ]
[]
[]
[ "mobile", "porting", "python" ]
stackoverflow_0001313164_mobile_porting_python.txt
Q: Using list_filter with Intermediary Models We have three models, Artist: class Artist(models.Model): family_name = models.CharField(max_length=50) given_name = models.CharField(max_length=50) Group: class Group(models.Model): name = models.CharField(max_length=50) members = models.ManyToManyField(Artist, through='Membership') and Membership: class Membership(models.Model): artist = models.ForeignKey(Artist) group = models.ForeignKey(Group) joined = models.DateField() Membership is an intermediary model connecting Artist and Group with some extra data (date joined, etc.) I was asked to see if one could filter artists by what group they're in but I can't figure out how to do that. A: If you define a m2m between artist and group using through=Membership, you can set up a filter directly on group without going through membership. Can't remember if the syntax is list_filter = ['group'] or list_filter = ['group_set'] or something similar.
Using list_filter with Intermediary Models
We have three models, Artist: class Artist(models.Model): family_name = models.CharField(max_length=50) given_name = models.CharField(max_length=50) Group: class Group(models.Model): name = models.CharField(max_length=50) members = models.ManyToManyField(Artist, through='Membership') and Membership: class Membership(models.Model): artist = models.ForeignKey(Artist) group = models.ForeignKey(Group) joined = models.DateField() Membership is an intermediary model connecting Artist and Group with some extra data (date joined, etc.) I was asked to see if one could filter artists by what group they're in but I can't figure out how to do that.
[ "If you define a m2m between artist and group using through=Membership, you can set up a filter directly on group without going through membership. Can't remember if the syntax is \nlist_filter = ['group']\n\nor\nlist_filter = ['group_set']\n\nor something similar.\n" ]
[ 1 ]
[]
[]
[ "django", "django_admin", "python" ]
stackoverflow_0001309348_django_django_admin_python.txt
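A sketch of how the answer's suggestion might be wired up in admin.py. Whether the bare 'group' name is accepted by list_filter here is exactly the point the answer is unsure about, so treat the filter value as the thing to experiment with (the import path is a placeholder):

    from django.contrib import admin
    from myapp.models import Artist   # 'myapp' is a hypothetical app name

    class ArtistAdmin(admin.ModelAdmin):
        # Try 'group' first, per the answer; fall back to the
        # reverse-accessor name if your Django version rejects it.
        list_filter = ['group']

    admin.site.register(Artist, ArtistAdmin)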
Q: Python: what kind of literal delimiter is "better" to use? What is the best literal delimiter in Python and why? Single ' or double "? And most important, why? I'm a beginner in Python and I'm trying to stick with just one. I know that in PHP, for example " is preferred, because PHP does not try to search for the 'string' variable. Is the same case in Python? A: ' because it's one keystroke less than ". Save your wrists! They're otherwise identical (except you have to escape whichever you choose to use, if they appear inside the string). A: Consider these strings: "Don't do that." 'I said, "okay".' """She said, "That won't work".""" Which quote is "best"? A: Semantically there is no difference in Python; use either. Python also provides the handy triple string delimiter """ or ''' which can simplify multi-line quotes. There is also the raw string literal (r"..." or r'...') to inhibit \ escapes. The Language Reference has all the details. A: For string constants containing a single quote use the double quote as delimiter. The other way around, if you need a double quote inside. Quick, shiftless typing leads to single quote delimiters. >>> "it's very simple" >>> 'reference to the "book"' A: Single and double quotes act identically in Python. Escapes (\n) always work, and there is no variable interpolation. (If you don't want escapes, you can use the r flag, as in r"\n".) Since I'm coming from a Perl background, I have a habit of using single quotes for plain strings and double-quotes for formats used with the % operator. But there is really no difference. A: Other answers are about nested quoting. Another point of view I've come across, but I'm not sure I subscribe to, is to use single quotes (') for characters (which are strings, but ord/chr are quite picky) and to use double-quotes for strings. Which disambiguates between a string that is supposed to be one character and one that just happens to be one character. Personally I find most touch typists aren't affected noticeably by the "load" of using the shift-key. YMMV on that part. Going down the "it's faster to not use the shift" road is a slippery slope. It's also faster to use hyper-condensed variable/function/class/module names. Everyone just so loves the fast and short 8.3 DOS file names too. :) Pick what makes semantic sense to you, then optimize. A: This is a rule I have heard about: ") If the string is for human consumption, that is interface text or output, use "" ') If the string is a specifier, like a dictionary key or an option, use '' I think a well-enforced rule like that can make sense for a project, but it's nothing that I would personally care much about. I like the above, since I read it, but I always use "" (since I learned C first way back?). A: I don't think there is a single best string delimiter. I like to use different delimiters to indicate different kinds of string. Specifically, I like to use "..." to delimit strings that are used for interpolation or that are natural language messages, and '...' to delimit small symbol-like strings. This gives me a subtle extra clue to the expected use for the string literal. I try to always use raw strings (r"...") for regular expressions because (1) I don't have to escape backslash characters and (2) my editor recognises this convention and does syntax highlighting inside the regex. The stylistic issues of single- vs. double-quotes are covered in question 56011.
Python: what kind of literal delimiter is "better" to use?
What is the best literal delimiter in Python and why? Single ' or double "? And most important, why? I'm a beginner in Python and I'm trying to stick with just one. I know that in PHP, for example " is preferred, because PHP does not try to search for the 'string' variable. Is the same case in Python?
[ "' because it's one keystroke less than \". Save your wrists!\nThey're otherwise identical (except you have to escape whichever you choose to use, if they appear inside the string).\n", "Consider these strings:\n\"Don't do that.\"\n'I said, \"okay\".'\n\"\"\"She said, \"That won't work\".\"\"\"\n\nWhich quote is \"best\"?\n", "Semantically there is no difference in Python; use either. Python also provides the handy triple string delimiter \"\"\" or ''' which can simplify multi-line quotes. There is also the raw string literal (r\"...\" or r'...') to inhibit \\ escapes. The Language Reference has all the details.\n", "For string constants containing a single quote use the double quote as delimiter.\nThe other way around, if you need a double quote inside.\nQuick, shiftless typing leads to single quote delimiters.\n>>> \"it's very simple\"\n>>> 'reference to the \"book\"'\n\n", "Single and double quotes act identically in Python. Escapes (\\n) always work, and there is no variable interpolation. (If you don't want escapes, you can use the r flag, as in r\"\\n\".)\nSince I'm coming from a Perl background, I have a habit of using single quotes for plain strings and double-quotes for formats used with the % operator. But there is really no difference.\n", "Other answers are about nested quoting. Another point of view I've come across, but I'm not sure I subscribe to, is to use single quotes (') for characters (which are strings, but ord/chr are quite picky) and to use double-quotes for strings. Which disambiguates between a string that is supposed to be one character and one that just happens to be one character.\nPersonally I find most touch typists aren't affected noticeably by the \"load\" of using the shift-key. YMMV on that part. Going down the \"it's faster to not use the shift\" road is a slippery slope. It's also faster to use hyper-condensed variable/function/class/module names. Everyone just so loves the fast and short 8.3 DOS file names too. :) Pick what makes semantic sense to you, then optimize.\n", "This is a rule I have heard about:\n\") If the string is for human consumption, that is interface text or output, use \"\"\n') If the string is a specifier, like a dictionary key or an option, use ''\nI think a well-enforced rule like that can make sense for a project, but it's nothing that I would personally care much about. I like the above, since I read it, but I always use \"\" (since I learned C first way back?).\n", "I don't think there is a single best string delimiter. I like to use different delimiters to indicate different kinds of string. Specifically, I like to use \"...\" to delimit strings that are used for interpolation or that are natural language messages, and '...' to delimit small symbol-like strings. This gives me a subtle extra clue to the expected use for the string literal.\nI try to always use raw strings (r\"...\") for regular expressions because (1) I don't have to escape backslash characters and (2) my editor recognises this convention and does syntax highlighting inside the regex.\nThe stylistic issues of single- vs. double-quotes are covered in question 56011.\n" ]
[ 9, 9, 3, 1, 0, 0, 0, 0 ]
[]
[]
[ "python", "string" ]
stackoverflow_0001312940_python_string.txt
Q: How can I scrape this frame? If you visit this link right now, you will probably get a VBScript error. On the other hand, if you visit this link first and then the above link (in the same session), the page comes through. The way this application is set up, the first page is meant to serve as a frame in the second (main) page. If you click around a bit, you'll see how it works. My question: How do I scrape the first page with Python? I've tried everything I can think of -- urllib, urllib2, mechanize -- and all I get is 500 errors or timeouts. I suspect the answer lies with mechanize, but my mechanize-fu isn't good enough to crack this. Can anyone help? A: It always comes down to the request/response model. You just have to craft a series of http requests such that you get the desired responses. In this case, you also need the server to treat each request as part of the same session. To do that, you need to figure out how the server is tracking sessions. It could be a number of things, from cookies to hidden inputs to form actions, post data, or query strings. If I had to guess I'd put my money on a cookie in this case (I haven't checked the links). If this holds true, you need to send the first request, save the cookie you get back, and then send that cookie along with the 2nd request. It could also be that the initial page will have buttons and links that get you to the second page. Those links will have something like <A href="http://cad.chp.ca.gov/iiqr.asp?Center=RDCC&LogNumber=0197D0820&t=Traffic%20Hazard&l=3358%20MYRTLE&b="> where a lot of the gobbledygook is generated by the first page. The "Center=RDCC&LogNumber=0197D0820&t=Traffic%20Hazard&l=3358%20MYRTLE&b=" part encodes some session information that you must get from the first page. And, of course, you might even need to do both. A: You might also try BeautifulSoup in addition to Mechanize. I'm not positive, but you should be able to parse the DOM down into the framed page. I also find Tamper Data to be a rather useful plugin when I'm writing scrapers.
How can I scrape this frame?
If you visit this link right now, you will probably get a VBScript error. On the other hand, if you visit this link first and then the above link (in the same session), the page comes through. The way this application is set up, the first page is meant to serve as a frame in the second (main) page. If you click around a bit, you'll see how it works. My question: How do I scrape the first page with Python? I've tried everything I can think of -- urllib, urllib2, mechanize -- and all I get is 500 errors or timeouts. I suspect the answer lies with mechanize, but my mechanize-fu isn't good enough to crack this. Can anyone help?
[ "It always comes down to the request/response model. You just have to craft a series of http requests such that you get the desired responses. In this case, you also need the server to treat each request as part of the same session. To do that, you need to figure out how the server is tracking sessions. It could be a number of things, from cookies to hidden inputs to form actions, post data, or query strings. If I had to guess I'd put my money on a cookie in this case (I haven't checked the links). If this holds true, you need to send the first request, save the cookie you get back, and then send that cookie along with the 2nd request.\nIt could also be that the initial page will have buttons and links that get you to the second page. Those links will have something like <A href=\"http://cad.chp.ca.gov/iiqr.asp?Center=RDCC&LogNumber=0197D0820&t=Traffic%20Hazard&l=3358%20MYRTLE&b=\"> where a lot of the gobbledygook is generated by the first page.\nThe \"Center=RDCC&LogNumber=0197D0820&t=Traffic%20Hazard&l=3358%20MYRTLE&b=\" part encodes some session information that you must get from the first page.\nAnd, of course, you might even need to do both.\n", "You might also try BeautifulSoup in addition to Mechanize. I'm not positive, but you should be able to parse the DOM down into the framed page.\nI also find Tamper Data to be a rather useful plugin when I'm writing scrapers.\n" ]
[ 8, 1 ]
[]
[]
[ "mechanize", "python", "screen_scraping", "vbscript" ]
stackoverflow_0001314052_mechanize_python_screen_scraping_vbscript.txt
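If the session tracking does turn out to be cookie-based, a minimal urllib2/cookielib sketch of the two-request dance described in the first answer might look like this. The second URL is the example link quoted in that answer, so its query-string values are session-specific and would really have to be pulled out of the first page:

    import cookielib, urllib2

    # One cookie jar shared by both requests, so the server sees one session.
    jar = cookielib.CookieJar()
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))

    # First request establishes the session and collects any cookies.
    first = opener.open('http://cad.chp.ca.gov/').read()

    # Second request carries the session cookie automatically.
    second = opener.open('http://cad.chp.ca.gov/iiqr.asp?'
                         'Center=RDCC&LogNumber=0197D0820'
                         '&t=Traffic%20Hazard&l=3358%20MYRTLE&b=').read()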
Q: Python test framework with support of non-fatal failures I'm evaluating "test frameworks" for automated system tests; so far I'm looking for a python framework. In py.test or nose I can't see something like the EXPECT macros I know from google testing framework. I'd like to make several assertions in one test while not aborting the test at the first failure. Am I missing something in these frameworks or does this not work? Does anybody have suggestions for python test frameworks usable for automated system tests? A: I was wanting something similar for functional testing that I'm doing using nose. I eventually came up with this: def raw_print(str, *args): out_str = str % args sys.stdout.write(out_str) class DeferredAsserter(object): def __init__(self): self.broken = False def assert_equal(self, expected, actual): outstr = '%s == %s...' % (expected, actual) raw_print(outstr) try: assert expected == actual except AssertionError: raw_print('FAILED\n\n') self.broken = True except Exception, e: raw_print('ERROR\n') traceback.print_exc() self.broken = True else: raw_print('PASSED\n\n') def invoke(self): assert not self.broken In other words, it's printing out strings indicating if a test passed or failed. At the end of the test, you call the invoke method which actually does the real assertion. It's definitely not preferable, but I haven't seen a Python testing framework that can handle this kind of testing. Nor have I gotten around to figuring out how to write a nose plugin to do this kind of thing. :-/ A: You asked for suggestions so I'll suggest robot framework. A: Oddly enough it sounds like you're looking for something like my claft (command line and filter tester). Something like it but far more mature. claft is (so far) just a toy I wrote to help students with programming exercises. The idea is to provide the exercises with simple configuration files that represent the program's requirements in terms which are reasonably human readable (and declarative rather than programmatic) while also being suitable for automated testing. claft runs all the defined tests, supplying arguments and inputs to each, checking return codes, and matching output (stdout) and error messages (stderr) against regular expression patterns. It collects all the failures in a list and prints the whole list at the end of each suite. It does NOT yet do arbitrary dialogs of input/output sequences. So far it just feeds data in then reads all data/errors out. It also doesn't implement timeouts and, in fact, doesn't even capture failed execute attempts. (I did say it's just a toy, so far, didn't I?). I also haven't yet implemented support for Setup, Teardown, and External Check scripts (though I have plans to do so). Bryan's suggestion of the "robot framework" might be better for your needs; though a quick glance through it suggests that it's considerably more involved than I want for my purposes. (I need to keep things simple enough that students new to programming can focus on their exercises and not spend lots of time fighting with setting up their test harness). You're welcome to look at claft and use it or derive your own solution from it (it's BSD licensed). Obviously you'd be welcome to contribute back. (It's on Bitbucket (http://www.bitbucket.org/) so you can use Mercurial to clone, and fork your own repository ... and submit a "pull request" if you ever want me to look at merging your changes back into my repo). Then again perhaps I'm misreading your question. 
A: Why not (in unittest, but this should work in any framework): class multiTests(MyTestCase): def testMulti(self, tests): tests( a == b ) tests( frobnicate()) ... assuming you've implemented MyTestCase so that a function is wrapped into testlist = [] x.testMulti(testlist.append) assert all(testlist)
Python test framework with support of non-fatal failures
I'm evaluating "test frameworks" for automated system tests; so far I'm looking for a python framework. In py.test or nose I can't see something like the EXPECT macros I know from google testing framework. I'd like to make several assertions in one test while not aborting the test at the first failure. Am I missing something in these frameworks or does this not work? Does anybody have suggestions for python test frameworks usable for automated system tests?
[ "I was wanting something similar for functional testing that I'm doing using nose. I eventually came up with this:\ndef raw_print(str, *args):\n out_str = str % args\n sys.stdout.write(out_str)\n\nclass DeferredAsserter(object):\n def __init__(self):\n self.broken = False\n def assert_equal(self, expected, actual):\n outstr = '%s == %s...' % (expected, actual)\n raw_print(outstr)\n try:\n assert expected == actual\n except AssertionError:\n raw_print('FAILED\\n\\n')\n self.broken = True\n except Exception, e:\n raw_print('ERROR\\n')\n traceback.print_exc()\n self.broken = True\n else:\n raw_print('PASSED\\n\\n')\n\n def invoke(self):\n assert not self.broken\n\nIn other words, it's printing out strings indicating if a test passed or failed. At the end of the test, you call the invoke method which actually does the real assertion. It's definitely not preferable, but I haven't seen a Python testing framework that can handle this kind of testing. Nor have I gotten around to figuring out how to write a nose plugin to do this kind of thing. :-/\n", "You asked for suggestions so I'll suggest robot framework. \n", "Oddly enough it sounds like you're looking for something like my claft (command line and filter tester). Something like it but far more mature.\nclaft is (so far) just a toy I wrote to help students with programming exercises. The idea is to provide the exercises with simple configuration files that represent the program's requirements in terms which are reasonably human readable (and declarative rather than programmatic) while also being suitable for automated testing.\nclaft runs all the defined tests, supplying arguments and inputs to each, checking return codes, and matching output (stdout) and error messages (stderr) against regular expression patterns. It collects all the failures in a list and prints the whole list at the end of each suite.\nIt does NOT yet do arbitrary dialogs of input/output sequences. So far it just feeds data in then reads all data/errors out. It also doesn't implement timeouts and, in fact, doesn't even capture failed execute attempts. (I did say it's just a toy, so far, didn't I?). I also haven't yet implemented support for Setup, Teardown, and External Check scripts (though I have plans to do so).\nBryan's suggestion of the \"robot framework\" might be better for your needs; though a quick glance through it suggests that it's considerably more involved than I want for my purposes. (I need to keep things simple enough that students new to programming can focus on their exercises and not spend lots of time fighting with setting up their test harness).\nYou're welcome to look at claft and use it or derive your own solution from it (it's BSD licensed). Obviously you'd be welcome to contribute back. (It's on Bitbucket (http://www.bitbucket.org/) so you can use Mercurial to clone, and fork your own repository ... and submit a \"pull request\" if you ever want me to look at merging your changes back into my repo).\nThen again perhaps I'm misreading your question.\n", "Why not (in unittest, but this should work in any framework):\nclass multiTests(MyTestCase):\n def testMulti(self, tests):\n tests( a == b )\n tests( frobnicate())\n ...\n\nassuming you've implemented MyTestCase so that a function is wrapped into\ntestlist = []\nx.testMulti(testlist.append)\nassert all(testlist)\n\n" ]
[ 2, 1, 1, 0 ]
[ "nose will only abort on the first failure if you pass the -x option at the command line.\ntest.py:\ndef test1():\n assert False\n\ndef test2():\n assert False\n\nwithout -x option:\n\nC:\\temp\\py>C:\\Python26\\Scripts\\nosetests.exe test.py\nFF\n======================================================================\nFAIL: test.test1\n----------------------------------------------------------------------\nTraceback (most recent call last):\n File \"C:\\Python26\\lib\\site-packages\\nose-0.11.1-py2.6.egg\\nose\\case.py\", line\n183, in runTest\n self.test(*self.arg)\n File \"C:\\temp\\py\\test.py\", line 2, in test1\n assert False\nAssertionError\n\n======================================================================\nFAIL: test.test2\n----------------------------------------------------------------------\nTraceback (most recent call last):\n File \"C:\\Python26\\lib\\site-packages\\nose-0.11.1-py2.6.egg\\nose\\case.py\", line\n183, in runTest\n self.test(*self.arg)\n File \"C:\\temp\\py\\test.py\", line 5, in test2\n assert False\nAssertionError\n\n----------------------------------------------------------------------\nRan 2 tests in 0.031s\n\nFAILED (failures=2)\n\n\nwith -x option:\n\nC:\\temp\\py>C:\\Python26\\Scripts\\nosetests.exe test.py -x\nF\n======================================================================\nFAIL: test.test1\n----------------------------------------------------------------------\nTraceback (most recent call last):\n File \"C:\\Python26\\lib\\site-packages\\nose-0.11.1-py2.6.egg\\nose\\case.py\", line\n183, in runTest\n self.test(*self.arg)\n File \"C:\\temp\\py\\test.py\", line 2, in test1\n assert False\nAssertionError\n\n----------------------------------------------------------------------\nRan 1 test in 0.047s\n\nFAILED (failures=1)\n\nYou might want to consider reviewing the nose documentation.\n" ]
[ -1 ]
[ "assert", "nose", "python", "testing" ]
stackoverflow_0001307367_assert_nose_python_testing.txt
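For comparison, here is a bare-bones sketch of EXPECT-style non-fatal assertions that works in plain nose or unittest with no framework support at all -- failures are collected and only raised once, at the end of the test (the checks themselves are made-up examples):

    failures = []

    def expect(condition, message):
        # Record the failure instead of aborting the test immediately.
        if not condition:
            failures.append(message)

    def test_several_things():
        del failures[:]                      # reset between tests
        expect(1 + 1 == 2, 'arithmetic is broken')
        expect('abc'.upper() == 'ABC', 'upper() is broken')
        assert not failures, 'soft failures: %s' % ', '.join(failures)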
Q: How to get pyodbc.connect to prompt? In my C++ programs, I'm used to the connection process prompting for a missing password or letting you select your own connection. When I use pyodbc.connect(), an exception is generated instead. Traceback (most recent call last): File "<pyshell#41>", line 1, in <module> c=pyodbc.connect('') Error: ('IM002', '[IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified (0) (SQLDriverConnectW)') The pyodbc documentation for Connection Strings states that pyodbc calls the C function SQLDriverConnect. The prompting behavior is controlled by the DriverCompletion parameter, and I can't see a way to set that parameter from Python. A: I'm not sure if you can, I just checked the source for this and it seems like it always sends SQL_DRIVER_NOPROMPT. See line 88 in connection.cpp
How to get pyodbc.connect to prompt?
In my C++ programs, I'm used to the connection process prompting for a missing password or letting you select your own connection. When I use pyodbc.connect(), an exception is generated instead. Traceback (most recent call last): File "<pyshell#41>", line 1, in <module> c=pyodbc.connect('') Error: ('IM002', '[IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified (0) (SQLDriverConnectW)') The pyodbc documentation for Connection Strings states that pyodbc calls the C function SQLDriverConnect. The prompting behavior is controlled by the DriverCompletion parameter, and I can't see a way to set that parameter from Python.
[ "I'm not sure if you can, I just checked the source for this and it seems like it always sends SQL_DRIVER_NOPROMPT.\nSee line 88 in connection.cpp\n" ]
[ 2 ]
[]
[]
[ "pyodbc", "python" ]
stackoverflow_0001314445_pyodbc_python.txt
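Given that pyodbc hardcodes SQL_DRIVER_NOPROMPT, the practical workaround is to do the prompting yourself in Python and hand pyodbc a complete connection string; a minimal sketch (the DSN and user name are placeholders):

    import getpass
    import pyodbc

    # pyodbc will not pop up the ODBC dialog, so collect the secret ourselves.
    password = getpass.getpass('Database password: ')
    conn = pyodbc.connect('DSN=mydsn;UID=myuser;PWD=%s' % password)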
Q: Is there an LGPL/Apache/BSD Python library for rendering modern HTML and Flash with a transparent background on Windows, Mac, Linux? I'm looking for a Python library that's suitable, with DOM access too. I don't mind if the Flash transparency doesn't carry over. PyQt's license isn't compatible with the project, and PySide isn't compiled cross-platform yet. Any thoughts? A: Actually, the pyside project now provides LGPL 2.1 python bindings for Qt. The first public release was on August 18th. It is being developed with support from Nokia. According to the release announcement the bindings are initially focused on Linux/X11 but are expected to support all Qt-supported platforms eventually.
Is there an LGPL/Apache/BSD Python library for rendering modern HTML and Flash with a transparent background on Windows, Mac, Linux?
I'm looking for a Python library that's suitable, with DOM access too. I don't mind if the Flash transparency doesn't carry over. PyQt's license isn't compatible with the project, and PySide isn't compiled cross-platform yet. Any thoughts?
[ "Actually, the pyside project now provides LGPL 2.1 python bindings for Qt.\nThe first public release was on August 18th. It is being developed with support from Nokia.\nAccording to the release announcement the bindings are initially focused on Linux/X11 but are expected to support all Qt-supported platforms eventually.\n" ]
[ 2 ]
[]
[]
[ "alphablending", "gecko", "python", "webkit", "widget" ]
stackoverflow_0001314596_alphablending_gecko_python_webkit_widget.txt
Q: cURL: https through a proxy I need to make a cURL request to a https URL, but I have to go through a proxy as well. Is there some problem with doing this? I have been having so much trouble doing this with curl and php, that I tried doing it with urllib2 in Python, only to find that urllib2 cannot POST to https when going through a proxy. I haven't been able to find any documentation to this effect with cURL, but I was wondering if anyone knew if this was an issue? A: I find testing with command-line curl a big help before moving to PHP/cURL. For example, w/ command-line, unless you've configured certificates, you'll need the -k switch. And to go through a proxy, it's the -x <proxyhost[:port]> switch. I believe the -k equivalent is curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, FALSE); I believe the -x equivalent is curl_setopt($curl, CURLOPT_PROXY, '<proxyhost[:port]>'); DISCLAIMER: I have not tested any of this. If you give more information about what you've tried, it might be helpful. A: No problem, as long as the proxy server supports the CONNECT method.
cURL: https through a proxy
I need to make a cURL request to a https URL, but I have to go through a proxy as well. Is there some problem with doing this? I have been having so much trouble doing this with curl and php, that I tried doing it with urllib2 in Python, only to find that urllib2 cannot POST to https when going through a proxy. I haven't been able to find any documentation to this effect with cURL, but I was wondering if anyone knew if this was an issue?
[ "I find testing with command-line curl a big help before moving to PHP/cURL.\nFor example, w/ command-line, unless you've configured certificates, you'll need the -k switch. And to go through a proxy, it's the -x <proxyhost[:port]> switch.\nI believe the -k equivalent is\ncurl_setopt($curl, CURLOPT_SSL_VERIFYPEER, FALSE);\n\nI believe the -x equivalent is\ncurl_setopt($curl, CURLOPT_PROXY, '<proxyhost[:port]>');\n\n\nDISCLAIMER: I have not tested any of this. If you give more information about what you've tried, it might be helpful.\n\n", "No problem, as long as the proxy server supports the CONNECT method.\n" ]
[ 2, 0 ]
[]
[]
[ "curl", "https", "php", "python", "urllib2" ]
stackoverflow_0001308760_curl_https_php_python_urllib2.txt
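For the Python side, the same two libcurl options map straight onto pycurl; a hedged sketch of an HTTPS POST through a proxy (the URL, proxy address, and post body are placeholders):

    import pycurl

    c = pycurl.Curl()
    c.setopt(pycurl.URL, 'https://example.com/endpoint')
    c.setopt(pycurl.PROXY, 'proxyhost:3128')
    c.setopt(pycurl.POSTFIELDS, 'key=value')
    c.setopt(pycurl.SSL_VERIFYPEER, 0)   # the -k equivalent; for testing only
    c.perform()
    c.close()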
Q: Cannot connect to server externally using Twisted library in Python I am trying to get a simple TCP server running on my server. I am using echoserv.py and echoclient.py on the Twisted examples page. When I run echoserv.py on the server, I can connect fine using the following in echoclient.py: reactor.connectTCP('localhost', 8000, factory) <- for a localhost connection reactor.connectTCP('192.168.0.250', 8000, factory) <- for a lan connection but when I try to connect remotely via the Internet, I use the following line in echoclient.py: reactor.connectTCP('mydomain.com', 8000, factory) However when I try to run echoclient.py, there is a pause, and then I get: connection failed: User timeout caused connection failure. I know it is doing something with my domain, because when I do a random domain, I get: connection failed: Connection was refused by other side: 111: Connection refused. All my ports are configured correctly for port 8000, and I'm sure it's not my ISP blocking the ports (I can use random ports all the time with other applications). I've also tried using ports besides 8000, but to no avail. Here is the port forwarding line in my router page if it helps: [X] tcp_server 192.168.0.250 TCP 8000/8000 always edit delete Any idea why this is happening? A: When you program your router to port-forward incoming connections to your inbound server, it actually works only if the clients (those who try to connect) are really outside your network, really coming from the cloud. You, from inside your network, can't use it; it won't work for you. You will have the feeling that it is not working, but it is. At least for those outside. Search for Port Forwarding Tester at Google to verify that. This one (first result of Google) works pretty well: http://www.yougetsignal.com/tools/open-ports/ A: Have you tried the following? Do a port scan on the IP of the machine running the server to confirm that the port is open. A simple Google search will give you plenty of options. If the port is shown to be open, try the remote connection via telnet instead of the twisted client script. Some system firewalls (such as Windows XP's) block at the application level and could be blocking your outgoing connection without you realizing it.
Cannot connect to server externally using Twisted library in Python
I am trying to get a simple TCP server running on my server. I am using echoserv.py and echoclient.py on the Twisted examples page. When I run echoserv.py on the server, I can connect fine using the following in echoclient.py: reactor.connectTCP('localhost', 8000, factory) <- for a localhost connection reactor.connectTCP('192.168.0.250', 8000, factory) <- for a lan connection but when I try to connect remotely via the Internet, I use the following line in echoclient.py: reactor.connectTCP('mydomain.com', 8000, factory) However when I try to run echoclient.py, there is a pause, and then I get: connection failed: User timeout caused connection failure. I know it is doing something with my domain, because when I do a random domain, I get: connection failed: Connection was refused by other side: 111: Connection refused. All my ports are configured correctly for port 8000, and I'm sure it's not my ISP blocking the ports (I can use random ports all the time with other applications). I've also tried using ports besides 8000, but to no avail. Here is the port forwarding line in my router page if it helps: [X] tcp_server 192.168.0.250 TCP 8000/8000 always edit delete Any idea why this is happening?
[ "When you program your router to port-forward incoming connections to your inbound server, it actually works only if the clients (those who try to connect) are really outside your network, really coming from the cloud. You, from inside your network, can't use it; it won't work for you. You will have the feeling that it is not working, but it is. At least for those outside.\nSearch for Port Forwarding Tester at Google to verify that. This one (first result of Google) works pretty well: http://www.yougetsignal.com/tools/open-ports/\n", "Have you tried the following?\n\nDo a port scan on the IP of the machine running the server to confirm that the port is open. A simple Google search will give you plenty of options.\nIf the port is shown to be open, try the remote connection via telnet instead of the twisted client script. Some system firewalls (such as Windows XP's) block at the application level and could be blocking your outgoing connection without you realizing it.\n\n" ]
[ 4, 1 ]
[]
[]
[ "port", "python", "tcp", "twisted" ]
stackoverflow_0001315087_port_python_tcp_twisted.txt
Q: Can I create a Python extension module in D (instead of C) I hear D is link-compatible with C. I'd like to use D to create an extension module for Python. Am I overlooking some reason why it's never going to work? A: Wait? Something like this http://www.dsource.org/projects/pyd (previously http://pyd.dsource.org/) A: Sounds easy and people here who say it's just up to the C API don't know how difficult it is to integrate the Boehm GC used by D within Python. PyD looks like a typical proof of concept where people haven't realized the real-world problems.
Can I create a Python extension module in D (instead of C)
I hear D is link-compatible with C. I'd like to use D to create an extension module for Python. Am I overlooking some reason why it's never going to work?
[ "Wait? Something like this http://www.dsource.org/projects/pyd (previously http://pyd.dsource.org/)\n", "Sounds easy and people here who say it's just up to the C API don't know how difficult it is to integrate the Boehm GC used by D within Python. PyD looks like a typical proof of concept where people haven't realized the real-world problems.\n" ]
[ 15, 2 ]
[]
[]
[ "d", "module", "python" ]
stackoverflow_0001150093_d_module_python.txt
Q: Python, find a file in the same directory Let's say I have the files a.py and b.txt in the same directory. I can't guarantee where that directory is, but I know b.txt will be in the same directory as a.py. a.py needs to access b.txt; how would I go about finding the path to b.txt? Something like "./b.txt" won't work if the user runs the program from a directory other than the one it's saved in. A: Use the __file__ variable: os.path.join(os.path.dirname(__file__), "b.txt") A: If you want the location of the main script, even from code that might be running in an imported module, you need to use sys.argv[0] rather than __file__. (sys.argv[0] is always the path to the main script; see http://docs.python.org/library/sys.html#sys.argv) If you want the location of the current module, even if it's been imported by some other script, you should use __file__ as Martin says. Here's a way to use sys.argv[0]: import os, sys dirname, filename = os.path.split(os.path.abspath(sys.argv[0])) print os.path.join(dirname, "b.txt")
Python, find a file in the same directory
Let's say I have the files a.py and b.txt in the same directory. I can't guarantee where that directory is, but I know b.txt will be in the same directory as a.py. a.py needs to access b.txt; how would I go about finding the path to b.txt? Something like "./b.txt" won't work if the user runs the program from a directory other than the one it's saved in.
[ "Use the __file__ variable:\nos.path.join(os.path.dirname(__file__), \"b.txt\")\n\n", "If you want the location of the main script, even from code that might be running in an imported module, you need to use sys.argv[0] rather than __file__. (sys.argv[0] is always the path to the main script; see http://docs.python.org/library/sys.html#sys.argv)\nIf you want the location of the current module, even if it's been imported by some other script, you should use __file__ as Martin says.\nHere's a way to use sys.argv[0]:\nimport os, sys\ndirname, filename = os.path.split(os.path.abspath(sys.argv[0]))\nprint os.path.join(dirname, \"b.txt\")\n\n" ]
[ 5, 5 ]
[]
[]
[ "python" ]
stackoverflow_0001315390_python.txt
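Putting the two answers together -- os.path.abspath guards against __file__ being a relative path when the script is started from another directory:

    import os

    here = os.path.dirname(os.path.abspath(__file__))
    data = open(os.path.join(here, 'b.txt')).read()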
Q: Is there any way to get vim to auto wrap python strings at 79 chars? I found this answer about wrapping strings using parens extremely useful, but is there a way in Vim to make this happen automatically? I want to be within a string, typing away, and have Vim just put parens around my string and wrap it as necessary. For me, this would be a gigantic time saver as I spend so much time just wrapping long strings manually. Thanks in advance. Example: I type the following text: mylongervarname = "my really long string here so please wrap and quote automatically" Vim automatically does this when I hit column 80 with the string: mylongervarname = ("my really long string here so please wrap and " "quote automatically") A: More a direction than a solution. Use 'formatexpr' or 'formatprg'. When a line exceeds 'textwidth' and passes the criteria set by 'formatoptions', these are used (if set) to break the line. The only real difference is that 'formatexpr' is a vimscript expression, while 'formatprg' filters the line through an external program. So if you know of a formatter that can do this transformation to lines of python code, or are willing to write one, this will give you a hook to have it executed. And since vim supports python (see :help python) you can even write your python formatter in python.
Is there any way to get vim to auto wrap python strings at 79 chars?
I found this answer about wrapping strings using parens extremely useful, but is there a way in Vim to make this happen automatically? I want to be within a string, typing away, and have Vim just put parens around my string and wrap it as necessary. For me, this would be a gigantic time saver as I spend so much time just wrapping long strings manually. Thanks in advance. Example: I type the following text: mylongervarname = "my really long string here so please wrap and quote automatically" Vim automatically does this when I hit column 80 with the string: mylongervarname = ("my really long string here so please wrap and " "quote automatically")
[ "More a direction than a solution.\nUse 'formatexpr' or 'formatprg'. When a line exceeds 'textwidth' and passes the criteria set by 'formatoptions', these are used (if set) to break the line. The only real difference is that 'formatexpr' is a vimscript expression, while 'formatprg' filters the line through an external program.\nSo if you know of a formatter that can do this transformation to lines of python code, or are willing to write one, this will give you a hook to have it executed. And since vim supports python (see :help python) you can even write your python formatter in python.\n" ]
[ 12 ]
[]
[]
[ "python", "string", "vim", "word_wrap" ]
stackoverflow_0001314174_python_string_vim_word_wrap.txt
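To make the 'formatprg' direction concrete, a minimal sketch. The wrap_strings.py script is hypothetical -- you would have to write a filter that reads the overlong line on stdin and prints the paren-wrapped version on stdout:

    " In your vimrc; gqq / gqip will then reflow through the script.
    set textwidth=79
    set formatprg=python\ ~/bin/wrap_strings.py
    " Fully automatic wrapping while you type would need 'formatexpr'
    " instead, as the answer notes.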
Q: How to migrate packages to a new Python installation? How can I quickly migrate/copy my Python packages that I have installed over time to a new machine? This is my scenario: I am upgrading from an old laptop running Python 2.5 & Django 1.0 to a new laptop on which I intend to install Python 2.6.2 & Django 1.1. Over time I have downloaded and installed many Python packages on my old machine (e.g. pygame, Pyro, Genshi, py2exe and many more...); is there an easier way I can copy my packages to the new laptop without running the installation file for each individual package? Gath A: If they're pure Python, then in theory you could just copy them across from one Lib\site-packages directory to the other. However, this will not work for any packages which include C extensions (as these need to be recompiled anew for every Python version). You also need to consider e.g. .pth files which have been created by the installation packages, deleting pre-existing .pyc files etc. I'd advise just reinstalling the packages. A: As Vinay says, there are some parts of common installations that can't be just copied over. Also, keep in mind that setup.py scripts can perform arbitrary work, for example, they could test for the version of Python, and change how they install things, or they could write registry entries, or create .rc files, etc. I concur: re-install the packages. The time you save by trying to just copy everything over will be completely lost the first time something mysteriously doesn't work and you try to debug it. Also, another benefit to re-installation: if you only do it when you need the package, then you won't bother reinstalling the packages you no longer need. A: Use Portable Python; then you can have everything on your USB stick. Your entire development environment is always in your pocket: just plug it into ANY PC and start coding. You can even have multiple versions of Portable Python on the same USB stick and run them side by side, which helps if you e.g. are busy with the transition to Python 3.* or just want to experiment.
How to migrate packages to a new Python installation?
How can I quickly migrate/copy my Python packages that I have installed over time to a new machine? This is my scenario: I am upgrading from an old laptop running Python 2.5 & Django 1.0 to a new laptop on which I intend to install Python 2.6.2 & Django 1.1. Over time I have downloaded and installed many Python packages on my old machine (e.g. pygame, Pyro, Genshi, py2exe and many more...); is there an easier way I can copy my packages to the new laptop without running the installation file for each individual package? Gath
[ "If they're pure Python, then in theory you could just copy them across from one Lib\\site-packages directory to the other. However, this will not work for any packages which include C extensions (as these need to be recompiled anew for every Python version). You also need to consider e.g. .pth files which have been created by the installation packages, deleting pre-existing .pyc files etc.\nI'd advise just reinstalling the packages.\n", "As Vinay says, there are some parts of common installations that can't be just copied over. Also, keep in mind that setup.py scripts can perform arbitrary work, for example, they could test for the version of Python, and change how they install things, or they could write registry entries, or create .rc files, etc. \nI concur: re-install the packages. The time you save by trying to just copy everything over will be completely lost the first time something mysteriously doesn't work and you try to debug it.\nAlso, another benefit to re-installation: if you only do it when you need the package, then you won't bother reinstalling the packages you no longer need.\n", "Use Portable Python; then you can have everything on your USB stick. Your entire development environment is always in your pocket: just plug it into ANY PC and start coding.\nYou can even have multiple versions of Portable Python on the same USB stick and run them side by side, which helps if you e.g. are busy with the transition to Python 3.* or just want to experiment.\n" ]
[ 3, 1, 0 ]
[]
[]
[ "migration", "package", "python" ]
stackoverflow_0001315511_migration_package_python.txt
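If reinstalling is the way to go, pip can at least automate the bookkeeping, assuming it is available on both machines (packages with C extensions still get rebuilt for the new Python):

    # On the old laptop: record everything that is installed.
    pip freeze > requirements.txt

    # On the new laptop: reinstall the whole list in one command.
    pip install -r requirements.txt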
Q: How can Django projects be deployed with minimal installation work? To deploy a site with Python/Django/MySQL I had to do the following on the server (RedHat Linux): Install MySQLPython Install ModPython Install Django (using python setup.py install) Add some directives in the httpd.conf file (or use .htaccess) But, when I deployed another site with PHP (using CodeIgniter) I had to do nothing. I faced some problems while deploying a Django project on a shared server. Now, my questions are: Can the deployment process of a Django project be made easier? Am I doing too much? Can some of the steps be omitted? What is the best way to deploy a Django site on a shared server? A: To enable easy Django deployment I would do the following: First-time server configuration Install mod_wsgi which allows you to run in embedded mode OR in daemon mode. Install python and virtualenv In your development environment Use virtualenv. Take a look at mod_wsgi and virtualenv configuration Install your Django version (using python setup.py install) Install your python libs Develop your project Every time you want to deploy Copy your virtual environment to the production server Just add an Include directive in your httpd.conf file (or use .htaccess) to your project's apache configuration. As stated in the mod_wsgi integration with django documentation, one example of how the Apache included file could be configured would be: Alias /media/ /usr/local/django/mysite/media/ <Directory /usr/local/django/mysite/media> Order deny,allow Allow from all </Directory> WSGIScriptAlias / /usr/local/django/mysite/apache/django.wsgi <Directory /usr/local/django/mysite/apache> Order deny,allow Allow from all </Directory> Automating deployment I would consider using Fabric to automate deployment A: Can the deployment process of a Django project be made easier? No. You can script some of this, if you want. However, you're never going to install MySQL, MySQLPython, mod_wsgi (or mod_python), or Django again. You will, however, tweak your application all the time. Am I doing too much? No. Python (and Django) are not part of Apache. PHP is embedded in Apache. PHP is exactly like mod_python (or mod_wsgi). Just one piece of the pie. (Apparently, some hosts handle the PHP installation for you, but don't handle the mod_wsgi or mod_python installation.) Can some of the steps be omitted? No. However, you only do it once. What is the best way to deploy a Django site on a shared server? You're doing it correctly. When I deployed another site with PHP (using CodeIgniter) I had to do nothing Certainly an unfair comparison. Apparently, they already installed PHP and the database for you. Nice of them. Also, PHP is not Python. PHP is a plug-in to Apache. Python is "just" a programming language that requires a separate plug-in to Apache (i.e., mod_python or mod_wsgi). See How nicely does Python 'flow' with HTML as compared to PHP? A: Django hosting support is not as widespread as for PHP, but there are some good options. I can recommend WebFaction - they provide an easy-to-use control panel which offers various combinations of Django versions, Python versions, mod_python, mod_wsgi, MySQL, PostgreSQL etc. They're cost-effective, too. If you use their setup, you get SSH access but just about all of the setting up can be done via their control panel, apart from the actual uploading of your project folder. Disclaimer: apart from being a happy customer I have no other connection with them. 
A: You didn't have to do anything when deploying a PHP site because your hosting provider had already installed it. Web hosts which support Django typically install and configure it for you. A: You just install this ready-made solution if you're allowed to run an image on a virtual machine. I can imagine installations will be done this way in the future as complicated security configuration can be done automatically. A: Most shared hosting sites run the LAMP (Linux, Apache, MySQL, PHP) stack so deployment is just a matter of copying some files over. If you were using one of the PHP frameworks like CakePHP or something the service hasn't installed (like an imaging library) you'd be going through extra deployment steps as well. With Django (or Rails, or any other complex framework) you have to set up the stack yourself that one time, then you're good to go. However, you'll also want to think about post-deployment updating. If it's something you're going to do often you may also want to look into Fabric or Capistrano to help automate that. P.S. I'll second that WebFaction recommendation. It's as close to one-button installation as I've seen. Pretty happy customer although I mostly use them for test-sites and prototyping. A: You can use Python virtualenv and pip (see also "Tools of the Modern Python Hacker: Virtualenv, Fabric and Pip"). I developed my Django project in the virtual environment. I copy the virtual environment file to the production machine when I deploy my application. I use mod_wsgi. You must write this in the mod_wsgi file: import site site.addsitedir('C:\PythonVirtualEnv\IntegralEnv\Lib\site-packages')
How can Django projects be deployed with minimal installation work?
To deploy a site with Python/Django/MySQL I had to do the following on the server (RedHat Linux): Install MySQLPython Install ModPython Install Django (using python setup.py install) Add some directives in the httpd.conf file (or use .htaccess) But, when I deployed another site with PHP (using CodeIgniter) I had to do nothing. I faced some problems while deploying a Django project on a shared server. Now, my questions are: Can the deployment process of a Django project be made easier? Am I doing too much? Can some of the steps be omitted? What is the best way to deploy a Django site on a shared server?
[ "To enable easy Django deployment I would do the following:\nFirst-time server configuration\n\nInstall mod_wsgi which allows you to run in embedded mode OR in daemon mode.\nInstall python and virtualenv\n\nIn your development environment\n\nUse virtualenv. Take a look at mod_wsgi and virtualenv configuration\nInstall your Django version (using python setup.py install)\nInstall your python libs\nDevelop your project\n\nEvery time you want to deploy\n\nCopy your virtual environment to the production server\nJust add an Include directive in your httpd.conf file (or use .htaccess) to your project's apache configuration. As stated in the mod_wsgi integration with django documentation, one example of how the Apache included file could be configured would be:\n\n\nAlias /media/ /usr/local/django/mysite/media/\n\n<Directory /usr/local/django/mysite/media>\n Order deny,allow\n Allow from all\n</Directory>\n\nWSGIScriptAlias / /usr/local/django/mysite/apache/django.wsgi\n\n<Directory /usr/local/django/mysite/apache>\n Order deny,allow\n Allow from all\n</Directory>\n\n\nAutomating deployment\n\nI would consider using Fabric to automate deployment\n\n", "Can the deployment process of a Django project be made easier?\nNo. You can script some of this, if you want. However, you're never going to install MySQL, MySQLPython, mod_wsgi (or mod_python), or Django again.\nYou will, however, tweak your application all the time.\nAm I doing too much?\nNo. Python (and Django) are not part of Apache. PHP is embedded in Apache. PHP is exactly like mod_python (or mod_wsgi). Just one piece of the pie. (Apparently, some hosts handle the PHP installation for you, but don't handle the mod_wsgi or mod_python installation.)\nCan some of the steps be omitted?\nNo. However, you only do it once.\nWhat is the best way to deploy a Django site on a shared server?\nYou're doing it correctly.\nWhen I deployed another site with PHP (using CodeIgniter) I had to do nothing\nCertainly an unfair comparison. Apparently, they already installed PHP and the database for you. Nice of them.\nAlso, PHP is not Python. PHP is a plug-in to Apache. Python is \"just\" a programming language that requires a separate plug-in to Apache (i.e., mod_python or mod_wsgi).\nSee How nicely does Python 'flow' with HTML as compared to PHP?\n", "Django hosting support is not as widespread as for PHP, but there are some good options. I can recommend WebFaction - they provide an easy-to-use control panel which offers various combinations of Django versions, Python versions, mod_python, mod_wsgi, MySQL, PostgreSQL etc. They're cost-effective, too. If you use their setup, you get SSH access but just about all of the setting up can be done via their control panel, apart from the actual uploading of your project folder.\nDisclaimer: apart from being a happy customer I have no other connection with them.\n", "You didn't have to do anything when deploying a PHP site because your hosting provider had already installed it. Web hosts which support Django typically install and configure it for you.\n", "You just install this ready-made solution if you're allowed to run an image on a virtual machine. I can imagine installations will be done this way in the future as complicated security configuration can be done automatically.\n", "Most shared hosting sites run the LAMP (Linux, Apache, MySQL, PHP) stack so deployment is just a matter of copying some files over. 
If you were using one of the PHP frameworks like CakePHP or something the service hasn't installed (like an imaging library) you'd be going through extra deployment steps as well.\nWith Django (or Rails, or any other complex framework) you have to set up the stack yourself that one time, then you're good to go.\nHowever, you'll also want to think about post-deployment updating. If it's something you're going to do often you may also want to look into Fabric or Capistrano to help automate that.\nP.S. I'll second that WebFaction recommendation. It's as close to one-button installation as I've seen. Pretty happy customer although I mostly use them for test-sites and prototyping.\n", "You can use Python virtualenv and pip (see also \"Tools of the Modern Python Hacker: Virtualenv, Fabric and Pip\"). I developed my Django project in the virtual environment. I copy the virtual environment file to the production machine when I deploy my application. I use mod_wsgi. You must write this in the mod_wsgi file:\nimport site \nsite.addsitedir('C:\\PythonVirtualEnv\\IntegralEnv\\Lib\\site-packages') \n\n" ]
[ 4, 3, 2, 1, 1, 0, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001313989_django_python.txt
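A minimal sketch of the Fabric suggestion, in Fabric's 1.x-era API (the host, archive name, and paths are placeholders; the remote paths mirror the Apache config quoted in the first answer, and touching the .wsgi file to reload assumes mod_wsgi daemon mode):

    # fabfile.py
    from fabric.api import env, put, run

    env.hosts = ['user@example.com']

    def deploy():
        put('mysite.tar.gz', '/tmp/mysite.tar.gz')
        run('tar -C /usr/local/django -xzf /tmp/mysite.tar.gz')
        # In mod_wsgi daemon mode, touching the script file reloads the app.
        run('touch /usr/local/django/mysite/apache/django.wsgi')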
Q: Pythonic way of iterating over 3D array I have a 3D array in Python and I need to iterate over all the cubes in the array. That is, for all (x,y,z) in the array's dimensions I need to access the cube: array[(x + 0, y + 0, z + 0)] array[(x + 1, y + 0, z + 0)] array[(x + 0, y + 1, z + 0)] array[(x + 1, y + 1, z + 0)] array[(x + 0, y + 0, z + 1)] array[(x + 1, y + 0, z + 1)] array[(x + 0, y + 1, z + 1)] array[(x + 1, y + 1, z + 1)] The array is a Numpy array, though that's not really necessary. I just found it very easy to read the data in with a one-liner using numpy.fromfile(). Is there any more Pythonic way to iterate over these than the following? That simply looks like C using Python syntax. for x in range(x_dimension): for y in range(y_dimension): for z in range(z_dimension): work_with_cube(array[(x + 0, y + 0, z + 0)], array[(x + 1, y + 0, z + 0)], array[(x + 0, y + 1, z + 0)], array[(x + 1, y + 1, z + 0)], array[(x + 0, y + 0, z + 1)], array[(x + 1, y + 0, z + 1)], array[(x + 0, y + 1, z + 1)], array[(x + 1, y + 1, z + 1)]) A: Have a look at itertools, especially itertools.product. You can compress the three loops into one with import itertools for x, y, z in itertools.product(*map(xrange, (x_dim, y_dim, z_dim))): ... You can also create the cube this way: cube = numpy.array(list(itertools.product((0,1), (0,1), (0,1)))) print cube array([[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 1], [1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]]) and add the offsets by a simple addition print cube + (10,100,1000) array([[ 10, 100, 1000], [ 10, 100, 1001], [ 10, 101, 1000], [ 10, 101, 1001], [ 11, 100, 1000], [ 11, 100, 1001], [ 11, 101, 1000], [ 11, 101, 1001]]) which would translate to cube + (x,y,z) in your case. The very compact version of your code would be import itertools, numpy cube = numpy.array(list(itertools.product((0,1), (0,1), (0,1)))) x_dim = y_dim = z_dim = 10 for offset in itertools.product(*map(xrange, (x_dim, y_dim, z_dim))): work_with_cube(cube+offset) Edit: itertools.product makes the product over the different arguments, i.e. itertools.product(a,b,c), so I have to unpack map(xrange, ...) as *map(...) A: import itertools for x, y, z in itertools.product(xrange(x_size), xrange(y_size), xrange(z_size)): work_with_cube(array[x, y, z])
Pythonic way of iterating over 3D array
I have a 3D array in Python and I need to iterate over all the cubes in the array. That is, for all (x,y,z) in the array's dimensions I need to access the cube: array[(x + 0, y + 0, z + 0)] array[(x + 1, y + 0, z + 0)] array[(x + 0, y + 1, z + 0)] array[(x + 1, y + 1, z + 0)] array[(x + 0, y + 0, z + 1)] array[(x + 1, y + 0, z + 1)] array[(x + 0, y + 1, z + 1)] array[(x + 1, y + 1, z + 1)] The array is a Numpy array, though that's not really necessary. I just found it very easy to read the data in with a one-liner using numpy.fromfile(). Is there any more Pythonic way to iterate over these than the following? That simply looks like C using Python syntax. for x in range(x_dimension): for y in range(y_dimension): for z in range(z_dimension): work_with_cube(array[(x + 0, y + 0, z + 0)], array[(x + 1, y + 0, z + 0)], array[(x + 0, y + 1, z + 0)], array[(x + 1, y + 1, z + 0)], array[(x + 0, y + 0, z + 1)], array[(x + 1, y + 0, z + 1)], array[(x + 0, y + 1, z + 1)], array[(x + 1, y + 1, z + 1)])
[ "Have a look at itertools, especially itertools.product. You can compress the three loops into one with \nimport itertools\n\nfor x, y, z in itertools.product(*map(xrange, (x_dim, y_dim, z_dim)):\n ...\n\nYou can also create the cube this way:\ncube = numpy.array(list(itertools.product((0,1), (0,1), (0,1))))\nprint cube\narray([[0, 0, 0],\n [0, 0, 1],\n [0, 1, 0],\n [0, 1, 1],\n [1, 0, 0],\n [1, 0, 1],\n [1, 1, 0],\n [1, 1, 1]])\n\nand add the offsets by a simple addition\nprint cube + (10,100,1000)\narray([[ 10, 100, 1000],\n [ 10, 100, 1001],\n [ 10, 101, 1000],\n [ 10, 101, 1001],\n [ 11, 100, 1000],\n [ 11, 100, 1001],\n [ 11, 101, 1000],\n [ 11, 101, 1001]])\n\nwhich would to translate to cube + (x,y,z) in your case. The very compact version of your code would be \nimport itertools, numpy\n\ncube = numpy.array(list(itertools.product((0,1), (0,1), (0,1))))\n\nx_dim = y_dim = z_dim = 10\n\nfor offset in itertools.product(*map(xrange, (x_dim, y_dim, z_dim))):\n work_with_cube(cube+offset)\n\nEdit: itertools.product makes the product over the different arguments, i.e. itertools.product(a,b,c), so I have to pass map(xrange, ...) with as *map(...)\n", "import itertools\nfor x, y, z in itertools.product(xrange(x_size), \n xrange(y_size), \n xrange(z_size)):\n work_with_cube(array[x, y, z])\n\n" ]
[ 19, 8 ]
[]
[]
[ "arrays", "loops", "python" ]
stackoverflow_0001316068_arrays_loops_python.txt
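A runnable variant of the accepted approach, assuming a small NumPy array and a placeholder work_with_cube; the loop bounds stop one short of each dimension so the 2x2x2 cube stays inside the array, and a slice pulls out the eight corner cells in one step:

import itertools
import numpy

array = numpy.arange(4 * 4 * 4).reshape(4, 4, 4)

def work_with_cube(cube):
    # Placeholder: just report the sum of the eight cells.
    print cube.sum()

x_dim, y_dim, z_dim = array.shape
for x, y, z in itertools.product(*map(xrange, (x_dim - 1, y_dim - 1, z_dim - 1))):
    work_with_cube(array[x:x + 2, y:y + 2, z:z + 2])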
Q: Downloading compressed content over HTTP using Python How do I take advantage of HTTP 1.1's compression when downloading web pages using Python? I am currently using the built-in urllib module for downloading web content. Reading through the documentation I couldn't find any indication that it actually uses compression. Is it already built into urllib or is there another library that I can use? A: httplib2 supports 'deflate' and 'gzip' compression. Example import httplib2 h = httplib2.Http(".cache") resp, content = h.request("http://example.org/", "GET") The content is decompressed as necessary.
Downloading compressed content over HTTP using Python
How do I take advantage of HTTP 1.1's compression when downloading web pages using Python? I am currently using the built-in urllib module for downloading web content. Reading through the documentation I couldn't find any indication that it actually uses compression. Is it already built into urllib or is there another library that I can use?
[ "httplib2 supports 'deflate' and 'gzip' compression. \nExample\nimport httplib2\nh = httplib2.Http(\".cache\")\nresp, content = h.request(\"http://example.org/\", \"GET\")\n\nThe content is decompressed as necessary. \n" ]
[ 6 ]
[]
[]
[ "compression", "gzip", "http", "httplib2", "python" ]
stackoverflow_0001316517_compression_gzip_http_httplib2_python.txt
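For comparison, a sketch that does the same with only the standard library, assuming the server honors the Accept-Encoding header; urllib2, gzip, and StringIO are all part of Python 2:

import gzip
import urllib2
from StringIO import StringIO

request = urllib2.Request('http://example.org/')
request.add_header('Accept-Encoding', 'gzip')
response = urllib2.urlopen(request)
data = response.read()

# Decompress only if the server actually compressed the body.
if response.info().get('Content-Encoding') == 'gzip':
    data = gzip.GzipFile(fileobj=StringIO(data)).read()
print len(data)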
Q: Django URL.py and the index I want to know the best way to write the index pattern in urls.py. I am trying to get the index in this way: www.example.com with (r'',index). But when I try r'', all pages in the website are going to the home page. Part of my urls.py: (r'^index',homepages), (r'',homepages), Thanks :) A: Like this: #... (r'^$', index), #... A: Django URL matching is very powerful if not always as convenient as it could be. As Brian says, you need to use the pattern r'^$' to force your pattern to match the entire string. With r'', you are looking for an empty string anywhere in the URL, which is true for every URL. Django URL patterns nearly always start with ^ and end with $. You could in theory do some fancy URL matching where strings found anywhere in the URL determined what view function to call, but it's hard to imagine a scenario.
Django URL.py and the index
I want to know the best way to write the index pattern in urls.py. I am trying to get the index in this way: www.example.com with (r'',index). But when I try r'', all pages in the website are going to the home page. Part of my urls.py: (r'^index',homepages), (r'',homepages), Thanks :)
[ "Like this:\n #...\n (r'^$', index),\n #...\n\n", "Django URL matching is very powerful if not always as convenient as it could be. As Brian says, you need to use the pattern r'^$' to force your pattern to match the entire string. With r'', you are looking for an empty string anywhere in the URL, which is true for every URL.\nDjango URL patterns nearly always start with ^ and end with $. You could in theory do some fancy URL matching where strings found anywhere in the URL determined what view function to call, but it's hard to imagine a scenario.\n" ]
[ 31, 5 ]
[]
[]
[ "django", "django_urls", "python" ]
stackoverflow_0001316682_django_django_urls_python.txt
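A sketch of the full urlpatterns this implies, in Django 1.0 style; homepages stands in for whatever view the question is wiring up, and the import path is a placeholder:

from django.conf.urls.defaults import *
from myapp.views import homepages  # placeholder import path

urlpatterns = patterns('',
    (r'^index/$', homepages),
    # ^ and $ anchor the pattern, so only the bare root matches here.
    (r'^$', homepages),
)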
Q: dictionary of object I have a sorted dict { 1L: '<'New_Config (type: 'String') (id: 1L) (value: 4L) (name: 'account_receivable')'>', 2L: '<'New_Config (type: 'string') (id: 2L) (value: 5L) (name: 'account_payable')'>', 3L: '<'New_Config (type: 'String') (id: 3L) (value: 8L) (name: 'account_cogs ')'>', 4L: '<'New_Config (type: 'String') (id: 4L)(value: 9L)(name: 'account_retained_earning')'>', 5L: '<'New_Config (type: 'String') (id: 5L) (value: 6L) (name: 'account_income')'>' } Here new_config is an object, and I have to access its elements. How can I access the object's properties? Suppose I want to access new_config.name. A: Python dictionaries are not sorted. If you have some custom class which implements some mapping methods (like a dictionary) but over-rides some of them to give the appearance of maintaining some (sorted) ordering then the implementation details of that might also explain why your example doesn't look like valid Python. { 1L: New_Config(...)(...)(...)..., 2L: New_config(...)(...)(...)..., ... looks almost sorta like Python. 1L, 2L are representations of large integers (as keys if this were a dictionary). New_Config(...) looks kinda like a repr of something, and the (..) subsequent to that would be like a function call. So, my advice is don't try to post a question from memory or from some vague notion of what you thought you saw. Actually paste in some code. If you actually had objects there then you'd access their attributes using new_config.attribute or possibly (if someone coded up their class to be obnoxious) through some new_config.accessor() method calls (like foo.getThis() and foo.getThat() or something equally inane). A: class Foo(object): def __init__(self,name,weight): self.name = name self.weight = weight >>> D = {} >>> D['1L'] = Foo("James",67) >>> D['2L'] = Foo("Jack",83) >>> D {'2L': <__main__.Foo object at 0x013EB330>, '1L': <__main__.Foo object at 0x00C402D0>} >>> D['1L'].name 'James' In general, DictName[KEY] gives you VALUE for that KEY. So to access an attribute where VALUE is an object, you can use DictName[KEY].attribute
dictionary of object
I have a sorted dict { 1L: '<'New_Config (type: 'String') (id: 1L) (value: 4L) (name: 'account_receivable')'>', 2L: '<'New_Config (type: 'string') (id: 2L) (value: 5L) (name: 'account_payable')'>', 3L: '<'New_Config (type: 'String') (id: 3L) (value: 8L) (name: 'account_cogs ')'>', 4L: '<'New_Config (type: 'String') (id: 4L)(value: 9L)(name: 'account_retained_earning')'>', 5L: '<'New_Config (type: 'String') (id: 5L) (value: 6L) (name: 'account_income')'>' } Here new_config is an object, and I have to access its elements. How can I access the object's properties? Suppose I want to access new_config.name.
[ "Python dictionaries are not sorted. If you have some custom class which implements some mapping methods (like a dictionary) but over-rides some of them to give the appearance of maintaining some (sorted) ordering then the implementation details of that might also explain why your example doesn't look like valid Python.\n{\n1L: New_Config(...)(...)(...)...,\n2L: New_config(...)(...)(...)...,\n\n... looks almost sorta like Python. 1L, 2L are representations of large integers (as\nkeys if this were a dictionary). New_Config(...) looks kinda like a repr of something, and the (..) subsequent to that would be like a function call.\nSo, my advice is don't try to post a question from memory or from some vague notion of what you thought you saw. Actually paste in some code.\nIf you actually had objects there then you'd access their attributes using new_config.attribute or possibly (if someone coded up their class to be obnoxious) through some new_config.accessor() method calls (like foo.getThis() and foo.getThat() or something equally inane).\n", "class Foo(object):\n def __init__(self,name,weight):\n self.name = name\n self.weight = weight\n\n>>> D = {}\n>>> D['1L'] = Foo(\"James\",67)\n>>> D['2L'] = Foo(\"Jack\",83)\n>>> D\n{'2L': <__main__.Foo object at 0x013EB330>,\n '1L': <__main__.Foo object at 0x00C402D0>}\n\n>>> D['1L'].name\n'James'\n\nIn general, \nDictName[KEY] gives you VALUE for that KEY.\nso to access attribute where VALUE is an object you can use\nDictName[KEY].attritbute\n" ]
[ 1, 0 ]
[]
[]
[ "dictionary", "python", "sorting" ]
stackoverflow_0001315407_dictionary_python_sorting.txt
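A sketch of what the question's structure probably looks like underneath, with a stand-in NewConfig class; attribute access works the same however the dict is ordered:

class NewConfig(object):  # stand-in for the question's New_Config
    def __init__(self, id, value, name, type='String'):
        self.id = id
        self.value = value
        self.name = name
        self.type = type

configs = {
    1L: NewConfig(1L, 4L, 'account_receivable'),
    2L: NewConfig(2L, 5L, 'account_payable'),
}

print configs[1L].name  # -> 'account_receivable'
for key in sorted(configs):
    print key, configs[key].name  # attribute access per entry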
Q: Using PyUNO on Windows and CentOS Is there any way to use OpenOffice's PyUNO without using the version of Python that comes with OpenOffice? I mean, can I install a package (on Windows and CentOS) that uses the version of Python that's already on the server? I'm trying to use OpenOffice in headless mode so that I can do document conversion with a script (ultimately on a hosted server running CentOS) but my development work is being done on Windows and, occasionally, the Mac. I'm having nothing but trouble getting this to work. A: You can't use PyUNO with just any version of Python. You need to use the specific one that's integrated into your OpenOffice installation. However, the very latest OO (3.1 I believe) comes (on all platforms) with the very latest Python (2.6.2 I believe), so if you can upgrade your OpenOffice to the very latest released version on all platforms, you should be just fine.
Using PyUNO on Windows and CentOS
Is there any way to use OpenOffice's PyUNO without using the version of Python that comes with OpenOffice? I mean, can I install a package (on Windows and CentOS) that uses the version of Python that's already on the server? I'm trying to use OpenOffice in headless mode so that I can do document conversion with a script (ultimately on a hosted server running CentOS) but my development work is being done on Windows and, occasionally, the Mac. I'm having nothing but trouble getting this to work.
[ "You can't use PyUNO with just any version of Python. You need to use the specific one that's integrated into your OpenOffice installation. However, the very latest OO (3.1 I believe) comes (on all platforms) with the very latest Python (2.6.2 I believe), so if you can upgrade your OpenOffice to the very latest released version on all platforms, you should be just fine.\n" ]
[ 2 ]
[]
[]
[ "openoffice.org", "python", "pyuno" ]
stackoverflow_0001314009_openoffice.org_python_pyuno.txt
Q: app-engine-patch with pyamf = No module named encoding I'm trying to use app-engine-patch with pyamf by following this: http://pyamf.org/wiki/GoogleAppEngine because I want to migrate my Django <-> pyamf application to app-engine-patch <-> pyamf. What I have now is that I created my gateway.py with only one line of code: import pyamf just to test whether I can use pyamf, and I get a blank page when I point my browser to that url/file so that looks good (no import problems and pyamf is found) but in the command prompt where I started the server with "manage.py runserver" I see a bunch of errors like: ... File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2238, in Dispatch self._module_dict) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2156, in ExecuteCGI reset_modules = exec_script(handler_path, cgi_path, hook) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2052, in ExecuteOrImportScript exec module_code in script_module.__dict__ File "C:\Users\[my app-engine-patch application path]\common\appenginepatch\main.py", line 16, in <module> patch_all() File "C:\Users\[my app-engine-patch application path]\common\appenginepatch\appenginepatcher\patch.py", line 29, in patch_all patch_app_engine() File "C:\Users\[my app-engine-patch application path]\common\appenginepatch\appenginepatcher\patch.py", line 193, in patch_app_engine from django.utils.encoding import force_unicode, smart_str ImportError: No module named encoding Are there any pyamf <-> app-engine-patch gurus out there who can give me a hint about what I am doing wrong and how I can set up pyamf to work with app-engine-patch? A: Are you activating Django 1.0.2 in your app engine startup code? App Engine now comes with it, but also (for backwards compatibility) with 0.9.6, and (still for backwards compatibility) 0.9.6 is what it defaults to -- all it takes to fix this is, at startup, use: from google.appengine.dist import use_library use_library('django', '1.0') and then "Subsequent attempts to import the django package will use Django 1.0.2.". You do need to install 1.0.2 with the SDK separately. See all instructions here.
app-engine-patch with pyamf = No module named encoding
I'm trying to use app-engine-patch with pyamf by following this: http://pyamf.org/wiki/GoogleAppEngine because I want to migrate my Django <-> pyamf application to app-engine-patch <-> pyamf. What I have now is that I created my gateway.py with only one line of code: import pyamf just to test whether I can use pyamf, and I get a blank page when I point my browser to that url/file so that looks good (no import problems and pyamf is found) but in the command prompt where I started the server with "manage.py runserver" I see a bunch of errors like: ... File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2238, in Dispatch self._module_dict) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2156, in ExecuteCGI reset_modules = exec_script(handler_path, cgi_path, hook) File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2052, in ExecuteOrImportScript exec module_code in script_module.__dict__ File "C:\Users\[my app-engine-patch application path]\common\appenginepatch\main.py", line 16, in <module> patch_all() File "C:\Users\[my app-engine-patch application path]\common\appenginepatch\appenginepatcher\patch.py", line 29, in patch_all patch_app_engine() File "C:\Users\[my app-engine-patch application path]\common\appenginepatch\appenginepatcher\patch.py", line 193, in patch_app_engine from django.utils.encoding import force_unicode, smart_str ImportError: No module named encoding Are there any pyamf <-> app-engine-patch gurus out there who can give me a hint about what I am doing wrong and how I can set up pyamf to work with app-engine-patch?
[ "Are you activating Django 1.0.2 in your app engine startup code? App Engine now comes with it, but also (for backwards compatibility) with 0.9.6, and (still for backwards compatibility) 0.9.6 is what it defaults to -- all it takes to fix this is, at startup, use:\nfrom google.appengine.dist import use_library\nuse_library('django', '1.0')\n\nand then \"Subsequent attempts to import the django package will use Django 1.0.2.\". You do need to install 1.0.2 with the SDK separately. See all instructions here.\n" ]
[ 1 ]
[]
[]
[ "app_engine_patch", "google_app_engine", "pyamf", "python" ]
stackoverflow_0001315368_app_engine_patch_google_app_engine_pyamf_python.txt
Q: What do you make of this Python error? Here's the error. Traceback (most recent call last): File "_ctypes/callbacks.c", line 295, in 'calling callback function' File "USB2.py", line 454, in ff self.drv_locked = False SystemError: Objects/cellobject.c:24: bad argument to internal function Here's the Python code involved. def drv_send(self, data, size): if not self.Connected(): return def f(): self.drv_locked = True buffer = ''.join(chr(c) for c in data[:size]) out_buffer = cast(buffer, POINTER(c_uint8)) request_handle = (OPENUSB_REQUEST_HANDLE * 1)() request = (OPENUSB_INTR_REQUEST * 1)() request_handle[0].dev = self.usbhandle request_handle[0].interface = INTERFACE_ID request_handle[0].endpoint = LIBUSB_ENDPOINT_OUT + 1 request_handle[0].type = USB_TYPE_INTERRUPT request_handle[0].req.intr = request def f(req): print req[0].req.intr[0].result.status, req[0].req.intr[0].result.transferred_bytes self.drv_locked = False # Line 454 request_handle[0].cb = REQUEST_CALLBACK(f) request_handle[0].arg = None request[0].payload = out_buffer request[0].length = size request[0].timeout = 5000 request[0].flags = 0 request[0].next = None r = lib.openusb_xfer_aio(request_handle) print "result", r self.command_queue.put(f) And here's the Python source involved. PyObject * PyCell_Get(PyObject *op) { if (!PyCell_Check(op)) { PyErr_BadInternalCall(); // Line 24 return NULL; } Py_XINCREF(((PyCellObject*)op)->ob_ref); return PyCell_GET(op); } A: An internal error is clearly a bug in Python itself, and if you're interested in further exploring this and offering a fix for the Python core, then simplifying your code down to where it still triggers the bug would be the right strategy. If you're more interested in having your code work, rather than in fixing the Python core, then I suggest you avoid some of the several anomalies in your code that might be contributing to confusing Python. For example, I don't know that anybody ever thought to test properly for a nested function named f containing yet another further-nested function also named f -- it SHOULD work, but it's exactly the kind of thing that might not have been well tested just because nobody thought of it yet, and while deliberately provoking such anomalies is a very good strategy for strengthening a suite of tests, it might be best avoided if you're not deliberately out to trigger bugs in Python's internals. So, first, I would make sure there's no homonymy around. If that still leaves the bug, I would next remove the use of cell objects by turning what currently are accesses to nonlocal variables into "prebound arguments", for example your "semi-outer" f could be changed to start with: def f(self=self): and your "fully-inner one" could become: def g(req, self=self): This would make accesses to self in either of those functions (currently nonlocal variable accesses) into local variable accesses. Yes, you should not have to do this (there should be no bugs in any software that require you to work around them), but alas perfection is not a characteristic of this sublunar world, so that learning bug-workaround strategies is an inevitable part of life;-). A: The PyCell_Check function checks that its argument actually is a cell object (an internal type used to implement variables referenced by multiple scopes). If op is not a cell object, you would get this error. The code you posted does not give enough context/information to determine exactly how the bad parameter came to be passed.
What do you make of this Python error?
Here's the error. Traceback (most recent call last): File "_ctypes/callbacks.c", line 295, in 'calling callback function' File "USB2.py", line 454, in ff self.drv_locked = False SystemError: Objects/cellobject.c:24: bad argument to internal function Here's the Python code involved. def drv_send(self, data, size): if not self.Connected(): return def f(): self.drv_locked = True buffer = ''.join(chr(c) for c in data[:size]) out_buffer = cast(buffer, POINTER(c_uint8)) request_handle = (OPENUSB_REQUEST_HANDLE * 1)() request = (OPENUSB_INTR_REQUEST * 1)() request_handle[0].dev = self.usbhandle request_handle[0].interface = INTERFACE_ID request_handle[0].endpoint = LIBUSB_ENDPOINT_OUT + 1 request_handle[0].type = USB_TYPE_INTERRUPT request_handle[0].req.intr = request def f(req): print req[0].req.intr[0].result.status, req[0].req.intr[0].result.transferred_bytes self.drv_locked = False # Line 454 request_handle[0].cb = REQUEST_CALLBACK(f) request_handle[0].arg = None request[0].payload = out_buffer request[0].length = size request[0].timeout = 5000 request[0].flags = 0 request[0].next = None r = lib.openusb_xfer_aio(request_handle) print "result", r self.command_queue.put(f) And here's the Python source involved. PyObject * PyCell_Get(PyObject *op) { if (!PyCell_Check(op)) { PyErr_BadInternalCall(); // Line 24 return NULL; } Py_XINCREF(((PyCellObject*)op)->ob_ref); return PyCell_GET(op); }
[ "An internal error is clearly a bug in Python itself, and if you're interested in further exploring this and offering a fix for the Python core, then simplifying your code down to where it still triggers the bug would be the right strategy.\nIf you're more interested in having your code work, rather than in fixing the Python core, then I suggest you avoid some of the several anomalies in your code that might be contributing to confusing Python. For example, I don't know that anybody ever thought to test property for a nested function named f containing yet another further-nested function also named f -- it SHOULD work, but it's exactly the kind of thing that might not have been well tested just because nobody thought of it yet, and while deliberately provoking such anomalies is a very good strategy for strenghtening a suite of tests, it might be best avoided if you're not deliberately out to trigger bugs in Python's internals.\nSo, first, I would make sure there's no homonimy around. If that still leaves the bug, I would next remove the use of cell objects by turning what currently are accesses to nonlocal variables into \"prebound arguments\", for example your \"semi-outer\" f could be changes to start with:\ndef f(self=self):\nand your \"fully-inner one\" could become:\ndef g(req, self=self):\nThis would make accesses to self in either of those functions (currently nonlocal variable accesses) into local variable accesses. Yes, you should not have to do this (there should be no bugs in any software, that requires you to work around them), but alas perfection is not a characteristic of this sublunar world, so that learning bug-workaround strategies is an inevitable part of life;-).\n", "The PyCell_Check function checks that its argument actually is a cell object (an internal type used to implement variables referenced by multiple scopes). If op is not a cell object, you would get this error.\nThe code you posted does not give enough context/information to determine exactly how the bad parameter came to be passed.\n" ]
[ 6, 2 ]
[]
[]
[ "ctypes", "python" ]
stackoverflow_0001315465_ctypes_python.txt
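A compact sketch of the prebinding workaround from the first answer: default arguments are evaluated once at def time and stored as ordinary locals, so the nested functions no longer need closure cells to reach self. The Driver class is a hypothetical stand-in for the question's USB class, and command_queue is assumed to exist as in the original code:

class Driver(object):
    def drv_send(self, data, size):
        def f(self=self, data=data, size=size):
            self.drv_locked = True
            def g(req, self=self):
                # self is a local here, not a closure cell.
                self.drv_locked = False
            # ... build the request and register g as the callback ...
        self.command_queue.put(f)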
Q: use python to access mysql I am finally starting with Python. I wanted to ask: if I use the MySQL db with Python, how should I expect Python to connect to the db? What I mean is, I have MySQL installed in XAMPP and have my database created in MySQL through phpMyAdmin. Now my Python is in C:\python25\ and my *.py files would be in the same folder as well. Do I need any prior configuration for the connection? What I am doing now: >>> cnx = MySQLdb.connect(host=’localhost’, user=’root’, passwd=’’, db=’tablename’) SyntaxError: invalid syntax How do I get around this? A: The basics: import MySQLdb conn = MySQLdb.connect(host="localhost", user="root", passwd="nobodyknow", db="amit") cursor = conn.cursor() stmt = "SELECT * FROM overflows" cursor.execute(stmt) # Fetch and output result = cursor.fetchall() print result # get the number of rows numrows = int(cursor.rowcount) # Close connection conn.close() And don't use ’. Use single or double quotes (' or "). A: If you simply cut and pasted, you have the wrong kind of quotes. You've got some kind of asymmetric quote. Use simple apostrophes ' or simple quotes ". Do not use ’ .
use python to access mysql
I am finally starting with Python. I wanted to ask: if I use the MySQL db with Python, how should I expect Python to connect to the db? What I mean is, I have MySQL installed in XAMPP and have my database created in MySQL through phpMyAdmin. Now my Python is in C:\python25\ and my *.py files would be in the same folder as well. Do I need any prior configuration for the connection? What I am doing now: >>> cnx = MySQLdb.connect(host=’localhost’, user=’root’, passwd=’’, db=’tablename’) SyntaxError: invalid syntax How do I get around this?
[ "the basics is\nimport MySQLdb\n\nconn = MySQLdb.connect(host=\"localhost\", user=\"root\", passwd=\"nobodyknow\", db=\"amit\")\ncursor = conn.cursor()\n\nstmt = \"SELECT * FROM overflows\"\ncursor.execute(stmt)\n\n# Fetch and output\nresult = cursor.fetchall()\nprint result\n\n# get the number of rows\nnumrows = int(cursor.rowcount)\n\n# Close connection\nconn.close()\n\nand don´t use ’\nuse single or double ' ou \" quotes\n", "If you simply cut and pasted, you have the wrong kind of quotes.\nYou've got some kind of asymmetric quote.\nUse simple apostrophes ' or simple quotes \". \nDo not use ’ .\n" ]
[ 4, 2 ]
[]
[]
[ "mysql", "python" ]
stackoverflow_0001317103_mysql_python.txt
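One detail worth adding to the accepted answer: MySQLdb can substitute values for you, which sidesteps quoting mistakes like the one in the question; the table and column names here are made up:

import MySQLdb

conn = MySQLdb.connect(host='localhost', user='root', passwd='', db='test')
cursor = conn.cursor()

# %s is MySQLdb's placeholder; the driver escapes the value itself.
cursor.execute("SELECT * FROM users WHERE name = %s", ('amit',))
for row in cursor.fetchall():
    print row

cursor.close()
conn.close()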
Q: Transforming nested list in Python Assuming I have a structure like this: a = [ ('A', ['D', 'E', 'F', 'G']), ('B', ['H']), ('C', ['I']) ] How can I transform it into: a = [ ('A', 'D'), ('A', 'E'), ('A', 'F'), ('A', 'G'), ('B', 'H'), ('C', 'I'), ] Thanks for your time! A: Try: >>> a = [('A', ['D', 'E', 'F', 'G']), ('B', ['H']), ('C', ['I'])] >>> [(k,j) for k, more in a for j in more] [('A', 'D'), ('A', 'E'), ('A', 'F'), ('A', 'G'), ('B', 'H'), ('C', 'I')] This handles only one level of nesting of course. A: Here's a simple solution: data = [ ('A', ['D', 'E', 'F', 'G']), ('B', ['H']), ('C', ['I']) ] result = [] for x in data: for y in x[1]: result.append((x[0], y)) A: (Side comment) Why on earth did you indent like that? Isn't the following more readable? a = [ ('A', ['D', 'E', 'F', 'G']), ('B', ['H']), ('C', ['I']) ]
Transforming nested list in Python
Assuming I have a structure like this: a = [ ('A', ['D', 'E', 'F', 'G']), ('B', ['H']), ('C', ['I']) ] How can I transform it into: a = [ ('A', 'D'), ('A', 'E'), ('A', 'F'), ('A', 'G'), ('B', 'H'), ('C', 'I'), ] Thanks for your time!
[ "Try:\n>>> a = [('A', ['D', 'E', 'F', 'G']), ('B', ['H']), ('C', ['I'])]\n>>> [(k,j) for k, more in a for j in more]\n[('A', 'D'), ('A', 'E'), ('A', 'F'), ('A', 'G'), ('B', 'H'), ('C', 'I')]\n\nThis handles only one level of nesting of course.\n", "Here's a simple solution:\ndata = [\n('A',\n ['D',\n 'E',\n 'F',\n 'G']),\n('B',\n ['H']),\n('C',\n ['I'])\n]\n\nresult = []\n\nfor x in data:\n for y in x[1]:\n result.append((x[0], y))\n\n", "(Side comment) Why on earth did you indent like that? Isn't the following more readable?\na = [\n('A', ['D', 'E', 'F', 'G']),\n('B', ['H']),\n('C', ['I'])\n]\n\n" ]
[ 10, 4, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001316319_python.txt
Q: Should I use "from package import utils, settings" or "from . import utils, settings" I'm developing a Python application; it has all its code in one package and runs inside this of course. The application's Python package is of no interest from the interpreter to the user, it's simply a GUI application. The question is, which style is preferred when importing modules inside the application package from application import settings, utils or from . import settings, utils That is I can either specify the name as it is (here 'application') or I can say "current package" by using "." This is a Free software package so the possibility exists that someone wants to make a fork of my application and change its name. In that case, alternative 1 is a slight nuisance. Still, I use style 1 all the time (although early code uses style 2 in some places), since style 1 looks much better. Are there any arguments for my style (1) that I have missed? Or is it stupid not to go with style 2? A: The Python Style Guide recommends explicitly against relative imports (the . style): Relative imports for intra-package imports are highly discouraged. Always use the absolute package path for all imports. Even now that PEP 328 [7] is fully implemented in Python 2.5, its style of explicit relative imports is actively discouraged; absolute imports are more portable and usually more readable. I tend to agree. Relative imports mean the same module is imported in different ways in different files, and requires that I remember what I'm looking at when reading and writing. Not really worth it, and a rename can be done with sed. Besides the issue of renaming, the only problem with absolute imports is that import foo might mean the top-level module foo or a submodule foo beneath the current module. If this is a problem, you can use from __future__ import absolute_import; this is standard in Python 3.
Should I use "from package import utils, settings" or "from . import utils, settings"
I'm developing a Python application; it has all its code in one package and runs inside this of course. The application's Python package is of no interest from the interpreter to the user, it's simply a GUI application. The question is, which style is preferred when importing modules inside the application package from application import settings, utils or from . import settings, utils That is I can either specify the name as it is (here 'application') or I can say "current package" by using "." This is a Free software package so the possibility exists that someone wants to make a fork of my application and change its name. In that case, alternative 1 is a slight nuisance. Still, I use style 1 all the time (although early code uses style 2 in some places), since style 1 looks much better. Are there any arguments for my style (1) that I have missed? Or is it stupid not to go with style 2?
[ "The Python Style Guide recommends explicitly against relative imports (the . style):\n\nRelative imports for intra-package imports are highly discouraged.\n Always use the absolute package path for all imports.\n Even now that PEP 328 [7] is fully implemented in Python 2.5,\n its style of explicit relative imports is actively discouraged;\n absolute imports are more portable and usually more readable.\n\nI tend to agree. Relative imports mean the same module is imported in different ways in different files, and requires that I remember what I'm looking at when reading and writing. Not really worth it, and a rename can be done with sed.\nBesides the issue of renaming, the only problem with absolute imports is that import foo might mean the top-level module foo or a submodule foo beneath the current module. If this is a problem, you can use from __future__ import absolute_import; this is standard in Python 3.\n" ]
[ 10 ]
[]
[]
[ "python" ]
stackoverflow_0001317624_python.txt
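A sketch of how the two styles look side by side inside a module of the package from the question, say application/gui.py (a placeholder module name); the __future__ line makes the absolute form unambiguous on Python 2.5 and up:

# application/gui.py
from __future__ import absolute_import

# Style 1: absolute, as the style guide quoted above recommends.
from application import settings, utils

# Style 2: explicit relative, which survives renaming the package.
# from . import settings, utils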
Q: Amazon S3 Python Bulk File Transfer through Python I want to transfer files in around 1000 directories to an Amazon S3 bucket using Python's S3 package. How could I do it? A: I like boto, http://code.google.com/p/boto/
Amazon S3 Python Bulk File Transfer through Python
I want to transfer files in around 1000 directories to an Amazon S3 bucket using Python's S3 package. How could I do it?
[ "I like boto,\nhttp://code.google.com/p/boto/\n" ]
[ 3 ]
[]
[]
[ "amazon_s3", "python" ]
stackoverflow_0001315660_amazon_s3_python.txt
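A sketch of a bulk upload with boto, walking the local tree and mirroring it into the bucket; the credentials, bucket name, and root path are placeholders:

import os
import boto
from boto.s3.key import Key

conn = boto.connect_s3('ACCESS_KEY', 'SECRET_KEY')
bucket = conn.get_bucket('my-bucket')

root = '/path/to/directories'
for dirpath, dirnames, filenames in os.walk(root):
    for filename in filenames:
        local_path = os.path.join(dirpath, filename)
        key = Key(bucket)
        # Use the path relative to root as the S3 key name.
        key.key = os.path.relpath(local_path, root)
        key.set_contents_from_filename(local_path)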
Q: How can I hide incompatible code from older Python versions? I'm writing unit tests for a function that takes both an *args and a **kwargs argument. A reasonable use-case for this function is using keyword arguments after the *args argment, i.e. of the form def f(a, *b, **c): print a, b, c f(1, *(2, 3, 4), keyword=13) Now this only became legal in Python 2.6; in earlier versions the above line is a syntax error and so won't even compile to byte-code. My question is: How can I test the functionality provided in the newer Python version and still have the tests run for older Python versions? I should point out that the function itself works fine for earlier Python versions, it is only some invocations that are syntax errors before Python 2.6. The various methods I've seen for checking the Python version don't work for this as it doesn't get past the compilation stage. I would prefer not to have to split the tests into multiple files if at all possible. A: I don't think you should be testing whether Python works correctly; instead, focus on testing your own code. In doing so, it is perfectly possible to write the specific invocation in a way that works for all Python versions, namely: f(1, *(2,3,4), **{'keyword':13}) A: One approach might be to use eval() or exec along with a test for the current version of Python. This will defer compilation to runtime, where you can control whether compilation actually happens or not. A: Why do you want to use such syntax? I mean, this 2.6 feature does not bring any real advantage other than shortcut. a = [2,3,4] a.insert(0, 1) kw = {'keyword'='test'} f(*a, **kw)
How can I hide incompatible code from older Python versions?
I'm writing unit tests for a function that takes both an *args and a **kwargs argument. A reasonable use-case for this function is using keyword arguments after the *args argment, i.e. of the form def f(a, *b, **c): print a, b, c f(1, *(2, 3, 4), keyword=13) Now this only became legal in Python 2.6; in earlier versions the above line is a syntax error and so won't even compile to byte-code. My question is: How can I test the functionality provided in the newer Python version and still have the tests run for older Python versions? I should point out that the function itself works fine for earlier Python versions, it is only some invocations that are syntax errors before Python 2.6. The various methods I've seen for checking the Python version don't work for this as it doesn't get past the compilation stage. I would prefer not to have to split the tests into multiple files if at all possible.
[ "I don't think you should be testing whether Python works correctly; instead, focus on testing your own code. In doing so, it is perfectly possible to write the specific invocation in a way that works for all Python versions, namely:\nf(1, *(2,3,4), **{'keyword':13})\n\n", "One approach might be to use eval() or exec along with a test for the current version of Python. This will defer compilation to runtime, where you can control whether compilation actually happens or not.\n", "Why do you want to use such syntax? I mean, this 2.6 feature does not bring any real advantage other than shortcut.\na = [2,3,4]\na.insert(0, 1)\nkw = {'keyword'='test'}\nf(*a, **kw)\n\n" ]
[ 11, 2, 1 ]
[]
[]
[ "python", "unit_testing" ]
stackoverflow_0001317946_python_unit_testing.txt
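If you do want the 2.6-only spelling exercised in the same test file, the compile step can be deferred as the second answer suggests, so older interpreters never parse it; f is assumed to be the function under test:

import sys

def test_keyword_after_star_args():
    if sys.version_info >= (2, 6):
        # Compiled only at runtime, so a 2.5 interpreter never sees it.
        exec "f(1, *(2, 3, 4), keyword=13)"
    else:
        # Equivalent call that parses on any version.
        f(1, *(2, 3, 4), **{'keyword': 13})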
Q: Dealing with dynamic urls Let's say my main controller 'hotels' has a pattern for the url, such as: /hotels/colorado/aspen/hotel-name/ How should I program my controller (keep in mind I'm still learning MVC) to handle this variable? I know that I probably have to check if anything after /hotels/ is set, otherwise show the default hotels page. If a state is set, show the state page, and so forth with the city and hotel name. class hotelController { function state() { } function city() { } function hotel() { } } Should I have 3 separate methods for each of those? Any advice is appreciated. A: Usually this is solved with Object Dispatch. You can also create nested Controllers to handle this. An advantage is that you can follow a major OOP principle, namely encapsulation, as you group all functionality that only concerns Hotels generally in the Hotel controller (for example adding a new one). Another advantage is that you don't have to check what is set after /hotels/ for example. It will only be dispatched to a new controller if there is something left to dispatch, i.e. if the current controller wasn't able to handle the entire request. This isn't really specific to a certain framework, but it is fully implemented in Pylons and Turbogears 2.0. (For more details you may refer to http://turbogears.org/2.0/docs/main/TGControllers.html#the-lookup-method ) class HotelController(Controller): """ Controller to handle requests to Hotels """ def index(self): """ Handle the index page here """ pass def addNewHotel(self): """ Register a new hotel here """ pass def lookup(self, state_name, *remainder): """ Read the state, create a new StateController and dispatch """ state_dispatch = StateController(state_name) return state_dispatch, remainder class StateController(object): """ Controller used to dispatch """ def __init__(self, state_name): # do your work on the state here pass def create(self, state_name): """ Create a new state here """ def lookup(self, city_name, *remainder): """ keep on dispatching to other controllers """ city_dispatch = CityController(city_name) return city_dispatch, remainder A: It's perfectly valid to have separate methods to get the state, city and hotel name. An alternative, if your templating language supports it, is to have a hotel_info() method that returns a dictionary, so that in the template you do "info/state", "info/city", etc. I do however think you should look into an MVC framework, because otherwise you'll just end up writing your own, which is pointless. These are the ones I have looked at, they are all good: Pylons: http://pylonshq.com/ BFG: http://bfg.repoze.org/ Bobo: http://bobo.digicool.com/ There are tons more just for Python.
Dealing with dynamic urls
Let's say my main controller 'hotels' has a pattern for the url, such as: /hotels/colorado/aspen/hotel-name/ How should I program my controller (keep in mind I'm still learning MVC) to handle this variable? I know that I probably have to check if anything after /hotels/ is set, otherwise show the default hotels page. If a state is set, show the state page, and so forth with the city and hotel name. class hotelController { function state() { } function city() { } function hotel() { } } Should I have 3 separate methods for each of those? Any advice is appreciated.
[ "Usually this is solved with Object Dispatch. You can also create nested Controllers to handle this. An advantage is, that you can follow a major OOP principle, namely encapsulation, as you group all functionality that only concerns Hotels generally in the Hotel controller (for example adding a new one)\nAnother advantage is, you dont have to check what is set after /hotels/ for example. It will only be dispatched to a new controller if there is something left to dispatch i.e. if the current controller wasnt able to handle the entire request.\nThis isnt really specific to a certain framework, but it is fully implemented in Pylons and Turbogears 2.0. (For more details you may refer to http://turbogears.org/2.0/docs/main/TGControllers.html#the-lookup-method )\nclass HotelController(Controller):\n \"\"\" Controller to handle requests to Hotels \"\"\"\n\n def index(self):\n \"\"\" Handle the index page here \"\"\"\n pass\n\n def addNewHotel(self):\n \"\"\" Register a new hotel here \"\"\"\n pass\n\n def lookup(self, state_name, *remainder):\n \"\"\" Read the state, create a new StateController and dispatch \"\"\"\n state_dispatch = StateController(state_name)\n return state_dispatch, remainder\n\nclass StateController(object):\n \"\"\" Controller used to dispatch \"\"\"\n\n def __init__(self, state_name):\n # do your work on the state here\n pass\n\n def create(self, state_name):\n \"\"\" Create a new state here \"\"\"\n\n def lookup(self, city_name, *remainder):\n \"\"\" keep on dispatching to other controllers \"\"\"\n city_dispatch = CityController(city_name)\n return city_dispatch, remainder\n\n", "It's perfectly valid to have separate methods to get the state, city and hotel name.\nAn alternative, if your templating language support it, is to have a hotel_info() method that returns a dictionary, so that you in the template do \"info/state\", info/city\", etc.\nI do however think you should look into an MVC framework, because otherwise you'll just end up writing your own, which is pointless.\nThese are the ones I have looked at, they are all good:\n\nPylons: http://pylonshq.com/\nBFG: http://bfg.repoze.org/\nBobo: http://bobo.digicool.com/\n\nThere are tons more just for Python.\n" ]
[ 1, 0 ]
[]
[]
[ "php", "python", "url_routing" ]
stackoverflow_0001317541_php_python_url_routing.txt
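For a framework-less version of the same idea, a sketch that splits the path and falls through level by level; the class and handler methods are placeholders standing in for the question's controller:

class HotelController:
    def dispatch(self, path):
        # path like "/hotels/colorado/aspen/hotel-name/"
        parts = [p for p in path.split('/') if p][1:]  # drop "hotels"
        if not parts:
            return self.index()
        if len(parts) == 1:
            return self.state(parts[0])
        if len(parts) == 2:
            return self.city(parts[0], parts[1])
        return self.hotel(parts[0], parts[1], parts[2])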
Q: Parallel Python: What is a callback? Parallel Python has something in the submit function called a callback (documentation); however, it doesn't seem to be explained too well. I posted on their forum a couple of days ago and I've not received a response. Would someone explain what a callback is and what it's used for? A: A callback is a function provided by the consumer of an API that the API can then turn around and invoke (calling you back). If I set up a Dr.'s appointment, I can give them my phone number, so they can call me the day before to confirm the appointment. A callback is like that, except instead of just being a phone number, it can be arbitrary instructions like "send me an email at this address, and also call my secretary and have her put it in my calendar." Callbacks are often used in situations where an action is asynchronous. If you need to call a function, and immediately continue working, you can't sit there and wait for its return value to let you know what happened, so you provide a callback. When the function has completely finished its asynchronous work it will then invoke your callback with some predetermined arguments (usually some you supply, and some about the status and result of the asynchronous action you requested). If the Dr. is out of the office, or they are still working on the schedule, rather than having me wait on hold until he gets back, which could be several hours, we hang up, and once the appointment has been scheduled, they call me. In this specific case, Parallel Python's submit function will invoke your callback with any arguments you supply and the result of func, once func has finished executing. A: The relevant spot in the docs: callback - callback function which will be called with argument list equal to callbackargs+(result,) as soon as calculation is done callbackargs - additional arguments for callback function So, if you want some code to be executed as soon as the result is ready, you put that code into a function and pass that function as the callback argument. If you don't need other arguments, it will be just, e.g.: def itsdone(result): print "Done! result=%r" % (result,) ... submit(..., callback=itsdone) For more on the callback pattern in Python, see e.g. my presentation here. A: Looking at the link, it just looks like a hook which is called. callback - callback function which will be called with argument list equal to callbackargs+(result,) as soon as calculation is done The "as soon as calculation is done" bit seems ambiguous. The point of this thing, as far as I can see, is that the submit() call distributes work to other servers and then returns. Because the finishing is asynchronous rather than blocking, it allows you to provide a function which is called when some unit of work finishes. If you do: submit( ..., callback=work_finished, ... ) Then submit will ensure work_finished() is called when the unit of distributed work is completed on the target server. When you call submit() you can provide a callback which is called in the same runtime as the caller of submit() ... and it is called after the distribution of the workload function is complete. Kind of like "call foo(x,y) when you have done some stuff in submit()" But yeah, the documentation could be better. Have a gander at the Parallel Python source and see at which point the callback is called in submit() A: A callback is a function you define that's later called by a function you call. As an example, consider how AJAX works: you write code that calls a back-end server function. At some point in the future, it returns from that function (the "A" stands for Asynchronous, which is what the "Parallel" in "Parallel Python" is all about). Now - because your code calls the code on the server, you want it to tell you when it's done, and you want to do something with its results. It does so by calling your callback function. When the called function completes, the standard way for it to tell you it's done is for you to tell it to call a function in your code. That's the callback function, and its job is to handle the results/output from the lower-level function you've called. A: A callback is simply a function. In Python, functions are just objects, and so the name of a function can be used as a variable, like so: def func(): ... something(func) Note that many functions which accept a callback as an argument usually require that the callback accept certain arguments. In this case, the callback function will need to accept a list of arguments specified in callbackargs. I'm not familiar with Parallel Python so I don't know exactly what it wants.
Parallel Python: What is a callback?
Parallel Python has something in the submit function called a callback (documentation); however, it doesn't seem to be explained too well. I posted on their forum a couple of days ago and I've not received a response. Would someone explain what a callback is and what it's used for?
[ "A callback is a function provided by the consumer of an API that the API can then turn around and invoke (calling you back). If I setup a Dr.'s appointment, I can give them my phone number, so they can call me the day before to confirm the appointment. A callback is like that, except instead of just being a phone number, it can be arbitrary instructions like \"send me an email at this address, and also call my secretary and have her put it in my calendar. \nCallbacks are often used in situations where an action is asynchronous. If you need to call a function, and immediately continue working, you can't sit there wait for its return value to let you know what happened, so you provide a callback. When the function is done completely its asynchronous work it will then invoke your callback with some predetermined arguments (usually some you supply, and some about the status and result of the asynchronous action you requested). \nIf the Dr. is out of the office, or they are still working on the schedule, rather than having me wait on hold until he gets back, which could be several hours, we hang up, and once the appointment has been scheduled, they call me.\nIn this specific case, Parallel Python's submit function will invoke your callback with any arguments you supply and the result of func, once func has finished executing.\n", "The relevant spot in the docs:\ncallback - callback function which will be called with argument \n list equal to callbackargs+(result,) \n as soon as calculation is done\ncallbackargs - additional arguments for callback function\n\nSo, if you want some code to be executed as soon as the result is ready, you put that code into a function and pass that function as the callback argument. If you don't need other arguments, it will be just, e.g.:\ndef itsdone(result):\n print \"Done! result=%r\" % (result,)\n...\nsubmit(..., callback=itsdone)\n\nFor more on the callback pattern in Python, see e.g. my presentation here.\n", "Looking at the link, just looks like a hook which is called.\n\ncallback - callback function which\n will be called with argument \n list equal to callbackargs+(result,) \n as soon as calculation is done\n\nThe \"as soon as calculation is done\" bit seems ambiguous. The point, as far as I can see of this thing is that the submit() call distributes work to other servers and then returns. Because the finishing is asynchronous, rather block, it allows you to provide a function which is called when some unit of work finishes. If you do:\nsubmit( ..., callback=work_finished, ... )\n\nThen submit will ensure work_finished() is called when the unit of distributed work is completed on the target server.\nWhen you call submit() you can provide a callback which is called in the same runtime as the caller of submit() ... and it is called after the distribution of the workload function is complete.\nKind of like \"call foo(x,y) when you have done some stuff in submit()\"\nBut yea, the documentation could be better. Have a ganders at the ppython source and see at which point the callback is called in submit()\n", "A callback is a function you define that's later called by a function you call.\nAs an example, consider how AJAX works: you write code that calls a back-end server function. At some point in the future, it returns from that function (the \"A\" stands for Asynchronous, which is what the \"Parallel\" in \"Parallel Python\" is all about). 
Now - because your code calls the code on the server, you want it to tell you when it's done, and you want to do something with its results. It does so by calling your callback function.\nWhen the called function completes, the standard way for it to tell you it's done is for you to tell it to call a function in your code. That's the callback function, and its job is to handle the results/output from the lower-level function you've called.\n", "A callback is simply a function. In Python, functions are just more objects, and so the name of a function can be used as a variable, like so:\ndef func():\n ...\n\nsomething(func)\n\nNote that many functions which accept a callback as an argument usually require that the callback accept certain arguments. In this case, the callback function will need to accept a list of arguments specified in callbackargs. I'm not familiar with Parallel Python so I don't know exactly what it wants.\n" ]
[ 254, 21, 4, 3, 2 ]
[]
[]
[ "callback", "parallel_python", "python" ]
stackoverflow_0001319074_callback_parallel_python_python.txt
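A self-contained sketch of the pattern every answer above is describing, with a fake submit that runs the work and then calls you back; the function names are made up, but the call shape mirrors the documented callbackargs+(result,):

def submit(func, args, callback, callbackargs=()):
    result = func(*args)  # pretend this ran on another server
    callback(*(callbackargs + (result,)))

def add(a, b):
    return a + b

def itsdone(label, result):
    print label, result

submit(add, (2, 3), itsdone, ('Done!',))  # prints: Done! 5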
Q: Starting an INDIVIDUAL instance of a subclass from asynchat So the situation I have is that I have loaded more than one class that I've made that subclasses from asynchat, but I only want one of them to run. Of course, this doesn't work out when I call asyncore.loop() as they all begin. Is there any way to make only one of them begin running? edit: I think it has something to do with the map parameter that can be passed to asyncore.loop but I can't get it working. edit2: I got it. Basically I did the following: asyncore.loop(map=my_instance._map) A: For all who were curious, I figured it out. If you pass your instance's _map to loop() it seems to only start the single instance. Example: my_asyncore_obj = SomeAsyncoreObj() asyncore.loop(map=my_asyncore_obj._map)
Starting an INDIVIDUAL instance of a subclass from asynchat
So the situation I have is that I have loaded more than one class that I've made that subclasses from asynchat, but I only want one of them to run. Of course, this doesn't work out when I call asyncore.loop() as they all begin. Is there any way to make only one of them begin running? edit: I think it has something to do with the map parameter that can be passed to asyncore.loop but I can't get it working. edit2: I got it. Basically I did the following: asyncore.loop(map=my_instance._map)
[ "For all who were curious, I figured it out. If you pass your instance's _map to loop() it seems to only start the single instance.\nExample:\nmy_asyncore_obj = SomeAsyncoreObj()\nasyncore.loop(map=my_asyncore_obj._map)\n\n" ]
[ 0 ]
[]
[]
[ "asyncore", "python" ]
stackoverflow_0001105814_asyncore_python.txt
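A variant of the same trick that avoids the private _map attribute: build the map yourself and hand it to both sides. This assumes SomeAsyncoreObj forwards the map argument through to asyncore.dispatcher, which accepts it:

import asyncore

my_map = {}
my_asyncore_obj = SomeAsyncoreObj(map=my_map)
# Only sockets registered in my_map take part in this loop.
asyncore.loop(map=my_map)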
Q: Python/GAE web request error handling I am developing an application on the Google App Engine using Python. I have a handler that can return a variety of outputs (html and json at the moment), and I am testing for obvious errors in the system based on invalid parameters sent to the request handler. However, what I am doing feels dirty (see below): class FeedHandler(webapp.RequestHandler): def get(self): app = self.request.get("id") name = self.request.get("name") output_type = self.request.get("output", default_value = "html") pretty = self.request.get("pretty", default_value = "") application = model.Application.GetByKey(app) if application is None: if output_type == "json": self.response.out.write(simplejson.dumps({ "errorCode" : "Application not found."})) self.set_status(404) return category = model.FeedCategory.GetByKey(application, name) if category is None: if output_type == "json": self.response.out.write(simplejson.dumps({ "errorCode" : "Category not found."})) self.set_status(404) return I am specifically handling cases per output type and also per "assert". I am keen to hear suggestions, patterns and examples on how to clear it up (I know it is going to be a nightmare to try and maintain what I am doing). I am toying with the idea of defining and raising custom exceptions and having a decorator that will automatically work out how to display the error messages - I think it is a good idea but I would love to get some feedback and suggestions based on how people have done this in the past. A: There's a couple of handy methods here. The first is self.error(code). By default this method simply sets the status code and clears the output buffer, but you can override it to output custom error pages depending on the error result. The second method is self.handle_exception(exception, debug_mode). This method is called by the webapp infrastructure if any of your get/post/etc methods raise an unhandled exception. By default it calls self.error(500) and logs the exception (as well as printing it to the output if debug mode is enabled). You can override this method to handle exceptions however you like. Here's an example to allow you to throw exceptions for various statuses: class StatusCodeException(Exception): def __init__(self, code): self.status_code = code class RedirectException(StatusCodeException): def __init__(self, location, status=302): super(RedirectException, self).__init__(status) self.location = location class ForbiddenException(StatusCodeException): def __init__(self): super(ForbiddenException, self).__init__(403) class ExtendedHandler(webapp.RequestHandler): def handle_exception(self, exception, debug_mode): if isinstance(exception, RedirectException): self.redirect(exception.location) else: self.error(exception.status_code) A: At least, you should refactor repetitive code such as: if application is None: if output_type == "json": self.response.out.write(simplejson.dumps({ "errorCode" : "Application not found."})) self.set_status(404) return into an auxiliary method: def _Mayerr(self, result, msg): if result is None: if output_type == 'json': self.response.out.write(simplejson.dumps({"errorCode": msg})) self.set_status(404) return True and call it e.g. as: if self._Mayerr(application, "Application not found."): return Beyond that, custom exceptions (and wrapping all your handlers with a decorator that catches the exceptions and gives proper error messages) is an excellent architecture; though it's more invasive (requires more rework of your code) than the simple refactoring I just mentioned, the extra investment right now may well be worthwhile in preventing repetitious and boilerplatey error handling spread all over your application level code!-)
Python/GAE web request error handling
I am developing an application on the Google App Engine using Python. I have a handler that can return a variety of outputs (html and json at the moment), and I am testing for obvious errors in the system based on invalid parameters sent to the request handler. However, what I am doing feels dirty (see below): class FeedHandler(webapp.RequestHandler): def get(self): app = self.request.get("id") name = self.request.get("name") output_type = self.request.get("output", default_value = "html") pretty = self.request.get("pretty", default_value = "") application = model.Application.GetByKey(app) if application is None: if output_type == "json": self.response.out.write(simplejson.dumps({ "errorCode" : "Application not found."})) self.set_status(404) return category = model.FeedCategory.GetByKey(application, name) if category is None: if output_type == "json": self.response.out.write(simplejson.dumps({ "errorCode" : "Category not found."})) self.set_status(404) return I am specifically handling cases per output type and also per "assert". I am keen to hear suggestions, patterns and examples on how to clear it up (I know it is going to be a nightmare to try and maintain what I am doing). I am toying with the idea of defining and raising custom exceptions and having a decorator that will automatically work out how to display the error messages - I think it is a good idea but I would love to get some feedback and suggestions based on how people have done this in the past.
[ "There's a couple of handy methods here. The first is self.error(code). By default this method simply sets the status code and clears the output buffer, but you can override it to output custom error pages depending on the error result.\nThe second method is self.handle__exception(exception, debug_mode). This method is called by the webapp infrastructure if any of your get/post/etc methods return an unhandled exception. By default it calls self.error(500) and logs the exception (as well as printing it to the output if debug mode is enabled). You can override this method to handle exceptions however you like. Here's an example to allow you to throw exceptions for various statuses:\nclass StatusCodeException(Exception):\n def __init__(self, code):\n self.status_code = code\n\nclass RedirectException(StatusCodeException):\n def __init__(self, location, status=302):\n super(RedirectException, self).__init__(status)\n self.location = location\n\nclass ForbiddenException(StatusCodeException):\n def __init__(self):\n super(ForbiddenException, self).__init__(403)\n\nclass ExtendedHandler(webapp.RequestHandler):\n def handle_exception(self, exception, debug_mode):\n if isinstance(exception, RedirectException):\n self.redirect(exception.location)\n else:\n self.error(exception.status_code)\n\n", "As least, you should refactor repetitive code such as:\nif application is None:\n if output_type == \"json\":\n self.response.out.write(simplejson.dumps({ \"errorCode\" : \"Application not found.\"}))\n self.set_status(404)\n return\n\ninto an auxiliary method:\ndef _Mayerr(self, result, msg):\n if result is None:\n if output_type == 'json':\n self.response.out.write(simplejson.dumps(\n {\"errorCode\": msg})\n self.set_status(404)\n return True\n\nand call it e.g. as:\nif self._Mayerr(application, \"Application not found.\"):\n return\n\nBeyond that, custom exceptions (and wrapping all your handlers with a decorator that catches the exceptions and gives proper error messages) is an excellent architecture, though it's more invasive (requires more rework of your code) than the simple refactoring I just mentioned, the extra investment right now may well be worthwhile in preventing repetitious and boilerplatey error handling spread all over your application level code!-)\n" ]
[ 9, 0 ]
[]
[]
[ "design_patterns", "error_handling", "google_app_engine", "python" ]
stackoverflow_0001318960_design_patterns_error_handling_google_app_engine_python.txt
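A sketch tying the two answers above together, for readers who want the decorator approach the asker floated. The names (NotFoundError, renders_errors) are illustrative, not part of webapp, and the sketch assumes simplejson is imported as in the question:

class NotFoundError(Exception):
    pass

def renders_errors(method):
    # Hypothetical decorator: turns NotFoundError into a 404 rendered
    # in whatever format the request asked for.
    def wrapper(self, *args, **kwargs):
        try:
            return method(self, *args, **kwargs)
        except NotFoundError, e:
            self.response.set_status(404)
            if self.request.get("output", "html") == "json":
                self.response.out.write(simplejson.dumps({"errorCode": str(e)}))
            else:
                self.response.out.write("<h1>%s</h1>" % str(e))
    return wrapper

class FeedHandler(webapp.RequestHandler):
    @renders_errors
    def get(self):
        application = model.Application.GetByKey(self.request.get("id"))
        if application is None:
            raise NotFoundError("Application not found.")
        # ... the rest of the handler can now raise freely instead of branching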
Q: Ctypes pro and con I have heard that Ctypes can cause crashes (or stop errors) in Python and Windows. Should I stay away from its use? Where did I hear? It was back when I tried to control various aspects of Windows, automation, that sort of thing. I hear of swig, but I see Ctypes more often than not. Any danger here? If so, what should I watch out for? I did search for ctype pro con python. A: In terms of robustness, I still think swig is somewhat superior to ctypes, because it's possible to have a C compiler check things more thoroughly for you; however, this is pretty moot by now (while it loomed larger in earlier ctypes versions), thanks to the argtypes feature @Mark already mentioned. However, there is no doubt that the runtime overhead IS much more significant for ctypes than for swig (and sip and boost python and other "wrapping" approaches): so, I think of ctypes as a convenient way to reach for a few functions within a DLL when the calls happen outside of a key bottleneck, not as a way to make large C libraries available to Python in performance-critical situations. For a nice middle way between the runtime performance of swig (&c) and the convenience of ctypes, with the added bonus of being able to add more code that can use a subset of Python syntax yet run at just about C-code speeds, also consider Cython -- a python-like language that compiles down to C and is specialized for writing Python-callable extensions and wrapping C libraries (including ones that may be available only as static libraries, not DLLs: ctypes wouldn't let you play with those;-). A: ctypes is a safe module to use, if you use it right. Some libraries provide a lower level access to things, some modules simply allow you to shoot yourself in the foot. So naturally some modules are more dangerous than others. This doesn't mean you should not use them though! You probably heard someone referring to something like this: #Crash python interpreter from ctypes import * def crashme(): c = c_char('x') p = pointer(c) i = 0 while True: p[i] = 'x' i += 1 The python interpreter crashing is different than just the python code itself erroring out with a runtime error. For example infinite recursion with a default recursion limit set would cause a runtime error but the python interpreter would still be alive afterwards. Another good example of this is with the sys module. You wouldn't stop using the sys module though because it can crash the python interpreter. import sys sys.setrecursionlimit(2**30) def f(x): f(x+1) #This will cause no more resources left and then crash the python interpreter f(1) There are many libraries as well that provide lower level access. For example, the gc module can be manipulated to give access to partially constructed objects, accessing fields of which can cause crashes. Reference and ideas taken from: Crashing Python A: ctypes can indeed cause crashes, if the C library you're using can already cause crashes. If anything, ctypes can help reduce crashes, because you can enforce runtime type safety with the argtypes property on C functions using ctypes. But if your C library is already stable and tested, there is absolutely no reason not to use ctypes if it performs what you need in terms of bringing C and Python together. A: I highly suggest you look into reading this book: Gray Hat Python: Python Programming for Hackers and Reverse Engineers The book functions as an in-depth tutorial for the ctypes library, and shows you how to run incredibly low-level code.
Ctypes pro and con
I have heard that Ctypes can cause crashes (or stop errors) in Python and Windows. Should I stay away from its use? Where did I hear? It was back when I tried to control various aspects of Windows, automation, that sort of thing. I hear of swig, but I see Ctypes more often than not. Any danger here? If so, what should I watch out for? I did search for ctype pro con python.
[ "In terms of robustness, I still think swig is somewhat superior to ctypes, because it's possible to have a C compiler check things more thoroughly for you; however, this is pretty moot by now (while it loomed larger in earlier ctypes versons), thanks to the argtypes feature @Mark already mentioned. However, there is no doubt that the runtime overhead IS much more significant for ctypes than for swig (and sip and boost python and other \"wrapping\" approaches): so, I think of ctypes as a convenient way to reach for a few functions within a DLL when the calls happen outside of a key bottleneck, not as a way to make large C libraries available to Python in performance-critical situations.\nFor a nice middle way between the runtime performance of swig (&c) and the convenience of ctypes, with the added bonus of being able to add more code that can use a subset of Python syntax yet run at just about C-code speeds, also consider Cython -- a python-like language that compiles down to C and is specialized for writing Python-callable extensions and wrapping C libraries (including ones that may be available only as static libraries, not DLLs: ctypes wouldn't let you play with those;-).\n", "ctypes is a safe module to use, if you use it right.\nSome libraries provide a lower level access to things, some modules simply allow you to shoot yourself in the foot. So naturally some modules are more dangerous than others. This doesn't mean you should not use them though!\nYou probably heard someone referring to something like this:\n#Crash python interpreter\nfrom ctypes import *\ndef crashme():\n c = c_char('x')\n p = pointer(c)\n i = 0\n while True:\n p[i] = 'x'\n i += 1\n\nThe python interpreter crashing is different than just the python code itself erroring out with a runtime error. For example infinite recursion with a default recursion limit set would cause a runtime error but the python interpreter would still be alive afterwards. \nAnother good example of this is with the sys module. You wouldn't stop using the sys module though because it can crash the python interpreter. \nimport sys\nsys.setrecursionlimit(2**30)\ndef f(x):\n f(x+1)\n\n#This will cause no more resources left and then crash the python interpreter\nf(1)\n\nThere are many libraries as well that provide lower level access. For example the The gc module can be manipulated to give access to partially constructed object, accessing fields of which can cause crashes.\nReference and ideas taken from: Crashing Python\n", "ctypes can indeed cause crashes, if the C library you're using can already cause crashes.\nIf anything, ctypes can help reduce crashes, because you can enforce runtime type safety with the argtypes property on C functions using ctypes.\nBut if your C library is already stable and tested, there is absolutely no reason not to use ctypes if it performs what you need in terms of bringing C and Python together.\n", "I highly suggest you look into reading this book:\nGray Hat Python: Python Programming for Hackers and Reverse Engineers\nThe book functions as an in-depth tutorial for the ctypes library, and shows you how to run incredibly low-level code\n" ]
[ 13, 6, 4, 2 ]
[]
[]
[ "ctypes", "python", "winapi" ]
stackoverflow_0001318736_ctypes_python_winapi.txt
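To make the argtypes point above concrete, here is a minimal sketch; it assumes a Linux box where the C math library is available as libm.so.6 (on Windows you would load a DLL instead):

from ctypes import CDLL, c_double

libm = CDLL("libm.so.6")  # assumed library path
libm.pow.argtypes = [c_double, c_double]
libm.pow.restype = c_double

print libm.pow(2.0, 10.0)  # 1024.0
# With argtypes set, a call like libm.pow("a", "b") raises ctypes.ArgumentError
# at the Python level instead of passing garbage into C.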
Q: Is this a good approach to avoid using SQLAlchemy/SQLObject? Rather than use an ORM, I am considering the following approach in Python and MySQL with no ORM (SQLObject/SQLAlchemy). I would like to get some feedback on whether this seems likely to have any negative long-term consequences since in the short-term view it seems fine from what I can tell. Rather than translate a row from the database into an object: each table is represented by a class a row is retrieved as a dict an object representing a cursor provides access to a table like so: cursor.mytable.get_by_ids(low, high) removing means setting the time_of_removal to the current time So essentially this does away with the need for an ORM since each table has a class to represent it and within that class, a separate dict represents each row. Type mapping is trivial because each dict (row) being a first class object in python/blub allows you to know the class of the object and, besides, the low-level database library in Python handles the conversion of types at the field level into their appropriate application-level types. If you see any potential problems with going down this road, please let me know. Thanks. A: That doesn't do away with the need for an ORM. That is an ORM. In which case, why reinvent the wheel? Is there a compelling reason you're trying to avoid using an established ORM? A: You will still be using SQLAlchemy. ResultProxy is actually a dictionary once you go for .fetchmany() or similar. Use SQLAlchemy as a tool that makes managing connections easier, as well as executing statements. Documentation is pretty much separated in sections, so you will be reading just the part that you need. A: web.py has a decent db abstraction too (not an ORM). Queries are written in SQL (not specific to any rdbms), but your code remains compatible with any of the supported dbs (sqlite, mysql, postgresql, and others). from http://webpy.org/cookbook/select: myvar = dict(name="Bob") results = db.select('mytable', myvar, where="name = $name")
Is this a good approach to avoid using SQLAlchemy/SQLObject?
Rather than use an ORM, I am considering the following approach in Python and MySQL with no ORM (SQLObject/SQLAlchemy). I would like to get some feedback on whether this seems likely to have any negative long-term consequences since in the short-term view it seems fine from what I can tell. Rather than translate a row from the database into an object: each table is represented by a class a row is retrieved as a dict an object representing a cursor provides access to a table like so: cursor.mytable.get_by_ids(low, high) removing means setting the time_of_removal to the current time So essentially this does away with the need for an ORM since each table has a class to represent it and within that class, a separate dict represents each row. Type mapping is trivial because each dict (row) being a first class object in python/blub allows you to know the class of the object and, besides, the low-level database library in Python handles the conversion of types at the field level into their appropriate application-level types. If you see any potential problems with going down this road, please let me know. Thanks.
[ "That doesn't do away with the need for an ORM. That is an ORM. In which case, why reinvent the wheel?\nIs there a compelling reason you're trying to avoid using an established ORM?\n", "You will still be using SQLAlchemy. ResultProxy is actually a dictionary once you go for .fetchmany() or similar.\nUse SQLAlchemy as a tool that makes managing connections easier, as well as executing statements. Documentation is pretty much separated in sections, so you will be reading just the part that you need.\n", "web.py has in a decent db abstraction too (not an ORM). \nQueries are written in SQL (not specific to any rdbms), but your code remains compatible with any of the supported dbs (sqlite, mysql, postresql, and others).\nfrom http://webpy.org/cookbook/select:\nmyvar = dict(name=\"Bob\")\nresults = db.select('mytable', myvar, where=\"name = $name\")\n\n" ]
[ 8, 2, 0 ]
[]
[]
[ "python", "sqlalchemy", "sqlobject" ]
stackoverflow_0001319585_python_sqlalchemy_sqlobject.txt
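For comparison with the rows-as-dicts design in the question, MySQLdb can already hand back each row as a dict; this sketch uses placeholder connection details:

import MySQLdb
import MySQLdb.cursors

conn = MySQLdb.connect(db="mydb", user="me", passwd="secret",
                       cursorclass=MySQLdb.cursors.DictCursor)
cur = conn.cursor()
cur.execute("SELECT * FROM mytable WHERE id BETWEEN %s AND %s", (1, 100))
for row in cur.fetchall():
    print row["id"]  # each row is a dict keyed by column name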
Q: what are the pros/cons of py2exe I'm looking for a simple script that will compile to an exe, and I found py2exe. Before I decide to work with it, what do you think are the pros and cons of the py2exe tool? A: Pros: Your app becomes standalone, can run on a PC without Python Cons: False sense of security, your app is still interpreted, it's just that the script is no longer visible but the byte code is and AFAIK it can be easily converted back to the source. Large application size, the simplest script packaged with py2exe becomes several megabytes in size. Potential problems, in certain cases (mostly if you use encodings) you need to retest your application as an exe and make sure everything works as expected, you may need to check in the code to find out if you are running inside py2exe and do something special. May not work if your application depends on certain third-party python modules. Check the py2exe homepage to find out more and how to work around some of these problems A: One con that I'm aware of: no support for Python 3.x. As far as I'm aware, there has been no work done on this (nothing in the SourceForge SVN repo anyway), and no plans for 3.x published on the py2exe site at this time. A: Look through the third-party libraries that you use. Some libraries (e.g. PIL) do tricks with conditional imports that make it hard for py2exe to bundle the right code. These issues can often be worked around, but a bit of googling up front might save you some headaches later.
what are the pros/cons of py2exe
I'm looking for a simple script that will compile to an exe, and I found py2exe. Before I decide to work with it, what do you think are the pros and cons of the py2exe tool?
[ "Pros:\n\nYour app becomes standalone, can run\non a PC without Python\n\nCons:\n\nFalse sense of security, your app is still interpreted, it's just that the script is no longer visible but the byte code is and AFAIK it can be easily converted back to the source.\nLarge application size, the simplest script packaged with py2exe becomes several megabytes in size.\nPotential problems, in certain cases(mostly if you use encodings) you need to retest your application as an exe and make sure everything works as expected, you may need to check in the code to find out if you are running inside py2exe and do something special.\nMay not work if your application depends on certain third-party python modules.\n\nCheck Py2exe homepage to find how to more and how to workaround some of these problems\n", "One con that I'm aware of: no support for Python 3.x. As far as I'm aware, there has been no work done on this (nothing in the SourceForge SVN repo anyway), and no plans for 3.x published on the py2exe site at this time.\n", "Look through the third-party libraries that you use. Some libraries (e.g. PIL) do tricks with conditional imports that make it hard for py2exe to bundle the right code. These issues can often be worked around, but a bit of googling up front might save you some headaches later.\n" ]
[ 10, 5, 2 ]
[]
[]
[ "py2exe", "python" ]
stackoverflow_0001318311_py2exe_python.txt
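For readers weighing the pros and cons above, the build script itself is tiny. A minimal setup.py looks like this ("myscript.py" is a placeholder for your entry-point script; run it with python setup.py py2exe):

# setup.py -- minimal py2exe build script
from distutils.core import setup
import py2exe  # importing this registers the py2exe command with distutils

setup(console=["myscript.py"])  # use windows=[...] instead for a GUI app without a console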
Q: What's a more elegant rephrasing of this cropping algorithm? (in Python) I want to crop a thumbnail image in my Django application, so that I get a square image that shows the center of the image. This is not very hard, I agree. I have already written some code that does exactly this, but somehow it lacks a certain ... elegance. I don't want to play code golf, but there must be a way to express this in a shorter and more pythonic way, I think. x = y = 200 # intended size image = Image.open(filename) width = image.size[0] height = image.size[1] if (width > height): crop_box = ( ((width - height)/2), 0, ((width - height)/2)+height, height ) image = image.crop(crop_box) elif (height > width): crop_box = ( 0, ((height - width)/2), width, ((height - width)/2)+width ) image = image.crop(crop_box) image.thumbnail([x, y], Image.ANTIALIAS) Do you have any ideas, SO? edit: explained x, y A: I think this should do. size = min(image.size) originX = image.size[0] / 2 - size / 2 originY = image.size[1] / 2 - size / 2 cropBox = (originX, originY, originX + size, originY + size) A: The fit() function in the PIL ImageOps module does what you want: ImageOps.fit(image, (min(*image.size),) * 2, Image.ANTIALIAS, 0, (.5, .5)) A: width, height = image.size if width > height: crop_box = # something 1 else: crop_box = # something 2 image = image.crop(crop_box) image.thumbnail([x, x], Image.ANTIALIAS) # explicitly show "square" thumbnail A: I want to do a content analysis of a JPEG image. I wish to take a JPEG image, say 251 x 261, and pass it through an algorithm to crop it to say 96 x 87. Can this program do that? I'd like to write an intelligent cropping algorithm, with a prompt to resize the image.
What's a more elegant rephrasing of this cropping algorithm? (in Python)
I want to crop a thumbnail image in my Django application, so that I get a square image that shows the center of the image. This is not very hard, I agree. I have already written some code that does exactly this, but somehow it lacks a certain ... elegance. I don't want to play code golf, but there must be a way to express this in a shorter and more pythonic way, I think. x = y = 200 # intended size image = Image.open(filename) width = image.size[0] height = image.size[1] if (width > height): crop_box = ( ((width - height)/2), 0, ((width - height)/2)+height, height ) image = image.crop(crop_box) elif (height > width): crop_box = ( 0, ((height - width)/2), width, ((height - width)/2)+width ) image = image.crop(crop_box) image.thumbnail([x, y], Image.ANTIALIAS) Do you have any ideas, SO? edit: explained x, y
[ "I think this should do.\nsize = min(image.Size)\n\noriginX = image.Size[0] / 2 - size / 2\noriginY = image.Size[1] / 2 - size / 2\n\ncropBox = (originX, originY, originX + size, originY + size)\n\n", "The fit() function in the PIL ImageOps module does what you want:\nImageOps.fit(image, (min(*image.size),) * 2, Image.ANTIALIAS, 0, (.5, .5))\n\n", "width, height = image.size\nif width > height:\n crop_box = # something 1\nelse:\n crop_box = # something 2\nimage = image.crop(crop_box)\nimage.thumbnail([x, x], Image.ANTIALIAS) # explicitly show \"square\" thumbnail\n\n", "I want to a content analysis of a jepg image. I wish to take a jpeg imafe say 251 x 261 and pass it through an algorithm to crop it to say 96 x 87. Can this program do that like t write an intelligent cropping algorithm, with a prompt to rezie the image.\n" ]
[ 9, 6, 1, 0 ]
[]
[]
[ "crop", "image", "python", "python_imaging_library" ]
stackoverflow_0000709388_crop_image_python_python_imaging_library.txt
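Putting the first answer's arithmetic and the question's thumbnail call together, a sketch of the whole operation might read:

import Image  # PIL

def square_thumbnail(filename, side=200):
    # Crop the largest centered square, then scale it down to side x side.
    image = Image.open(filename)
    size = min(image.size)
    left = (image.size[0] - size) / 2
    top = (image.size[1] - size) / 2
    image = image.crop((left, top, left + size, top + size))
    image.thumbnail((side, side), Image.ANTIALIAS)
    return image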
Q: How to submit data of a flash form? [python] I would like to know if it is possible to submit a flash form from python and, if it is, how? I have done form submitting from python before, but the forms were HTML not flash. I really have no idea on how to do this. In my research about this I kept getting 'Ming'. However, Ming is only to create .swf files and that's not what I intend to do. Any help on this is greatly appreciated. A: You can set the url attribute (I think it's url, please correct me if I'm wrong) on a Flash form control to a Python script - then it will pass it through HTTP POST like any normal HTML form. You've got nothing to be afraid of, it uses the same protocol to communicate, it's just a different submission process. A: For your flash app, there's no difference if the backend is python, php or anything, so you can follow a normal "php + flash contact form" guide and then build the backend using django or any other python web framework, receive the information from the http request (GET or POST, probably the last one) and do whatever you wanted to do with them. Notice the response from python to flash works the same as with php, it's just http content, so you can use XML or even better, JSON.
How to submit data of a flash form? [python]
I would like to know if it is possible to submit a flash form from python and, if it is, how? I have done form submitting from python before, but the forms were HTML not flash. I really have no idea on how to do this. In my research about this I kept getting 'Ming'. However, Ming is only to create .swf files and that's not what I intend to do. Any help on this is greatly appreciated.
[ "You can set the url attribute (I think it's url, please correct me if I'm wrong) on a Flash form control to a Python script - then it will pass it through HTTP POST like any normal HTML form.\nYou've got nothing to be afraid of, it uses the same protocol to communicate, it's just a different submission process.\n", "For your flash app, there's no difference if the backend is python, php or anything, so you can follow a normal \"php + flash contact form\" guide and then build the backend using django or any other python web framework, receive the information from the http request (GET or POST, probably the last one) and do whatever you wanted to do with them.\nNotice the response from python to flash works the same as with php, it's just http content, so you can use XML or even better, JSON.\n" ]
[ 1, 0 ]
[]
[]
[ "flash", "forms", "python" ]
stackoverflow_0001319895_flash_forms_python.txt
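A sketch of the Python side of such a Flash submission, as a plain CGI script; the field names here are assumptions about what the Flash form sends, and simplejson is assumed to be installed:

#!/usr/bin/python
import cgi
import simplejson

form = cgi.FieldStorage()
name = form.getfirst("name", "")
email = form.getfirst("email", "")

# Flash reads this like any other HTTP response; JSON keeps parsing simple.
print "Content-type: application/json\n"
print simplejson.dumps({"ok": True, "name": name, "email": email})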
Q: microcontroller output to python cgi script I bought this temperature sensor logger kit: http://quozl.netrek.org/ts/. It works great with the supplied C code, I like to use python because of its simplicity, so I wrote a script in python that displays the output from the microcontroller. I only have one temperature sensor hooked up to the kit. I want the temperature to be displayed on a web page, but can't seem to figure it out, I'm pretty sure it has something to do with the output from the micro having a \r\n DOS EOL character and Linux web servers do not interpret it properly. The book I have says "Depending on the web server you are using, you might need to make configuration changes to understand how to serve CGI files." I am using debian and apache2 and basic cgi scripts work fine. Here is my code for just displaying the sensor to the console (this works fine): import serial ser = serial.Serial('/dev/ttyS0', 2400) while 1: result = ser.readline() if result: print result Here is my test.cgi script that works: #!/usr/bin/python print "Content-type: text/html\n" print "<title>CGI Text</title>\n" print "<h1>cgi works!</h1>" Here is the cgi script I have started to display temp (doesn't work - 500 internal server error): #!/usr/bin/python import sys, serial sys.stderr = sys.stdout ser = serial.Serial('/dev/ttyS0', 2400) print "Content-type: text/html\n" print """ <title>Real Time Temperature</title> <h1>Real Time Temperature:</h1> """ #result = ser.readline() #if result: print ser.readline() If i run python rtt.cgi in the console it outputs the correct html and temperature, I know this will not be real time and that the page will have to be reloaded every time that the user wants to see the temperature, but that stuff is coming in the future.. From my apache2 error log it says: malformed header from script. Bad header= File "/usr/lib/cgi-bin/rtt.c: rtt.cgi A: I'm guessing that the execution context under which your CGI is running is unable to complete the read() from the serial port. Incidentally the Python standard libraries have MUCH better ways for writing CGI scripts than what you're doing here; and even the basic string handling offers a better way to interpolate your results (assuming your code has the necessary permissions to read() them) into the HTML. At least I'd recommend something like: #!/usr/bin/python import sys, serial sys.stderr = sys.stdout ser = serial.Serial('/dev/ttyS0', 2400) html = """Content-type: text/html <html><head><title>Real Time Temperature</title></head><body> <h1>Real Time Temperature:</h1> <p>%s</p> </body></html> """ % ser.readline() # should be cgi.escape(ser.readline())! ser.close() sys.exit(0) Notice we just interpolate the results of ser.readline() into our string using the % string operator. (Incidentally your HTML was missing <html>, <head>, <body>, and <p> (paragraph) tags). There are still problems with this. For example we really should at least import cgi and wrap the foreign data in that to ensure that HTML entities are properly substituted for any reserved characters, etc. I'd suggest further reading: [Python Docs]: http://docs.python.org/library/cgi.html A: one more time: # Added to allow cgi-bin to execute cgi, python and perl scripts ScriptAlias /cgi-bin/ /var/www/cgi-bin/ AddHandler cgi-script .cgi .py .pl <Directory /var/www> Options +Execcgi AddHandler cgi-script .cgi .py .pl </Directory> A: Michael, It looks like the issue is definitely permissions, however, you shouldn't try to make your script have the permission of /dev/ttyS0. What you will probably need to do is spawn another process where the first thing you do is change your group to the group of the /dev/ttyS0 device. On my box that's 'dialout'; yours may be different. You'll need to import the os package, look in the docs for the Process Parameters, on that page you will find some functions that allow you to change your ownership. You will also need to use one of the functions in Process Management also in the os package, these functions spawn processes, but you will need to choose one that will return the data from the spawned process. The subprocess package may be better for this. The reason you need to spawn another process is that the CGI script needs to run under the Apache process and the spawned process needs to access the serial port. If I get a chance in the next few days I'll try to put something together for you, but give it a try, don't wait for me. Also, one other thing: all HTTP headers need to end in two CRLF sequences. So your header needs to be: print "Content-type: text/html\r\n\r\n" If you don't do this your browser may not know when the header ends and the entity data begins. Read RFC-2616 ~Carl
microcontroller output to python cgi script
I bought this temperature sensor logger kit: http://quozl.netrek.org/ts/. It works great with the supplied C code, I like to use python because of its simplicity, so I wrote a script in python that displays the output from the microcontroller. I only have one temperature sensor hooked up to the kit. I want the temperature to be displayed on a web page, but can't seem to figure it out, I'm pretty sure it has something to do with the output from the micro having a \r\n DOS EOL character and linux web servers do not interpret it properly. The book I have says "Depending on the web server you are using, you might need to make configuration changes to understand how to serve CGI files." I am using debian and apache2 and basic cgi scripts work fine. Here is my code for just displaying the sensor to the console (this works fine): import serial ser = serial.Serial('/dev/ttyS0', 2400) while 1: result = ser.readline() if result: print result Here is my test.cgi script that works: #!/usr/bin/python print "Content-type: text/html\n" print "<title>CGI Text</title>\n" print "<h1>cgi works!</h1>" Here is the cgi script I have started to display temp (doesn't work - 500 internal server error): #!/usr/bin/python import sys, serial sys.stderr = sys.stdout ser = serial.Serial('/dev/ttyS0', 2400) print "Content-type: text/html\n" print """ <title>Real Time Temperature</title> <h1>Real Time Temperature:</h1> """ #result = ser.readline() #if result: print ser.readline() If i run python rtt.cgi in the console it outputs the correct html and temperature, I know this will not be real time and that the page will have to be reloaded every time that the user wants to see the temperature, but that stuff is coming in the future.. From my apache2 error log it says: malformed header from script. Bad header= File "/usr/lib/cgi-bin/rtt.c: rtt.cgi
[ "I'm guessing that the execution context under which your CGI is running is unable to complete the read() from the serial port.\nIncidentally the Python standard libraries have MUCH better ways for writing CGI scripts than what you're doing here; and even the basic string handling offers a better way to interpolate your results (assuming you code has the necessary permissions to read() them) into the HTML.\nAt least I'd recommend something like:\n#!/usr/bin/python\nimport sys, serial\n\nsys.stderr = sys.stdout\nser = serial.Serial('/dev/ttyS0', 2400)\n\nhtml = \"\"\"Content-type: text/html\n\n<html><head><title>Real Time Temperature</title></head><body>\n<h1>Real Time Temperature:</h1>\n<p>%s</p>\n</body></html>\n\"\"\" % ser.readline() # should be cgi.escape(ser.readline())!\nser.close()\nsys.exit(0)\n\nNotice we just interpolate the results of ser.readline() into our string using the\n% string operator. (Incidentally your HTML was missing <html>, <head>, <body>, and <p> (paragraph) tags).\nThere are still problems with this. For example we really should at least import cgi wrap the foreign data in that to ensure that HTML entities are properly substituted for any reserved characters, etc).\nI'd suggest further reading: [Python Docs]: http://docs.python.org/library/cgi.html\n", "one more time:\n# Added to allow cgi-bin to execute cgi, python and perl scripts\nScriptAlias /cgi-bin/ /var/www/cgi-bin/\nAddHandler cgi-script .cgi .py .pl\n<Directory /var/www>\nOptions +Execcgi\nAddHandler cgi-script .cgi .py .pl\n</Directory>\n\n", "Michael,\nIt looks like the issue is definitely permissions, however, you shouldn't try to make your script have the permission of /dev/ttyS0. What you will probably need to do is spawn another process where the first thing you do is change your group to the group of the /dev/ttyS0 device. On my box that's 'dialout' you're may be different.\nYou'll need to import the os package, look in the docs for the Process Parameters, on that page you will find some functions that allow you to change your ownership. You will also need to use one of the functions in Process Management also in the os package, these functions spawn processes, but you will need to choose one that will return the data from the spawned process. The subprocess package may be better for this.\nThe reason you need to spawn another process is that the CGI script need to run under the Apache process and the spawn process needs to access the serial port.\nIf I get a chance in the next few days I'll try to put something together for you, but give it a try, don't wait for me.\nAlso one other thing all HTTP headers need to end in two CRLF sequences. So your header needs to be:\nprint \"Content-type: text/html\\r\\n\\r\\n\"\nIf you don't do this your browser may not know when the header ends and the entity data begins. Read RFC-2616\n~Carl\n" ]
[ 2, 0, 0 ]
[]
[]
[ "cgi", "python", "serial_port" ]
stackoverflow_0001291624_cgi_python_serial_port.txt
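A sketch of the first answer's cgi.escape() suggestion folded into the asker's script; everything else is as in the question:

#!/usr/bin/python
import cgi
import serial

ser = serial.Serial('/dev/ttyS0', 2400)
reading = cgi.escape(ser.readline().strip())  # escape &, < and > before embedding in HTML
ser.close()

print "Content-type: text/html\n"
print "<html><head><title>Real Time Temperature</title></head><body>"
print "<h1>Real Time Temperature:</h1><p>%s</p></body></html>" % reading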
Q: Shorter, more pythonic way of writing an if statement I have this bc = 'off' if c.page == 'blog': bc = 'on' print(bc) Is there a more pythonic (and/or shorter) way of writing this in Python? A: Shortest one should be: bc = 'on' if c.page=='blog' else 'off' Generally this might look a bit confusing, so you should only use it when it is clear what it means. Don't use it for big boolean clauses, since it begins to look ugly fast. A: This is: definitely shorter arguably Pythonic (pre-Python 2.5, which introduced the controversial X if Z else Y syntax) questionably readable. With those caveats in mind, here it goes: bc = ("off","on")[c.page=="blog"] EDIT: As per request, the generalized form is: result = (on_false, on_true)[condition] Explanation: condition can be anything that evaluates to a Boolean. It is then treated as an integer since it is used to index the tuple: False == 0, True == 1, which then selects the right item from the tuple. A: Well, not being a python guy please take this with a huge grain of salt, but having written (and, with more difficulty, read) a lot of clever code over the years, I find myself with a strong preference now for readable code. I got the gist of what your original code was doing even though I'm a nobody as a Python guy. To be sure, you could hide it and maybe impress a Python wonk or two, but why? A: You could use an inline if statement: >>> cpage = 'blog' >>> bc = 'on' if cpage == 'blog' else 'off' >>> bc 'on' >>> cpage = 'asdf' >>> bc = 'on' if cpage == 'blog' else 'off' >>> bc 'off' There's a bit of a writeup on that feature at this blog, and the relevant PEP is PEP308. The inline if statement was introduced in Python 2.5. This one is less pythonic, but you can use and/or in this fashion: >>> cpage = 'asdf' >>> bc = (cpage == 'blog') and 'on' or 'off' >>> bc 'off' >>> cpage = 'blog' >>> bc = (cpage == 'blog') and 'on' or 'off' >>> bc 'on' This one is used more often in lambda statements than on a line by itself, but the form A and B or C is similar to if A: return B else: return C The major caveat to this method (as PEP 308 mentions) is that it returns C when B is false. A: Another possibility is to use a dict if you can compute the values outside of the function that accesses them (i.e. the values are static, which also addresses the evaluation issue in scrible's answer's comments). want_bc = {True: "on", False: "off"} # ... bc = want_bc[c.page == "blog"] I prefer this and/or the tuple indexing solutions under the general rubric of preferring computation to testing. A: You can use, a = b if c else d but if you are using a python version prior to 2.5, bc = c.page == "blog" and "on" or "off" can do the trick also.
Shorter, more pythonic way of writing an if statement
I have this bc = 'off' if c.page == 'blog': bc = 'on' print(bc) Is there a more pythonic (and/or shorter) way of writing this in Python?
[ "Shortest one should be:\nbc = 'on' if c.page=='blog' else 'off'\n\nGenerally this might look a bit confusing, so you should only use it when it is clear what it means. Don't use it for big boolean clauses, since it begins to look ugly fast.\n", "This is:\n\ndefinitely shorter\narguably Pythonic (pre-Python 2.5, which introduced the controversial X if Z else Y syntax)\nquestionably readable. With those caveats in mind, here it goes:\nbc = (\"off\",\"on\")[c.page==\"blog\"]\n\n\nEDIT: As per request, the generalized form is:\n result = (on_false, on_true)[condition]\n\nExplanation: condition can be anything that evaluates to a Boolean. It is then treated as an integer since it is used to index the tuple: False == 0, True == 1, which then selects the right item from the tuple.\n", "Well, not being a python guy please take this with a huge grain of salt, but having written (and, with more difficulty, read) a lot of clever code over the years, I find myself with a strong preference now for readable code. I got the gist of what your original code was doing even though I'm a nobody as a Python guy. To be sure, you could hide it and maybe impress a Python wonk or two, but why?\n", "You could use an inline if statement:\n>>> cpage = 'blog'\n>>> bc = 'on' if cpage == 'blog' else 'off'\n>>> bc\n'on'\n>>> cpage = 'asdf'\n>>> bc = 'on' if cpage == 'blog' else 'off'\n>>> bc\n'off'\n\nThere's a bit of a writeup on that feature at this blog, and the relevant PEP is PEP308. The inline if statement was introduced in Python 2.5.\nThis one is less pythonic, but you can use and/or in this fashion:\n>>> cpage = 'asdf'\n>>> bc = (cpage == 'blog') and 'on' or 'off'\n>>> bc\n'off'\n>>> cpage = 'blog'\n>>> bc = (cpage == 'blog') and 'on' or 'off'\n>>> bc\n'on'\n\nThis one is used more often in lambda statements than on a line by itself, but the form\n A and B or C\n\nis similar to\n if A:\n return B\n else:\n return C\n\nThe major caveat to this method (as PEP 308 mentions) is that it returns C when B is false.\n", "Another possibility is to use a dict if you can compute the values outside of the function that accesses them (i.e. the values are static, which also addresses the evaluation issue in scrible's answer's comments).\nwant_bc = {True: \"on\", False: \"off\"}\n# ...\nbc = want_bc[c.page == \"blog\"]\n\nI prefer this and/or the tuple indexing solutions under the general rubric of preferring computation to testing.\n", "You can use,\na = b if c else d \n\nbut if you are using a python version prior to 2.5,\nbc = c.page == \"blog\" and \"on\" or \"off\"\n\ncan do the trick also.\n" ]
[ 103, 65, 32, 15, 4, 3 ]
[]
[]
[ "if_statement", "python" ]
stackoverflow_0001319214_if_statement_python.txt
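One idiom the answers above skip: when the branch is really a lookup, a dict with a default reads well too, and needs no conditional at all (works the same on pre-2.5 Pythons):

bc = {'blog': 'on'}.get(c.page, 'off')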
Q: GTK: Modify bg color of a CheckButton I tried the following, yet the button still has a white background: self.button = gtk.CheckButton() self.button.modify_fg(gtk.STATE_NORMAL, gtk.gdk.Color(65535,0,0)) self.button.modify_bg(gtk.STATE_NORMAL, gtk.gdk.Color(65535,0,0)) self.button.modify_fg(gtk.STATE_ACTIVE, gtk.gdk.Color(65535,0,0)) self.button.modify_bg(gtk.STATE_ACTIVE, gtk.gdk.Color(65535,0,0)) self.button.modify_fg(gtk.STATE_PRELIGHT, gtk.gdk.Color(65535,0,0)) self.button.modify_bg(gtk.STATE_PRELIGHT, gtk.gdk.Color(65535,0,0)) self.button.modify_fg(gtk.STATE_SELECTED, gtk.gdk.Color(65535,0,0)) self.button.modify_bg(gtk.STATE_SELECTED, gtk.gdk.Color(65535,0,0)) self.button.modify_fg(gtk.STATE_INSENSITIVE, gtk.gdk.Color(65535,0,0)) self.button.modify_bg(gtk.STATE_INSENSITIVE, gtk.gdk.Color(65535,0,0)) I also added the CheckButton to an EventBox, and changed the color of that, but all it did is set the background of the space around the button - the button itself still had a white background. A: So you want the part with the check mark on it to be a different color? Then use this button.modify_base(gtk.STATE_NORMAL, gtk.gdk.color_parse("red")). (screenshot: http://www.ubuntu-pics.de/bild/22793/screenshot_009_MCxjbu.png)
GTK: Modify bg color of a CheckButton
I tried the following, yet the button still has a white background: self.button = gtk.CheckButton() self.button.modify_fg(gtk.STATE_NORMAL, gtk.gdk.Color(65535,0,0)) self.button.modify_bg(gtk.STATE_NORMAL, gtk.gdk.Color(65535,0,0)) self.button.modify_fg(gtk.STATE_ACTIVE, gtk.gdk.Color(65535,0,0)) self.button.modify_bg(gtk.STATE_ACTIVE, gtk.gdk.Color(65535,0,0)) self.button.modify_fg(gtk.STATE_PRELIGHT, gtk.gdk.Color(65535,0,0)) self.button.modify_bg(gtk.STATE_PRELIGHT, gtk.gdk.Color(65535,0,0)) self.button.modify_fg(gtk.STATE_SELECTED, gtk.gdk.Color(65535,0,0)) self.button.modify_bg(gtk.STATE_SELECTED, gtk.gdk.Color(65535,0,0)) self.button.modify_fg(gtk.STATE_INSENSITIVE, gtk.gdk.Color(65535,0,0)) self.button.modify_bg(gtk.STATE_INSENSITIVE, gtk.gdk.Color(65535,0,0)) I also added the CheckButton to an EventBox, and changed the color of that, but all it did is set the background of the space around the button - the button itself still had a white background.
[ "So you want the part with the check mark on it to be a different color?\nThen use this button.modify_base(gtk.STATE_NORMAL, gtk.gdk.color_parse(\"red\")).\nalt text http://www.ubuntu-pics.de/bild/22793/screenshot_009_MCxjbu.png\n" ]
[ 3 ]
[]
[]
[ "colors", "events", "gtk", "pygtk", "python" ]
stackoverflow_0001240764_colors_events_gtk_pygtk_python.txt
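A minimal runnable sketch of the accepted fix, assuming PyGTK is installed:

import gtk

win = gtk.Window()
win.connect("destroy", gtk.main_quit)
button = gtk.CheckButton("Example")
# modify_base() colors the check-mark well itself, which modify_bg() never touches
button.modify_base(gtk.STATE_NORMAL, gtk.gdk.color_parse("red"))
win.add(button)
win.show_all()
gtk.main()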
Q: Good graph traversal algorithm Abstract problem : I have a graph of about 250,000 nodes and the average connectivity is around 10. Finding a node's connections is a long process (10 seconds, let's say). Saving a node to the database also takes about 10 seconds. I can check if a node is already present in the db very quickly. Allowing concurrency, but not having more than 10 long requests at a time, how would you traverse the graph to gain the highest coverage the quickest? Concrete problem : I'm trying to scrape a website's user pages. To discover new users I'm fetching the friend list from already known users. I've already imported about 10% of the graph but I keep getting stuck in cycles or using too much memory remembering too many nodes. My current implementation : def run() : import_pool = ThreadPool(10) user_pool = ThreadPool(1) do_user("arcaneCoder", import_pool, user_pool) def do_user(user, import_pool, user_pool) : id = user alias = models.Alias.get(id) # if it's been updated in the last 7 days if alias and alias.modified + datetime.timedelta(days=7) > datetime.datetime.now() : sys.stderr.write("Skipping: %s\n" % user) else : sys.stderr.write("Importing: %s\n" % user) while import_pool.num_jobs() > 20 : print "Too many queued jobs, sleeping" time.sleep(15) import_pool.add_job(alias_view.import_id, [id], lambda rv : sys.stderr.write("Done Importing %s\n" % user)) sys.stderr.write("Crawling: %s\n" % user) users = crawl(id, 5) if len(users) >= 2 : for user in random.sample(users, 2) : if (user_pool.num_jobs() < 100) : user_pool.add_job(do_user, [user, import_pool, user_pool]) def crawl(id, limit=50) : '''returns the first 'limit' friends of a user''' *not relevant* Problems of current implementation : Gets stuck in cliques that I've already imported, thereby wasting time and the importing threads are idle. Will add more as they get pointed out. So, marginal improvements are welcome, as well as full rewrites. Thanks! A: To remember IDs of the users you've already visited, you need a map of 250,000 integers. That's far from "too much". Just maintain such a map and only traverse through the edges that lead to the still undiscovered users, adding them to that map at the point of finding such an edge. As far as I can see, you're close to implementing Breadth-first search (BFS). Check Google for the details of this algorithm. And, of course, do not forget about mutexes -- you'll need them. A: I am really confused as to why it takes 10 seconds to add a node to the DB. That sounds like a problem. What database are you using? Do you have severe platform restrictions? With modern systems, and their oodles of memory, I would recommend a nice simple cache of some kind. You should be able to create a very quick cache of user information that would allow you to avoid repeated work. When you have encountered a node already, stop processing. This will avoid cycling forever in cliques. If you need to allow for rehashing existing nodes after a while, you can use a last_visit_number which would be a global value in the DB. If the node has that number, then this crawl is the one that encountered it. If you want to automatically revisit any nodes, you just need to bump the last_visit_number before starting the crawl. By your description, I am not quite sure how you are getting stuck. Edit ------ I just noticed you had a concrete question. In order to increase how quickly you pull in new data, I would keep track of the number of times a given user was linked to in your data (imported or not yet imported). When choosing a user to crawl, I would pick users that have a low number of links. I would specifically go for either the lowest number of links or a random choice among the users with the lowest number of links. Jacob A: There is no particular algorithm that will help you optimise the construction of a graph from scratch. One way or another, you are going to have to visit each node at least once. Whether you do this depth first or breadth first is irrelevant from a speed perspective. Theran correctly points out in a comment below that breadth-first search, by exploring nearer nodes first, may give you a more useful graph immediately, before the whole graph is completed; this may or may not be a concern for you. He also notes that the neatest version of depth-first search is implemented using recursion, which could potentially be a problem for you. Note that recursion is not required, however; you can add incompletely explored nodes to a stack and process them linearly if you wish. If you do a simple existence check for new nodes (O(1) if you use a hash for lookup), then cycles will not be a problem at all. Cycles are only a concern if you do not store the complete graph. You can optimise searches through the graph, but the construction step itself will always take linear time. I agree with other posters that the size of your graph should not be a problem. 250,000 is not very large! Regarding concurrent execution; the graph is updated by all threads, so it needs to be a synchronised data structure. Since this is Python, you can make use of the Queue module to store new links still to be processed by your threads. A: Although you say that getting a friend list takes a lot of time (10 seconds or more), a variant of good-old Dijkstra's algorithm just might work: Get any node. Get a connection from any node you already loaded. If the other end hasn't been loaded yet, add the node to the graph. Go to step 2. The trick is to select the connection you load in step 2 in a smart way. A few short remarks about this: You should somehow prevent the same connection from being loaded twice or more often. Selecting a random connection and discarding it if it's been loaded already is very inefficient if you're after all connections. If you want to load all connections eventually, load all connections of a node at the same time. In order to really say something about efficiency, please provide more details about the data structure.
Good graph traversal algorithm
Abstract problem : I have a graph of about 250,000 nodes and the average connectivity is around 10. Finding a node's connections is a long process (10 seconds, let's say). Saving a node to the database also takes about 10 seconds. I can check if a node is already present in the db very quickly. Allowing concurrency, but not having more than 10 long requests at a time, how would you traverse the graph to gain the highest coverage the quickest? Concrete problem : I'm trying to scrape a website's user pages. To discover new users I'm fetching the friend list from already known users. I've already imported about 10% of the graph but I keep getting stuck in cycles or using too much memory remembering too many nodes. My current implementation : def run() : import_pool = ThreadPool(10) user_pool = ThreadPool(1) do_user("arcaneCoder", import_pool, user_pool) def do_user(user, import_pool, user_pool) : id = user alias = models.Alias.get(id) # if it's been updated in the last 7 days if alias and alias.modified + datetime.timedelta(days=7) > datetime.datetime.now() : sys.stderr.write("Skipping: %s\n" % user) else : sys.stderr.write("Importing: %s\n" % user) while import_pool.num_jobs() > 20 : print "Too many queued jobs, sleeping" time.sleep(15) import_pool.add_job(alias_view.import_id, [id], lambda rv : sys.stderr.write("Done Importing %s\n" % user)) sys.stderr.write("Crawling: %s\n" % user) users = crawl(id, 5) if len(users) >= 2 : for user in random.sample(users, 2) : if (user_pool.num_jobs() < 100) : user_pool.add_job(do_user, [user, import_pool, user_pool]) def crawl(id, limit=50) : '''returns the first 'limit' friends of a user''' *not relevant* Problems of current implementation : Gets stuck in cliques that I've already imported, thereby wasting time and the importing threads are idle. Will add more as they get pointed out. So, marginal improvements are welcome, as well as full rewrites. Thanks!
[ "To remember IDs of the users you've already visited, you need a map of a length of 250,000 integers. That's far from \"too much\". Just maintain such a map and only traverse through the edges that lead to the already undiscovered users, adding them to that map at the point of finding such edge.\nAs far I can see, you're close to implement Breadth-first search (BFS). Check google about the details of this algorithm. And, of course, do not forget about mutexes -- you'll need them.\n", "I am really confused as to why it takes 10 seconds to add a node to the DB. That sounds like a problem. What database are you using? Do you have severe platform restrictions?\nWith modern systems, and their oodles of memory, I would recommend a nice simple cache of some kind. You should be able to create a very quick cache of user information that would allow you to avoid repeated work. When you have encountered a node already, stop processing. This will avoid cycling forever in cliques.\nIf you need to allow for rehashing existing nodes after a while, you can use a last_visit_number which would be a global value in the dB. If the node has that number, then this crawl is the one that encountered it. If you want to automatically revisit any nodes, you just need to bump the last_visit_number before starting the crawl.\nBy your description, I am not quite sure how you are getting stuck.\nEdit ------\nI just noticed you had a concrete question. In order to increase how quickly you pull in new data, I would keep track of the number of times a given user was linked to in your data (imported or not yet imported). When choosing a user to crawl, I would pick users that have a low number of links. I would specifically go for either the lowest number of links or a random choice among the users with the lowest number of links.\nJacob\n", "There is no particular algorithm that will help you optimise the construction of a graph from scratch. One way or another, you are going to have to visit each node at least once. Whether you do this depth first or breadth first is irrelevant from a speed perspective. Theran correctly points out in a comment below that breadth-first search, by exploring nearer nodes first, may give you a more useful graph immediately, before the whole graph is completed; this may or may not be a concern for you. He also notes that the neatest version of depth-first search is implemented using recursion, which could potentially be a problem for you. Note that recursion is not required, however; you can add incompletely explored nodes to a stack and process them linearly if you wish.\nIf you do a simple existence check for new nodes (O(1) if you use a hash for lookup), then cycles will not be a problem at all. Cycles are only a concern if you do not store the complete graph. You can optimise searches through the graph, but the construction step itself will always take linear time.\nI agree with other posters that the size of your graph should not be a problem. 250,000 is not very large!\nRegarding concurrent execution; the graph is updated by all threads, so it needs to be a synchronised data structure. 
Since this is Python, you can make use of the Queue module to store new links still to be processed by your threads.\n", "Although you say that getting a friend list takes a lot of time (10 seconds or more), a variant of good-old Dijkstra's algorithm just might work:\n\nGet any node.\nGet a connection from any node you already loaded.\nIf the other end hasn't been loaded yet, add the node to the graph.\nGo to step 2.\n\nThe trick is to select the connection you load in step 2 in a smart way. A few short remarks about this:\n\nYou should somehow prevent the same connection to be loaded twice or more often. Selecting a random connection and discard it if it's been loaded already is very inefficient if you're after all connections.\nIf you want to load all connections eventually, load all connections of a node at the same time.\n\nIn order to really say something about efficiency, please provide more details about datastructure.\n" ]
[ 7, 2, 2, 0 ]
[]
[]
[ "algorithm", "graph_traversal", "language_agnostic", "performance", "python" ]
stackoverflow_0001320688_algorithm_graph_traversal_language_agnostic_performance_python.txt
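A single-threaded sketch of the BFS-plus-visited-set idea from the answers above; crawl() and save_to_db() are stand-ins for the asker's slow network and database calls:

from collections import deque

def bfs_import(seed):
    visited = set([seed])
    frontier = deque([seed])
    while frontier:
        user = frontier.popleft()
        save_to_db(user)               # stand-in for the ~10 s database write
        for friend in crawl(user):     # stand-in for the ~10 s friend-list fetch
            if friend not in visited:  # O(1) check: cliques can no longer trap the crawl
                visited.add(friend)
                frontier.append(friend)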
Q: Multiple versions of Python on OS X Leopard I currently have multiple versions of Python installed on my Mac, the one that came with it, a version I downloaded recently from python.org, an older version used to run Zope locally and another version that Appengine is using. It's kind of a mess. Any recommendations for using one version of python to rule them all? How would I go about deleting older versions and linking all of my apps to a single install. Any Mac specific gotchas I should know about? Is this a dumb idea? A: There's nothing inherently wrong with having multiple versions of Python around. Sometimes it's a necessity when using applications with version dependencies. Probably the biggest issue is dealing with site-package dependencies which may vary from app to app. Tools like virtualenv can help there. One thing you should not do is attempt to remove the Apple-supplied Python in /System/Library/Frameworks and linked to from /usr/bin/python. (Note the recent discussion of multiple versions here.) A: Ian Bicking's virtualenv allows me to have isolated Pythons for each application I build, and lets me decide whether or not to include the global site-packages in the isolated Python environment. I haven't tried it with Zope, but I'm guessing that the following should work nicely: Using your Zope's Python, make a new virtualenv, either with or without --no-site-packages Drop your Zope into the virtualenv Activate the environment with $VENV/bin/activate Install any needed site-packages Run your Zope using the Python now at $VENV/bin/python This has worked brilliantly for managing Django projects with various versions of Python, Django, and add-ons. This article seems to go into more detail on the specifics of Grok and Virtualenv, but the generalities should apply to Zope as well. A: +1 for virtualenv. Even if you don't need different Python versions, it's still good to keep your development dependencies separate from your system Python. I'm not sure what OS you are using, but I find these instructions very useful for getting python development environments running on OSX. A: The approach I prefer, which should work on every UNIX-like operating system: Create a user account for each application that needs a specific python version. Install in each user account the corresponding python version with a user-local prefix (like ~/build/python) and add ~/build/bin/ to the PATH environment variable of the user. Install/use your python applications in their correct user. The advantage of this approach is the perfect isolation between the individual python installations and relatively convenient selection of the correct python environment (just su to the appropriate user). Also the operating system remains untouched.
Multiple versions of Python on OS X Leopard
I currently have multiple versions of Python installed on my Mac, the one that came with it, a version I downloaded recently from python.org, an older version used to run Zope locally and another version that Appengine is using. It's kind of a mess. Any recommendations for using one version of python to rule them all? How would I go about deleting older versions and linking all of my apps to a single install. Any Mac specific gotchas I should know about? Is this a dumb idea?
[ "There's nothing inherently wrong with having multiple versions of Python around. Sometimes it's a necessity when using applications with version dependencies. Probably the biggest issue is dealing with site-package dependencies which may vary from app to app. Tools like virtualenv can help there. One thing you should not do is attempt to remove the Apple-supplied Python in /System/Library/Frameworks and linked to from /usr/bin/python. (Note the recent discussion of multiple versions here.)\n", "Ian Bicking's virtualenv allows me to have isolated Pythons for each application I build, and lets me decide whether or not to include the global site-packages in the isolated Python environment.\nI haven't tried it with Zope, but I'm guessing that the following should work nicely:\n\nUsing your Zope's Python, make a new virtualenv, either with or without --no-site-packages\nDrop your Zope into the virtualenv\nActivate the environment with $VENV/bin/activate\nInstall any needed site-packages\nRun your Zope using the Python now at $VENV/bin/python\n\nThis has worked brilliantly for managing Django projects with various versions of Python, Django, and add-ons.\nThis article seems to go into more detail on the specifics of Grok and Virtualenv, but the generalities should apply to Zope as welll.\n", "+1 for virtualenv. \nEven if you don't need different Python versions, it's still good to keep your development dependencies seperate from your system Python.\nI'm not sure what OS you are using, but I find these instructions very useful for getting python development environments running on OSX.\n", "The approach I prefer which should work on every UNIX-like operating system:\nCreate for each application which need an specific python version an user account. Install in each user count the corresponding python version with an user-local prefix (like ~/build/python) and add ~/build/bin/ to the PATH environment variable of the user. Install/use your python applications in their correct user.\nThe advantage of this approach is the perfect isolation between the individual python installations and relatively convenient selection of the correct python environment (just su to the appropriate user). Also the operating system remains untouched.\n" ]
[ 20, 9, 2, 1 ]
[]
[]
[ "macos", "osx_leopard", "python", "zope" ]
stackoverflow_0001218891_macos_osx_leopard_python_zope.txt
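The virtualenv workflow the answers describe boils down to a few shell commands; "zope_env" here is a placeholder name:

$ easy_install virtualenv
$ virtualenv --no-site-packages zope_env
$ source zope_env/bin/activate
(zope_env)$ python --version   # the isolated interpreter is now first on PATH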
Q: How to start a COM server implemented in python? I am using python for making a COM local server. In fact, the whole COM part is implemented in a dll and my python script is calling that dll thanks to ctypes. It works ok when I run the script manually. I would like to see my server automatically run when a COM client requests it. I know that it is possible by giving the command line as value of the LocalServer32 registry key. Unfortunately, I can't manage to see Windows running my python script properly. I've tried to use python and pythonw with similar problems. It seems that Windows is adding a "-Embedding" flag to the command line and I guess that it can be a problem for python. Any ideas? Does anybody know how to avoid the "-Embedding" flag in the command line? A: The "-embedding" flag is added by COM automatically, the purpose of which is so that the server application can parse this flag to determine that it was run by COM. COM appends the "-Embedding" flag to the string, so the application that uses flags will need to parse the whole string and check for the -Embedding flag. The following code example shows the setting for this value. You will need to handle this in your server code.
How to start a COM server implemented in python?
I am using python for making a COM local server. In fact, the whole COM part is implemented in a dll and my python script is calling that dll thanks to ctypes. It works ok when I run the script manually. I would like to see my server automatically run when a COM client requests it. I know that it is possible by giving the command line as the value of the LocalServer32 registry key. Unfortunately, I can't manage to get Windows to run my python script properly. I've tried to use python and pythonw with similar problems. It seems that Windows is adding a "-Embedding" flag to the command line and I guess that it can be a problem for python. Any ideas? Does anybody know how to avoid the "-Embedding" flag in the command line?
[ "The \"-embedding\" flag is added by COM automatically, the purpose of which is so that the server application can parse this flag to determine that it was run by COM.\n\nCOM appends the \"-Embedding\" flag to\n the string, so the application that\n uses flags will need to parse the\n whole string and check for the\n -Embedding flag. The following code example shows the setting for this\n value.\n\nYou will need to handle this in your server code.\n" ]
[ 0 ]
[]
[]
[ "com", "python", "windows" ]
stackoverflow_0001320954_com_python_windows.txt
Q: What is the most efficient way to add an element to a list only if it isn't there yet? I have the following code in Python: def point_to_index(point): if point not in points: points.append(point) return points.index(point) This code is awfully inefficient, especially since I expect points to grow to hold a few million elements. If the point isn't in the list, I traverse the list 3 times: look for it and decide it isn't there go to the end of the list and add a new element go to the end of the list until I find the index If it is in the list, I traverse it twice: 1. look for it and decide it is there 2. go almost to the end of the list until I find the index Is there any more efficient way to do this? For instance, I know that: I'm more likely to call this function with a point that isn't in the list. If the point is in the list, it's likelier to be near the end than in the beginning. So if I could have the line: if point not in points: search the list from the end to the beginning it would improve performance when the point is already in the list. However, I don't want to do: if point not in reversed(points): because I imagine that reversed(points) itself will come at a huge cost. Nor do I want to add new points to the beginning of the list (assuming I knew how to do that in Python) because that would change the indices, which must remain constant for the algorithm to work. The only improvement I can think of is to implement the function with only one pass, if possible from the end to the beginning. The bottom line is: Is there a good way to do this? Is there a better way to optimize the function? Edit: I've gotten suggestions for implementing this with only one pass. Is there any way for index() to go from the end to the beginning? Edit: People have asked why the index is critical. I'm trying to describe a 3D surface using the OFF file format. This format describes a surface using its vertices and faces. First the vertices are listed, and the faces are described using a list of indices of vertices. That's why once I add a vertex to the list, its index must not change. Edit: There have been some suggestions (such as igor's) to use a dict. This is a good solution for scanning the list. However, when I'm done I need to print out the list in the same order it was created. If I use a dict, I need to print out its keys sorted by value. Is there a good way to do that? Edit: I implemented www.brool.com's suggestion. This was the simplest and fastest. It is essentially an ordered Dict, but without the overhead. The performance is great! A: You want to use a set: >>> x = set() >>> x set([]) >>> x.add(1) >>> x set([1]) >>> x.add(1) >>> x set([1]) A set contains only one instance of any item you add, and it will be a lot more efficient than iterating a list manually. This wikibooks page looks like a good primer if you haven't used sets in Python before. A: This will traverse at most once: def point_to_index(point): try: return points.index(point) except ValueError: points.append(point) return len(points)-1 You may also want to try this version, which takes into account that matches are likely to be near the end of the list. Note that reversed() has almost no cost even on very large lists - it does not create a copy and does not traverse the list more than once.
def point_to_index(point): for index, this_point in enumerate(reversed(points)): if point == this_point: return len(points) - (index+1) else: points.append(point) return len(points)-1 You might also consider keeping a parallel dict or set of points to check for membership, since both of those types can do membership tests in O(1). There would be, of course, a substantial memory cost. Obviously, if the points were ordered somehow, you would have many other options for speeding this code up, notably using a binary search for membership tests. A: If you're worried about memory usage, but want to optimize the common case, keep a dictionary with the last n points and their indexes. points_dict = dictionary, max_cache = size of the cache. def point_to_index(point): try: return points_dict.get(point, points.index(point)) except ValueError: if len(points) >= max_cache: del points_dict[points[len(points)-max_cache]] points.append(point) points_dict[point] = len(points)-1 return len(points)-1 A: def point_to_index(point): try: return points.index(point) except ValueError: points.append(point) return len(points)-1 Update: Added in Nathan's exception code. A: As others said, consider using set or dict. You don't explain why you need the indices. If they are needed only to assign unique ids to the points (and I can't easily come up with another reason for using them), then dict will indeed work much better, e.g., points = {} def point_to_index(point): if point in points: return points[point] else: points[point] = len(points) return len(points) - 1 A: What you really want is an ordered dict (key insertion determines the order): Recipe: http://code.activestate.com/recipes/107747/ PEP: http://www.python.org/dev/peps/pep-0372/
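A minimal sketch of the dict-plus-list scheme the asker settled on (brool's suggestion): the list preserves creation order for the final printout, while the dict gives O(1) membership and lookup. The names are illustrative, and points must be hashable (tuples rather than lists):

    points = []      # insertion order, for writing out the OFF file later
    index_of = {}    # point -> index, constant-time membership test

    def point_to_index(point):
        if point not in index_of:
            index_of[point] = len(points)
            points.append(point)
        return index_of[point]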
What is the most efficient way to add an element to a list only if it isn't there yet?
I have the following code in Python: def point_to_index(point): if point not in points: points.append(point) return points.index(point) This code is awfully inefficient, especially since I expect points to grow to hold a few million elements. If the point isn't in the list, I traverse the list 3 times: look for it and decide it isn't there go to the end of the list and add a new element go to the end of the list until I find the index If it is in the list, I traverse it twice: 1. look for it and decide it is there 2. go almost to the end of the list until I find the index Is there any more efficient way to do this? For instance, I know that: I'm more likely to call this function with a point that isn't in the list. If the point is in the list, it's likelier to be near the end than in the beginning. So if I could have the line: if point not in points: search the list from the end to the beginning it would improve performance when the point is already in the list. However, I don't want to do: if point not in reversed(points): because I imagine that reversed(points) itself will come at a huge cost. Nor do I want to add new points to the beginning of the list (assuming I knew how to do that in Python) because that would change the indices, which must remain constant for the algorithm to work. The only improvement I can think of is to implement the function with only one pass, if possible from the end to the beginning. The bottom line is: Is there a good way to do this? Is there a better way to optimize the function? Edit: I've gotten suggestions for implementing this with only one pass. Is there any way for index() to go from the end to the beginning? Edit: People have asked why the index is critical. I'm trying to describe a 3D surface using the OFF file format. This format describes a surface using its vertices and faces. First the vertices are listed, and the faces are described using a list of indices of vertices. That's why once I add a vertex to the list, its index must not change. Edit: There have been some suggestions (such as igor's) to use a dict. This is a good solution for scanning the list. However, when I'm done I need to print out the list in the same order it was created. If I use a dict, I need to print out its keys sorted by value. Is there a good way to do that? Edit: I implemented www.brool.com's suggestion. This was the simplest and fastest. It is essentially an ordered Dict, but without the overhead. The performance is great!
[ "You want to use a set:\n>>> x = set()\n>>> x\nset([])\n>>> x.add(1)\n>>> x\nset([1])\n>>> x.add(1)\n>>> x\nset([1])\n\nA set contains only one instance of any item you add, and it will be a lot more efficient than iterating a list manually.\nThis wikibooks page looks like a good primer if you haven't used sets in Python before.\n", "This will traverse at most once:\ndef point_to_index(point):\n try: \n return points.index(point)\n except ValueError:\n points.append(point)\n return len(points)-1\n\nYou may also want to try this version, which takes into account that matches are likely to be near the end of the list. Note that reversed() has almost no cost even on very large lists - it does not create a copy and does not traverse the list more than once.\ndef point_to_index(point):\n for index, this_point in enumerate(reversed(points)):\n if point == this_point:\n return len(points) - (index+1)\n else:\n points.append(point)\n return len(points)-1\n\nYou might also consider keeping a parallel dict or set of points to check for membership, since both of those types can do membership tests in O(1). There would be, of course, a substantial memory cost.\nObviously, if the points were ordered somehow, you would have many other options for speeding this code up, notably using a binary search for membership tests.\n", "If you're worried about memory usage, but want to optimize the common case, keep a dictionary with the last n points and their indexes. points_dict = dictionary, max_cache = size of the cache.\ndef point_to_index(point):\n try:\n return points_dict.get(point, points.index(point))\n except:\n if len(points) >= max_cache:\n del points_dict[points[len(points)-max_cache]]\n points.append(point)\n points_dict[points] = len(points)-1\n return len(points)-1\n\n", "def point_to_index(point):\n try:\n return points.index(point)\n except:\n points.append(point)\n return len(points)-1\n\nUpdate: Added in Nathan's exception code.\n", "As others said, consider using set or dict. You don't explain why you need the indices. If they are needed only to assign unique ids to the points (and I can't easily come up with another reason for using them), then dict will indeed work much better, e.g.,\npoints = {}\ndef point_to_index(point):\n if point in points:\n return points[point]\n else:\n points[point] = len(points)\n return len(points) - 1\n\n", "What you really want is an ordered dict (key insertion determines the order):\n\nRecipe: http://code.activestate.com/recipes/107747/\nPEP: http://www.python.org/dev/peps/pep-0372/\n\n" ]
[ 13, 10, 5, 2, 1, 1 ]
[]
[]
[ "list", "optimization", "python" ]
stackoverflow_0001319254_list_optimization_python.txt
Q: Creating a single exe file from Python code Possible Duplicate: py2exe - generate single executable file A friend of mine managed to pack a Ruby script he wrote into a single exe file. When I tried to do the same thing for a Python script, with py2exe, I also got several pyd files and a dll. Is it possible to pack a Python script with all its DLLs and pyd files into just one exe, and get rid of the other files? A: According to py2exe.org: The --bundle or -b command line switch will create less files because binary extensions, runtime dlls, and even the Python-dll itself is bundled into the executable itself, or inside the library-archive if you prefer that. ... Using a level of 1 includes the .pyd and .dll files into the zip-archive or the executable itself, and does the same for pythonXY.dll. The advantage is that you only need to distribute one file per exe, which will however be quite large. There's also another little tutorial on creating a single exe which will expand DLLs into a temporary directory at runtime, then delete the tempdir when Python exits. A: This page can probably help you. More specifically, it seems that you can achieve this by setting bundle_files to 1 and zipfile to None. I haven't tested it, and it may not work if you have additional DLL files. The other approach on that page seems clumsy: creating an installer that will expand the project into a temporary directory before running it, and removing the temporary directory after the application terminates.
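A minimal setup.py sketch combining the options both answers mention, bundle_files level 1 plus zipfile=None, so py2exe emits one self-contained exe (myscript.py is a placeholder):

    from distutils.core import setup
    import py2exe

    setup(
        console=['myscript.py'],                  # or windows=[...] for GUI apps
        options={'py2exe': {'bundle_files': 1}},  # fold dlls/pyds into the exe
        zipfile=None,                             # no separate library.zip
    )

After running python setup.py py2exe, the dist directory should hold just the single executable.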
Creating a single exe file from Python code
Possible Duplicate: py2exe - generate single executable file A friend of mine managed to pack a Ruby script he wrote into a single exe file. When I tried to do the same thing for a Python script, with py2exe, I also got several pyd files and a dll. Is it possible to pack a Python script with all its DLLs and pyd files into just one exe, and get rid of the other files?
[ "According to py2exe.org:\n\nThe --bundle or -b command line switch will create less files because binary extensions, runtime dlls, and even the Python-dll itself is bundled into the executable itself, or inside the library-archive if you prefer that.\n...\nUsing a level of 1 includes the .pyd and .dll files into the zip-archive or the executable itself, and does the same for pythonXY.dll. The advantage is that you only need to distribute one file per exe, which will however be quite large. \n\nThere's also another little tutorial on creating a single exe which will expand DLLs into a temporary directory at runtime, then delete the tempdir when Python exits.\n", "This page can probably help you. More specifically, it seems that you can achieve this by setting bundle_files to 1 and zipfile to None. I haven't tested it, and it may not work if you have additional DLL files.\nThe other approach on that page seems clumsy: creating an installer that will expand the project into a temporary directory before running it, and removing the temporary directory after the application terminates.\n" ]
[ 0, 0 ]
[]
[]
[ "executable", "python", "winapi" ]
stackoverflow_0001321708_executable_python_winapi.txt
Q: Vim python's buffer.append(line) switches window's focus I am trying to fill Vim's buffer from a separate thread by using this python code. python << PYTHON_CODE import vim import time from threading import Thread buffer_number = -1 class AppendLineTest( Thread ): def run(self): buffer = vim.buffers[buffer_number - 1] for i in range(10): buffer.append('Line number %s' % i) time.sleep(1) PYTHON_CODE function! s:test() split TestBuffer exec 'python buffer_number = '.bufnr('%') setlocal noswapfile setlocal bufhidden=delete setlocal buftype=nofile setlocal nobuflisted setlocal nonumber setlocal nowrap setlocal nocursorline python AppendLineTest().start() endfunction command! -nargs=0 PythonAppendTest call s:test() I am not sure that accessing Vim's buffers from separate threads is allowed, and wonder whether there is some safe dispatch mechanism. But if it is allowed, I would like to get rid of the cursor jumping when the append takes place. So to reproduce the behavior, save the code to some name.vim file and open it in Vim. Then :source % And then PythonAppendTest Lines will be added 10 times to the temporary buffer. Just move the cursor back to the window where you were and you will understand what I am talking about. If you wonder why someone might need to add lines to buffers from a separate thread, here are some examples: unix top unix tail rss email While you work, they appear in some predefined buffer. Thanks in advance. A: I don't think Vim is very tolerant of multiple threads without patching. There's a lot more detail in the discussion at this link, but I suspect that what you want is far from trivial.
Vim python's buffer.append(line) switches window's focus
I am trying to fill Vim's buffer from a separate thread by using this python code. python << PYTHON_CODE import vim import time from threading import Thread buffer_number = -1 class AppendLineTest( Thread ): def run(self): buffer = vim.buffers[buffer_number - 1] for i in range(10): buffer.append('Line number %s' % i) time.sleep(1) PYTHON_CODE function! s:test() split TestBuffer exec 'python buffer_number = '.bufnr('%') setlocal noswapfile setlocal bufhidden=delete setlocal buftype=nofile setlocal nobuflisted setlocal nonumber setlocal nowrap setlocal nocursorline python AppendLineTest().start() endfunction command! -nargs=0 PythonAppendTest call s:test() I am not sure that accessing Vim's buffers from separate threads is allowed, and wonder whether there is some safe dispatch mechanism. But if it is allowed, I would like to get rid of the cursor jumping when the append takes place. So to reproduce the behavior, save the code to some name.vim file and open it in Vim. Then :source % And then PythonAppendTest Lines will be added 10 times to the temporary buffer. Just move the cursor back to the window where you were and you will understand what I am talking about. If you wonder why someone might need to add lines to buffers from a separate thread, here are some examples: unix top unix tail rss email While you work, they appear in some predefined buffer. Thanks in advance.
[ "I don't think Vim is very tolerant of multiple threads without patching. There's a lot more detail in the discussion at this link, but I suspect that what you want is far from trivial.\n" ]
[ 2 ]
[]
[]
[ "plugins", "python", "vim" ]
stackoverflow_0001321936_plugins_python_vim.txt
Q: os.path.exists() for files in your Path? I commonly use os.path.exists() to check if a file is there before doing anything with it. I've run across a situation where I'm calling an executable that's in the configured env path, so it can be called without specifying the abspath. Is there something that can be done to check if the file exists before calling it? (I may fall back on try/except, but first I'm looking for a replacement for os.path.exists()) btw - I'm doing this on windows. A: You could get the PATH environment variable, and try "exists()" for the .exe in each dir in the path. But that could perform horribly. example for finding notepad.exe: import os for p in os.environ["PATH"].split(os.pathsep): print os.path.exists(os.path.join(p, 'notepad.exe')) more clever example: if not any(os.path.exists(os.path.join(p, executable)) for p in os.environ["PATH"].split(os.pathsep)): print "can't find %s" % executable Is there a specific reason you want to avoid exceptions? (besides dogma?) A: Extending Trey Stout's search with Carl Meyer's comment on PATHEXT: import os def exists_in_path(cmd): # can't search the path if a directory is specified assert not os.path.dirname(cmd) extensions = os.environ.get("PATHEXT", "").split(os.pathsep) for directory in os.environ.get("PATH", "").split(os.pathsep): base = os.path.join(directory, cmd) options = [base] + [(base + ext) for ext in extensions] for filename in options: if os.path.exists(filename): return True return False EDIT: Thanks to Aviv (on my blog) I now know there's a Twisted implementation: twisted.python.procutils.which EDIT: In Python 3.3 and up there's shutil.which() in the standard library. A: Please note that checking for existence and then opening is always open to race conditions. The file can disappear between your program's check and its next access of the file, since other programs continue to run on the machine. Thus there might still be an exception being thrown, even though your code is "certain" that the file exists. This is, after all, why they're called exceptions. A: You generally shouldn't use os.path.exists to try to figure out if something is going to succeed. You should just try it, and if you want you can handle the exception if it fails. A: On Unix you have to split the PATH var. if any([os.path.exists(os.path.join(p,progname)) for p in os.environ["PATH"].split(":")]): do_something()
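Since one answer's edit already points at it, here is a minimal sketch of the Python 3.3+ standard-library route, shutil.which, which does the PATH walk (and the Windows PATHEXT handling) for you:

    import shutil

    # returns the full path to the executable, or None if not found
    path = shutil.which('notepad')
    if path is None:
        print('notepad is not on PATH')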
os.path.exists() for files in your Path?
I commonly use os.path.exists() to check if a file is there before doing anything with it. I've run across a situation where I'm calling an executable that's in the configured env path, so it can be called without specifying the abspath. Is there something that can be done to check if the file exists before calling it? (I may fall back on try/except, but first I'm looking for a replacement for os.path.exists()) btw - I'm doing this on windows.
[ "You could get the PATH environment variable, and try \"exists()\" for the .exe in each dir in the path. But that could perform horribly.\nexample for finding notepad.exe:\nimport os\nfor p in os.environ[\"PATH\"].split(os.pathsep):\n print os.path.exists(os.path.join(p, 'notepad.exe'))\n\nmore clever example:\nif not any([os.path.exists(os.path.join(p, executable) for p in os.environ[\"PATH\"].split(os.pathsep)]):\n print \"can't find %s\" % executable\n\nIs there a specific reason you want to avoid exception? (besides dogma?)\n", "Extending Trey Stout's search with Carl Meyer's comment on PATHEXT:\nimport os\ndef exists_in_path(cmd):\n # can't search the path if a directory is specified\n assert not os.path.dirname(cmd)\n\n extensions = os.environ.get(\"PATHEXT\", \"\").split(os.pathsep)\n for directory in os.environ.get(\"PATH\", \"\").split(os.pathsep):\n base = os.path.join(directory, cmd)\n options = [base] + [(base + ext) for ext in extensions]\n for filename in options:\n if os.path.exists(filename):\n return True\n return False\n\nEDIT: Thanks to Aviv (on my blog) I now know there's a Twisted implementation: twisted.python.procutils.which\nEDIT: In Python 3.3 and up there's shutil.which() in the standard library.\n", "Please note that checking for existance and then opening is always open to race-conditions. The file can disappear between your program's check and its next access of the file, since other programs continue to run on the machine.\nThus there might still be an exception being thrown, even though your code is \"certain\" that the file exists. This is, after all, why they're called exceptions.\n", "You generally shouldn't should os.path.exists to try to figure out if something is going to succeed. You should just try it and if you want you can handle the exception if it fails.\n", "On Unix you have to split the PATH var.\nif any([os.path.exists(os.path.join(p,progname)) for p in os.environ[\"PATH\"].split(\":\")]):\n do_something()\n\n" ]
[ 17, 5, 3, 2, 0 ]
[]
[]
[ "python", "windows" ]
stackoverflow_0000775351_python_windows.txt
Q: to restrict parameter values strictly within bounds I am trying to optimize a function using the l_bfgs constrained optimization routine in scipy. But the optimization routine passes values to the function which are not within the bounds. my full code looks like, def humpy(aParams): aParams = numpy.asarray(aParams) print aParams #### # connect to some other software for simulation # data[1] & data[2] are read ##### objective function val = sum(0.5*(data[1] - data[2])**2) print val return val #### def approx_fprime(): #### Initial = numpy.asarray([10.0, 15.0, 50.0, 10.0]) interval = [(5.0, 60000.0),(10.0, 50000.0),(26.0, 100000.0),(8.0, 50000.0)] opt = optimize.fmin_l_bfgs(humpy,Initial,fprime=approx_fprime, bounds=interval ,pgtol=1.0000000000001e-05,iprint=1, maxfun=50000) print 'optimized parameters',opt[0] print 'Optimized function value', opt[1] ####### the end #### based on the initial values (Initial) and bounds (interval), opt = optimize.fmin_l_bfgs() will pass values to my software for simulation, but the values passed should be within 'bounds'. That's not the case; see below the values passed at various iterations iter 1 = [ 10.23534209 15.1717302 50.5117245 10.28731118] iter 2 = [ 10.23534209 15.1717302 50.01160842 10.39018429] [ 11.17671043 15.85865102 50.05804208 11.43655591] [ 11.17671043 15.85865102 50.05804208 11.43655591] [ 11.28847754 15.85865102 50.05804208 11.43655591] [ 11.17671043 16.01723753 50.05804208 11.43655591] [ 11.17671043 15.85865102 50.5586225 11.43655591] ............... ............... ............... [ 49.84670071 -4.4139714 62.2536381 23.3155698847] at this iteration -4.4139714 is passed to my 2nd parameter, but it should vary from (10.0, 50000.0); where -4.4139714 comes from, I don't know. Where should I change the code so that it passes values within the bounds? A: You are trying to do bitwise exclusive or (the ^ operator) on floats, which makes no sense, so I don't think your code is actually the code you have problems with. However, I changed the ^ to ** assuming that was what you meant, and had no problems. The code worked fine for me with that change. The parameters are restricted exactly as defined. Python 2.5. A: Are you asking about doing something like this? def humpy(aParams): aParams = numpy.asarray(aParams) x = aParams[0] y = aParams[1] z = aParams[2] u = aParams[3] v = aParams[4] assert 2 <= x <= 50000 assert 1 <= y <= 35000 assert 1 <= z <= 45000 assert 2 <= u <= 50000 assert 2 <= v <= 60000 val=100.0*((y-x**2.0)^2.0+(z-y**2.0)^2.0+(u-z**2.0)^2.0+(v-u**2.0)^2.0)+(1-x)^2.0+(1-y)^2.0+(1-z)^2.0+(1-u)^2.0 return val
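For reference, the routine in scipy.optimize is spelled fmin_l_bfgs_b; a minimal sketch of a bounded call on the asker's data, letting scipy estimate the gradient instead of the empty approx_fprime (humpy is the objective from the question):

    import numpy
    from scipy import optimize

    x0 = numpy.asarray([10.0, 15.0, 50.0, 10.0])
    bounds = [(5.0, 60000.0), (10.0, 50000.0), (26.0, 100000.0), (8.0, 50000.0)]

    # approx_grad=True makes scipy compute a finite-difference gradient
    x, f, info = optimize.fmin_l_bfgs_b(humpy, x0, approx_grad=True, bounds=bounds)

Finite-difference probes may step a tiny epsilon past a bound, but violations as large as -4.41 more likely come from the broken gradient callback than from the optimizer itself.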
to restrict parameter values strictly within bounds
I am trying to optimize a function using the l_bfgs constrained optimization routine in scipy. But the optimization routine passes values to the function which are not within the bounds. my full code looks like, def humpy(aParams): aParams = numpy.asarray(aParams) print aParams #### # connect to some other software for simulation # data[1] & data[2] are read ##### objective function val = sum(0.5*(data[1] - data[2])**2) print val return val #### def approx_fprime(): #### Initial = numpy.asarray([10.0, 15.0, 50.0, 10.0]) interval = [(5.0, 60000.0),(10.0, 50000.0),(26.0, 100000.0),(8.0, 50000.0)] opt = optimize.fmin_l_bfgs(humpy,Initial,fprime=approx_fprime, bounds=interval ,pgtol=1.0000000000001e-05,iprint=1, maxfun=50000) print 'optimized parameters',opt[0] print 'Optimized function value', opt[1] ####### the end #### based on the initial values (Initial) and bounds (interval), opt = optimize.fmin_l_bfgs() will pass values to my software for simulation, but the values passed should be within 'bounds'. That's not the case; see below the values passed at various iterations iter 1 = [ 10.23534209 15.1717302 50.5117245 10.28731118] iter 2 = [ 10.23534209 15.1717302 50.01160842 10.39018429] [ 11.17671043 15.85865102 50.05804208 11.43655591] [ 11.17671043 15.85865102 50.05804208 11.43655591] [ 11.28847754 15.85865102 50.05804208 11.43655591] [ 11.17671043 16.01723753 50.05804208 11.43655591] [ 11.17671043 15.85865102 50.5586225 11.43655591] ............... ............... ............... [ 49.84670071 -4.4139714 62.2536381 23.3155698847] at this iteration -4.4139714 is passed to my 2nd parameter, but it should vary from (10.0, 50000.0); where -4.4139714 comes from, I don't know. Where should I change the code so that it passes values within the bounds?
[ "You are trying to do bitwise exclusive or (the ^ operator) on floats, which makes no sense, so I don't think your code is actually the code you have problems with. However, I changed the ^ to ** assuming that was what you meant, and had no problems. The code worked fine for me with that change. The parameters are restricted exactly as defined.\nPython 2.5.\n", "Are you asking about doing something like this?\ndef humpy(aParams):\n aParams = numpy.asarray(aParams)\n x = aParams[0]\n y = aParams[1]\n z = aParams[2]\n u = aParams[3]\n v = aParams[4]\n assert 2 <= x <= 50000\n assert 1 <= y <= 35000\n assert 1 <= z <= 45000\n assert 2 <= u <= 50000\n assert 2 <= v <= 60000\n val=100.0*((y-x**2.0)^2.0+(z-y**2.0)^2.0+(u-z**2.0)^2.0+(v-u**2.0)^2.0)+(1-x)^2.0+(1-y)^2.0+(1-z)^2.0+(1-u)^2.0\n return val\n\n" ]
[ 1, 0 ]
[]
[]
[ "python", "scipy" ]
stackoverflow_0001322049_python_scipy.txt
Q: python time format check In Python, I want to check if the input string is in "HH:MM" format, such as 01:16 or 23:16 or 24:00, giving True or False as the result. How can I achieve this using a regular expression? A: You can achieve this without regular expressions: import time def isTimeFormat(input): try: time.strptime(input, '%H:%M') return True except ValueError: return False >>>isTimeFormat('12:12') True >>>isTimeFormat('012:12') False A: import re time_re = re.compile(r'^(([01]\d|2[0-3]):([0-5]\d)|24:00)$') def is_time_format(s): return bool(time_re.match(s)) Matches everything from 00:00 to 24:00. A: This will give you the regexp object which will check it. However, depending on who you ask 24:00 might not be a valid time (it's 00:00). But I guess this is easy to modify to suit your needs. import re regexp = re.compile("(24:00|2[0-3]:[0-5][0-9]|[0-1][0-9]:[0-5][0-9])") A: This pattern should help you: http://regexlib.com/DisplayPatterns.aspx?cattabindex=4&categoryId=5
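One caveat worth spelling out: strptime's %H accepts 00 through 23, so the asker's 24:00 example fails the first answer's check. A minimal sketch that special-cases that single value:

    import time

    def is_hh_mm(text):
        if text == '24:00':       # the one value %H cannot represent
            return True
        try:
            time.strptime(text, '%H:%M')
            return True
        except ValueError:
            return False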
python time format check
In Python, I want to check if the input string is in "HH:MM" format, such as 01:16 or 23:16 or 24:00, giving True or False as the result. How can I achieve this using a regular expression?
[ "You can achieve this without regular expressions:\nimport time\n\ndef isTimeFormat(input):\n try:\n time.strptime(input, '%H:%M')\n return True\n except ValueError:\n return False\n\n>>>isTimeFormat('12:12')\nTrue\n\n>>>isTimeFormat('012:12')\nFalse\n\n", "import re\n\ntime_re = re.compile(r'^(([01]\\d|2[0-3]):([0-5]\\d)|24:00)$')\ndef is_time_format(s):\n return bool(time_re.match(s))\n\nMatches everything from 00:00 to 24:00.\n", "This will give you the regexp object which will check it. However, depending on who you ask 24:00 might not be a valid time (it's 00:00). But I guess this is easy to modify to suit your needs.\nimport re\nregexp = re.compile(\"(24:00|2[0-3]:[0-5][0-9]|[0-1][0-9]:[0-5][0-9])\")\n\n", "This pattern should help you:\nhttp://regexlib.com/DisplayPatterns.aspx?cattabindex=4&categoryId=5\n" ]
[ 30, 5, 3, 3 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0001322464_python_regex.txt
Q: What numbers can you pass as verbosity in running Python Unit Test Suites? The Python unittest framework has a concept of verbosity that I can't seem to find defined anywhere. For instance, I'm running test cases like this (like in the documentation): suite = unittest.TestLoader().loadTestsFromTestCase(MyAwesomeTest) unittest.TextTestRunner(verbosity=2).run(suite) The only number I've ever seen passed as verbosity is 2. What is this magic number, what does it mean, what else can I pass? A: You only have 3 different levels: 0 (quiet): you just get the total numbers of tests executed and the global result 1 (default): you get the same plus a dot for every successful test or a F for every failure 2 (verbose): you get the help string of every test and the result You can use command line args rather than the verbosity argument: --quiet and --verbose which would do something similar to passing 0 or 2 to the runner.
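A quick sketch of the three levels in use, reusing the question's suite (MyAwesomeTest stands in for any TestCase):

    import unittest

    suite = unittest.TestLoader().loadTestsFromTestCase(MyAwesomeTest)
    unittest.TextTestRunner(verbosity=0).run(suite)  # totals and result only
    unittest.TextTestRunner(verbosity=1).run(suite)  # a dot or F per test
    unittest.TextTestRunner(verbosity=2).run(suite)  # each test's docstring and result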
What numbers can you pass as verbosity in running Python Unit Test Suites?
The Python unittest framework has a concept of verbosity that I can't seem to find defined anywhere. For instance, I'm running test cases like this (like in the documentation): suite = unittest.TestLoader().loadTestsFromTestCase(MyAwesomeTest) unittest.TextTestRunner(verbosity=2).run(suite) The only number I've ever seen passed as verbosity is 2. What is this magic number, what does it mean, what else can I pass?
[ "You only have 3 different levels:\n\n0 (quiet): you just get the total numbers of tests executed and the global result\n1 (default): you get the same plus a dot for every successful test or a F for every failure\n2 (verbose): you get the help string of every test and the result\n\nYou can use command line args rather than the verbosity argument: --quiet and --verbose which would do something similar to passing 0 or 2 to the runner.\n" ]
[ 101 ]
[]
[]
[ "python", "unit_testing", "verbosity" ]
stackoverflow_0001322575_python_unit_testing_verbosity.txt
Q: Ipython common problems I love iPython's many features and magic functions. I recently upgraded to the latest 0.10 version. But I face the following common problems: %hist, one of the most frequently used magic functions, doesn't exist. dreload doesn't seem to work (works only for modules?). run -d for debugging doesn't work At times, typed characters are not displayed on the console* By default even the ? and ?? didn't work. I had to hack for that to work* *The last 2 problems are true for the previous versions too. I am on Ubuntu 9.04 with Python 2.6.2 and IPython 0.10 A: sounds like an issue with your particular setup. ? and ?? have always worked on my machine, hist is still a magic function, and dreload has always only worked for modules--what else would it do? as for the debug thing, it's a known issue with python 2.6: https://bugs.launchpad.net/ipython/+bug/381069
Ipython common problems
I love iPython's many features and magic functions. I recently upgraded to the latest 0.10 version. But I face the following common problems: %hist, one of the most frequently used magic functions, doesn't exist. dreload doesn't seem to work (works only for modules?). run -d for debugging doesn't work At times, typed characters are not displayed on the console* By default even the ? and ?? didn't work. I had to hack for that to work* *The last 2 problems are true for the previous versions too. I am on Ubuntu 9.04 with Python 2.6.2 and IPython 0.10
[ "sounds like an issue with your particular setup. ? and ?? have always worked on my machine, hist is still a magic function, and dreload has always only worked for modules--what else would it do?\nas for the debug thing, it's a known issue with python 2.6: https://bugs.launchpad.net/ipython/+bug/381069\n" ]
[ 1 ]
[]
[]
[ "command_line", "django", "ipython", "python" ]
stackoverflow_0001322569_command_line_django_ipython_python.txt
Q: Multiprocessing with renewable queue I'm trying to figure out how to write a program in python that uses the multiprocessing queue. I have multiple servers and one of them will provide the queue remotely with this: from multiprocessing.managers import BaseManager import Queue import daemonme queue = Queue.Queue() class QueueManager(BaseManager): pass daemonme.createDaemon() QueueManager.register('get_job', callable=lambda:queue) m = QueueManager(address=('', 50000), authkey='') s = m.get_server() s.serve_forever() Now I want to use my dual Xeon, quad core server to process jobs off of this remote queue. The jobs are totally independent of one another. So if I have 8 cores, I'd like to start 7 processes that pick a job off the queue, process it, then go back for the next one. Each of the 7 processes will do this, but I can't quite get my head wrapped around the structure of this program. Can anyone provide me some educated ideas about the basic structure of this? Thank you in advance. A: Look at the docs for how to retrieve a queue from the manager (paragraph 17.6.2.7), then with a pool (paragraph 17.6.2.9) of workers launch 7 jobs, passing the queue to each one. Alternatively, you can structure it as a producer/consumer problem: from multiprocessing.managers import BaseManager import random import time class Producer(): def __init__(self): BaseManager.register('queue') self.m = BaseManager(address=('hostname', 50000), authkey='jgsjgfdjs') self.m.connect() self.cm_queue = self.m.queue() while 1: time.sleep(random.randint(1,3)) self.cm_queue.put(<PUT-HERE-JOBS>) from multiprocessing.managers import BaseManager import time import random class Consumer(): def __init__(self): BaseManager.register('queue') self.m = BaseManager(address=('host', 50000), authkey='jgsjgfdjs') self.m.connect() self.queue = self.m.queue() while 1: <EXECUTE(job = self.queue.get())> from multiprocessing.managers import BaseManager import Queue class Manager(): def __init__(self): self.queue = Queue.Queue() BaseManager.register('queue', callable=lambda:self.queue) self.m = BaseManager(address=('host', 50000), authkey='jgsjgfdjs') self.s = self.m.get_server() self.s.serve_forever() A: You should use the master-slave (aka. farmer-worker) pattern. The initial process would be the master and would create the jobs. It creates a Queue creates 7 slave processes, passing the queue as a parameter starts writing jobs into the queue The slave processes continuously read from the queue, and perform the jobs (perhaps until they receive a stop message from the queue). There is no need to use Manager objects in this scenario, AFAICT.
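A minimal sketch of the worker side of the master-slave pattern the second answer describes, pointed at the question's queue server ('serverhost' and process() are placeholders for the real host and job handler):

    from multiprocessing import Process
    from multiprocessing.managers import BaseManager

    class QueueManager(BaseManager):
        pass

    QueueManager.register('get_job')

    def worker():
        m = QueueManager(address=('serverhost', 50000), authkey='')
        m.connect()
        queue = m.get_job()
        while True:
            job = queue.get()   # blocks until a job arrives
            process(job)        # hypothetical per-job work

    if __name__ == '__main__':
        for _ in range(7):      # one worker per spare core
            Process(target=worker).start()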
Multiprocessing with renewable queue
I'm trying to figure out how to write a program in python that uses the multiprocessing queue. I have multiple servers and one of them will provide the queue remotely with this: from multiprocessing.managers import BaseManager import Queue import daemonme queue = Queue.Queue() class QueueManager(BaseManager): pass daemonme.createDaemon() QueueManager.register('get_job', callable=lambda:queue) m = QueueManager(address=('', 50000), authkey='') s = m.get_server() s.serve_forever() Now I want to use my dual Xeon, quad core server to process jobs off of this remote queue. The jobs are totally independent of one another. So if I have 8 cores, I'd like to start 7 processes that pick a job off the queue, process it, then go back for the next one. Each of the 7 processes will do this, but I can't quite get my head wrapped around the structure of this program. Can anyone provide me some educated ideas about the basic structure of this? Thank you in advance.
[ "Look to the doc how to retreive a queue from the manager (paragraph 17.6.2.7)\nthan with a pool (paragraph 17.6.2.9) of workers launch 7 jobs passing the queue to each one.\nin alternative you can think something like a producer/consumer problem:\nfrom multiprocessing.managers import BaseManager\nimport random\n\nclass Producer():\ndef __init__(self):\n BaseManager.register('queue')\n self.m = BaseManager(address=('hostname', 50000), authkey='jgsjgfdjs')\n self.m.connect()\n self.cm_queue = self.m.queue()\n while 1:\n time.sleep(random.randint(1,3))\n self.cm_queue.put(<PUT-HERE-JOBS>)\n\nfrom multiprocessing.managers import BaseManager\nimport time\nimport random\nclass Consumer():\ndef __init__(self):\n BaseManager.register('queue')\n\n self.m = BaseManager(address=('host', 50000), authkey='jgsjgfdjs')\n self.m.connect()\n self.queue = self.m.queue()\n while 1:\n <EXECUTE(job = self.queue.get())>\n\n\nfrom multiprocessing.managers import BaseManager, Queue\nclass Manager():\n\ndef __init__(self):\n\n self.queue = QueueQueu()\n\n BaseManager.register('st_queue', callable=lambda:self.queue)\n\n self.m = BaseManager(address=('host', 50000), authkey='jgsjgfdjs')\n self.s = self.m.get_server()\n\n self.s.serve_forever()\n\n", "You should use the master-slave (aka. farmer-worker) pattern. The initial process would be the master and creates the jobs. It \n\ncreates a Queue\ncreates 7 slave processes, passing the queue as a parameter\nstarts writing jobs into the queue\n\nThe slave processes continuously read from the queue, and perform the jobs (perhaps until they receive a stop message from the queue). There is no need to use Manager objects in this scenario, AFAICT.\n" ]
[ 2, 0 ]
[]
[]
[ "multiprocessing", "python", "queue" ]
stackoverflow_0001323086_multiprocessing_python_queue.txt
Q: Finding "closest" strings in a Python list (alphabetically) I have a Python list of strings, e.g. initialized as follows: l = ['aardvark', 'cat', 'dog', 'fish', 'tiger', 'zebra'] I would like to test an input string against this list, and find the "closest string below it" and the "closest string above it", alphabetically and case-insensitively (i.e. no phonetics, just a<b etc). If the input exists in the list, both the "below" and "above" should return the input. Several examples: Input | Below | Above ------------------------------- bat | aardvark | cat aaa | None | aardvark ferret | dog | fish dog | dog | dog What's the neatest way to achieve this in Python? (currently I'm iterating over a sorted list using a for loop) To further clarify: I'm interested in simple dictionary alphabetical comparison, not anything fancy like Levenshtein or phonetics. Thanks A: This is exactly what the bisect module is for. It will be much faster than just iterating through large lists. import bisect def closest(haystack, needle): if len(haystack) == 0: return None, None index = bisect.bisect_left(haystack, needle) if index == 0: return None, haystack[0] if index == len(haystack): return haystack[index], None if haystack[index] == needle: return haystack[index], haystack[index] return haystack[index-1], haystack[index] The above code assumes you've sanitized the input and list to be all upper or lower case. Also, I wrote this on my iPhone, so please do check for typos. A: You can rephrase the problem to this: Given a sorted list of strings l and an input string s, find the index in l where s should be inserted so that l remains sorted after insertion. The elements of l at index-1 and index+1 (if they exist) are the ones you are looking for. In order to find the index, you can use binary search. A: A very naive implementation, good only for short lists: you can pretty easily iterate through the list and compare your choice against each one, then break the first time your choice is 'greater' than the item being compared. for i, item in enumerate(l): if lower(item) > lower(input): break print 'below: %s, above, %s' % (l[i-1], item) A: Are these relatively short lists, and do the contents change or are they fairly static? If you've got a large number of strings, and they're relatively fixed, you might want to look into storing your data in a Trie structure. Once you build it, then it's quick & easy to search through and find your nearest neighbors the way you'd like.
Finding "closest" strings in a Python list (alphabetically)
I have a Python list of strings, e.g. initialized as follows: l = ['aardvark', 'cat', 'dog', 'fish', 'tiger', 'zebra'] I would like to test an input string against this list, and find the "closest string below it" and the "closest string above it", alphabetically and case-insensitively (i.e. no phonetics, just a<b etc). If the input exists in the list, both the "below" and "above" should return the input. Several examples: Input | Below | Above ------------------------------- bat | aardvark | cat aaa | None | aardvark ferret | dog | fish dog | dog | dog What's the neatest way to achieve this in Python? (currently I'm iterating over a sorted list using a for loop) To further clarify: I'm interested in simple dictionary alphabetical comparison, not anything fancy like Levenshtein or phonetics. Thanks
[ "This is exactly what the bisect module is for. It will be much faster than just iterating through large lists. \nimport bisect\n\ndef closest(haystack, needle):\n if len(haystack) == 0: return None, None\n\n index = bisect.bisect_left(haystack, needle)\n if index == 0:\n return None, haystack[0]\n if index == len(haystack):\n return haystack[index], None\n if haystack[index] == needle:\n return haystack[index], haystack[index] \n return haystack[index-1], haystack[index]\n\nThe above code assumes you've sanitized the input and list to be all upper or lower case. Also, I wrote this on my iPhone, so please do check for typos. \n", "You can rephrase the problem to this:\nGiven a sorted list of strings l and an input string s, find the index in l where s should be inserted so that l remains sorted after insertion.\nThe elements of l at index-1 and index+1 (if they exist) are the ones you are looking for. In order to find the index, you can use binary search.\n", "A very naive implementation, good only for short lists: you can pretty easily iterate through the list and compare your choice against each one, then break the first time your choice is 'greater' than the item being compared.\nfor i, item in enumerate(l):\n if lower(item) > lower(input):\n break\n\nprint 'below: %s, above, %s' % (l[i-1], item)\n\n", "Are these relatively short lists, and do the contents change or are they fairly static? \nIf you've got a large number of strings, and they're relatively fixed, you might want to look into storing your data in a Trie structure. Once you build it, then it's quick & easy to search through and find your nearest neighbors the way you'd like.\n" ]
[ 16, 2, 1, 0 ]
[]
[]
[ "python", "string" ]
stackoverflow_0001322934_python_string.txt
Q: Getting Wing IDE to stop catching the exceptions that wxPython catches I started using Wing IDE and it's great. I'm building a wxPython app, and I noticed that Wing IDE catches exceptions that are usually caught by wxPython and not really raised. This is usually useful, but I would like to disable this behavior occasionally. How do I do that? A: There is an Ignore this exception location check box in the window where the exception is reported in Wing, or you could explicitly silence that specific exception in your code with a try/except block.
Getting Wing IDE to stop catching the exceptions that wxPython catches
I started using Wing IDE and it's great. I'm building a wxPython app, and I noticed that Wing IDE catches exceptions that are usually caught by wxPython and not really raised. This is usually useful, but I would like to disable this behavior occasionally. How do I do that?
[ "There is a Ignore this exception location check box in the window where the exception is reported in wing, or you could explicitly silence that specific exception in you code with a try except block.\n" ]
[ 0 ]
[]
[]
[ "python", "wing_ide", "wxpython" ]
stackoverflow_0001323361_python_wing_ide_wxpython.txt
Q: Is it possible to use Panda3D inside a wxPython app? I'm developing a wxPython application. Will it be possible to embed a 3D animation controlled by Panda3D inside the gui? Bonus question: Do you think that Panda3D is the best choice? (My interest is physical simulations, and no, I don't need an engine that supports Physics, my program is responsible for calculating the physics, I just need an engine to show it well.) A: Yes - the Panda3D wiki has a mention of using wxPython to handle GUI duties. There's also some threads on the Panda3D forum (1, 2) which might help. Another popular choice for simulation visualization in Python is VPython; it is also dockable in wx.
Is it possible to use Panda3D inside a wxPython app?
I'm developing a wxPython application. Will it be possible to embed a 3D animation controlled by Panda3D inside the gui? Bonus question: Do you think that Panda3D is the best choice? (My interest is physical simulations, and no, I don't need an engine that supports Physics, my program is responsible for calculating the physics, I just need an engine to show it well.)
[ "Yes - the Panda3D wiki has a mention of using wxPython to handle GUI duties.\nThere's also some threads on the Panda3D forum (1, 2) which might help.\nAnother popular choice for simulation visualization in Python is VPython; it is also dockable in wx.\n" ]
[ 5 ]
[]
[]
[ "3d", "python", "wxpython" ]
stackoverflow_0001322041_3d_python_wxpython.txt
Q: Python - How to edit hexadecimal file byte by byte I want to be able to open up an image file and extract the hexadecimal values byte-by-byte. I have no idea how to do this and googling "python byte editing" and "python byte array" didn't come up with anything, surprisingly. Can someone point me towards the library I need to use, specific methods I can google, or tutorials/guides? A: Python's standard library has an mmap module, which can be used to do exactly this. Take a look at the documentation for further information. A: Depending on what you want to do it might be enough to open the file in binary mode and read the data with the normal file functions: # load it with open("somefile", 'rb') as f: data = f.read() # do something with data data = data[::-1] # save it with open("somefile.new", 'wb') as f: f.write(data) Python doesn't really care if the data string contains "binary" or "text" data. If you just want to do simple modifications to a file of reasonable size this is probably good enough. A: The Hachoir framework is a set of Python libraries and tools to parse and edit binary files: http://pypi.python.org/pypi/hachoir-core It has knowledge of common file types, so this could just be what you need. A: Check out the struct module. This module performs conversions between Python values and C structs represented as Python strings. It uses format strings (explained below) as compact descriptions of the lay-out of the C structs and the intended conversion to/from Python values. This can be used in handling binary data stored in files or from network connections, among other sources.
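Since the first answer only names mmap, a minimal sketch of byte-level access through it ('image.jpg' is a placeholder for your file; work on a copy while experimenting, since writes land in the file itself):

    import mmap

    with open('image.jpg', 'r+b') as f:
        mm = mmap.mmap(f.fileno(), 0)   # map the whole file
        first = mm[0]                   # read the first byte
        mm[0] = first                   # write a byte back in place
        mm.close()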
Python - How to edit hexadecimal file byte by byte
I want to be able to open up an image file and extract the hexadecimal values byte-by-byte. I have no idea how to do this and googling "python byte editing" and "python byte array" didn't come up with anything, surprisingly. Can someone point me towards the library I need to use, specific methods I can google, or tutorials/guides?
[ "Python standard library has mmap module, which can be used to do exactly this. Take a look on the documentation for further information.\n", "Depending on what you want to do it might be enough to open the file in binary mode and read the data with the normal file functions:\n# load it\nwith open(\"somefile\", 'rb') as f:\n data = f.read()\n\n# do something with data\ndata.reverse()\n\n# save it\nwith open(\"somefile.new\", 'wb') as f:\n f.write(data)\n\nPython doesn't really care if the data string contains \"binary\" or \"text\" data. If you just want to do simple modifications to a file of reasonable size this is probably good enough.\n", "The Hachoir framework is a set of Python library and tools to parse and edit binary files:\nhttp://pypi.python.org/pypi/hachoir-core\nIt has knowledge of common file types, so this could just be what you need.\n", "Check out the stuct module.\n\nThis module performs conversions between Python values and C structs represented as Python strings. It uses format strings (explained below) as compact descriptions of the lay-out of the C structs and the intended conversion to/from Python values. This can be used in handling binary data stored in files or from network connections, among other sources.\n\n" ]
[ 13, 11, 5, 1 ]
[]
[]
[ "byte", "filereader", "hex", "python" ]
stackoverflow_0001322508_byte_filereader_hex_python.txt
Q: How to extend and modify PyUnit I'm about to embark upon extending and modifying PyUnit. For instance, I will add warnings to it, in addition to failures. I'm interested in hearing words of advice on how to start, for instance, subclass every PyUnit class? What to avoid and misc caveats. Looking for input from those that have extended PyUnit already. A: I recommend studying the nose project, a popular and well designed extension of PyUnit. You can browse its sources online here or get a copy on your machine via Mercurial, aka hg, a nice distributed version control system in which nose keeps its sources on Google Code Hosting. You may well disagree with some of nose's design decisions, but in general they have executed very well on those decisions, so the sources are worth studying anyway even if you decide that your extension will go in completely different directions.
How to extend and modify PyUnit
I'm about to embark upon extending and modifying PyUnit. For instance, I will add warnings to it, in addition to failures. I'm interested in hearing words of advice on how to start, for instance, subclass every PyUnit class? What to avoid and misc caveats. Looking for input from those that have extended PyUnit already.
[ "I recommend studying the nose project, a popular and well designed extension of PyUnit. You can browse its sources online here or get a copy on your machine via Mercurial, aka hg, a nice distributed version control system in which nose keeps its sources on Google Code Hosting.\nYou may well disagree with some of nose's design decisions, but in general they have executed very well on those decisions, so the sources are worth studying anyway even if you decide that your extension will go in completely different directions.\n" ]
[ 3 ]
[]
[]
[ "python", "python_unittest" ]
stackoverflow_0001323188_python_python_unittest.txt
Q: Running Panda3D on Python 2.6 I just got Panda3D for the first time. I deleted the included Python version. In my Python dir, I put a file panda.pth that looks like this: C:\Panda3D-1.6.2 C:\Panda3D-1.6.2\bin But when I run import direct.directbase.DirectStart, I get: Traceback (most recent call last): File "<pyshell#0>", line 1, in <module> import direct.directbase.DirectStart File "C:\Panda3D-1.6.2\direct\directbase\DirectStart.py", line 3, in <module> from direct.showbase import ShowBase File "C:\Panda3D-1.6.2\direct\showbase\ShowBase.py", line 10, in <module> from pandac.PandaModules import * File "C:\Panda3D-1.6.2\pandac\PandaModules.py", line 1, in <module> from libpandaexpressModules import * File "C:\Panda3D-1.6.2\pandac\libpandaexpressModules.py", line 1, in <module> from extension_native_helpers import * File "C:\Panda3D-1.6.2\pandac\extension_native_helpers.py", line 75, in <module> Dtool_PreloadDLL("libpandaexpress") File "C:\Panda3D-1.6.2\pandac\extension_native_helpers.py", line 73, in Dtool_PreloadDLL imp.load_dynamic(module, pathname) ImportError: Module use of python25.dll conflicts with this version of Python. I'm assuming this has something to do with me using Python 2.6. Any solutions? A: Python extensions aren't binary compatible across major releases. Your options are: A. Recompile panda3d for python 2.6. B. Use python 2.5. No way around it. A: If you can wait for the upcoming 1.7.0 release, it will be compiled against Python 2.6 - see this thread.
Running Panda3D on Python 2.6
I just got Panda3D for the first time. I deleted the included Python version. In my Python dir, I put a file panda.pth that looks like this: C:\Panda3D-1.6.2 C:\Panda3D-1.6.2\bin But when I run import direct.directbase.DirectStart, I get: Traceback (most recent call last): File "<pyshell#0>", line 1, in <module> import direct.directbase.DirectStart File "C:\Panda3D-1.6.2\direct\directbase\DirectStart.py", line 3, in <module> from direct.showbase import ShowBase File "C:\Panda3D-1.6.2\direct\showbase\ShowBase.py", line 10, in <module> from pandac.PandaModules import * File "C:\Panda3D-1.6.2\pandac\PandaModules.py", line 1, in <module> from libpandaexpressModules import * File "C:\Panda3D-1.6.2\pandac\libpandaexpressModules.py", line 1, in <module> from extension_native_helpers import * File "C:\Panda3D-1.6.2\pandac\extension_native_helpers.py", line 75, in <module> Dtool_PreloadDLL("libpandaexpress") File "C:\Panda3D-1.6.2\pandac\extension_native_helpers.py", line 73, in Dtool_PreloadDLL imp.load_dynamic(module, pathname) ImportError: Module use of python25.dll conflicts with this version of Python. I'm assuming this has something to do with me using Python 2.6. Any solutions?
[ "Python extensions aren't binary compatible across major releases. Your options are:\nA. Recompile panda3d for python 2.6.\nB. Use python 2.5.\nNo way around it.\n", "If you can wait for the upcoming 1.7.0 release, it will be compiled against Python 2.6 - see this thread.\n" ]
[ 3, 2 ]
[]
[]
[ "panda3d", "python" ]
stackoverflow_0001323887_panda3d_python.txt
Q: How to alphabetically sort the values in a many-to-many django-admin box? I have a simple model like this one: class Artist(models.Model): surname = models.CharField(max_length=200) name = models.CharField(max_length=200, blank=True) slug = models.SlugField(unique=True) photo = models.ImageField(upload_to='artists', blank=True) bio = models.TextField(blank=True) class Images(models.Model): title = models.CharField(max_length=200) artist = models.ManyToManyField(Artist) img = models.ImageField(upload_to='images') Well, I started to insert some Artists and then I went to the Images insert form. I found that the many-to-many artist box is unsorted: Mondino Aldo Aliprandi Bernardo Rotella Mimmo Corpora Antonio Instead of: Aliprandi Bernardo Corpora Antonio Mondino Aldo Rotella Mimmo How can I solve this issue? Any suggestion? Thank you in advance. Matteo A: Set ordering on the Artist's inner Meta class. class Artist(models.Model): .... class Meta: ordering = ['surname', 'name']
How to alphabetically sort the values in a many-to-many django-admin box?
I have a simple model like this one: class Artist(models.Model): surname = models.CharField(max_length=200) name = models.CharField(max_length=200, blank=True) slug = models.SlugField(unique=True) photo = models.ImageField(upload_to='artists', blank=True) bio = models.TextField(blank=True) class Images(models.Model): title = models.CharField(max_length=200) artist = models.ManyToManyField(Artist) img = models.ImageField(upload_to='images') Well, I started to insert some Artists and then I went to the Images insert form. I found that the many-to-many artist box is unsorted: Mondino Aldo Aliprandi Bernardo Rotella Mimmo Corpora Antonio Instead of: Aliprandi Bernardo Corpora Antonio Mondino Aldo Rotella Mimmo How can I solve this issue? Any suggestion? Thank you in advance. Matteo
[ "Set ordering on the Article's inner Meta class.\nclass Article(models.Model):\n ....\n\n class Meta:\n ordering = ['surname', 'name']\n\n" ]
[ 8 ]
[]
[]
[ "django", "django_admin", "many_to_many", "python", "sorting" ]
stackoverflow_0001324602_django_django_admin_many_to_many_python_sorting.txt
Q: What python modules are available to assist in daemonization in the standard library? I have a simple python program that I'd like to daemonize. Since the point of my doing this is not to demonstrate mastery over the spawn, fork, disconnect , etc, I'd like to find a module that would make it quick and simple for me. I've been looking in the std lib, but can not seem to find anything. Is there? A: Here's a library for making well behaved unix daemons: http://pypi.python.org/pypi/python-daemon/ And another one that appears more lightweight: http://code.activestate.com/recipes/278731/
What python modules are available to assist in daemonization in the standard library?
I have a simple python program that I'd like to daemonize. Since the point of my doing this is not to demonstrate mastery over the spawn, fork, disconnect , etc, I'd like to find a module that would make it quick and simple for me. I've been looking in the std lib, but can not seem to find anything. Is there?
[ "Here's a library for making well behaved unix daemons: http://pypi.python.org/pypi/python-daemon/\nAnd another one that appears more lightweight:\nhttp://code.activestate.com/recipes/278731/\n" ]
[ 4 ]
[ "subprocess\n\nis an (almost) platform-independent module to work with processes.\n" ]
[ -1 ]
[ "daemon", "python" ]
stackoverflow_0001324651_daemon_python.txt
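For reference, a minimal sketch of the python-daemon library linked in the answer (the PEP 3143 reference implementation); main() is a placeholder for the program body:

import daemon

def main():
    # long-running work goes here; stdio and the controlling
    # terminal are detached once the process is daemonized
    pass

if __name__ == '__main__':
    with daemon.DaemonContext():  # handles fork, setsid, umask, fd cleanup
        main()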
Q: python time interval algorithm sum Assume I have 2 time intervals,such as 16:30 - 20:00 AND 15:00 - 19:00, I need to find the total time between these two intervals so the result is 5 hours (I add both intervals and subtract the intersecting interval), how can I write a generic function which also deals with all cases such as one interval inside other(so the result is the interval of the bigger one), no intersection (so the result is the sum of both intervals). My incoming data structure is primitive, simply string like "15:30" so a conversion may be needed. Thanks A: from datetime import datetime, timedelta START, END = xrange(2) def tparse(timestring): return datetime.strptime(timestring, '%H:%M') def sum_intervals(intervals): times = [] for interval in intervals: times.append((tparse(interval[START]), START)) times.append((tparse(interval[END]), END)) times.sort() started = 0 result = timedelta() for t, type in times: if type == START: if not started: start_time = t started += 1 elif type == END: started -= 1 if not started: result += (t - start_time) return result Testing with your times from the question: intervals = [ ('16:30', '20:00'), ('15:00', '19:00'), ] print sum_intervals(intervals) That prints: 5:00:00 Testing it together with data that doesn't overlap intervals = [ ('16:30', '20:00'), ('15:00', '19:00'), ('03:00', '04:00'), ('06:00', '08:00'), ('07:30', '11:00'), ] print sum_intervals(intervals) result: 11:00:00 A: I'll assume you can do the conversion to something like datetime on your own. Sum the two intervals, then subtract any overlap. You can get the overlap by comparing the min and max of each of the two ranges. A: Code for when there is an overlap, please add it to one of your solutions: def interval(i1, i2): minstart, minend = [min(*e) for e in zip(i1, i2)] maxstart, maxend = [max(*e) for e in zip(i1, i2)] if minend < maxstart: # no overlap return minend-minstart + maxend-maxstart else: # overlap return maxend-minstart A: You'll want to convert your strings into datetimes. You can do this with datetime.datetime.strptime. Given intervals of datetime.datetime objects, if the intervals are: int1 = (start1, end1) int2 = (start2, end2) Then isn't it just: if end1 < start2 or end2 < start1: # The intervals are disjoint. return (end1-start1) + (end2-start2) else: return max(end1, end2) - min(start1, start2)
python time interval algorithm sum
Assume I have 2 time intervals, such as 16:30 - 20:00 AND 15:00 - 19:00. I need to find the total time covered by these two intervals, so the result is 5 hours (I add both intervals and subtract the intersecting interval). How can I write a generic function which also deals with all cases, such as one interval inside the other (so the result is the interval of the bigger one), or no intersection (so the result is the sum of both intervals)? My incoming data structure is primitive, simply a string like "15:30", so a conversion may be needed. Thanks
[ "from datetime import datetime, timedelta\n\nSTART, END = xrange(2)\ndef tparse(timestring):\n return datetime.strptime(timestring, '%H:%M')\n\ndef sum_intervals(intervals):\n times = []\n for interval in intervals:\n times.append((tparse(interval[START]), START))\n times.append((tparse(interval[END]), END))\n times.sort()\n\n started = 0\n result = timedelta()\n for t, type in times:\n if type == START:\n if not started:\n start_time = t\n started += 1\n elif type == END:\n started -= 1\n if not started:\n result += (t - start_time) \n return result\n\nTesting with your times from the question:\nintervals = [\n ('16:30', '20:00'),\n ('15:00', '19:00'),\n ]\nprint sum_intervals(intervals)\n\nThat prints:\n5:00:00\n\nTesting it together with data that doesn't overlap\nintervals = [\n ('16:30', '20:00'),\n ('15:00', '19:00'),\n ('03:00', '04:00'),\n ('06:00', '08:00'),\n ('07:30', '11:00'),\n ]\nprint sum_intervals(intervals)\n\nresult:\n11:00:00\n\n", "I'll assume you can do the conversion to something like datetime on your own.\nSum the two intervals, then subtract any overlap. You can get the overlap by comparing the min and max of each of the two ranges.\n", "Code for when there is an overlap, please add it to one of your solutions:\ndef interval(i1, i2):\n minstart, minend = [min(*e) for e in zip(i1, i2)]\n maxstart, maxend = [max(*e) for e in zip(i1, i2)]\n\n if minend < maxstart: # no overlap\n return minend-minstart + maxend-maxstart\n else: # overlap\n return maxend-minstart\n\n", "You'll want to convert your strings into datetimes. You can do this with datetime.datetime.strptime.\nGiven intervals of datetime.datetime objects, if the intervals are:\nint1 = (start1, end1)\nint2 = (start2, end2)\n\nThen isn't it just:\nif end1 < start2 or end2 < start1:\n # The intervals are disjoint.\n return (end1-start1) + (end2-start2)\nelse:\n return max(end1, end2) - min(start1, start2)\n\n" ]
[ 4, 0, 0, 0 ]
[]
[]
[ "intervals", "python", "time" ]
stackoverflow_0001324748_intervals_python_time.txt
Q: Generate unique ID for python object based on its attributes Is there a way to generate a hash-like ID in for objects in python that is solely based on the objects' attribute values? For example, class test: def __init__(self, name): self.name = name obj1 = test('a') obj2 = test('a') hash1 = magicHash(obj1) hash2 = magicHash(obj2) What I'm looking for is something where hash1 == hash2. Does something like this exist in python? I know I can test if obj1.name == obj2.name, but I'm looking for something general I can use on any object. A: You mean something like this? Using the special method __hash__ class test: def __init__(self, name): self.name = name def __hash__(self): return hash(self.name) >>> hash(test(10)) == hash(test(20)) False >>> hash(test(10)) == hash(test(10)) True A: To get a unique comparison: To be unique you could serialize the data and then compare the serialized value to ensure it matches exactly. Example: import pickle class C: i = 1 j = 2 c1 = C() c2 = C() c3 = C() c1.i = 99 unique_hash1 = pickle.dumps(c1) unique_hash2 = pickle.dumps(c2) unique_hash3 = pickle.dumps(c3) unique_hash1 == unique_hash2 #False unique_hash2 == unique_hash3 #True If you don't need unique values for each object, but mostly unique: Note the same value will always reduce to the same hash, but 2 different values could reduce to the same hash. You cannot use something like the built-in hash() function (unless you override __hash__) hash(c1) == hash(c2) #False hash(c2) == hash(c3) #False <--- Wrong or something like serialize the data using pickle and then use zlib.crc32. import zlib crc1 = zlib.crc32(pickle.dumps(c1)) crc2 = zlib.crc32(pickle.dumps(c2)) crc3 = zlib.crc32(pickle.dumps(c3)) crc1 == crc2 #False crc2 == crc3 #True A: Have a lool at the hash() build in function and the __hash__() object method. These may be just what you are looking for. You will have to implement __hash__() for you own classes. A: I guess def hash_attr(ins): return hash(tuple(ins.__dict__.items())) hashes anything instance based on its attributes.
Generate unique ID for python object based on its attributes
Is there a way to generate a hash-like ID in for objects in python that is solely based on the objects' attribute values? For example, class test: def __init__(self, name): self.name = name obj1 = test('a') obj2 = test('a') hash1 = magicHash(obj1) hash2 = magicHash(obj2) What I'm looking for is something where hash1 == hash2. Does something like this exist in python? I know I can test if obj1.name == obj2.name, but I'm looking for something general I can use on any object.
[ "You mean something like this?\nUsing the special method __hash__\nclass test:\n def __init__(self, name):\n self.name = name\n def __hash__(self):\n return hash(self.name)\n\n>>> hash(test(10)) == hash(test(20))\nFalse\n>>> hash(test(10)) == hash(test(10))\nTrue\n\n", "To get a unique comparison:\nTo be unique you could serialize the data and then compare the serialized value to ensure it matches exactly.\nExample:\nimport pickle\n\nclass C:\n i = 1\n j = 2\n\nc1 = C()\nc2 = C()\nc3 = C()\nc1.i = 99\n\nunique_hash1 = pickle.dumps(c1) \nunique_hash2 = pickle.dumps(c2) \nunique_hash3 = pickle.dumps(c3) \n\nunique_hash1 == unique_hash2 #False\nunique_hash2 == unique_hash3 #True\n\nIf you don't need unique values for each object, but mostly unique:\nNote the same value will always reduce to the same hash, but 2 different values could reduce to the same hash. \nYou cannot use something like the built-in hash() function (unless you override __hash__)\nhash(c1) == hash(c2) #False\nhash(c2) == hash(c3) #False <--- Wrong\n\nor something like serialize the data using pickle and then use zlib.crc32.\nimport zlib\ncrc1 = zlib.crc32(pickle.dumps(c1))\ncrc2 = zlib.crc32(pickle.dumps(c2))\ncrc3 = zlib.crc32(pickle.dumps(c3))\ncrc1 == crc2 #False\ncrc2 == crc3 #True\n\n", "Have a lool at the hash() build in function and the __hash__() object method. These may be just what you are looking for. You will have to implement __hash__() for you own classes.\n", "I guess\ndef hash_attr(ins):\n return hash(tuple(ins.__dict__.items()))\n\nhashes anything instance based on its attributes.\n" ]
[ 7, 3, 2, 2 ]
[]
[]
[ "attributes", "object", "python" ]
stackoverflow_0001325195_attributes_object_python.txt
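If a digest that is stable across processes is wanted (the built-in hash() is not guaranteed to be), one sketch is to hash the sorted attribute dictionary with hashlib; this assumes every attribute value has a deterministic repr():

import hashlib

def magic_hash(obj):
    # sort the items so attribute insertion order cannot change the digest
    state = sorted(obj.__dict__.items())
    return hashlib.sha1(repr(state).encode('utf-8')).hexdigest()

class Test(object):
    def __init__(self, name):
        self.name = name

assert magic_hash(Test('a')) == magic_hash(Test('a'))
assert magic_hash(Test('a')) != magic_hash(Test('b'))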
Q: Mutate an integer array using ctypes Currently I'm in the process of moving a performance bottleneck in my python code to c, to investigate peformance effects. This code will run a simulation, and report back the results to python via ctypes. However, I'm having problems getting my types to match up correctly. Although I'm looking to solve this particular problem, I'm also on the lookout for more general advice on working with ctypes, as the documentation and procedure seems a bit thin. I have the following c function: extern "C" { void f( int* array, int arraylen ) { for(int i = 0; i < arraylen; i++) { array[i] = g() // mutate the value array[i]; } } } And the following code in python: import ctypes plib = ctypes.cdll.LoadLibrary('./mylib.so') _f = plib.f _f.restype = None _f.argtypes = [ ctypes.POINTER(ctypes.c_int), ctypes.c_int ] seqlen = 50 buffer = ctypes.c_int * seqlen _f( buffer, seqlen ) However, this snippet dies with the following traceback: Traceback (most recent call last): File "particle.py", line 9, in <module> _f( buffer, seqlen ) ctypes.ArgumentError: argument 1: <type 'exceptions.TypeError'>: expected LP_c_int instance instead of _ctypes.ArrayType A: Looks like you want the cast function: The cast function can be used to cast a ctypes instance into a pointer to a different ctypes data type. cast takes two parameters, a ctypes object that is or can be converted to a pointer of some kind, and a ctypes pointer type. It returns an instance of the second argument, which references the same memory block as the first argument: >>> a = (c_byte * 4)() >>> a <__main__.c_byte_Array_4 object at 0xb7da2df4> >>> cast(a, POINTER(c_int)) <ctypes.LP_c_long object at ...> >>>
Mutate an integer array using ctypes
Currently I'm in the process of moving a performance bottleneck in my python code to c, to investigate peformance effects. This code will run a simulation, and report back the results to python via ctypes. However, I'm having problems getting my types to match up correctly. Although I'm looking to solve this particular problem, I'm also on the lookout for more general advice on working with ctypes, as the documentation and procedure seems a bit thin. I have the following c function: extern "C" { void f( int* array, int arraylen ) { for(int i = 0; i < arraylen; i++) { array[i] = g() // mutate the value array[i]; } } } And the following code in python: import ctypes plib = ctypes.cdll.LoadLibrary('./mylib.so') _f = plib.f _f.restype = None _f.argtypes = [ ctypes.POINTER(ctypes.c_int), ctypes.c_int ] seqlen = 50 buffer = ctypes.c_int * seqlen _f( buffer, seqlen ) However, this snippet dies with the following traceback: Traceback (most recent call last): File "particle.py", line 9, in <module> _f( buffer, seqlen ) ctypes.ArgumentError: argument 1: <type 'exceptions.TypeError'>: expected LP_c_int instance instead of _ctypes.ArrayType
[ "Looks like you want the cast function:\n\nThe cast function can be used to cast a ctypes instance into a pointer to a different ctypes data type. cast takes two parameters, a ctypes object that is or can be converted to a pointer of some kind, and a ctypes pointer type. It returns an instance of the second argument, which references the same memory block as the first argument:\n\n>>> a = (c_byte * 4)()\n>>> a\n<__main__.c_byte_Array_4 object at 0xb7da2df4>\n>>> cast(a, POINTER(c_int))\n<ctypes.LP_c_long object at ...>\n>>>\n\n" ]
[ 4 ]
[]
[]
[ "ctypes", "python" ]
stackoverflow_0001325518_ctypes_python.txt
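Worth noting alongside the cast answer: the snippet in the question builds the array type but never instantiates it (ctypes.c_int * seqlen is a type; adding () creates a buffer). An array instance is accepted wherever POINTER(c_int) is declared in argtypes, so a sketch without any cast would be:

import ctypes

plib = ctypes.cdll.LoadLibrary('./mylib.so')
_f = plib.f
_f.restype = None
_f.argtypes = [ctypes.POINTER(ctypes.c_int), ctypes.c_int]

seqlen = 50
buf = (ctypes.c_int * seqlen)()  # note the (): this creates an instance
_f(buf, seqlen)                  # ctypes converts the array to a pointer
print(list(buf))                 # values mutated in place by the C code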
Q: What order does SQLAlchemy use for primary key columns? Let's say I create a table like this: table = Table('mytable', metadata, Column('a', Integer, primary_key=True), Column('b', Integer, primary_key=True), ) table.create() Is it guaranteed that the primary key will be (a,b) and not (b,a)? A: It's guaranteed, yes, since Column objects in Table are ordered. Or if you really want to be explicit, use PrimaryKeyConstraint(). A: Yes. It would be a really bad thing if the resulting DDL weren't giving consistent results. A: Use echo=True and compare yours with a swapped version? That should give the answer.
What order does SQLAlchemy use for primary key columns?
Let's say I create a table like this: table = Table('mytable', metadata, Column('a', Integer, primary_key=True), Column('b', Integer, primary_key=True), ) table.create() Is it guaranteed that the primary key will be (a,b) and not (b,a)?
[ "its guaranteed, yes, since Column objects in Table are ordered. or if you really want to be explicit, use PrimaryKeyContraint().\n", "Yes.\nIt will be a really bad thing if resulting DDL wasn't giving consistent results.\n", "USe echo=True and compare yours with a swapped version? That should give the answer.\n" ]
[ 5, 0, 0 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0001325018_python_sqlalchemy.txt
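A sketch of the explicit spelling from the first answer; PrimaryKeyConstraint pins the key column order regardless of where the columns appear in the Table definition:

from sqlalchemy import Table, Column, Integer, MetaData, PrimaryKeyConstraint

metadata = MetaData()
table = Table('mytable', metadata,
    Column('a', Integer),
    Column('b', Integer),
    PrimaryKeyConstraint('a', 'b'),  # the key is exactly (a, b)
)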
Q: Entity Framwework-like ORM NOT for .NET What I really like about Entity framework is its drag and drop way of making up the whole model layer of your application. You select the tables, it joins them and you're done. If you update the database scheda, right click -> update and you're done again. This seems to me miles ahead the competiting ORMs, like the mess of XML (n)Hibernate requires or the hard-to-update Django Models. Without concentrating on the fact that maybe sometimes more control over the mapping process may be good, are there similar one-click (or one-command) solutions for other (mainly open source like python or php) programming languages or frameworks? Thanks A: SQLAlchemy database reflection gets you half way there. You'll still have to declare your classes and relations between them. Actually you could easily autogenerate the classes too, but you'll still need to name the relations somehow so you might as well declare the classes manually. The code to setup your database would look something like this: from sqlalchemy import create_engine, MetaData from sqlalchemy.ext.declarative import declarative_base metadata = MetaData(create_engine(database_url), reflect=True) Base = declarative_base(metadata) class Order(Base): __table__ = metadata.tables['orders'] class OrderLine(Base): __table__ = metadata.tables['orderlines'] order = relation(Order, backref='lines') In production code, you'd probably want to cache the reflected database metadata somehow. Like for instance pickle it to a file: from cPickle import dump, load import os if os.path.exists('metadata.cache'): metadata = load(open('metadata.cache')) metadata.bind = create_engine(database_url) else: metadata = MetaData(create_engine(database_url), reflect=True) dump(metadata, open('metadata.cache', 'w')) A: I do not like “drag and drop” create of data access code. At first sight it seems easy, but then you make a change to the database and have to update the data access code. This is where it becomes hard, as you often have to redo what you have done before, or hand edit the code the drag/drop designer created. Often when you make a change to one field mapping with a drag/drop designer, the output file has unrelated lines changes, so you can not use your source code control system to confirm you have make the intended change (and not change anything else). However having to create/edit xml configuring files is not nice every time you refractor your code or change your database schema you have to update the mapping file. It is also very hard to get started with mapping files and tracking down what looks like simple problem can take ages. There are two other options: Use a code generator like CodeSmith that comes with templates for many ORM systems. When (not if) you need to customize the output you can edit the template, but the simple case are taken care of for you. That ways you just rerun the code generator every time you change the database schema and get a repeatable result. And/or use fluent interface (e.g Fluent NHibernate) to configure your ORM system, this avoids the need to the Xml config file and in most cases you can use naming conventions to link fields to columns etc. This will be harder to start with then a drag/drop designer but will pay of in the long term if you do match refactoring of the code or database. Another option is to use a model that you generate both your database and code from. The “model” is your source code and is kept under version control. 
This is called “Model Driven Development” and can be great if you have lots of classes that have simpler patterns, as you only need to create the template for each pattern once. A: I have heard iBattis is good. A few companies fall back to iBattis when their programmer teams are not capable of understanding Hibernate (time issue). Personally, I still like Linq2Sql. Yes, the first time someone needs to delete and redrag over a table seems like too much work, but it really is not. And the time that it doesn't update your class code when you save is really a pain, but you simply control-a your tables and drag them over again. Total remakes are very quick and painless. The classes it creates are extremely simple. You can even create multiple table entities if you like with SPs for CRUD. Linking SPs to CRUD is similar to EF: You simply setup your SP with the same parameters as your table, then drag it over your table, and poof, it matches the data types. A lot of people go out of their way to take IQueryable away from the repository, but you can limit what you link in linq2Sql, so IQueryable is not too bad. Come to think of it, I wonder if there is a way to restrict the relations (and foreign keys).
Entity Framework-like ORM NOT for .NET
What I really like about Entity Framework is its drag and drop way of making up the whole model layer of your application. You select the tables, it joins them and you're done. If you update the database schema, right click -> update and you're done again. This seems to me miles ahead of the competing ORMs, like the mess of XML (n)Hibernate requires or the hard-to-update Django Models. Without concentrating on the fact that maybe sometimes more control over the mapping process may be good, are there similar one-click (or one-command) solutions for other (mainly open source like python or php) programming languages or frameworks? Thanks
[ "SQLAlchemy database reflection gets you half way there. You'll still have to declare your classes and relations between them. Actually you could easily autogenerate the classes too, but you'll still need to name the relations somehow so you might as well declare the classes manually.\nThe code to setup your database would look something like this:\nfrom sqlalchemy import create_engine, MetaData\nfrom sqlalchemy.ext.declarative import declarative_base\n\nmetadata = MetaData(create_engine(database_url), reflect=True)\nBase = declarative_base(metadata) \n\nclass Order(Base):\n __table__ = metadata.tables['orders']\n\nclass OrderLine(Base):\n __table__ = metadata.tables['orderlines']\n order = relation(Order, backref='lines')\n\nIn production code, you'd probably want to cache the reflected database metadata somehow. Like for instance pickle it to a file:\nfrom cPickle import dump, load\nimport os\n\nif os.path.exists('metadata.cache'):\n metadata = load(open('metadata.cache'))\n metadata.bind = create_engine(database_url)\nelse:\n metadata = MetaData(create_engine(database_url), reflect=True)\n dump(metadata, open('metadata.cache', 'w'))\n\n", "I do not like “drag and drop” create of data access code. \nAt first sight it seems easy, but then you make a change to the database and have to update the data access code. This is where it becomes hard, as you often have to redo what you have done before, or hand edit the code the drag/drop designer created. Often when you make a change to one field mapping with a drag/drop designer, the output file has unrelated lines changes, so you can not use your source code control system to confirm you have make the intended change (and not change anything else).\nHowever having to create/edit xml configuring files is not nice every time you refractor your code or change your database schema you have to update the mapping file. It is also very hard to get started with mapping files and tracking down what looks like simple problem can take ages.\nThere are two other options:\nUse a code generator like CodeSmith that comes with templates for many ORM systems. When (not if) you need to customize the output you can edit the template, but the simple case are taken care of for you. That ways you just rerun the code generator every time you change the database schema and get a repeatable result.\nAnd/or use fluent interface (e.g Fluent NHibernate) to configure your ORM system, this avoids the need to the Xml config file and in most cases you can use naming conventions to link fields to columns etc. This will be harder to start with then a drag/drop designer but will pay of in the long term if you do match refactoring of the code or database.\nAnother option is to use a model that you generate both your database and code from. The “model” is your source code and is kept under version control. This is called “Model Driven Development” and can be great if you have lots of classes that have simpler patterns, as you only need to create the template for each pattern once.\n", "I have heard iBattis is good. A few companies fall back to iBattis when their programmer teams are not capable of understanding Hibernate (time issue).\nPersonally, I still like Linq2Sql. Yes, the first time someone needs to delete and redrag over a table seems like too much work, but it really is not. And the time that it doesn't update your class code when you save is really a pain, but you simply control-a your tables and drag them over again. Total remakes are very quick and painless. 
The classes it creates are extremely simple. You can even create multiple table entities if you like with SPs for CRUD.\nLinking SPs to CRUD is similar to EF: You simply setup your SP with the same parameters as your table, then drag it over your table, and poof, it matches the data types.\nA lot of people go out of their way to take IQueryable away from the repository, but you can limit what you link in linq2Sql, so IQueryable is not too bad.\nCome to think of it, I wonder if there is a way to restrict the relations (and foreign keys).\n" ]
[ 2, 2, 0 ]
[]
[]
[ "entity_framework", "open_source", "php", "python" ]
stackoverflow_0001283646_entity_framework_open_source_php_python.txt
Q: Has anyone successfully configured NetBeans for Python (specifically Python 3.0) development? I was able to configure NetBeans for 2.6.1 by going to to the Python Platform Manager, creating a new platform, and pointing NetBeans at python.exe where I installed 2.6.1. However, when I follow the exact same steps for 3.0, I get an error in the NetBeans console that says "SyntaxError: invalid syntax". If it matters, Python is installed in this format: /Program Files /Python /2.6 python.exe and everything else /3.0 python.exe and everything else I'm wondering if anyone else has experienced this and what they did to correct the problem. A: Yep- it's actually very easy. The scripts in the plugin use 'print' as a keyword which has been changed in Python 3; you just have to convert all 'print' statements in the console.py and platform_ info.py files under the 'python1' folder in your NetBeans installation directory to use parenthesis. For instance, in platform_info.py the first print line says: print "platform.name="+ "Jython " + version Change it to: print("platform.name="+ "Jython " + version) And do this for all print statements. Then go into the NetBeans and import your Python30 directory into the Python Platform Manager; it will work just fine. I haven't run into any other issues yet, but there might be some other small syntax issues in the plugin; they should be very easy to fix. A: It doesn't let me comment back here so I'll answer your comment in an post. Yes, it will let you use Python 2.x as well; the 'print' method was both a keyword and function prior to Python 3, so the parenthesis were optional. As on 3 they are required, so this change is backwards compatible. A: There are some issues with debugging, btw- I'll let you all know when I successfully figure out what has to be updated here. A: Thank you Ben Flynn for the solution to integrate python30 with netbeans 6.71 However, this piece of code : def fib(n): # write Fibonacci series up to n """Print a Fibonacci series up to n.""" a, b = 0, 1 while b < n: print (b, end=' ') a, b = b, a+b fib(2000) Which is an example code from a help site, runs with out error from the IDE, but the editor complains: Internal parser error "no viable alternative at input'=' " Which suggests it is parsing against python2.5.1 A: Starting at version 3.0, the print statement has to be written as a function... your print (b, end=' ') becomes print("end= ", b)
Has anyone successfully configured NetBeans for Python (specifically Python 3.0) development?
I was able to configure NetBeans for 2.6.1 by going to the Python Platform Manager, creating a new platform, and pointing NetBeans at python.exe where I installed 2.6.1. However, when I follow the exact same steps for 3.0, I get an error in the NetBeans console that says "SyntaxError: invalid syntax". If it matters, Python is installed in this format: /Program Files /Python /2.6 python.exe and everything else /3.0 python.exe and everything else I'm wondering if anyone else has experienced this and what they did to correct the problem.
[ "Yep- it's actually very easy. The scripts in the plugin use 'print' as a keyword which has been changed in Python 3; you just have to convert all 'print' statements in the console.py and platform_ info.py files under the 'python1' folder in your NetBeans installation directory to use parenthesis. For instance, in platform_info.py the first print line says:\nprint \"platform.name=\"+ \"Jython \" + version \n\nChange it to:\nprint(\"platform.name=\"+ \"Jython \" + version)\n\nAnd do this for all print statements. Then go into the NetBeans and import your Python30 directory into the Python Platform Manager; it will work just fine.\nI haven't run into any other issues yet, but there might be some other small syntax issues in the plugin; they should be very easy to fix.\n", "It doesn't let me comment back here so I'll answer your comment in an post.\nYes, it will let you use Python 2.x as well; the 'print' method was both a keyword and function prior to Python 3, so the parenthesis were optional. As on 3 they are required, so this change is backwards compatible.\n", "There are some issues with debugging, btw- I'll let you all know when I successfully figure out what has to be updated here.\n", "Thank you Ben Flynn for the solution to integrate python30 with netbeans 6.71\nHowever, this piece of code :\ndef fib(n): # write Fibonacci series up to n\n \"\"\"Print a Fibonacci series up to n.\"\"\"\n a, b = 0, 1\n while b < n:\n print (b, end=' ')\n a, b = b, a+b\n\nfib(2000)\n\nWhich is an example code from a help site, runs with out error from the IDE,\nbut the editor complains:\nInternal parser error\n\"no viable alternative at input'=' \"\n\nWhich suggests it is parsing against python2.5.1\n", "Starting at version 3.0, the print statement has to be written as a function...\nyour \nprint (b, end=' ')\nbecomes \nprint(\"end= \", b)\n" ]
[ 5, 2, 0, 0, 0 ]
[]
[]
[ "ide", "netbeans", "python" ]
stackoverflow_0000693459_ide_netbeans_python.txt
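For the record, the Python 3 print function keeps the value as the first argument and takes end as a keyword argument; the same spelling runs on Python 2.6 after a __future__ import:

from __future__ import print_function  # required on 2.6, a no-op on 3.x

b = 1
print(b, end=' ')  # Python 3 form of the old `print b,`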
Q: Importing a text file into SQL Server in Python I am writing a python script that will be doing some processing on text files. As part of that process, i need to import each line of the tab-separated file into a local MS SQL Server (2008) table. I am using pyodbc and I know how to do this. However, I have a question about the best way to execute it. I will be looping through the file, creating a cursor.execute(myInsertSQL) for each line of the file. Does anyone see any problems waiting to commit the statements until all records have been looped (i.e. doing the commit() after the loop and not inside the loop after each individual execute)? The reason I ask is that some files will have upwards of 5000 lines. I didn't know if trying to "save them up" and committing all 5000 at once would cause problems. I am fairly new to python, so I don't know all of these issues yet. Thanks. A: If I understand what you are doing, Python is not going to be a problem. Executing a statement inside a transaction does not create cumulative state in Python. It will do so only at the database server itself. When you commit you will need to make sure the commit occurred, since having a large batch commit may conflict with intervening changes in the database. If the commit fails, you will have to re-run the batch again. That's the only problem that I am aware of with large batches and Python/ODBC (and it's not even really a Python problem, since you would have that problem regardless.) Now, if you were creating all the SQL in memory, and then looping through the memory-representation, that might make more sense. Still, 5000 lines of text on a modern machine is really not that big of a deal. If you start needing to process two orders of magnitude more, you might need to rethink your process. A: Create a file and use BULK INSERT. It will be faster.
Importing a text file into SQL Server in Python
I am writing a python script that will be doing some processing on text files. As part of that process, I need to import each line of the tab-separated file into a local MS SQL Server (2008) table. I am using pyodbc and I know how to do this. However, I have a question about the best way to execute it. I will be looping through the file, creating a cursor.execute(myInsertSQL) for each line of the file. Does anyone see any problems waiting to commit the statements until all records have been looped (i.e. doing the commit() after the loop and not inside the loop after each individual execute)? The reason I ask is that some files will have upwards of 5000 lines. I didn't know if trying to "save them up" and committing all 5000 at once would cause problems. I am fairly new to python, so I don't know all of these issues yet. Thanks.
[ "If I understand what you are doing, Python is not going to be a problem. Executing a statement inside a transaction does not create cumulative state in Python. It will do so only at the database server itself.\nWhen you commit you will need to make sure the commit occurred, since having a large batch commit may conflict with intervening changes in the database. If the commit fails, you will have to re-run the batch again.\nThat's the only problem that I am aware of with large batches and Python/ODBC (and it's not even really a Python problem, since you would have that problem regardless.)\nNow, if you were creating all the SQL in memory, and then looping through the memory-representation, that might make more sense. Still, 5000 lines of text on a modern machine is really not that big of a deal. If you start needing to process two orders of magnitude more, you might need to rethink your process.\n", "Create a file and use BULK INSERT. It will be faster.\n" ]
[ 0, 0 ]
[]
[]
[ "bulkinsert", "commit", "database", "odbc", "python" ]
stackoverflow_0001325481_bulkinsert_commit_database_odbc_python.txt
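A sketch of the BULK INSERT route from the second answer, issued through pyodbc; the connection string and file path are assumptions, and the file must be readable from the SQL Server machine, not just the client:

import pyodbc

cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=localhost;'
                      'DATABASE=mydb;Trusted_Connection=yes')  # assumed
cur = cnxn.cursor()
cur.execute(
    "BULK INSERT accounts FROM 'C:\\data\\test-long.csv' "
    "WITH (FIELDTERMINATOR = '\\t', ROWTERMINATOR = '\\n', FIRSTROW = 2)")
cnxn.commit()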
Q: How to read the whole data of a file-like object in Python filename = fileobject.read() I want to transfer/assign the whole data of a file object to a variable. A: You are almost doing it correctly already; the code should read filecontent = fileobject.read() read() with no arguments will read the whole data, i.e. the whole file content. The file name has nothing to do with that.
How to read the whole data of a file-like object in Python
filename = fileobject.read() I want to transfer/assign the whole data of a file object to a variable.
[ "You are almost doing it correctly already; the code should read\nfilecontent = fileobject.read()\n\nread() with no arguments will read the whole data, i.e. the whole file content. The file name has nothing to do with that.\n" ]
[ 4 ]
[]
[]
[ "file_io", "python" ]
stackoverflow_0001326271_file_io_python.txt
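A complete round trip, for clarity — the file name matters only when opening, and a with block closes the handle automatically ('data.txt' is a placeholder):

with open('data.txt') as fileobject:
    filecontent = fileobject.read()  # the entire file as one string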
Q: Help with MySQL LOAD DATA INFILE I want to load a CSV file that looks like this: Acct. No.,1-15 Days,16-30 Days,31-60 Days,61-90 Days,91-120 Days,Beyond 120 Days 2314134101,898.89,8372.16,5584.23,7744.41,9846.54,2896.25 2414134128,5457.61,7488.26,9594.02,6234.78,273.7,2356.13 2513918869,2059.59,7578.59,9395.51,7159.15,5827.48,3041.62 1687950783,4846.85,8364.22,9892.55,7213.45,8815.33,7603.4 2764856043,5250.11,9946.49,8042.03,6058.64,9194.78,8296.2 2865446086,596.22,7670.04,8564.08,3263.85,9662.46,7027.22 ,4725.99,1336.24,9356.03,1572.81,4942.11,6088.94 ,8248.47,956.81,8713.06,2589.14,5316.68,1543.67 ,538.22,1473.91,3292.09,6843.89,2687.07,9808.05 ,9885.85,2730.72,6876,8024.47,1196.87,1655.29 But if you notice, some of the fields are incomplete. I'm thinking MySQL will just skip the row where the first column is missing. When I run the command: LOAD DATA LOCAL INFILE 'test-long.csv' REPLACE INTO TABLE accounts FIELDS TERMINATED BY ',' LINES TERMINATED BY '\r\n' IGNORE 1 LINES (cf_535, cf_580, cf_568, cf_569, cf_571, cf_572); And the MySQL output is: Query OK, 41898 rows affected, 20948 warnings (0.78 sec) Records: 20949 Deleted: 20949 Skipped: 0 Warnings: 20948 The number of lines is only 20,949 but MySQL reports it as 41,898 rows affected. Why so? Also, nothing really changed in the table. I also couldn't see what the warnings generated is all about. I wanted to use the LOAD DATA INFILE because it takes python half a second to update each row which translates to 2.77 hours for a file with 20,000+ records. UPDATE: Modified the code to set auto-commit to 'False' and added a db.commit() statement: # Tell MySQLdb to turn off auto-commit db.autocommit(False) # Set count to 1 count = 1 while count < len(contents): if contents[count][0] != '': cursor.execute(""" UPDATE accounts SET cf_580 = %s, cf_568 = %s, cf_569 = %s, cf_571 = %s, cf_572 = %s WHERE cf_535 = %s""" % (contents[count][1], contents[count][2], contents[count][3], contents[count][4], contents[count][5], contents[count][0])) count += 1 try: db.commit() except: db.rollback() A: You have basically 3 issues here. In reverse order Are you doing your Python inserts in individual statements? You probably want to surround them all with a begin transaction/commit. 20,000 commits could easily take hours. Your import statement defines 6 fields, but the CSV has 7 fields. That would explain the double row count: every line of input results in 2 rows in the database, the 2nd one with fields 2-6 null. Incomplete rows will be inserted with null or default values for the missing columns. This may not be what you want with those malformed rows. If your python program can't perform fast enough even with a single transaction, you should at least have the python program edit/clean the data file before importing. If Acct. No. is the primary key, as seems reasonable, inserting rows with blank will either cause the whole import to fail, or if auto number is on, cause bogus data to be imported. A: If you use REPLACE keyword in LOAD DATA, then number after "Deleted: " shows how many rows were actually replaced
Help with MySQL LOAD DATA INFILE
I want to load a CSV file that looks like this: Acct. No.,1-15 Days,16-30 Days,31-60 Days,61-90 Days,91-120 Days,Beyond 120 Days 2314134101,898.89,8372.16,5584.23,7744.41,9846.54,2896.25 2414134128,5457.61,7488.26,9594.02,6234.78,273.7,2356.13 2513918869,2059.59,7578.59,9395.51,7159.15,5827.48,3041.62 1687950783,4846.85,8364.22,9892.55,7213.45,8815.33,7603.4 2764856043,5250.11,9946.49,8042.03,6058.64,9194.78,8296.2 2865446086,596.22,7670.04,8564.08,3263.85,9662.46,7027.22 ,4725.99,1336.24,9356.03,1572.81,4942.11,6088.94 ,8248.47,956.81,8713.06,2589.14,5316.68,1543.67 ,538.22,1473.91,3292.09,6843.89,2687.07,9808.05 ,9885.85,2730.72,6876,8024.47,1196.87,1655.29 But if you notice, some of the fields are incomplete. I'm thinking MySQL will just skip the row where the first column is missing. When I run the command: LOAD DATA LOCAL INFILE 'test-long.csv' REPLACE INTO TABLE accounts FIELDS TERMINATED BY ',' LINES TERMINATED BY '\r\n' IGNORE 1 LINES (cf_535, cf_580, cf_568, cf_569, cf_571, cf_572); And the MySQL output is: Query OK, 41898 rows affected, 20948 warnings (0.78 sec) Records: 20949 Deleted: 20949 Skipped: 0 Warnings: 20948 The number of lines is only 20,949 but MySQL reports it as 41,898 rows affected. Why so? Also, nothing really changed in the table. I also couldn't see what the warnings generated is all about. I wanted to use the LOAD DATA INFILE because it takes python half a second to update each row which translates to 2.77 hours for a file with 20,000+ records. UPDATE: Modified the code to set auto-commit to 'False' and added a db.commit() statement: # Tell MySQLdb to turn off auto-commit db.autocommit(False) # Set count to 1 count = 1 while count < len(contents): if contents[count][0] != '': cursor.execute(""" UPDATE accounts SET cf_580 = %s, cf_568 = %s, cf_569 = %s, cf_571 = %s, cf_572 = %s WHERE cf_535 = %s""" % (contents[count][1], contents[count][2], contents[count][3], contents[count][4], contents[count][5], contents[count][0])) count += 1 try: db.commit() except: db.rollback()
[ "You have basically 3 issues here. In reverse order\n\nAre you doing your Python inserts in individual statements? You probably want to surround them all with a begin transaction/commit. 20,000 commits could easily take hours. \nYour import statement defines 6 fields, but the CSV has 7 fields. That would explain the double row count: every line of input results in 2 rows in the database, the 2nd one with fields 2-6 null.\nIncomplete rows will be inserted with null or default values for the missing columns. This may not be what you want with those malformed rows.\n\nIf your python program can't perform fast enough even with a single transaction, you should at least have the python program edit/clean the data file before importing. If Acct. No. is the primary key, as seems reasonable, inserting rows with blank will either cause the whole import to fail, or if auto number is on, cause bogus data to be imported.\n", "If you use REPLACE keyword in LOAD DATA, then number after \"Deleted: \" shows how many rows were actually replaced\n" ]
[ 2, 0 ]
[]
[]
[ "load", "load_data_infile", "mysql", "python" ]
stackoverflow_0001236971_load_load_data_infile_mysql_python.txt
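Tying the two answers together: the CSV has seven fields but the column list names six (hence the 20,948 warnings), and under REPLACE every replaced row counts as a delete plus an insert, which is why "rows affected" is exactly double the record count. A hedged sketch of a corrected load — @dummy absorbs the seventh field; swap in the real column name if one exists:

import MySQLdb

db = MySQLdb.connect(host='localhost', user='user', passwd='pw',
                     db='mydb', local_infile=1)  # assumed credentials
db.cursor().execute("""
    LOAD DATA LOCAL INFILE 'test-long.csv'
    REPLACE INTO TABLE accounts
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\\r\\n'
    IGNORE 1 LINES
    (cf_535, cf_580, cf_568, cf_569, cf_571, cf_572, @dummy)
""")
db.commit()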
Q: Help me understand this traceback from the twisted.words msn sample I'm running the twisted.words msn protocol example from the twisted documentation located here: http://twistedmatrix.com/projects/words/documentation/examples/msn_example.py I am aware there is another question about this sample .py on stackoverflow, but this is an entirely different problem. When I run the example, it behaves as expected. Logs into the account and displays information about users on the buddylist, but after having done that it spits out this traceback > Traceback (most recent call last): > File > "c:\python26\lib\site-packages\twisted\python\log.py", > line 84, in callWithLogger > return callWithContext({"system": lp}, func, *args, **kw) File > "c:\python26\lib\site-packages\twisted\python\log.py", > line 69, in callWithContext > return context.call({ILogContext: newCtx}, func, *args, **kw) File > "c:\python26\lib\site-packages\twisted\python\context.py", > line 59, in callWithContext > return self.currentContext().callWithContext(ctx, > func, *args, **kw) File > "c:\python26\lib\site-packages\twisted\python\context.py", > line 37, in callWithContext > return func(*args,**kw) > --- <exception caught here> --- File "c:\python26\lib\site-packages\twisted\internet\selectreactor.py", > line 146, in _doReadOrWrite > why = getattr(selectable, method)() File > "c:\python26\lib\site-packages\twisted\internet\tcp.py", > line 463, in doRead > return self.protocol.dataReceived(data) > File > "c:\python26\lib\site-packages\twisted\protocols\basic.py", line 239, indataReceived > return self.rawDataReceived(data) File > "c:\python26\lib\site-packages\twisted\words\protocols\msn.py", > line 676 in rawDataReceived > self.gotMessage(m) File "c:\python26\lib\site-packages\twisted\words\protocols\msn.py", > line 699, in gotMessage > raise NotImplementedError exceptions.NotImplementedError: could someone help me understand what that means? A: It looks like it's a change to the way the MSN server operates, although it doesn't really count as a change to the protocol. What's happening is the MSN server is sending a message to the client immediately after the client connects and the Twisted words example isn't expecting that. Assuming you're running the msn_example.py from http://twistedmatrix.com/projects/words/documentation/examples/, you can get the example working and see what's happening by adding the following code to the example (right after the end of the listSynchronized function): def gotMessage(self, message): print message.headers print message.getMessage() After making the changes, if you run the example, you should see the following: ... 2009-08-25 00:03:23-0700 [Notification,client] {'Content-Type': 'text/x-msmsgsinitialemailnotification; charset=UTF-8', 'MIME-Version': '1.0'} 2009-08-25 00:03:23-0700 [Notification,client] Inbox-Unread: 1 2009-08-25 00:03:23-0700 [Notification,client] Folders-Unread: 0 2009-08-25 00:03:23-0700 [Notification,client] Inbox-URL: /cgi-bin/HoTMaiL 2009-08-25 00:03:23-0700 [Notification,client] Folders-URL: /cgi-bin/folders 2009-08-25 00:03:23-0700 [Notification,client] Post-URL: http://www.hotmail.com 2009-08-25 00:03:23-0700 [Notification,client] We can see that the server is sending the client a message which specifies the number of unread email messages there are for that account. Hope that helps! A: The method gotMessage claims to not be implemented. 
That likely means that you have subclassed a class that needs gotMessage to be overridden in the subclass, but you haven't done the overriding.
Help me understand this traceback from the twisted.words msn sample
I'm running the twisted.words msn protocol example from the twisted documentation located here: http://twistedmatrix.com/projects/words/documentation/examples/msn_example.py I am aware there is another question about this sample .py on stackoverflow, but this is an entirely different problem. When I run the example, it behaves as expected. Logs into the account and displays information about users on the buddylist, but after having done that it spits out this traceback > Traceback (most recent call last): > File > "c:\python26\lib\site-packages\twisted\python\log.py", > line 84, in callWithLogger > return callWithContext({"system": lp}, func, *args, **kw) File > "c:\python26\lib\site-packages\twisted\python\log.py", > line 69, in callWithContext > return context.call({ILogContext: newCtx}, func, *args, **kw) File > "c:\python26\lib\site-packages\twisted\python\context.py", > line 59, in callWithContext > return self.currentContext().callWithContext(ctx, > func, *args, **kw) File > "c:\python26\lib\site-packages\twisted\python\context.py", > line 37, in callWithContext > return func(*args,**kw) > --- <exception caught here> --- File "c:\python26\lib\site-packages\twisted\internet\selectreactor.py", > line 146, in _doReadOrWrite > why = getattr(selectable, method)() File > "c:\python26\lib\site-packages\twisted\internet\tcp.py", > line 463, in doRead > return self.protocol.dataReceived(data) > File > "c:\python26\lib\site-packages\twisted\protocols\basic.py", line 239, indataReceived > return self.rawDataReceived(data) File > "c:\python26\lib\site-packages\twisted\words\protocols\msn.py", > line 676 in rawDataReceived > self.gotMessage(m) File "c:\python26\lib\site-packages\twisted\words\protocols\msn.py", > line 699, in gotMessage > raise NotImplementedError exceptions.NotImplementedError: could someone help me understand what that means?
[ "It looks like it's a change to the way the MSN server operates, although it doesn't really count as a change to the protocol. What's happening is the MSN server is sending a message to the client immediately after the client connects and the Twisted words example isn't expecting that.\nAssuming you're running the msn_example.py from http://twistedmatrix.com/projects/words/documentation/examples/, you can get the example working and see what's happening by adding the following code to the example (right after the end of the listSynchronized function):\ndef gotMessage(self, message):\n print message.headers\n print message.getMessage()\n\nAfter making the changes, if you run the example, you should see the following:\n...\n2009-08-25 00:03:23-0700 [Notification,client] {'Content-Type': 'text/x-msmsgsinitialemailnotification; charset=UTF-8', 'MIME-Version': '1.0'}\n2009-08-25 00:03:23-0700 [Notification,client] Inbox-Unread: 1\n2009-08-25 00:03:23-0700 [Notification,client] Folders-Unread: 0\n2009-08-25 00:03:23-0700 [Notification,client] Inbox-URL: /cgi-bin/HoTMaiL\n2009-08-25 00:03:23-0700 [Notification,client] Folders-URL: /cgi-bin/folders\n2009-08-25 00:03:23-0700 [Notification,client] Post-URL: http://www.hotmail.com\n2009-08-25 00:03:23-0700 [Notification,client]\n\nWe can see that the server is sending the client a message which specifies the number of unread email messages there are for that account.\nHope that helps!\n", "The method gotMessage claims to not be implemented. That likely means that you have subclassed a class that needs gotMessage to be overridden in the subclass, but you haven't done the overriding.\n" ]
[ 1, 0 ]
[]
[]
[ "msn", "python", "traceback", "twisted" ]
stackoverflow_0001244733_msn_python_traceback_twisted.txt
Q: Problem with shelve module? Using the shelve module has given me some surprising behavior. keys(), iter(), and iteritems() don't return all the entries in the shelf! Here's the code: cache = shelve.open('my.cache') # ... cache[url] = (datetime.datetime.today(), value) later: cache = shelve.open('my.cache') urls = ['accounts_with_transactions.xml', 'targets.xml', 'profile.xml'] try: print list(cache.keys()) # doesn't return all the keys! print [url for url in urls if cache.has_key(url)] print list(cache.keys()) finally: cache.close() and here's the output: ['targets.xml'] ['accounts_with_transactions.xml', 'targets.xml'] ['targets.xml', 'accounts_with_transactions.xml'] Has anyone run into this before, and is there a workaround without knowing all possible cache keys a priori? A: According to the python library reference: ...The database is also (unfortunately) subject to the limitations of dbm, if it is used — this means that (the pickled representation of) the objects stored in the database should be fairly small... This correctly reproduces the 'bug': import shelve a = 'trxns.xml' b = 'foobar.xml' c = 'profile.xml' urls = [a, b, c] cache = shelve.open('my.cache', 'c') try: cache[a] = a*1000 cache[b] = b*10000 finally: cache.close() cache = shelve.open('my.cache', 'c') try: print cache.keys() print [url for url in urls if cache.has_key(url)] print cache.keys() finally: cache.close() with the output: [] ['trxns.xml', 'foobar.xml'] ['foobar.xml', 'trxns.xml'] The answer, therefore, is don't store anything big—like raw xml—but rather results of calculations in a shelf. A: Seeing your examples, my first thought is that cache.has_key() has side effects, i.e. this call will add keys to the cache. What do you get for print cache.has_key('xxx') print list(cache.keys())
Problem with shelve module?
Using the shelve module has given me some surprising behavior. keys(), iter(), and iteritems() don't return all the entries in the shelf! Here's the code: cache = shelve.open('my.cache') # ... cache[url] = (datetime.datetime.today(), value) later: cache = shelve.open('my.cache') urls = ['accounts_with_transactions.xml', 'targets.xml', 'profile.xml'] try: print list(cache.keys()) # doesn't return all the keys! print [url for url in urls if cache.has_key(url)] print list(cache.keys()) finally: cache.close() and here's the output: ['targets.xml'] ['accounts_with_transactions.xml', 'targets.xml'] ['targets.xml', 'accounts_with_transactions.xml'] Has anyone run into this before, and is there a workaround without knowing all possible cache keys a priori?
[ "According to the python library reference:\n\n...The database is also (unfortunately) subject to the limitations of dbm, if it is used — this means that (the pickled representation of) the objects stored in the database should be fairly small...\n\nThis correctly reproduces the 'bug':\nimport shelve\n\na = 'trxns.xml'\nb = 'foobar.xml'\nc = 'profile.xml'\n\nurls = [a, b, c]\ncache = shelve.open('my.cache', 'c')\n\ntry:\n cache[a] = a*1000\n cache[b] = b*10000\nfinally:\n cache.close()\n\n\ncache = shelve.open('my.cache', 'c')\n\ntry:\n print cache.keys()\n print [url for url in urls if cache.has_key(url)]\n print cache.keys()\nfinally:\n cache.close()\n\nwith the output:\n[]\n['trxns.xml', 'foobar.xml']\n['foobar.xml', 'trxns.xml']\n\nThe answer, therefore, is don't store anything big—like raw xml—but rather results of calculations in a shelf.\n", "Seeing your examples, my first thought is that cache.has_key() has side effects, i.e. this call will add keys to the cache. What do you get for\nprint cache.has_key('xxx')\nprint list(cache.keys())\n\n" ]
[ 3, 0 ]
[]
[]
[ "python", "shelve" ]
stackoverflow_0001326459_python_shelve.txt
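Given the conclusion above — dbm backends silently drop entries whose pickled values are too large — one workaround sketch is to keep each shelf value small by compressing the payload before storing it (url and value are placeholder data):

import shelve
import zlib

url, value = 'targets.xml', '<xml/>' * 1000  # placeholder payload

cache = shelve.open('my.cache', 'c')
try:
    cache[url] = zlib.compress(value)  # store a small, compressed blob
finally:
    cache.close()

cache = shelve.open('my.cache', 'c')
try:
    assert zlib.decompress(cache[url]) == value
finally:
    cache.close()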
Q: Using sphinx to auto-document a python class, module I have installed Sphinx in order to document some Python modules and class I'm working on. While the markup language looks very nice, I haven't managed to auto-document a Python code. Basically, I have the following Python module: SegLib.py And A class called Seg in it. I would like to display the docstrings of the class and module within the generated Sphinx document, and add further formatted text to it. My index.rst looks like this: Contents: .. toctree:: :maxdepth: 2 chapter1.rst and chapter1.rst: This is a header ================ Some text, *italic text*, **bold text** * bulleted list. There needs to be a space right after the "*" * item 2 .. note:: This is a note. See :class:`Seg` But Seg is just printed in bold, and not linked to an auto-generated documentation of the class. Trying the following didn't help, either: See :class:`Seg` Module :mod:'SegLib' Module :mod:'SegLib.py' Edit: changed SegLib to segments (thanks, iElectric!), and changed chapter1.rst to: The :mod:`segments` Module -------------------------- .. automodule:: segments.segments .. autoclass:: segments.segments.Seg Still, can't get Sphinx to directly document functions within a class, or better - to automatically add all the functions within a class to the document. Tried: .. autofunction:: segments.segments.Seg.sid and got: autodoc can't import/find function 'segments.segments.Seg.sid', it reported error: "No module named Seg" Any ideas how to auto-document the functions and classes with a short command? A: Add to the beginning of the file: .. module:: SegLib Try using :autoclass: directive for class doc. BTW: module names should be lower_case. EDIT: I learned a lot from reading other source files.
Using sphinx to auto-document a python class, module
I have installed Sphinx in order to document some Python modules and class I'm working on. While the markup language looks very nice, I haven't managed to auto-document a Python code. Basically, I have the following Python module: SegLib.py And A class called Seg in it. I would like to display the docstrings of the class and module within the generated Sphinx document, and add further formatted text to it. My index.rst looks like this: Contents: .. toctree:: :maxdepth: 2 chapter1.rst and chapter1.rst: This is a header ================ Some text, *italic text*, **bold text** * bulleted list. There needs to be a space right after the "*" * item 2 .. note:: This is a note. See :class:`Seg` But Seg is just printed in bold, and not linked to an auto-generated documentation of the class. Trying the following didn't help, either: See :class:`Seg` Module :mod:'SegLib' Module :mod:'SegLib.py' Edit: changed SegLib to segments (thanks, iElectric!), and changed chapter1.rst to: The :mod:`segments` Module -------------------------- .. automodule:: segments.segments .. autoclass:: segments.segments.Seg Still, can't get Sphinx to directly document functions within a class, or better - to automatically add all the functions within a class to the document. Tried: .. autofunction:: segments.segments.Seg.sid and got: autodoc can't import/find function 'segments.segments.Seg.sid', it reported error: "No module named Seg" Any ideas how to auto-document the functions and classes with a short command?
[ "Add to the beginning of the file:\n.. module:: SegLib\n\nTry using :autoclass: directive for class doc.\nBTW: module names should be lower_case.\nEDIT: I learned a lot from reading other source files.\n" ]
[ 18 ]
[]
[]
[ "autodoc", "python", "python_sphinx" ]
stackoverflow_0001326796_autodoc_python_python_sphinx.txt
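A note on the autodoc error above: autofunction only handles module-level functions, which is why it could not find Seg.sid; automethod is the directive for a single method, and the :members: option makes autodoc recurse into a module or class on its own. A minimal chapter1.rst sketch, assuming the segments package from the question is importable on sys.path:

.. automodule:: segments.segments
   :members:

.. autoclass:: segments.segments.Seg
   :members:

.. automethod:: segments.segments.Seg.sid

With :members: in place, every docstring-carrying function and method is pulled in automatically, so the per-method directive is only needed for cherry-picking.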
Q: How to organize a Database Access layer? I am using SQLAlchemy, a Python ORM library, and I used to access the database directly from the business layer by calling the SQLAlchemy API. But then I found that this made all my test cases take too much time to run, and now I think maybe I should create a DB access layer, so I can use mock objects during tests instead of accessing the database directly. I think there are two ways to do that: use a single class which contains a DB connection and many methods like addUser/delUser/updateUser, addBook/delBook/updateBook. But this means the class will be very large. The other approach is to create different manager classes like "UserManager" and "BookManager". But that means I have to pass a list of managers to the business layer, which seems a little cumbersome. How would you organize a database layer? A: That's a good question! The problem is not trivial, and may require several approaches to tackle it. For instance: Organize the code so that you can test most of the application logic without accessing the database. This means that each class will have methods for accessing data, and methods for processing it, and the latter may be tested easily. When you need to test database access, you may use a proxy (so, like solution #1); you can think of it as an engine for SQLAlchemy or as a drop-in replacement for the SA. In both cases, you may want to think about a self-initializing fake. If the code does not involve stored procedures, think about using in-memory databases, like Lennart says (even if in this case, calling it a "unit test" may sound a bit strange!). However, from my experience, everything is quite easy on paper, and then falls apart abruptly when you go into the field. For instance, what do you do when most of the logic is in the SQL statements? What if accessing data is strictly interleaved with its processing? Sometimes you may be able to refactor, sometimes (especially with large and legacy applications) not. In the end, I think it is mostly a matter of mindset. If you think you need to have unit tests, and you need to have them running fast, then you design your application in a certain way that allows for easier unit testing. Unfortunately, this is not always the case (many people see unit tests as something that can run overnight, so time is not an issue), and you get something that will not really be unit-testable. A: I would set up a database connection during testing that connects to an in-memory database instead. Like so: sqlite_memory_db = create_engine('sqlite://') That will be pretty much as fast as you can get; you are also not connecting to a real database, but just a temporary one in memory, so you don't have to worry about the changes done by your tests remaining after the test, etc. And you don't have to mock anything. 
A: One way to capture modifications to the database is to use the SQLAlchemy session extension mechanism and intercept flushes to the database using something like this: from sqlalchemy.orm.attributes import instance_state from sqlalchemy.orm import SessionExtension class MockExtension(SessionExtension): def __init__(self): self.clear() def clear(self): self.updates = set() self.inserts = set() self.deletes = set() def before_flush(self, session, flush_context, instances): for obj in session.dirty: self.updates.add(obj) state = instance_state(obj) state.commit_all({}) session.identity_map._mutable_attrs.discard(state) session.identity_map._modified.discard(state) for obj in session.deleted: self.deletes.add(obj) session.expunge(obj) self.inserts.update(session.new) session._new = {} Then for tests you can configure your session with that mock and see if it matches your expectations. mock = MockExtension() Session = sessionmaker(extension=[mock], expire_on_commit=False) def do_something(attr): session = Session() obj = session.query(Cls).first() obj.attr = attr session.commit() def test_something(): mock.clear() do_something('foobar') assert len(mock.updates) == 1 updated_obj = mock.updates.pop() assert updated_obj.attr == 'foobar' But you'll want to do at least some tests with a real database anyway, because you'll at least want to know whether your queries work as expected. And keep in mind that you can also have modifications to the database via session.update(), .delete() and .execute(). A: SQLAlchemy has some facilities for making mocking easier -- maybe that would be easier than trying to rewrite whole sections of your project?
How to organize a Database Access layer?
I am using SQLAlchemy, a Python ORM library, and I used to access the database directly from the business layer by calling the SQLAlchemy API. But then I found that this made all my test cases take too much time to run, and now I think maybe I should create a DB access layer, so I can use mock objects during tests instead of accessing the database directly. I think there are two ways to do that: use a single class which contains a DB connection and many methods like addUser/delUser/updateUser, addBook/delBook/updateBook. But this means the class will be very large. The other approach is to create different manager classes like "UserManager" and "BookManager". But that means I have to pass a list of managers to the business layer, which seems a little cumbersome. How would you organize a database layer?
[ "That's a good question!\nThe problem is not trivial, and may require several approaches to tackle it.\nFor instance:\n\nOrganize the code, so that you can test most of the application logic without accessing the database. This means that each class will have methods for accessing data, and methods for processing it, and the second ones may be tested easily.\nWhen you need to test database access, you may use a proxy (so, like solution #1); you can think of it as an engine for SqlAlchemy or as a drop-in replacement for the SA. In both cases, you may want to think to a self initializing fake.\nIf the code does not involve stored procedures, think about using in-memory databases, like Lennart says (even if in this case, calling it \"unit test\" may sound a bit strange!).\n\nHowever, from my experience, everything is quite easy on word, and then falls abruptly when you go on the field. For instance, what to do when most of the logic is in the SQL statements? What if accessing data is strictly interleaved with its processing? Sometimes you may be able to refactor, sometimes (especially with large and legacy applications) not.\nIn the end, I think it is mostly a matter of mindset.\nIf you think you need to have unit tests, and you need to have them running fast, then you design your application in a certain way, that allow for easier unit testing.\nUnfortunately, this is not always true (many people see unit tests as something that can run overnight, so time is not an issue), and you get something that will not be really unit-testable.\n", "I would set up a database connection during testing that connects to a in memory database instead. Like so:\nsqlite_memory_db = create_engine('sqlite://')\n\nThat will be pretty much as fast as you can get, you are also not connecting to a real database, but just a temporary one in memory, so you don't have to worry about the changes done by your tests remaining after the test, etc. And you don't have to mock anything.\n", "One way to capture modifications to the database, is to use the SQLAlchemy session extension mechanism and intercept flushes to the database using something like this:\nfrom sqlalchemy.orm.attributes import instance_state\nfrom sqlalchemy.orm import SessionExtension\n\nclass MockExtension(SessionExtension):\n def __init__(self):\n self.clear()\n\n def clear(self):\n self.updates = set()\n self.inserts = set()\n self.deletes = set()\n\n def before_flush(self, session, flush_context, instances):\n for obj in session.dirty:\n self.updates.add(obj)\n state = instance_state(obj)\n state.commit_all({})\n session.identity_map._mutable_attrs.discard(state)\n session.identity_map._modified.discard(state)\n\n for obj in session.deleted:\n self.deletes.add(obj)\n session.expunge(obj)\n\n self.inserts.update(session.new)\n session._new = {}\n\nThen for tests you can configure your session with that mock and see if it matches your expectations.\nmock = MockExtension()\nSession = sessionmaker(extension=[mock], expire_on_commit=False)\n\ndef do_something(attr):\n session = Session()\n obj = session.query(Cls).first()\n obj.attr = attr\n session.commit()\n\ndef test_something():\n mock.clear()\n do_something('foobar')\n assert len(mock.updates) == 1\n updated_obj = mock.updates.pop()\n assert updated_obj.attr == 'foobar'\n\nBut you'll want to do at least some tests with a database anyway because you'll atleast want to know if your queries work as expected. 
And keep in mind that you can also have modifications to the database via session.update(), .delete() and .execute().\n", "SQLAlchemy has some facilities for making mocking easier -- maybe that would be easier than trying to rewrite whole sections of your project? \n" ]
[ 6, 2, 2, 0 ]
[]
[]
[ "database", "mocking", "orm", "python", "testing" ]
stackoverflow_0001326243_database_mocking_orm_python_testing.txt
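To make the in-memory suggestion above concrete, here is a small self-contained sketch; it assumes SQLAlchemy 1.4+ (for the declarative_base import location), and the User model and test are invented for illustration:

from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)

def make_session():
    # 'sqlite://' with no path creates a private in-memory database
    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    return sessionmaker(bind=engine)()

def test_add_user():
    session = make_session()
    session.add(User(name='alice'))
    session.commit()
    assert session.query(User).count() == 1

Because each call to make_session builds a fresh engine, every test starts from an empty database and nothing leaks between tests.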
Q: How can I overload the assignment of a class member? I am writing a Player model class in Python with Django, and I've run into a small problem with the password member. I'd like the password to be automatically hashed upon assignment, but I can't find anything about overloading the assignment operator. Is there any way I can overload the assignment of password so as to automatically do hashlib.md5(password).hexdigest() on it? from django.db import models class Player(models.Model): name = models.CharField(max_length=30,unique=True) password = models.CharField(max_length=32) email = models.EmailField() A: Can't you use properties and override the setter for the field? Citing from the Django documentation: from django.db import models class Person(models.Model): first_name = models.CharField(max_length=30) last_name = models.CharField(max_length=30) def _get_full_name(self): return "%s %s" % (self.first_name, self.last_name) def _set_full_name(self, combined_name): self.first_name, self.last_name = combined_name.split(' ', 1) full_name = property(_get_full_name) full_name_2 = property(_get_full_name, _set_full_name) A: You can use the HashedProperty class that I created for SQLAlchemy. You can use it with Django like this: class Player(models.Model): name = models.CharField(max_length=30,unique=True) password_hash = models.CharField(max_length=32) password_salt = models.CharField(max_length=32) password = HashedProperty('password_hash', 'password_salt', hashfunc=salted_hexdigest(hashlib.md5), saltfunc=random_string(32)) email = models.EmailField()
How can I overload the assignment of a class member?
I am writing a Player model class in Python with Django, and I've run into a small problem with the password member. I'd like the password to be automatically hashed upon assignment, but I can't find anything about overloading the assignment operator. Is there any way I can overload the assignment of password so as to automatically do hashlib.md5(password).hexdigest() on it? from django.db import models class Player(models.Model): name = models.CharField(max_length=30,unique=True) password = models.CharField(max_length=32) email = models.EmailField()
[ "Can't you use properties and override setter for the field?\nCiting from django documentation:\nfrom django.db import models\n\nclass Person(models.Model):\n first_name = models.CharField(max_length=30)\n last_name = models.CharField(max_length=30)\n\n def _get_full_name(self):\n return \"%s %s\" % (self.first_name, self.last_name)\n\n def _set_full_name(self, combined_name):\n self.first_name, self.last_name = combined_name.split(' ', 1)\n\n full_name = property(_get_full_name)\n\n full_name_2 = property(_get_full_name, _set_full_name)\n\n", "You can use the HashedProperty class that I created for SQLAlchemy. You can use with Django like this:\nclass Player(models.Model):\n name = models.CharField(max_length=30,unique=True)\n password_hash = models.CharField(max_length=32)\n password_salt = models.CharField(max_length=32)\n password = HashedProperty('password_hash', 'password_salt',\n hashfunc=salted_hexdigest(hashlib.md5),\n saltfunc=random_string(32))\n email = models.EmailField()\n\n" ]
[ 6, 0 ]
[]
[]
[ "class", "django", "python", "variable_assignment" ]
stackoverflow_0001326978_class_django_python_variable_assignment.txt
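Pulling the property idea above back into the original Player model gives the sketch below. The _password field name, with db_column keeping the stored column named password, is an assumption made for the example; md5 is kept only because the question uses it, and Django's built-in password hashers are the right tool in real code. Assumes Python 3:

import hashlib
from django.db import models

class Player(models.Model):
    name = models.CharField(max_length=30, unique=True)
    _password = models.CharField(max_length=32, db_column='password')
    email = models.EmailField()

    def _get_password(self):
        return self._password

    def _set_password(self, raw_password):
        # hash transparently on assignment, as the question asks
        self._password = hashlib.md5(raw_password.encode('utf-8')).hexdigest()

    password = property(_get_password, _set_password)

After this, player.password = 'secret' stores the 32-character hex digest rather than the plain text.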
Q: Append a tuple to a list Given a tuple (specifically, a function's varargs), I want to prepend a list containing one or more items, then call another function with the result as a list. So far, the best I've come up with is: def fn(*args): l = ['foo', 'bar'] l.extend(args) fn2(l) Which, given Python's usual terseness when it comes to this sort of thing, seems like it takes two more lines than it should. Is there a more Pythonic way? A: You can convert the tuple to a list, which will allow you to concatenate it to the other list. ie: def fn(*args): fn2(['foo', 'bar'] + list(args)) A: If your fn2 took varargs also, you wouldn't need to build the combined list: def fn2(*l): print l def fn(*args): fn2(1, 2, *args) fn(10, 9, 8) produces (1, 2, 10, 9, 8)
Append a tuple to a list
Given a tuple (specifically, a function's varargs), I want to prepend a list containing one or more items, then call another function with the result as a list. So far, the best I've come up with is: def fn(*args): l = ['foo', 'bar'] l.extend(args) fn2(l) Which, given Python's usual terseness when it comes to this sort of thing, seems like it takes two more lines than it should. Is there a more Pythonic way?
[ "You can convert the tuple to a list, which will allow you to concatenate it to the other list. ie:\ndef fn(*args):\n fn2(['foo', 'bar'] + list(args))\n\n", "If your fn2 took varargs also, you wouldn't need to build the combined list:\ndef fn2(*l):\n print l\n\ndef fn(*args):\n fn2(1, 2, *args)\n\nfn(10, 9, 8)\n\nproduces\n(1, 2, 10, 9, 8)\n\n" ]
[ 9, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001327204_python.txt
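On Python 3.5 and later there is an even terser spelling than either answer above, using iterable unpacking inside the list literal; a quick sketch:

def fn2(l):
    print(l)

def fn(*args):
    fn2(['foo', 'bar', *args])  # the tuple unpacks straight into the list

fn(1, 2)  # prints ['foo', 'bar', 1, 2]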
Q: python - problems with regular expression and unicode Hi, I have a problem in Python. I'll try to explain my problem with an example. I have this string: >>> string = 'ÐÑÒÓÔÕÖרÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõö÷øùúûüýþÿÀÁÂÃ' >>> print string ÐÑÒÓÔÕÖרÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõö÷øùúûüýþÿÀÁÂÃ and I want, for example, to replace the characters other than Ñ, Ã, ï with "". I have tried: >>> rePat = re.compile('[^ÑÃï]',re.UNICODE) >>> print rePat.sub("",string) �Ñ�����������������������������ï�������������������Ã I obtained this �. I think this happens because these characters are represented in Python by two positions in the byte string: for example \xc3\x91 = Ñ. Because of this, when I apply the regular expression, the \xc3 bytes are not substituted. How can I do this kind of substitution? Thanks, Franco A: You need to make sure that your strings are unicode strings, not plain strings (plain strings are like byte arrays). Example: >>> string = 'ÐÑÒÓÔÕÖרÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõö÷øùúûüýþÿÀÁÂÃ' >>> type(string) <type 'str'> # do this instead: # (note the u in front of the ', this marks the character sequence as a unicode literal) >>> string = u'\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7\xe8\xe9\xea\xeb\xec\xed\xee\xef\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff\xc0\xc1\xc2\xc3' # or: >>> string = 'ÐÑÒÓÔÕÖרÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõö÷øùúûüýþÿÀÁÂÃ'.decode('utf-8') # ... but be aware that the latter will only work if the terminal (or source file) has utf-8 encoding # ... it is a best practice to use the \xNN form in unicode literals, as in the first example >>> type(string) <type 'unicode'> >>> print string ÐÑÒÓÔÕÖרÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõö÷øùúûüýþÿÀÁÂÃ >>> rePat = re.compile(u'[^\xd1\xc3\xef]',re.UNICODE) >>> print rePat.sub("", string) ÑïÃ When reading from a file, string = open('filename.txt').read() reads a byte sequence. To get the unicode content, do: string = unicode(open('filename.txt').read(), 'encoding'). Or: string = open('filename.txt').read().decode('encoding'). The codecs module can decode unicode streams (such as files) on-the-fly. Do a Google search for python unicode. Python unicode handling can be a bit hard to grasp at first; it pays to read up on it. I live by this rule: "Software should only work with Unicode strings internally, converting to a particular encoding on output." (from http://www.amk.ca/python/howto/unicode) I also recommend: http://www.joelonsoftware.com/articles/Unicode.html
python - problems with regular expression and unicode
Hi, I have a problem in Python. I'll try to explain my problem with an example. I have this string: >>> string = 'ÐÑÒÓÔÕÖרÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõö÷øùúûüýþÿÀÁÂÃ' >>> print string ÐÑÒÓÔÕÖרÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõö÷øùúûüýþÿÀÁÂÃ and I want, for example, to replace the characters other than Ñ, Ã, ï with "". I have tried: >>> rePat = re.compile('[^ÑÃï]',re.UNICODE) >>> print rePat.sub("",string) �Ñ�����������������������������ï�������������������Ã I obtained this �. I think this happens because these characters are represented in Python by two positions in the byte string: for example \xc3\x91 = Ñ. Because of this, when I apply the regular expression, the \xc3 bytes are not substituted. How can I do this kind of substitution? Thanks, Franco
[ "You need to make sure that your strings are unicode strings, not plain strings (plain strings are like byte arrays).\nExample:\n>>> string = 'ÐÑÒÓÔÕÖרÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõö÷øùúûüýþÿÀÁÂÃ'\n>>> type(string)\n<type 'str'>\n\n# do this instead:\n# (note the u in front of the ', this marks the character sequence as a unicode literal)\n>>> string = u'\\xd0\\xd1\\xd2\\xd3\\xd4\\xd5\\xd6\\xd7\\xd8\\xd9\\xda\\xdb\\xdc\\xdd\\xde\\xdf\\xe0\\xe1\\xe2\\xe3\\xe4\\xe5\\xe6\\xe7\\xe8\\xe9\\xea\\xeb\\xec\\xed\\xee\\xef\\xf0\\xf1\\xf2\\xf3\\xf4\\xf5\\xf6\\xf7\\xf8\\xf9\\xfa\\xfb\\xfc\\xfd\\xfe\\xff\\xc0\\xc1\\xc2\\xc3'\n# or:\n>>> string = 'ÐÑÒÓÔÕÖרÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõö÷øùúûüýþÿÀÁÂÃ'.decode('utf-8')\n# ... but be aware that the latter will only work if the terminal (or source file) has utf-8 encoding\n# ... it is a best practice to use the \\xNN form in unicode literals, as in the first example\n\n>>> type(string)\n<type 'unicode'>\n>>> print string\nÐÑÒÓÔÕÖרÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõö÷øùúûüýþÿÀÁÂÃ\n\n>>> rePat = re.compile(u'[^\\xc3\\x91\\xc3\\x83\\xc3\\xaf]',re.UNICODE)\n>>> print rePat.sub(\"\", string)\nÃ\n\n\nWhen reading from a file, string = open('filename.txt').read() reads a byte sequence.\nTo get the unicode content, do: string = unicode(open('filename.txt').read(), 'encoding'). Or: string = open('filename.txt').read().decode('encoding').\nThe codecs module can decode unicode streams (such as files) on-the-fly.\nDo a google search for python unicode. Python unicode handling can be a bit hard to grasp at first, it pays to read up on it.\nI live by this rule: \"Software should only work with Unicode strings internally, converting to a particular encoding on output.\" (from http://www.amk.ca/python/howto/unicode)\nI also recommend: http://www.joelonsoftware.com/articles/Unicode.html\n" ]
[ 14 ]
[]
[]
[ "python", "regex", "unicode" ]
stackoverflow_0001327731_python_regex_unicode.txt
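For readers on Python 3, where str is a unicode string by default, the substitution works with no decoding dance at all; a short sketch:

import re

s = 'ÐÑÒÓÔÕÖ×ïÀÁÂÃ'
# the character class now matches whole characters, not raw bytes
print(re.sub('[^ÑÃï]', '', s))  # -> 'ÑïÃ'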
Q: Having instance-like behaviour in databases Sorry for the bad title, but I have no idea how to put this in short. The problem is the following: I have a generic item that represents a group; let's call it Car. Now this Car has attributes that range within certain limits; let's say, for example, speed is between 0 and 180 for a usual Car. Imagine some more attributes with ranges here, for example Color is between 0 and 255, whatever that value might stand for. So in my table GenericItems I have: ID Name 1 Car And in my Attributes I have: ID Name Min_Value Max_Value 1 Speed 0 180 2 Color 0 255 The relation between Car and Attributes is thus 1:n. Now I start having very specific instances of my Car, for example a FordMustang, a FerrariF40, and a DodgeViper. These are specific instances and now I want to give them specific values for their attributes. So in my table SpecificItem I have: ID Name GenericItem_ID 1 FordMustang 1 2 DodgeViper 1 3 FerrariF40 1 Now I need a third table SpecificAttributes2SpecificItems, to match attributes to SpecificItems: ID SpecificItem_ID Attribute_ID Value 1 1 1 120 ;Ford Mustang goes 120 only 2 1 2 123 ;Ford Mustang is red 3 2 1 150 ;Dodge Viper goes 150 4 2 2 255 ;Dodge Viper is white 5 3 1 180 ;FerrariF40 goes 180 6 3 2 0 ;FerrariF40 is black The problem with this design is, as you can see, that I am basically always copying over all rows of attributes, and I feel like this is bad design, inconsistent, etc. How can I achieve this logic in a correct, normalized way? I want to be able to have multiple generic items, with multiple attributes with min/max values as intervals, that can be "instantiated" with specific values. A: The easiest way to use inheritance in database models is to use an ORM tool. For Python there are SQLAlchemy, Django and others. Now you should wonder whether e.g. a Ford Mustang is a kind of Car, or an instance of Car. In the former case, you should create a ford_mustang table defining the ford_mustang attributes. The ford_mustang table should then also have a foreign key to the car table, where the generic attributes of each FordMustang are specified. In the latter case, each kind of car is just a row in the Car table. Either way, each attribute should be represented in a single column. Validation of the attributes is typically done in the business logic of the application. A: It looks like you're trying to replicate Entity-Attribute-Value as a design, which leads to a lot of ugly tables (well, usually it is one single table for everything). http://en.wikipedia.org/wiki/Entity-attribute-value_model http://ycmi.med.yale.edu/nadkarni/Introduction%20to%20EAV%20systems.htm Discussing EAV tends to lead to "religious wars" as there are very few good places to use it (many folks say there are zero good places) and there are other folks who think that since it is so very flexible, it should be used everywhere. If I can find the reference I'm looking for, I'll add it to this. A: There is a school of thought which holds that any attempt to build an EAV model in an RDBMS constitutes "bad design" but we won't go there. Oops, looks like somebody else already has done so. I'm not certain what worries you. SpecificAttributes2SpecificItems is an intersection table (the clue is in the name). Necessarily it includes links to the Attributes and the SpecificItems. How could it not? You probably need to have a MinVal and a MaxVal on SpecificAttributes2SpecificItems, as certain items will have a more limited range than that permitted by the GenericItems. 
For instance, everybody knows that Ferraris should only be available in red. A: A couple of ideas: First, you should consider making your "genericgroups" table an "attribute" rather than something hovering above the rest of the data. Second, you may have an easier time having each attribute table actually hold the attributes of the items, not simply the idea of the attributes. If you want to have a range, consider either an enum type (for item names) or simply an integer with a set maximum (so the value of the color_value column can't be above 255). This way you would end up with something more like: Item Table ID Name 1 FordMustang 2 DodgeViper 3 FerrariF40 ItemType Table: ItemID Type 1 Car 2 Car 3 Car ItemColor Table: ItemID ColorID 1 123 2 255 3 0 MaxSpeed Table ItemID MaxSpeedID 1 120 2 150 3 180
Having instance-like behaviour in databases
Sorry for the bad title, but I have no idea how to put this in short. The problem is the following: I have a generic item that represents a group; let's call it Car. Now this Car has attributes that range within certain limits; let's say, for example, speed is between 0 and 180 for a usual Car. Imagine some more attributes with ranges here, for example Color is between 0 and 255, whatever that value might stand for. So in my table GenericItems I have: ID Name 1 Car And in my Attributes I have: ID Name Min_Value Max_Value 1 Speed 0 180 2 Color 0 255 The relation between Car and Attributes is thus 1:n. Now I start having very specific instances of my Car, for example a FordMustang, a FerrariF40, and a DodgeViper. These are specific instances and now I want to give them specific values for their attributes. So in my table SpecificItem I have: ID Name GenericItem_ID 1 FordMustang 1 2 DodgeViper 1 3 FerrariF40 1 Now I need a third table SpecificAttributes2SpecificItems, to match attributes to SpecificItems: ID SpecificItem_ID Attribute_ID Value 1 1 1 120 ;Ford Mustang goes 120 only 2 1 2 123 ;Ford Mustang is red 3 2 1 150 ;Dodge Viper goes 150 4 2 2 255 ;Dodge Viper is white 5 3 1 180 ;FerrariF40 goes 180 6 3 2 0 ;FerrariF40 is black The problem with this design is, as you can see, that I am basically always copying over all rows of attributes, and I feel like this is bad design, inconsistent, etc. How can I achieve this logic in a correct, normalized way? I want to be able to have multiple generic items, with multiple attributes with min/max values as intervals, that can be "instantiated" with specific values.
[ "The easiest way to use inheritance in database models is to use an ORM tool. For Python there is SQLAlchemy, Django and others.\nNow you should wonder whether e.g. a Ford Mustang is a kind of Car, or an instance of Car. In the former case, you should create a ford_mustang table defining the ford_mustang attributes. The ford_mustang table should then also have a foreign key to the car table, where the generic attributes of each FordMustang are specified. In the latter case, each kind of car is just a row in the Car table. Either way, each attribute should be represented in a single column.\nValidation of the attributes is typically done in the business logic of the application.\n", "It looks like you're trying to replicate Entity Atribute Value as a design, which leads to a lot of ugly tables (well, usually it is one single table for everything). \nhttp://en.wikipedia.org/wiki/Entity-attribute-value_model\nhttp://ycmi.med.yale.edu/nadkarni/Introduction%20to%20EAV%20systems.htm \nDiscussing EAV tends to lead to \"religious wars\" as there are very few good places to use it (many folks say there are zero good places) and there are other folks who think that since it is so very flexible, it should be used everywhere. If I can find the reference I'm looking for, I'll add it to this.\n", "There is a school of thought which holds that any attempt to build an EAV model in an RDBMS constitutes \"bad design\" but we won't go there. Ooops, looks like somebody else already has done.\nI'm not certain what worries you. SpecificAttributes2SpecificItems is an intersection table (the clue is in the name). Necessarily it includes links to the Attributes and the SpecificItems. How could it not? \nYou probably need to have a MinVal and a MaxVal on SpecificAttributes2SpecificItems, as certain items will have a more limited range than that permitted by the GenericItems. For instance, everybody knows that Ferraris should only be available in red. \n", "Couple of ideas:\nFirst, you should consider making your \"genericgroups\" table an \"attribute\" rather than something hovering above the rest of the data.\nSecond, you may have an easier time having each attribute table actually holding the attributes of the items, not simply the idea of the attributes. If you want to have a range, consider either an enum type (for item names) or simply an integer with a set maximum (so the value of the color_value column can't be above 255). This way you would end up with something more like:\n Item Table\n ID Name \n\n 1 FordMustang \n 2 DodgeViper \n 3 FerrariF40\n\n ItemType Table:\n\n ItemID Type\n 1 Car\n 2 Car\n 3 Car\n\n\n ItemColor Table:\n\n ItemID ColorID\n 1 123\n 2 255\n 3 0\n\n MaxSpeed Table\n\n ItemID MaxSpeedID\n\n 1 120\n 2 150\n 3 180\n\n" ]
[ 1, 1, 1, 1 ]
[]
[]
[ "database", "database_design", "mysql", "python" ]
stackoverflow_0001327848_database_database_design_mysql_python.txt
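One pragmatic way to address the range-duplication worry in the question is to keep Min_Value/Max_Value only on Attributes and validate at write time in the application layer. A hypothetical sketch using Python's sqlite3 placeholders and the table/column names from the question:

import sqlite3

def set_attribute(conn, specific_item_id, attribute_id, value):
    # look up the allowed range from the generic Attributes table
    row = conn.execute(
        "SELECT Min_Value, Max_Value FROM Attributes WHERE ID = ?",
        (attribute_id,),
    ).fetchone()
    if row is None:
        raise ValueError("unknown attribute %r" % attribute_id)
    lo, hi = row
    if not lo <= value <= hi:
        raise ValueError("value %r outside range [%s, %s]" % (value, lo, hi))
    conn.execute(
        "INSERT INTO SpecificAttributes2SpecificItems "
        "(SpecificItem_ID, Attribute_ID, Value) VALUES (?, ?, ?)",
        (specific_item_id, attribute_id, value),
    )

This keeps the ranges in one place; a CHECK constraint or trigger could enforce the same rule inside the database itself.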
Q: Should I forward arguments as *args & **kwargs? I have a class that handles command line arguments in my program using Python's optparse module. It is also inherited by several classes to create subsets of parameters. To encapsulate the option parsing mechanism I want to reveal only an add_option function to inheriting classes; this function then calls optparse.make_option. Is it good practice to simply have my add_option method state in the documentation that it accepts the same arguments as optparse.make_option, and forward the arguments as *args and **kwargs? Should I do some parameter checking beforehand? In a way I want to avoid this, to keep that piece of code as decoupled as possible from a specific version of optparse. A: It seems that you want your subclasses to have awareness of the command line stuff, which is often not a good idea. You want to encapsulate the whole config input portion of your program so that you can drive it with a command line, config file, other python program, whatever. So, I would remove any call to add_option from your subclasses. If you want to discover what your config requirements look like at runtime, I would simply add that data to your subclasses; let each one have a member or method that can be used to figure out what kind of inputs it needs. Then, you can have an input organizer class walk over them, pull this data out, and use it to drive a command line, config file, or what have you. But honestly, I've never needed to do this at run time. I usually pull all that config stuff out to its own separate thing which answers the question "What does the user need to tell the tool?", and then the subclasses go looking in the config data structure for what they need. A: Are you sure that subclassing is what you want to do? Your overriding behavior could just be implemented in a function.
Should I forward arguments as *args & **kwargs?
I have a class that handles command line arguments in my program using Python's optparse module. It is also inherited by several classes to create subsets of parameters. To encapsulate the option parsing mechanism I want to reveal only an add_option function to inheriting classes; this function then calls optparse.make_option. Is it good practice to simply have my add_option method state in the documentation that it accepts the same arguments as optparse.make_option, and forward the arguments as *args and **kwargs? Should I do some parameter checking beforehand? In a way I want to avoid this, to keep that piece of code as decoupled as possible from a specific version of optparse.
[ "It seems that you want your subclasses to have awareness of the command line stuff, which is often not a good idea.\nYou want to encapsulate the whole config input portion of your program so that you can drive it with a command line, config file, other python program, whatever.\nSo, I would remove any call to add_option from your subclasses. \nIf you want to discover what your config requirements look like at runtime, I would simply add that data to your subclasses; let each one have a member or method that can be used to figure out what kind of inputs it needs.\nThen, you can have an input organizer class walk over them, pull this data out, and use it to drive a command line, config file, or what have you.\nBut honestly, I've never needed to do this at run time. I usually pull all that config stuff out to it's own separate thing which answers the question \"What does the user need to tell the tool?\", and then the subclasses go looking in the config data structure for what they need.\n", "Are you sure that subclassing is what you want to do? Your overriding behavior could just be implemented in a function.\n" ]
[ 1, 0 ]
[]
[]
[ "optparse", "python" ]
stackoverflow_0001328248_optparse_python.txt
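A minimal sketch of the forwarding wrapper the question describes. It deliberately does no parameter checking of its own: invalid arguments surface as optparse's own errors, which keeps the wrapper decoupled from any particular optparse version. The class and method names are invented for the example:

import optparse

class OptionHandler(object):
    def __init__(self):
        self._options = []

    def add_option(self, *args, **kwargs):
        # forward everything untouched; optparse validates the arguments itself
        self._options.append(optparse.make_option(*args, **kwargs))

    def parse_args(self, argv=None):
        parser = optparse.OptionParser(option_list=self._options)
        return parser.parse_args(argv)

Subclasses then call self.add_option('-v', '--verbose', action='store_true') without ever importing optparse themselves.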
Q: Templating+scripting reverse proxy? Thinking through an idea and wanting to get feedback/suggestions: having had great success with URL rewriting and nginx, I'm now thinking of a more capable reverse proxy/router that would do the following: Map requests to handlers based on regex matching (a la Django) Certain requests would simply be routed to backend servers - e.g. static media, memcached, etc. Other requests would render templates that pull in data from several backend servers For example, a template could consist of: <body> <div>{% remote http://someserver/somepage %}</div> <div>{% remote http://otherserver/otherpage %}</div> </body> The reverse proxy would make the HTTP requests to someserver/somepage and otherserver/otherpage and pull the results into the template. Questions: Does the idea make sense or is it a bad idea? Is there an existing package that implements something like this? How about an existing server+scripting combination for implementing this - e.g. lighttpd+lua, nginx+?? How about nginx+SSI? It looks pretty capable; if you have experience / recommendations please comment. How about something like a scripting language+eventlet? Twisted? My preferences are Python for scripting and Jinja/Django-style templates, but I'm open to alternatives. A: This already exists and is called Deliverance: http://deliverance.openplans.org/ A: So instead of doing something like an AJAXy call into an iframe, you're doing it on the server side. I think it's something I'd only do if the external site was totally under my control, purely for the security implications. It'd also hit your response times quite a bit. Am I missing the point completely, or would this be quite simple to do with some functions & urllib?
Templating+scripting reverse proxy?
Thinking through an idea and wanting to get feedback/suggestions: having had great success with URL rewriting and nginx, I'm now thinking of a more capable reverse proxy/router that would do the following: Map requests to handlers based on regex matching (a la Django) Certain requests would simply be routed to backend servers - e.g. static media, memcached, etc. Other requests would render templates that pull in data from several backend servers For example, a template could consist of: <body> <div>{% remote http://someserver/somepage %}</div> <div>{% remote http://otherserver/otherpage %}</div> </body> The reverse proxy would make the HTTP requests to someserver/somepage and otherserver/otherpage and pull the results into the template. Questions: Does the idea make sense or is it a bad idea? Is there an existing package that implements something like this? How about an existing server+scripting combination for implementing this - e.g. lighttpd+lua, nginx+?? How about nginx+SSI? It looks pretty capable; if you have experience / recommendations please comment. How about something like a scripting language+eventlet? Twisted? My preferences are Python for scripting and Jinja/Django-style templates, but I'm open to alternatives.
[ "This already exist an is called Deliverance: http://deliverance.openplans.org/\n", "So instead of doing something an AJAXy call into an iframe or something, you're doing it on the server side.\nI think it's something I'd only do if the external site was totally under my control, purely for the security implications. It'd also hit your response times quite a bit.\nAm I missing the point completely, or would this be quite simple to do with some functions & urllib?\n" ]
[ 1, 0 ]
[]
[]
[ "proxy", "python", "reverse_proxy", "twisted" ]
stackoverflow_0001202430_proxy_python_reverse_proxy_twisted.txt
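As the second answer hints, a bare-bones version of the {% remote %} idea takes little more than urllib plus a template callable. A hypothetical sketch with Jinja2, with none of the timeouts, caching or error handling a real proxy would need:

import urllib.request
from jinja2 import Template

def remote(url):
    # fetch a fragment from a backend server
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode('utf-8')

page = Template(
    "<body>\n"
    "  <div>{{ remote('http://someserver/somepage') }}</div>\n"
    "  <div>{{ remote('http://otherserver/otherpage') }}</div>\n"
    "</body>"
)

print(page.render(remote=remote))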