Q:
What is the best design for polling a modem for incoming data?
I have a GSM modem connected to my computer, and I want to receive text messages sent to it using a Python program I have written. I am just wondering what the best technique is to poll for data.
Should I write a program with an infinite loop that continuously checks for incoming SMSes, i.e. within the loop the program sends the AT commands and reads the input data? Or do modems have a way of signaling an application that data (an SMS) has arrived?
I am trying to imagine that a cellphone is just a GSM modem: when an SMS is received, the phone alerts you of the event. Or does the phone software have an infinite loop that polls for incoming data?
A:
I have written something similar before. There is a way using AT commands to tell the modem to signal you each time an SMS is received.
For reference, I was using a Maestro 100 GSM Modem in an embedded application.
First you have to initialize the modem properly. I was using text mode for the SMS, but you might be using something different. Pick from these what you want. AT+CNMI is the most important.
AT&F0 # Restore factory defaults
ATE0 # Disable command echo
AT+CMGF=1 # Set message format to text mode
AT+CNMI=1,1,0,1,0 # Set new message indicator
AT+CPMS="SM","SM","SM" # Set preferred message storage to SIM
You would then wait for a message notification, which will look like this (don't match on the index number; it may differ between notifications):
+CMTI: "SM",0 # Message notification with index
When you get that notification, retrieve the unread SMS's:
AT+CMGL="REC UNREAD" # Retrieve unread messages
I would recommend you also add a poll, maybe every 5 minutes or so, just in case you miss a notification. With serial comms you can never be sure!
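A minimal sketch of that notification-plus-fallback-poll loop in Python. It assumes the third-party pyserial package, and the port name, baud rate and timing are placeholders, not details from the original answer:

import time
import serial  # third-party pyserial package (assumed available)

port = serial.Serial('/dev/ttyUSB0', 115200, timeout=1)  # hypothetical device/baud

def send(command):
    port.write(command + '\r')  # AT commands are CR-terminated

POLL_INTERVAL = 300  # fallback poll every 5 minutes, as suggested above
last_poll = time.time()
while True:
    line = port.readline().strip()
    if line.startswith('+CMTI:'):
        # unsolicited new-message indication from the modem
        send('AT+CMGL="REC UNREAD"')
        last_poll = time.time()
    elif time.time() - last_poll > POLL_INTERVAL:
        # safety net in case a notification was missed
        send('AT+CMGL="REC UNREAD"')
        last_poll = time.time()
    # (responses to AT+CMGL arrive on subsequent readline() calls)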
A:
I find I can't remember much of the AT command set related to SMS. Andre Miller's answer seems to ring a few bells. In any case, you should read the documentation very carefully; I'm sure there were a few gotchas.
My recommendation for polling is at least every 5 seconds - this is just for robustness and responsiveness in the face of disconnection.
I used a state machine to navigate between initialisation, reading and deleting messages.
Q:
Komodo Edit - code-completion for Django?
I've been using Komodo Edit for a small project in Django.
The code completion features seem to work pretty well for standard python modules, however, it doesn't know anything about Django modules. Is there any way to configure Komodo Edit to use Django modules for autocomplete as well?
A:
Go to Edit > Preferences. Expand the "Languages" group by clicking the [+] symbol. Click "Python". Click the little "Add..." button under "Additional Python Import Directories". Add the directory ABOVE your project and you should have intellisense enabled.
This has always worked for me for both Django and my individual projects.
A:
Be sure Django is on your Python path and Komodo should pick it up. Alternatively, you can add the location of Django to where Komodo looks for its autocomplete.
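If you are unsure which directory that is, here is a generic way to find it (this snippet is mine, not part of the original answer):

import os
import django

# Komodo wants the directory that *contains* the django package
print os.path.dirname(os.path.dirname(django.__file__))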
A:
Hmmm. It's installed by default so my answer probably isn't the right solution. :-)
But here goes...
You can install a Django extension in Komodo Edit. I haven't tested it myself, but you can try it.
Tools -> Add-ons -> Extensions
Its name is "Django Language".
Check if it works.
Q:
Easy, Robust IPC between Python and PHP
I have a python program which starts up a PHP script using the subprocess.Popen() function. The PHP script needs to communicate back-and-forth with Python, and I am trying to find an easy but robust way to manage the message sending/receiving.
I have already written a working protocol using basic sockets, but it doesn't feel very robust - I don't have any logic to handle dropped messages, and I don't even fully understand how sockets work which leaves me uncertain about what else could go wrong.
Are there any generic libraries or IPC frameworks which are easier than raw sockets?
ATM I need something which supports Python and PHP, but in the future I may want to be able to use C, Perl and Ruby also.
I am looking for something robust, i.e. when the server or client crashes, the other party needs to be able to recover gracefully.
A:
It sounds like you want a generic RPC framework.
You should take a look at:
Thrift http://incubator.apache.org/thrift/
XML RPC http://docs.python.org/library/xmlrpclib.html and http://phpxmlrpc.sourceforge.net/
AMQP e.g. http://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol
Thrift is probably more what you're looking for. It's used by Facebook internally.
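To give a feel for the XML-RPC option, here is a minimal Python-side server using only the standard library; the method name, host and port are invented for illustration:

# Python 2 standard-library XML-RPC server
from SimpleXMLRPCServer import SimpleXMLRPCServer

def echo(message):
    # a trivial method a PHP client (or Perl, Ruby, C...) could call
    return 'python saw: %s' % message

server = SimpleXMLRPCServer(('localhost', 8000))
server.register_function(echo, 'echo')
server.serve_forever()

A PHP client can then call it through any XML-RPC client library, which keeps the wire protocol (and the message-loss handling) out of your own code.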
A:
You could look at shared memory or named pipes, but I think there are two more likely options, assuming at least one of these languages is being used for a webapp:
A. Use your database's atomicity. In Python, begin a transaction, put a message into a table, and end the transaction. From PHP, begin a transaction, take a message out of the table or mark it "read", and end the transaction. Make your PHP and/or Python self-aware enough not to post the same messages twice. Voila; reliable (and scalable) IPC, using existing web architecture.
B. Make your webserver (assuming as webapp) capable of running both php and python, locking down any internal processes to just localhost access, and then call them using xmlrpc or soap from your other language using standard libraries. This is also scalable, as you can change your URLs and security lock-downs later.
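A bare-bones sketch of option A with SQLite; the table layout is invented for illustration, and any database both languages can reach works the same way:

import sqlite3

conn = sqlite3.connect('ipc.db', isolation_level=None)  # autocommit; we issue BEGIN ourselves
conn.execute('CREATE TABLE IF NOT EXISTS messages '
             '(id INTEGER PRIMARY KEY, body TEXT, read INTEGER DEFAULT 0)')

def post(body):
    # one transaction per message; the PHP side would do the same via PDO
    conn.execute('BEGIN')
    conn.execute('INSERT INTO messages (body) VALUES (?)', (body,))
    conn.execute('COMMIT')

def take():
    # atomically claim the oldest unread message, or return None
    conn.execute('BEGIN')
    row = conn.execute('SELECT id, body FROM messages '
                       'WHERE read = 0 ORDER BY id LIMIT 1').fetchone()
    if row:
        conn.execute('UPDATE messages SET read = 1 WHERE id = ?', (row[0],))
    conn.execute('COMMIT')
    return row and row[1]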
Q:
Resources for developing Python and Google App Engine
I would like to ask about some sources for developing applications with Python and Google App Engine.
For example, controls to automatically generate pages for the insert/update/delete operations of a database table; any other useful resources are welcome too.
Thank you!
A:
The Python community tends to look askance at code generation; so, @Hoang, if you think code generation is THE way to go, I suggest you try just about any other language BUT Python.
@Dominic has already suggested some excellent resources, I could point you to more (App Engine Fan, App Engine Utilities, etc, etc) but they're all based on the Pythonic mindset: understand what you need and what you could be doing, wrap as much of it as feasible into reusable components, reuse those components from your own sources.
You want magic, wizards and code generation that basically excuses you (in theory) from STUDYING and UNDERSTANDING: give up on Python, it's SO not the language for that.
A:
The Google App Engine "Getting Started" tutorial is very good. The Django documentation is also really detailed.
Take a look at the Google I/O talks on YouTube and watch some of the tutorials.
A:
App Engine Documentation
http://code.google.com/appengine/docs/
App Engine Google Group
http://groups.google.com/group/google-appengine
Google I/O conference videos
http://code.google.com/events/io/
App Engine Cookbook
http://appengine-cookbook.appspot.com/
and, of course, stackoverflow
Q:
What python web frameworks work well with CGI (e.g. on nearlyfreespeech.net)?
From nearlyfreespeech's website, they state that the following don't work well:
mod_python
Web application frameworks that depend on persistent processes, including: Ruby On Rails, Django, Zope, and others (some of these will run under CGI, but will run slowly and are suitable only for development purposes)
Are there any Python web frameworks that work well on NearlyFreeSpeech?
A:
WSGI can run on top of CGI, and popular frameworks typically run on top of WSGI, but performance is quite another issue -- since a CGI service starts afresh on each hit, any framework you may be using will need to reload from scratch each and every time, and that (in addition to opening a new connection to a DB, etc, which is basically inevitable with CGI) will make things pretty sluggish on anything but the tiniest, lightest frameworks.
Maybe something like WebOb might be tolerable, but you'll need to do some tests to check even that (how loaded those servers are is, of course, a big part of the puzzle, and you just can't tell except by testing).
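To make the mechanics concrete: the standard library can already run a WSGI app under plain CGI, which is what most frameworks would be doing on such a host. The app below is a placeholder, not framework code:

#!/usr/bin/env python
# A WSGI app served through CGI: the interpreter, plus anything you
# import, starts from scratch on every single request.
from wsgiref.handlers import CGIHandler

def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['Hello from WSGI over CGI\n']

if __name__ == '__main__':
    CGIHandler().run(app)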
A:
I got web.py to work on nearly free speech a few years ago by fooling with its WSGI stuff to run on CGI. It was just slightly too slow to be usable though.
I've made a few Python web applications hosted on nearly free speech just using the CGI module, and they are actually plenty fast even with high traffic. Example: www.gigbayes.com.
A:
Judging by the things they reject, I think that twisted.web is still an option there, but I don't have any experience with nearlyfreespeech.net.
A:
Well, if what you really need is just free hosting for a Python web app, Google AppEngine is a nice alternative and you won't be as limited on choice of frameworks.
Q:
talking between python tcp server and a c++ client
I am having an issue trying to communicate between a Python TCP server and a C++ TCP client.
After the first call, which works fine, the subsequent calls cause issues.
As far as WinSock is concerned, the send() function worked properly: it returns the proper length and WSAGetLastError() does not return anything of significance.
However, when watching the packets using Wireshark, I notice that the first call sends two packets, a PSH,ACK with all of the data in it, and an ACK right after, but the subsequent calls, which don't work, only send the PSH,ACK packet and not a subsequent ACK packet.
The receiving computer's Wireshark corroborates this, and the Python server does nothing; it doesn't get any data out of the socket, and I cannot debug deeper, since socket is a native class.
When I run a C++ client against a C++ server (a hacked replica of what the Python one would do), the client faithfully sends both the PSH,ACK and ACK packets the whole time, even after the first call.
Is the WinSock send function supposed to always send a PSH,ACK and an ACK?
If so, why would it do so when connected to my C++ server and not the Python server?
Has anyone had any issues similar to this?
A:
client sends a PSH,ACK and then the server sends a PSH,ACK and a FIN,PSH,ACK
There is a FIN, so could it be that the Python version of your server is closing the connection immediately after the initial read?
If you are not explicitly closing the server's socket, it's probable that the server's remote socket variable is going out of scope, thus closing it (and that this bug is not present in your C++ version)?
Assuming that this is the case, I can cause a very similar TCP sequence with this code for the server:
# server.py
import socket
from time import sleep

def f(s):
    r, a = s.accept()
    print r.recv(100)

s = socket.socket()
s.bind(('localhost', 1234))
s.listen(1)

f(s)
# wait around a bit for the client to send its second packet
sleep(10)
and this for the client:
# client.py
import socket
from time import sleep

s = socket.socket()
s.connect(('localhost', 1234))

s.send('hello 1')
# wait around for a while so that the socket in server.py goes out of scope
sleep(5)
s.send('hello 2')
Start your packet sniffer, then run server.py and then client.py. Here is the output of tcpdump -A -i lo, which matches your observations:
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on lo, link-type EN10MB (Ethernet), capture size 96 bytes
12:42:37.683710 IP localhost:33491 > localhost.1234: S 1129726741:1129726741(0) win 32792 <mss 16396,sackOK,timestamp 640881101 0,nop,wscale 7>
E..<R.@[email protected]|....@....
&3..........
12:42:37.684049 IP localhost.1234 > localhost:33491: S 1128039653:1128039653(0) ack 1129726742 win 32768 <mss 16396,sackOK,timestamp 640881101 640881101,nop,wscale 7>
E..<..@.@.<.............C<..CVC.....Ia....@....
&3..&3......
12:42:37.684087 IP localhost:33491 > localhost.1234: . ack 1 win 257 <nop,nop,timestamp 640881102 640881101>
E..4R.@[email protected]<......1......
&3..&3..
12:42:37.684220 IP localhost:33491 > localhost.1234: P 1:8(7) ack 1 win 257 <nop,nop,timestamp 640881102 640881101>
E..;R.@[email protected]<......./.....
&3..&3..hello 1
12:42:37.684271 IP localhost.1234 > localhost:33491: . ack 8 win 256 <nop,nop,timestamp 640881102 640881102>
E..4.(@[email protected]<..CVC.....1}.....
&3..&3..
12:42:37.684755 IP localhost.1234 > localhost:33491: F 1:1(0) ack 8 win 256 <nop,nop,timestamp 640881103 640881102>
E..4.)@[email protected]<..CVC.....1{.....
&3..&3..
12:42:37.685639 IP localhost:33491 > localhost.1234: . ack 2 win 257 <nop,nop,timestamp 640881104 640881103>
E..4R.@[email protected]<......1x.....
&3..&3..
12:42:42.683367 IP localhost:33491 > localhost.1234: P 8:15(7) ack 2 win 257 <nop,nop,timestamp 640886103 640881103>
E..;R.@[email protected]<......./.....
&3%W&3..hello 2
12:42:42.683401 IP localhost.1234 > localhost:33491: R 1128039655:1128039655(0) win 0
E..(..@.@.<.............C<......P...b...
9 packets captured
27 packets received by filter
0 packets dropped by kernel
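If that is indeed the bug, the Python-side fix is just to keep a reference to the accepted socket and keep reading from it. A minimal sketch (my addition, not code from the original answer):

# server.py, holding the accepted connection open across sends
import socket

s = socket.socket()
s.bind(('localhost', 1234))
s.listen(1)

r, a = s.accept()   # keep the reference so the socket is not closed
while True:
    data = r.recv(100)
    if not data:    # an empty string means the client closed the connection
        break
    print data
r.close()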
A:
What size of packets do you send?
If they are small, maybe Nagle's Algorithm and the Delayed ACK Algorithm are your headache. From what you described, I think Delayed ACK is involved...
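If Nagle's algorithm does turn out to be involved, it can be disabled per socket. In Python that looks like this; WinSock exposes the same TCP_NODELAY option on the C++ side:

import socket

s = socket.socket()
# disable Nagle's algorithm so small writes are sent immediately
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)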
Q:
exposing or hiding objects of dependencies?
Common scenario: I have a library that uses other libraries. For example, a math library (let's call it foo) that uses numpy.
Functions of foo can either:
return a numpy object (either pure or an inherited reimplementation)
return a list
return a foo-implemented object that behaves like numpy (performing delegation)
The three solutions can be also restated as:
foo passes through the internally used object, clearly stating that its library dependency is also an API dependency (since it returns objects obeying the interface of the numpy library)
foo makes use of a common subset of objects that are part of the basis of the language.
foo completely hides what it uses internally. Nothing about the underlying libraries escapes from the foo library to the client code.
We are of course in a pros-cons scenario. Transparent or opaque? strong coupling with the underlying tools or not? I know the drill but I am in the process of having to do this choice, and I want to share opinions before taking a decision. Suggestions, ideas, personal experience are greatly appreciated.
A:
Since you're talking about return values, that's not really about "internal objects" -- you should just document the interfaces your returned objects will support (it's OK if that's a subset of numpy.array or whatever;-). I recommend against returning a reference to your internal mutable attributes and documenting that mutators work to alter your own object indirectly (and NOT documenting it is not much better) -- that leads to way-too-strong coupling down the road.
If you WERE talking about actual internal objects, I'd recommend the Law of Demeter -- in a simplistic reading, if the client's coding a.b.c.d.e.f(), then something is very wrong ("just one dot" may be sometimes extreme, but, "four are Right Out"). Again, the problem is strong coupling -- making it impossible for you to change your internal implementation in even minor ways without breaking a million clients...!
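As a concrete illustration of documenting only the interfaces you commit to, here is what a small delegating wrapper might look like; the class name and the chosen subset of methods are invented for the example:

import numpy

class FooResult(object):
    """Result object of foo; only the methods below are part of the API."""

    def __init__(self, data):
        self._data = numpy.asarray(data)  # internal detail, deliberately hidden

    def __len__(self):
        return len(self._data)

    def __getitem__(self, index):
        return self._data[index]

    def tolist(self):
        # the one conversion the library promises to keep supporting
        return self._data.tolist()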
A:
The main question I would think about is how much of your library would return numpy objects. If it's pervasive, I would go with directly returning numpy objects: you're so tied to numpy that you might as well make it explicit. Plus, it will probably make it easier to use other numpy-based libraries. If, on the other hand, only a few methods would return numpy objects, I would go with either the numpy-like object or a list, probably the numpy-like object.
Q:
How do you mix old-style and new-style Python classes?
I've seen a few questions on this topic, but I haven't been able to find a definitive answer.
I would like to know the proper way to use old-style classes in a new Python code base. Let's say for example that I have two fixed classes, A and B. If I want to subclass A and B, and convert to new-style classes (A2 and B2), this works. However there is an issue if I want to create a new class C, from A2 and B2.
Therefore, is it possible to continue with this method, or do all classes have to conform to the old-style if any base class is defined as old-style?
See the example code for clarification:
class A:
    def __init__(self):
        print 'class A'

class B:
    def __init__(self):
        print 'class B'

class A2(A, object):
    def __init__(self):
        super(A2, self).__init__()
        print 'class A2'

class B2(B, object):
    def __init__(self):
        super(B2, self).__init__()
        print 'class B2'

class C(A2, B2):
    def __init__(self):
        super(C, self).__init__()
        print 'class C'

A2()
print '---'
B2()
print '---'
C()
The output of this code:
class A
class A2
---
class B
class B2
---
class A
class A2
class C
As you can see, the problem is that in the call to C(), class B2 was never initialized.
Update - New-Style Class Example
I guess it is not clear what the correct initialization sequence should be when using super. Here is a working example where a call to super does initialize all base classes, not just the first one it finds.
class A(object):
    def __init__(self):
        super(A, self).__init__()
        print 'class A'

class B(object):
    def __init__(self):
        super(B, self).__init__()
        print 'class B'

class A2(A):
    def __init__(self):
        super(A2, self).__init__()
        print 'class A2'

class B2(B):
    def __init__(self):
        super(B2, self).__init__()
        print 'class B2'

class C(A2, B2):
    def __init__(self):
        super(C, self).__init__()
        print 'class C'

C()
and produces the output:
class B
class B2
class A
class A2
class C
A:
This is not an issue of mixing old and new style classes. super() does not call all base classes' functions; it calls the first one it finds according to the method resolution order. In this case that is A2, which in turn calls A.
If you want to call both, do so explicitly:
class C(A2, B2):
    def __init__(self):
        A2.__init__(self)
        B2.__init__(self)
        print 'class C'
That should solve it.
Update:
The diamond inheritance problem, as you refer to it, is the question of which class's method to call in a diamond inheritance situation like this:
class A:
    def method1(self):
        print 'class A'

    def method2(self):
        print 'class A'

class B(A):
    def method1(self):
        print 'class B'

class C(A):
    def method1(self):
        print 'class C'

    def method2(self):
        print 'class C'

class D(B, C):
    pass
Now test this out:
>>> D().method1()
'class B'
This is correct. It calls the first class' implementation.
However, let's try this with method2:
>>> D().method2()
'class A'
Oops, WRONG! It should have called C.method2() here because, even though class B does not override method2, class C does. Now make class A a new-style class:
class A(object):
    def method1(self):
        print 'class A'
And try again:
>>> D().method1()
'class B'
>>> D().method2()
'class C'
and hey presto, it works. This is the method resolution order difference between new and old-style classes, and this is what sometimes makes it confusing to mix them.
Notice how at no point both B and C get called. This is true even if we call super.
class D(B, C):
    def method1(self):
        super(D, self).method1()

    def method2(self):
        super(D, self).method2()
>>> D().method1()
'class B'
>>> D().method2()
'class C'
If you want to call both B and C, you MUST call both explicitly.
Now if you unbreak the diamond, like in your example having separate base classes, the result is different:
class A1(object):
    def method1(self):
        print 'class A1'

    def method2(self):
        print 'class A1'

class A2(object):
    def method1(self):
        print 'class A2'

    def method2(self):
        print 'class A2'

class B(A1):
    def method1(self):
        print 'class B'

class C(A2):
    def method1(self):
        print 'class C'

    def method2(self):
        print 'class C'

class D(B, C):
    def method1(self):
        super(D, self).method1()

    def method2(self):
        super(D, self).method2()
>>> D().method1()
'class B'
>>> D().method2()
'class A1'
This is also per design. Still, at no point do both base classes get called. If you want that to happen, you still have to call both explicitly.
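A quick way to check which order super() will walk in the new-style case is to print the class's method resolution order (generic introspection, not from the original answer):

# every new-style class carries its linearized MRO
print [cls.__name__ for cls in D.__mro__]
# -> ['D', 'B', 'A1', 'C', 'A2', 'object'] for the last example above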
Q:
Method assignment and objects
I've got a problem with Python:
I want to assign a method to an object from another class, but have the method use the receiving object's own attributes. Since I have many containers with different use methods in my project (not in this example), I don't want to use inheritance; that would force me to create a custom class for each instance.
class container():
    def __init__(self):
        self.info = "undefiend info attribute"

    def use(self):
        print self.info

class tree():
    def __init__(self):
        # create container instance
        b = container()

        # change b's info attribute
        b.info = "b's info attribute"

        # bound method test is set as use of b and in this case unbound, i think
        b.use = self.test

        # should read b's info attribute and print it
        # should output: test: b's info attribute but test is bound in some way to the tree object
        print b.use()

    # bound method test
    def test(self):
        return "test: "+self.info

if __name__ == "__main__":
    b = tree()
Thank you very much for reading this, and perhaps helping me! :)
A:
Here you go. You should know that self.test is already bound, since by the time you are in __init__ the instance has already been created and its methods are bound. Therefore you must get at the underlying function through the im_func attribute and bind it to the other instance with types.MethodType.
import types

class container():
    def __init__(self):
        self.info = "undefiend info attribute"

    def use(self):
        print self.info

class tree():
    def __init__(self):
        # create container instance
        b = container()

        # change b's info attribute
        b.info = "b's info attribute"

        # rebind the underlying function of tree.test onto b
        b.use = types.MethodType(self.test.im_func, b, b.__class__)

        # reads b's info attribute and prints it
        # output: test: b's info attribute
        print b.use()

    # bound method test
    def test(self):
        return "test: "+self.info

if __name__ == "__main__":
    b = tree()
A:
Looks like you are trying to use inheritance? The tree inherits from the container?
A:
Use tree.test instead of self.test. The method attributes of an instance are bound to that instance.
A:
Do not move methods around dynamically.
Just Use Delegation. Avoid Magic.
Pass the "Tree" object to the Container. It saves trying to move methods around.
class Container( object ):
    def use( self, context ):
        print context.info
        context.test()

class Tree( object ):
    def __init__( self, theContainerToUse ):
        self.info = "tree's info attribute"   # read by Container.use via context
        b = theContainerToUse()               # instantiate the class passed in
        b.use( self )                         # delegate, passing ourselves as the context

    def test( self ):
        print "test: " + self.info

# usage: Tree( Container )
Q:
Problem configparser in python
Actually I am stuck in my work. I want to import a txt file into my Python program, which should contain two lists of integers.
The following program is working fine, but I need to import the lists 'a' and 'p' with the help of ConfigParser.
It will be so nice if someone helps me with it!
I am a beginner in Python, so please try to answer in an easy way...!
The program is as follows:
a = [5e6, 6e6, 7e6, 8e6, 8.5e6, 9e6, 9.5e6, 10e6, 11e6, 12e6]
p = [0.0, 0.001, 0.002, 0.003, 0.004, 0.005, 0.006, 0.007, 0.008, 0.009, 0.01, 0.015, 0.05, 0.1, 0.15, 0.2]
b = 0
x = 0
while b <= 9:  # a has ten elements, so the last valid index is 9
    c = a[b]
    x = 0
    print '\there is the outer loop\n', c
    while x <= 15:
        k = p[x]
        print 'here is the inner loop\n', k
        x = x + 1
    b = b + 1
A:
Seems like ConfigParser is not the best tool for the job. You may implement the parsing logic yourself, something like:
a, b = [], []
with open('myfile', 'r') as f:
    for num, line in enumerate(f.readlines()):
        if num >= 10:
            b.append(float(line))  # lists use append(); convert each line to a number
        else:
            a.append(float(line))

or you can make up some other logic to divide the lists in your file. It depends on the way you want to represent them in your file.
A:
The json module provides better support for lists in configuration files.
Instead of the ConfigParser (no list support) format, try using JSON for this purpose.
JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate. It is based on a subset of the JavaScript Programming Language, Standard ECMA-262 3rd Edition - December 1999. JSON is a text format that is completely language independent but uses conventions that are familiar to programmers of the C-family of languages, including C, C++, C#, Java, JavaScript, Perl, Python, and many others. These properties make JSON an ideal data-interchange language.
Since your question smells like homework, I'll suggest an ugly hack. Use str.split() and float() to parse a list from a configuration file. Suppose the file x.conf contains:
[sect1]
a=[5e6,6e6,7e6,8e6,8.5e6,9e6,9.5e6,10e6,11e6,12e6]
You can parse it with:
>>> import ConfigParser
>>> cf=ConfigParser.ConfigParser()
>>> cf.read(['x.conf'])
['x.conf']
>>> [float(s) for s in cf.get('sect1','a')[1:-1].split(',')]
[5000000.0, 6000000.0, 7000000.0, 8000000.0, 8500000.0, 9000000.0, 9500000.0, 10000000.0, 11000000.0, 12000000.0]
>>>
(The brackets around the list could be dropped from the configuration file, making the [1:-1] hack unnecessary.)
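For comparison, the JSON route suggested above needs no parsing hack at all (json ships with Python 2.6+; the filename and shortened lists are made up):

import json

# writing the two lists out
with open('x.json', 'w') as f:
    json.dump({'a': [5e6, 6e6, 7e6], 'p': [0.0, 0.001, 0.002]}, f)

# reading them back; the values come back as real floats
with open('x.json') as f:
    cfg = json.load(f)
a, p = cfg['a'], cfg['p']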
A:
Yea, the config parser probably isn't the best choice... but if you really want to, try this:
import unittest
from ConfigParser import SafeConfigParser
from cStringIO import StringIO

def _parse_float_list(string_value):
    return [float(v.strip()) for v in string_value.split(',')]

def _generate_float_list(float_values):
    return ','.join(str(value) for value in float_values)

def get_float_list(parser, section, option):
    string_value = parser.get(section, option)
    return _parse_float_list(string_value)

def set_float_list(parser, section, option, float_values):
    string_value = _generate_float_list(float_values)
    parser.set(section, option, string_value)

class TestConfigParser(unittest.TestCase):
    def setUp(self):
        self.a = [5e6, 6e6, 7e6, 8e6, 8.5e6, 9e6, 9.5e6, 10e6, 11e6, 12e6]
        self.p = [0.0, 0.001, 0.002, 0.003, 0.004, 0.005, 0.006, 0.007, 0.008, 0.009, 0.01, 0.015, 0.05, 0.1, 0.15, 0.2]

    def testRead(self):
        parser = SafeConfigParser()
        f = StringIO('''[values]
a: 5e6, 6e6, 7e6, 8e6,
   8.5e6, 9e6, 9.5e6, 10e6,
   11e6, 12e6
p: 0.0 , 0.001, 0.002,
   0.003, 0.004, 0.005,
   0.006, 0.007, 0.008,
   0.009, 0.01 , 0.015,
   0.05 , 0.1 , 0.15 ,
   0.2
''')
        parser.readfp(f)
        self.assertEquals(self.a, get_float_list(parser, 'values', 'a'))
        self.assertEquals(self.p, get_float_list(parser, 'values', 'p'))

    def testRoundTrip(self):
        parser = SafeConfigParser()
        parser.add_section('values')
        set_float_list(parser, 'values', 'a', self.a)
        set_float_list(parser, 'values', 'p', self.p)

        self.assertEquals(self.a, get_float_list(parser, 'values', 'a'))
        self.assertEquals(self.p, get_float_list(parser, 'values', 'p'))

if __name__ == '__main__':
    unittest.main()
Q:
How to save a configuration file / python file IO
I have this Python code for opening a .cfg file, writing to it and saving it:
import ConfigParser

def get_lock_file():
    cf = ConfigParser.ConfigParser()
    cf.read("svn.lock")
    return cf

def save_lock_file(configurationParser):
    cf = configurationParser
    config_file = open('svn.lock', 'w')
    cf.write(config_file)
    config_file.close()

Does this seem normal, or am I missing something about how to open-write-save files? Is there a more standard way to read and write config files?
I ask because I have two methods that seem to do the same thing: they get the config file handle ('cf'), call cf.set('blah', 'foo', bar), then use the save_lock_file(cf) call above. For one method it works, and for the other the write never takes place; I am unsure why at this point.
def used_like_this():
    cf = get_lock_file()
    cf.set('some_prop_section', 'some_prop', 'some_value')
    save_lock_file(cf)
A:
Just to note that configuration file handling is simpler with ConfigObj.
To read and then write a config file:
from configobj import ConfigObj
config = ConfigObj(filename)
value = config['entry']
config['entry'] = newvalue
config.write()
A:
Looks good to me.
If both places call get_lock_file, then cf.set(...), and then save_lock_file, and no exceptions are raised, this should work.
If you have different threads or processes accessing the same file you could have a race condition:
thread/process A reads the file
thread/process B reads the file
thread/process A updates the file
thread/process B updates the file
Now the file only contains B's updates, not A's.
Also, for safe file writing, don't forget the with statement (Python 2.5 and up), it'll save you a try/finally (which you should be using if you're not using with). From ConfigParser's docs:
with open('example.cfg', 'wb') as configfile:
    config.write(configfile)
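If you also want the save itself to be safe against a crash mid-write, a common pattern is to write a temporary file and rename it over the old one. A generic sketch of the pattern (on POSIX the rename is atomic; on Windows you would need to remove the target first):

import os

def save_lock_file(cf):
    tmp_name = 'svn.lock.tmp'
    with open(tmp_name, 'w') as config_file:
        cf.write(config_file)
    # the old file is only replaced once the new one is fully written
    os.rename(tmp_name, 'svn.lock')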
A:
Works for me.
C:\temp>type svn.lock
[some_prop_section]
Hello=World

C:\temp>python
ActivePython 2.6.2.2 (ActiveState Software Inc.) based on
Python 2.6.2 (r262:71600, Apr 21 2009, 15:05:37) [MSC v.1500 32 bit (Intel)] on
win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import ConfigParser
>>> def get_lock_file():
...     cf = ConfigParser.ConfigParser()
...     cf.read("svn.lock")
...     return cf
...
>>> def save_lock_file(configurationParser):
...     cf = configurationParser
...     config_file = open('svn.lock', 'w')
...     cf.write(config_file)
...     config_file.close()
...
>>> def used_like_this():
...     cf = get_lock_file()
...     cf.set('some_prop_section', 'some_prop', 'some_value')
...     save_lock_file(cf)
...
>>> used_like_this()
>>> ^Z

C:\temp>type svn.lock
[some_prop_section]
hello = World
some_prop = some_value

C:\temp>
|
How to save a configuration file / python file IO
|
I have this python code for opening a .cfg file, writing to it and saving it:
import ConfigParser
def get_lock_file():
cf = ConfigParser.ConfigParser()
cf.read("svn.lock")
return cf
def save_lock_file(configurationParser):
cf = configurationParser
config_file = open('svn.lock', 'w')
cf.write(config_file)
config_file.close()
Does this seem normal or am I missing something about how to open-write-save files? Is there a more standard way to read and write config files?
I ask because I have two methods that seem to do the same thing: they get the config file handle ('cf'), call cf.set('blah', 'foo', bar), then use the save_lock_file(cf) call above. For one method it works, and for the other method the write never takes place; unsure why at this point.
def used_like_this():
cf = get_lock_file()
cf.set('some_prop_section', 'some_prop', 'some_value')
save_lock_file(cf)
|
[
"Just to note that configuration file handling is simpler with ConfigObj.\nTo read and then write a config file:\nfrom configobj import ConfigObj\nconfig = ConfigObj(filename)\n\nvalue = config['entry']\nconfig['entry'] = newvalue\nconfig.write()\n\n",
"Looks good to me.\nIf both places call get_lock_file, then cf.set(...), and then save_lock_file, and no exceptions are raised, this should work.\nIf you have different threads or processes accessing the same file you could have a race condition:\n\nthread/process A reads the file\nthread/process B reads the file\nthread/process A updates the file\nthread/process B updates the file\n\nNow the file only contains B's updates, not A's.\nAlso, for safe file writing, don't forget the with statement (Python 2.5 and up), it'll save you a try/finally (which you should be using if you're not using with). From ConfigParser's docs:\nwith open('example.cfg', 'wb') as configfile:\n config.write(configfile)\n\n",
"Works for me.\n\nC:\\temp>type svn.lock\n[some_prop_section]\nHello=World\n\nC:\\temp>python\nActivePython 2.6.2.2 (ActiveState Software Inc.) based on\nPython 2.6.2 (r262:71600, Apr 21 2009, 15:05:37) [MSC v.1500 32 bit (Intel)] on\nwin32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import ConfigParser\n>>> def get_lock_file():\n... cf = ConfigParser.ConfigParser()\n... cf.read(\"svn.lock\")\n... return cf\n...\n>>> def save_lock_file(configurationParser):\n... cf = configurationParser\n... config_file = open('svn.lock', 'w')\n... cf.write(config_file)\n... config_file.close()\n...\n>>> def used_like_this():\n... cf = get_lock_file()\n... cf.set('some_prop_section', 'some_prop', 'some_value')\n... save_lock_file(cf)\n...\n>>> used_like_this()\n>>> ^Z\n\n\nC:\\temp>type svn.lock\n[some_prop_section]\nhello = World\nsome_prop = some_value\n\n\nC:\\temp>\n\n"
] |
[
13,
1,
1
] |
[] |
[] |
[
"configuration_files",
"file",
"file_io",
"python"
] |
stackoverflow_0001423214_configuration_files_file_file_io_python.txt
|
Q:
How do scripting languages use sockets?
Python, Perl and PHP all support TCP stream sockets. But exactly how do I use sockets in a script file that is run by a webserver (e.g. Apache), assuming I only have FTP access and not root access to the machine?
When a client connects to a specific port, how does the script file get invoked?
Does the script stay "running" for the duration of the connection? (could be hours)
So will multiple "instances" of the script be running simultaneously?
Then how can method calls be made from one instance of the script to another?
A:
Scripting languages utilize sockets exactly the same way as compiled languages.
1) The script typically opens and uses the socket. It's not "run" or "invoked" by the socket, but directly controls it via libraries (typically calling into the native C API for the OS).
2) Yes.
3) Not necessarily. Most modern scripting languages can handle multiple sockets in one "script" application.
4) N/A, see 3)
Edit in response to change in question and comments:
This is now obvious that you are trying to run this in the context of a hosted server. Typically, if you're using scripting within Apache or a similar server, things work a bit differently. A socket is opened up and maintained by Apache, and it executes your script, passing the relevant data (POST/GET results, etc.) to your script to process. Sockets usually don't come into play when you're dealing with scripting for CGI, etc.
However, this typically happens using the same concepts as mod_cgi. This pretty much means that the script running is nothing but an executable as far as the server is concerned, and the executable's output is what gets returned to the client. In this case, (provided you have permissions and the correct libraries on the server), your python script can actually launch a separate script that does its own socket work completely outside of Apache's context.
It's (usually) not a good idea to run a full socket implementation directly inside of the CGI script, however. CGI will expect the executable to run to completion before it returns results to the client. Apache will sit there and "hang" a bit waiting for this to complete. If you're launching a full server (especially if it's a long running process, which they tend to be), Apache will think the script is locked, and probably abort, potentially killing the process (configuration specific, but most hosting companies do this to prevent scripts from taking over CPU on a shared system).
However, if you execute a new script from within your script, and then return (shutting down the CGI executable), the other script can be left running, working as a server. This would be something like (python example, using the subprocess library):
from subprocess import Popen
newProcess = Popen("python MyScript", shell=True)
Note that all of the above really depends a bit on server configuration, though. Many hosting companies don't include some of the socket or shell libraries in their scripting implementations specifically to prevent this, so you often have to revert to making the executable in C. In addition, this is often against terms of service for most hosting companies - you'd have to check yours.
A:
As a prior answer notes, scripting languages operate in this regard in exactly the same way as compiled programs. Where they differ (potentially) is in the API that they use. The operating system (Windows or Unix-based) offers an API (e.g., BSD sockets) that compiled programs will call directly (typically). Interpreted languages like PHP or Python may offer a different API such as Python's socket API which may simplify some parts of the underlying API.
Given any of these APIs, there are many ways in which the actual handling of an incoming TCP connection can be structured. A great and detailed overview of such approaches is available on the c10k webpage: http://www.kegel.com/c10k.html -- in particular, the section on IO strategies. In short, the choice of answers to your question is up to the programmer and may affect how the resulting program performs under load.
To focus on your specific questions:
Many server programs are started before the connection and are running to listen for incoming connections. A special case is inetd which is a superserver: it listens for connections and then hands off those connections to programs that it starts (specified in a config file).
Typically, yes, the script remains running for the duration of the connection. However, depending on the larger system architecture, the script could conceivably pass the connection off to another program for handling and then exit.
This is a choice, again as enumerated on the c10k page.
This is another choice; operating systems offer a variety of Interprocess Communication (IPC) mechanisms to programs.
A:
The only way I can make sense of what you're asking is if you use inetd or a similar meta-server, which is configured to invoke your "service a single client" program for a specific listening port, forwarding your "single client servicer" program's stdin/stdout to the remote client.
If that's the case:
1) inetd runs it
2) yes
3) yes
4) named pipes are one possibility
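For illustration, a minimal sketch of such an inetd-style service: inetd (hypothetically configured for the chosen port) accepts the connection and hands it to the program as stdin/stdout, so the script just reads and writes:
#!/usr/bin/env python
import sys

for line in sys.stdin:                 # each line arrives from the remote client
    sys.stdout.write("echo: " + line)
    sys.stdout.flush()                 # push the reply back over the socket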
A:
When a client connects to a specific
port, how does the script file get
invoked?
The script must already be running in order to receive any connections from any client. You will need the script to hang around forever (an infinite loop) and set up Apache not to kill it on timeout. Basically, PHP is not a good choice for writing server applications. Why do you need this?
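To make that concrete, a minimal sketch (the port number is hypothetical) of a standalone TCP server; it must already be running, looping forever, before any client can connect:
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("", 8888))
srv.listen(5)
while True:                      # the "infinite loop" mentioned above
    conn, addr = srv.accept()    # blocks until a client connects
    conn.sendall("hello\n")
    conn.close()
A process like this is normally run on its own, not under Apache/CGI, for exactly the timeout reasons described earlier.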
|
How do scripting languages use sockets?
|
Python, Perl and PHP all support TCP stream sockets. But exactly how do I use sockets in a script file that is run by a webserver (e.g. Apache), assuming I only have FTP access and not root access to the machine?
When a client connects to a specific port, how does the script file get invoked?
Does the script stay "running" for the duration of the connection? (could be hours)
So will multiple "instances" of the script be running simultaneously?
Then how can method calls be made from one instance of the script to another?
|
[
"Scripting languages utilize sockets exactly the same way as compiled languages.\n1) The script typically opens and uses the socket. It's not \"run\" or \"invoked\" by the socket, but directly controls it via libraries (typically calling into the native C API for the OS).\n2) Yes.\n3) Not necessarily. Most modern scripting langauges can handle multiple sockets in one \"script\" application.\n4) N/A, see 3)\n\nEdit in response to change in question and comments:\nThis is now obvious that you are trying to run this in the context of a hosted server. Typically, if you're using scripting within Apache or a similar server, things work a bit differently. A socket is opened up and maintained by Apache, and it executes your script, passing the relevant data (POST/GET results, etc.) to your script to process. Sockets usually don't come into play when you're dealing with scripting for CGI, etc.\nHowever, this typically happens using the same concepts as mod_cgi. This pretty much means that the script running is nothing but an executable as far as the server is concerned, and the executable's output is what gets returned to the client. In this case, (provided you have permissions and the correct libraries on the server), your python script can actually launch a separate script that does its own socket work completely outside of Apache's context.\nIt's (usually) not a good idea to run a full socket implementation directly inside of the CGI script, however. CGI will expect the executable to run to completion before it returns results to the client. Apache will sit there and \"hang\" a bit waiting for this to complete. If you're launching a full server (especially if it's a long running process, which they tend to be), Apache will think the script is locked, and probably abort, potentially killing the process (configuration specific, but most hosting companies do this to prevent scripts from taking over CPU on a shared system).\nHowever, if you execute a new script from within your script, and then return (shutting down the CGI executable), the other script can be left running, working as a server. This would be something like (python example, using the subprocess library):\nnewProccess = Popen(\"python MyScript\", shell=True)\n\nNote that all of the above really depends a bit on server configuration, though. Many hosting companies don't include some of the socket or shell libraries in their scripting implementations specifically to prevent this, so you often have to revert to making the executable in C. In addition, this is often against terms of service for most hosting companies - you'd have to check yours.\n",
"As a prior answer notes, scripting languages have operate in this regard in exactly the same way as compiled programs. Where they differ (potentially) is in the API that they use. The operating system (Windows or Unix-based) offers an API (e.g., BSD sockets) that compiled programs will call directly (typically). Interpreted languages like PHP or Python may offer a different API such as Python's socket API which may simplify some parts of the underlying API.\nGiven any of these APIs, there are many ways in which the actual handling of an incoming TCP connection can be structured. A great and detailed overview of such approaches is available on the c10k webpage: http://www.kegel.com/c10k.html -- in particular, the section on IO strategies. In short, the choice of answers to your question is up to the programmer and may affect how the resulting program performs under load.\nTo focus on your specific questions:\n\nMany server programs are started before the connection and are running to listen for incoming connections. A special case is inetd which is a superserver: it listens for connections and then hands off those connections to programs that it starts (specified in a config file).\nTypically, yes, the script remains running for the duration of the connection. However, depending on the larger system architecture, the script could conceivably pass the connection off to another program for handling and then exit.\nThis is a choice, again as enumerated on the c10k page.\nThis is another choice; operating systems offer a variety of Interprocess Communication (IPC) mechanisms to programs.\n\n",
"The only way I can make sense of what you're asking is if you use inetd or a similar meta-server, which is configured to invoke your \"service a single client\" program for a specific listening port, forwarding your \"single client servicer\" program's stdin/stdout to the remote client.\nIf that's the case:\n1) inetd runs it\n2) yes\n3) yes\n4) named pipes are one possibility\n",
"\nWhen a client connects to a specific\n port, how does the script file get\n invoked?\n\nThe script should be already invoked in order to receive any connects from any client. You will need script to be hanging on there forever (infinie loop) and setup Apache not to kill it on timeout. Basically, PHP is not a good choice for writting server applications. Why do you need this?\n"
] |
[
6,
2,
1,
1
] |
[] |
[] |
[
"perl",
"php",
"python",
"scripting",
"sockets"
] |
stackoverflow_0001424511_perl_php_python_scripting_sockets.txt
|
Q:
Problem with import in Python
[Closing NOTE]
Thank you everyone who tried to help me.
I've found the problem, and it has nothing to do with my (admittedly small) understanding of Python. :p
The problem is that I edited the wrong branch of the same project: Main.py in one branch and XWinInfos.py in another branch.
Thanks anyway.
[Original Question]
I am a Java/PHP/Delphi programmer and only use Python when hacking someone else's program -- never to write complex Python myself. Since I have some free time this week, I decided to write something non-trivial with Python, and here is my problem.
First I have python files like this:
src/
main.py
SomeUtils.py
In "SomeUtils.py, I have a few functions and one class:
...
def funct1 ...
def funct2 ...
class MyClass1:
__init__(self):
self. ....
...
Then in "main.py", I use the function and class:
from SomeUtils import *;
def main():
funct1(); # Use funct1 without problem;
aMyObj1 = MyClass1(); # Use MyClass1 with error
if (__name__ == "__main__"):
main();
The problem is that the functions are used without any problem whatsoever, but I cannot use the class.
The error is:
NameError: global name 'MyClass1' is not defined
What is the problem here? and What can I do?
EDIT: Thanks for the answers, but I still have the problem. :(
When I change the import statements to:
from SomeUtils import funct1
from SomeUtils import MyClass1
I have this error
ImportError: cannot import name MyClass1
EDIT 2:----------------------------------------------------------
Thank you, guys.
I think it may be better to post the actual code, so here it is:
NOTE: I am aware of the ";" and "(...)", but I like it this way.
Here is the dir structure.
DIRS http://dl.getdropbox.com/u/1961549/images/Python_import_prolem_dir_.png
As you see, I just added an empty __init__.py, but it seems to make no difference.
Here is main.py:
from XWinInfos import GetCurrentWindowTitle;
from XWinInfos import XWinInfo;
def main():
print GetCurrentWindowTitle();
aXWinInfo = XWinInfo();
if (__name__ == "__main__"):
main();
Here is XWinInfos.py:
from subprocess import Popen;
from subprocess import PIPE;
from RegExUtils import GetTail_ofLine_withPrefix;
def GetCurrentWindowID():
aXProp = Popen(["xprop", "-root"], stdout=PIPE).communicate()[0];
aLine = GetTail_ofLine_withPrefix("_NET_ACTIVE_WINDOW\(WINDOW\): window id # 0x", aXProp);
return aLine;
def GetCurrentWindowTitle():
aWinID = GetCurrentWindowID();
aWinTitle = GetWindowTitle(aWinID);
return aWinTitle;
def GetWindowTitle(pWinID):
if (aWinID == None): return None
aWMCtrlList = Popen(["wmctrl", "-l"], stdout=PIPE).communicate()[0];
aWinTitle = GetTail_ofLine_withPrefix("0x[0-9a-fA-F]*" + aWinID + "[ ]+[\-]?[0-9]+[ ]+[^\ ]+[ ]+", aWMCtrlList);
return aWinTitle;
class XWinInfo:
def __init__(self):
aWinID = GetCurrentWindowID();
self.WinID = pWinID;
self.Title = GetWindowTitle(pWinID);
The file RegExUtils.py holds a function "GetTail_ofLine_withPrefix", which works fine.
If I use "from XWinInfos import *;", the error goes "NameError: global name 'XWinInfo' is not defined".
If I use "from XWinInfos import XWinInfo;", the error goes "ImportError: cannot import name XWinInfo".
Please helps.
Thanks in advance.
A:
Hmm... there's several typos in your example, so I wonder if your actual code has some typos as well. Here's the complete source from a quick test that does work fine without import errors.
SomeUtils.py:
def funct1():
print('Function 1')
def funct2():
print('Function 2')
class MyClass1(object):
def __init__(self):
print('MyClass')
main.py:
from SomeUtils import *
def main():
funct1()
aObj = MyClass1()
if (__name__ == "__main__"):
main()
[EDIT Based on OP additional info]
I still can't recreate the same error, but the code you posted won't initially work for at least a couple of errors in the XWinInfox.py init method:
self.WinID = pWinID #change to 'aWinID' since pWinID is not defined
self.Title = GetWindowTitle(pWinID) #change to 'aWinID'since pWinID is not defined
so a corrected version would read:
self.WinID = aWinID
self.Title = GetWindowTitle(aWinID)
Also, you have a typo in your init file name, there should be two underscores before AND after the 'init' word. Right now you have '__init_.py' and it should be '__init__.py', however this shouldn't keep your code from working.
Because I don't have the RegExUtils.py code, I just stubbed out the methods that rely on that file. With the stubbed methods and correcting the aforementioned typos, the code you post now works.
A:
why are you importing from XWinInfos? you should be importing from SomeUtils. Not to mention that *-style imports are discouraged.
Edit: your error
ImportError: cannot import name MyClass1
basically tells you that there is no MyClass1 defined in SomeUtils. It could be because you have another SomeUtils.py file somewhere on the system path and it is being imported instead. If that file doesn't have MyClass1, you'd get this error.
Again: it's irrelevant whether your class MyClass1 exists. What might be the case is that you have another XWinInfos.p(y|o|w) somewhere on your system and it's being imported. Otherwise: no repro.
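One quick way to test this shadowing theory is to ask Python which file it actually imported:
import XWinInfos
print XWinInfos.__file__   # the path of the module actually being imported
print dir(XWinInfos)       # the names that module really defines
If the printed path is not your source file, or XWinInfo is missing from the dir() output, you have found the stale or duplicate module.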
A:
You may want to rewrite main.py as follows:
import SomeUtils as util
def main():
util.funct1() # Use funct1 without problem;
aMyObj1 = util.MyClass1() # Use MyClass1 with error
if __name__ == "__main__":
main()
A few quick notes:
There is no need for semicolons in
Python unless you have more than one
statement on a line
There is no need
to wrap conditional tests in
parentheses except for grouping
from
module import * is discouraged as it
pollutes the global namespace
A:
I suppose you mean
from SomeUtils import *
however, that does not trigger the error for me. This works fine for me:
SomeUtils.py
def funct1():
print 4
class MyClass1:
def __init__(self):
print 8
main.py
from SomeUtils import *
def main():
funct1() # Use funct1 without problem;
aMyObj1 = MyClass1() # Use MyClass1 without error
if (__name__ == "__main__"):
main()
A:
Your question is naturally linked to a lot of SO older one.
See, just for reference, SO1342128 and SO1057843
|
Problem with import in Python
|
[Closing NOTE]
Thank you everyone who tried to help me.
I've found the problem, and it has nothing to do with my (admittedly small) understanding of Python. :p
The problem is that I edited the wrong branch of the same project: Main.py in one branch and XWinInfos.py in another branch.
Thanks anyway.
[Original Question]
I am a Java/PHP/Delphi programmer and only use Python when hacking someone else's program -- never to write complex Python myself. Since I have some free time this week, I decided to write something non-trivial with Python, and here is my problem.
First I have python files like this:
src/
main.py
SomeUtils.py
In "SomeUtils.py, I have a few functions and one class:
...
def funct1 ...
def funct2 ...
class MyClass1:
__init__(self):
self. ....
...
Then in "main.py", I use the function and class:
from SomeUtils import *;
def main():
funct1(); # Use funct1 without problem;
aMyObj1 = MyClass1(); # Use MyClass1 with error
if (__name__ == "__main__"):
main();
The problem is that the functions are used without any problem whatsoever, but I cannot use the class.
The error is:
NameError: global name 'MyClass1' is not defined
What is the problem here? and What can I do?
EDIT: Thanks for the answers, but I still have the problem. :(
When I change the import statements to:
from SomeUtils import funct1
from SomeUtils import MyClass1
I have this error
ImportError: cannot import name MyClass1
EDIT 2:----------------------------------------------------------
Thank you, guys.
I think it may be better to post the actual code, so here it is:
NOTE: I am aware of the ";" and "(...)", but I like it this way.
Here is the dir structure.
DIRS http://dl.getdropbox.com/u/1961549/images/Python_import_prolem_dir_.png
As you see, I just added an empty __init__.py, but it seems to make no difference.
Here is main.py:
from XWinInfos import GetCurrentWindowTitle;
from XWinInfos import XWinInfo;
def main():
print GetCurrentWindowTitle();
aXWinInfo = XWinInfo();
if (__name__ == "__main__"):
main();
Here is XWinInfos.py:
from subprocess import Popen;
from subprocess import PIPE;
from RegExUtils import GetTail_ofLine_withPrefix;
def GetCurrentWindowID():
aXProp = Popen(["xprop", "-root"], stdout=PIPE).communicate()[0];
aLine = GetTail_ofLine_withPrefix("_NET_ACTIVE_WINDOW\(WINDOW\): window id # 0x", aXProp);
return aLine;
def GetCurrentWindowTitle():
aWinID = GetCurrentWindowID();
aWinTitle = GetWindowTitle(aWinID);
return aWinTitle;
def GetWindowTitle(pWinID):
if (aWinID == None): return None
aWMCtrlList = Popen(["wmctrl", "-l"], stdout=PIPE).communicate()[0];
aWinTitle = GetTail_ofLine_withPrefix("0x[0-9a-fA-F]*" + aWinID + "[ ]+[\-]?[0-9]+[ ]+[^\ ]+[ ]+", aWMCtrlList);
return aWinTitle;
class XWinInfo:
def __init__(self):
aWinID = GetCurrentWindowID();
self.WinID = pWinID;
self.Title = GetWindowTitle(pWinID);
The file RegExUtils.py holds a function "GetTail_ofLine_withPrefix", which works fine.
If I use "from XWinInfos import *;", the error goes "NameError: global name 'XWinInfo' is not defined".
If I use "from XWinInfos import XWinInfo;", the error goes "ImportError: cannot import name XWinInfo".
Please helps.
Thanks in advance.
|
[
"Hmm... there's several typos in your example, so I wonder if your actual code has some typos as well. Here's the complete source from a quick test that does work fine without import errors.\nSomeUtils.py:\ndef funct1():\n print('Function 1')\n\ndef funct2():\n print('Function 2')\n\nclass MyClass1(object):\n def __init__(self):\n print('MyClass')\n\nmain.py:\nfrom SomeUtils import *\n\ndef main():\n funct1()\n aObj = MyClass1()\n\nif (__name__ == \"__main__\"):\n main()\n\n[EDIT Based on OP additional info]\nI still can't recreate the same error, but the code you posted won't initially work for at least a couple of errors in the XWinInfox.py init method:\nself.WinID = pWinID #change to 'aWinID' since pWinID is not defined\nself.Title = GetWindowTitle(pWinID) #change to 'aWinID'since pWinID is not defined\n\nso a corrected version would read:\nself.WinID = aWinID\nself.Title = GetWindowTitle(aWinID)\n\nAlso, you have a typo in your init file name, there should be two underscores before AND after the 'init' word. Right now you have '__init_.py' and it should be '__init__.py', however this shouldn't keep your code from working.\nBecause I don't have the RegExUtils.py code, I just stubbed out the methods that rely on that file. With the stubbed methods and correcting the aforementioned typos, the code you post now works.\n",
"why are you importing from XWinInfos? you should be importing from SomeUtils. Not to mention that *-style imports are discouraged.\nEdit: your error\n\nImportError: cannot import name MyClass1\n\nbasically tells you that there is no MyClass1 defined in the SomeUtils. It could be because you have another SomeUtils.py file somewhere on the system path and it being imported instead. If that file doesn't have MyClass1, you'd get this error.\nAgain: it's irrelevant whether you class MyClass1 exist. What might be the case is that you have another XWinInfos.p(y|o|w) somewhere on your system and it's being imported. Otherwise: norepro.\n",
"You may want to rewrite main.py as follows:\nimport SomeUtils as util\n\ndef main():\n util.funct1() # Use funct1 without problem;\n aMyObj1 = util.MyClass1() # Use MyClass1 with error\n\nif __name__ == \"__main__\":\n main()\n\nA few quick notes:\n\nThere is no need for semicolons in\nPython unless you have more than one\nstatement on a line \nThere is no need\nto wrap conditional tests in\nparentheses except for grouping \nfrom\nmodule import * is discouraged as it\npollutes the global namespace\n\n",
"I suppose you mean \nfrom SomeUtils import *\n\nhowever, that does not trigger the error for me. This works fine for me:\nSomeUtils.py\ndef funct1():\n print 4\n\nclass MyClass1:\n def __init__(self):\n print 8\n\nmain.py\nfrom SomeUtils import *\n\ndef main():\n funct1() # Use funct1 without problem;\n aMyObj1 = MyClass1() # Use MyClass1 without error\n\nif (__name__ == \"__main__\"):\n main()\n\n",
"Your question is naturally linked to a lot of SO older one.\nSee, just for reference, SO1342128 and SO1057843\n"
] |
[
3,
2,
1,
1,
0
] |
[] |
[] |
[
"import",
"python"
] |
stackoverflow_0001427855_import_python.txt
|
Q:
How do you make a PDF searchable with text in the sidebar?
I'm looking to create some PDF's from Python.
I've noticed that some pdf's have sidebar text that allows you to see the context of occurrences of search terms.
e.g. search for "dictionary"
View in Sidebar:
Page 10 Assigning a value to an existing dictionary key simply replaces the old value with a new one.
How is that done?
Is there any way to convert existing PDFs to render this sidebar text?
A:
If you use Reportlab to generate your pdfs, then there are facilities in the library to bookmark as you want. Checkout the bookmarkPage method on page 54 of the documentation.
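As a rough sketch of that facility (assuming the reportlab package is installed): bookmarkPage marks a named destination on the current page, and addOutlineEntry lists it in the PDF viewer's bookmark/outline sidebar:
from reportlab.pdfgen import canvas

c = canvas.Canvas("example.pdf")
c.drawString(72, 720, "Chapter 1")
c.bookmarkPage("ch1")                            # named destination here
c.addOutlineEntry("Chapter 1", "ch1", level=0)   # entry in the sidebar
c.showPage()
c.save()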
A:
I believe what you're referring to are bookmarks. The first hit on Google indicates that you can put them in by hand with Acrobat Pro.
The DocBook XSL templates, when used with Apache FOP, can also produce such bookmarks.
A:
The PyQt gui toolkit has support for creating PDF's. See for example: Printing Rich Text with Qt
|
How do you make a PDF searchable with text in the sidebar?
|
I'm looking to create some PDF's from Python.
I've noticed that some pdf's have sidebar text that allows you to see the context of occurrences of search terms.
e.g. search for "dictionary"
View in Sidebar:
Page 10 Assigning a value to an existing dictionary key simply replaces the old value with a new one.
How is that done?
Is there any way to convert existing PDFs to render this sidebar text?
|
[
"If you use Reportlab to generate your pdfs, then there are facilities in the library to bookmark as you want. Checkout the bookmarkPage method on page 54 of the documentation.\n",
"I believe what you're referring to are bookmarks. The first hit on Google indicates that you can put them in by hand with Acrobat Pro.\nThe DocBook XSL templates when used with Apache FOP\n",
"The PyQt gui toolkit has support for creating PDF's. See for example: Printing Rich Text with Qt\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"pdf",
"python"
] |
stackoverflow_0001351510_pdf_python.txt
|
Q:
python html integration
This is a complete n00b question and I understand I may get voted down for asking this, but I am totally confused over Python's HTML integration.
As I understand it, one way to integrate Python with HTML code is by using mod_python.
Now, is there any other way or method that is more effective for using Python with HTML?
Please advise me on this as I am new to learning Python and could use some help.
Some pointers to code samples would be highly appreciated.
Thanks a lot.
EDIT: Also, what I would like to know is: how do PyHP and mod_python compare to each other? I mean, how are they different? And Django? What is Django all about?
A:
I would suggest you to start with web.py
A:
You can read a tutorial on how to use Python in the web.
http://docs.python.org/howto/webservers.html
In a few words, mod_python keeps the Python interpreter in memory, ready to execute Python scripts, which is faster than launching it every time. It doesn't let you embed Python in HTML the way PHP does. For that you need a special application, like PyHP (http://www.pyhp.org) or another (there are several of them). Read the Python tutorial and documentation pages; there's plenty of info and links to many template and html-embedding engines.
Engines such as PyHP require some overhead to run. Without them, your Python application must output HTTP response headers and the page as strings. Mod_wsgi and fastcgi facilitate this process. The page I linked in the beginning gives a good overview of that.
Also you may try Tornado, a python web server, if you don't need to stick to Apache.
A:
The standard way for Python web apps to talk to a webserver is WSGI. Also check out WebOb.
http://www.wsgi.org/wsgi/
http://pythonpaste.org/webob/
But for a complete noob I'd start with a complete web-framework (in which case you typically can ignore the links above). Django or Grok are both full-stack frameworks that are easy to use and learn. Django is more popular, but Grok is built on 13 years of Web application publishing experience, and is seriously cool. The difference is a matter of taste.
http://django.org/
http://grok.zope.org/
If you want something more minimalistic, the world's your oyster: there are countless web frameworks for Python, from BFG to Turbogears.
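For a taste of the WSGI interface mentioned above, a minimal sketch served with the standard library's wsgiref (the port number is arbitrary):
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # A WSGI app: receive the request environment, send the status and
    # headers through start_response, return the body as an iterable.
    start_response('200 OK', [('Content-Type', 'text/html')])
    return ['<html><body>Hello from Python</body></html>']

make_server('', 8000, app).serve_forever()
Frameworks like Django and Grok build on this same interface, so an app like this runs under mod_wsgi just as well as under wsgiref.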
|
python html integration
|
This is a complete n00b question and I understand I may get voted down for asking this, but I am totally confused over Python's HTML integration.
As I understand it, one way to integrate Python with HTML code is by using mod_python.
Now, is there any other way or method that is more effective for using Python with HTML?
Please advise me on this as I am new to learning Python and could use some help.
Some pointers to code samples would be highly appreciated.
Thanks a lot.
EDIT: Also, what I would like to know is: how do PyHP and mod_python compare to each other? I mean, how are they different? And Django? What is Django all about?
|
[
"I would suggest you to start with web.py\n",
"You can read a tutorial on how to use Python in the web.\nhttp://docs.python.org/howto/webservers.html\nIn few words, mod_python keeps python interpreter in memory ready to execute python scripts, which is faster than launching it every time. It doesn't let you integrate python in html like PHP. For this you need to use a special application, like PyHP (http://www.pyhp.org) or another (there are several of them). Read Python tutorial and documentation pages, there's plenty of info and links to many template and html-embedding engines.\nSuch engines as PyHP require some overhead to run. Without them, your python application must output HTTP response headers and the page as strings. Mod_wsgi and fastcgi facilitate this process. The page I linked in the beginning gives a good overview on that.\nAlso you may try Tornado, a python web server, if you don't need to stick to Apache.\n",
"The standard way for Python web apps to talk to a webserver is WSGI. Also check out WebOb.\n\nhttp://www.wsgi.org/wsgi/\nhttp://pythonpaste.org/webob/\n\nBut for a complete noob I'd start with a complete web-framework (in which case you typically can ignore the links above). Django or Grok are both full-stack framworks that are easy to use and learn. Django is more popular, but Grok is built on 13 years of Web application publishing experience, and is seriously cool. The difference is a matter of taste.\n\nhttp://django.org/\nhttp://grok.zope.org/\n\nIf you want something more minimalistic, the worlds your oyster, there are an infinite amount of web frameworks for Python, from BFG to Turbogears.\n"
] |
[
3,
2,
1
] |
[] |
[] |
[
"html",
"python"
] |
stackoverflow_0001428260_html_python.txt
|
Q:
Is it possible to make re find the smallest match while using greedy characters
Disclaimer: I'm not a regex expert.
I'm using Python's re module to perform regex matching on many htm files. One of the patterns is something like this:
<bla><blabla>87765.*</blabla><bla>
The problem I've encountered is that instead of finding all (say) five occurrences of the pattern, it will find only one, because it welds all the occurrences into one, using the <bla><blabla>87765 part of the first occurrence and the </blabla><bla> part of the last occurrence in the page.
Is there any way to tell re to find the smallest match?
A:
You can use a reluctant qualifier in your pattern (for more details, reference the python documentation on the *?, +?, and ?? operators):
<bla><blabla>87765.*?</blabla><bla>
Or, exclude < from the possible matched characters:
<bla><blabla>87765[^<]*</blabla><bla>
only if there are no child tags between <blabla> and </blabla>.
A:
The Python re module supports nongreedy matching. You just add a ? to the end of the wildcard pattern, such as .*?. You can learn more at this HOWTO.
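A quick demonstration of the difference on a made-up input:
import re

html = "<bla><blabla>87765 one</blabla><bla> ... <bla><blabla>87765 two</blabla><bla>"
print re.findall(r"<bla><blabla>87765.*</blabla><bla>", html)   # greedy: one giant match
print re.findall(r"<bla><blabla>87765.*?</blabla><bla>", html)  # non-greedy: two small matches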
A:
I believe the regex
<bla><blabla>87765.*?</blabla><bla>
can produce catastrophic backtracking.
Instead, use:
<bla><blabla>87765[^<]*</blabla><bla>
Using atomic grouping (I'm not sure Python supports this),
the above regex becomes
<bla><blabla>(?>(.*?<))/blabla><bla>
Everything between (?> ... ) is treated as one single token by the regex engine, once the regex engine leaves the group. Because the entire group is one token, no backtracking can take place once the regex engine has found a match for the group. If backtracking is required, the engine has to backtrack to the regex token before the group (the caret in our example). If there is no token before the group, the regex must retry the entire regex at the next position in the string. Note that I needed to include the "<" in the group to ensure atomicity. Close enough.
A:
Um... there is a way to tell re to find the smallest match, and it's precisely by using non-greedy quantifiers.
<bla><blabla>87765.*?</blabla><bla>
I can't imagine why you would want to do it while using greedy quantifiers.
|
Is it possible to make re find the smallest match while using greedy characters
|
Disclaimer: I'm not a regex expert.
I'm using Python's re module to perform regex matching on many htm files. One of the patterns is something like this:
<bla><blabla>87765.*</blabla><bla>
The problem I've encountered is that instead of finding all (say) five occurrences of the pattern, it will find only one, because it welds all the occurrences into one, using the <bla><blabla>87765 part of the first occurrence and the </blabla><bla> part of the last occurrence in the page.
Is there any way to tell re to find the smallest match?
|
[
"You can use a reluctant qualifier in your pattern (for more details, reference the python documentation on the *?, +?, and ?? operators):\n<bla><blabla>87765.*?</blabla><bla>\n\nOr, exclude < from the possible matched characters:\n<bla><blabla>87765[^<]*</blabla><bla>\n\nonly if there are no children tags between <blabla> and </blabla>.\n",
"The Python re module supports nongreedy matching. You just add a ? to the end of the wildcard pattern, such as .*?. You can learn more at this HOWTO.\n",
"I believe the regex\n<bla><blabla>87765.*?</blabla><bla>\ncan produce catastrophic backtracking.\n\nInstead, use:\n<bla><blabla>87765[^<]*</blabla><bla>\n\nUsing atomic grouping (I'm not sure Python supports this), \nthe above regex becomes \n<bla><blabla>(?>(.*?<))/blabla><bla>\n\nEverything between (?> ... ) is treated as one single token by the regex engine, once the regex engine leaves the group. Because the entire group is one token, no backtracking can take place once the regex engine has found a match for the group. If backtracking is required, the engine has to backtrack to the regex token before the group (the caret in our example). If there is no token before the group, the regex must retry the entire regex at the next position in the string. Note that I needed to include the \"<\" in the group to ensure atomicity. Close enough.\n",
"Um... there is a way to tell re to find the smallest match, and it's precisely by using non-greedy quantifiers.\n<bla><blabla>87765.*?</blabla><bla>\n\nI can't imagine why you would want to do it while using greedy quantifiers.\n"
] |
[
19,
5,
1,
0
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0001428780_python_regex.txt
|
Q:
Extending Jython Syntax
I would like to add syntax to Jython to enable a nicer API for users. For instance, matrix libraries like NumPy would benefit from having both matrix and elementwise operations, like Matlab's .* vs. * infix operators.
You can create a matrix in Octave using:
A = [ 1, 1, 2; 3, 5, 8; 13, 21, 34 ]
which is considerably nicer than NumPy's:
b = array( [ (1.5,2,3), (4,5,6) ] )
R uses formulas "y ~ x + z" for selecting variables in a matrix/data frame. This is considerably nicer than the alternative of ["y"] ["x","z"] or parsing the string "y ~ x + y".
More complicated examples can be implemented in CPython using EasyExtend. But EasyExtend doesn't work on the JVM.
What is the easiest but reasonably robust way to add syntax to Jython? It would be nice to have a framework to implement entirely new language constructs or define mini languages within Jython.
A:
To the best of my knowledge there is no macro / syntax-expanding facility similar to EasyExtend, although the developer of EasyExtend has been working on some Jython projects recently (including some which are similar to EE). I suppose you could write a preprocessor of some kind, but I would tend to suggest that syntax extension isn't terribly popular in the Python world and you might have better success implementing your own DSL if you really need to.
|
Extending Jython Syntax
|
I would like to add syntax to Jython to enable a nicer API for users. For instance, matrix libraries like NumPy would benefit from having both matrix and elementwise operations, like Matlab's .* vs. * infix operators.
You can create a matrix in Octave using:
A = [ 1, 1, 2; 3, 5, 8; 13, 21, 34 ]
which is considerably nicer than NumPy's:
b = array( [ (1.5,2,3), (4,5,6) ] )
R uses formulas "y ~ x + z" for selecting variables in a matrix/data frame. This is considerably nicer than the alternative of ["y"] ["x","z"] or parsing the string "y ~ x + y".
More complicated examples can be implemented in CPython using EasyExtend. But EasyExtend doesn't work on the JVM.
What is the easiest but reasonably robust way to add syntax to Jython? It would be nice to have a framework to implement entirely new language constructs or define mini languages within Jython.
|
[
"To the best of my knowledge there is not a macro / syntax expanding facility similar to EasyExtend, although the developer of EasyExtend has been working on some jython projects recently (including some which are similar to EE). I suppose you could write a preprocessor of some kind, but I would tend to suggest that syntax extension isn't terribly popular in the python world and you might have better success implementing your own DSL if you really need to.\n"
] |
[
1
] |
[] |
[] |
[
"dsl",
"jython",
"python"
] |
stackoverflow_0001331784_dsl_jython_python.txt
|
Q:
Python switch order of elements
I am a newbie seeking the Zen of Python :) Today's koan was finding the most Pythonesque way to solve the following problem:
Permute the letters of a string pairwise, e.g.
input: 'abcdefgh'
output: 'badcfehg'
A:
I'd go for:
s="abcdefgh"
print "".join(b+a for a,b in zip(s[::2],s[1::2]))
s[start:end:step] takes every step'th letter, zip matches them up pairwise, the loop swaps them, and the join gives you back a string.
A:
my personal favorite to do stuff pairwise:
def pairwise( iterable ):
it = iter(iterable)
return zip(it, it) # zipping the same iterator twice produces pairs
output = ''.join( b+a for a,b in pairwise(input))
A:
''.join(s[i+1] + s[i] for i in range(0,len(s),2))
Yes, I know it's less Pythonic because it uses range, but it's short, and I probably don't have to explain it for you to figure out what it does.
A:
I just noticed that none of the existing answers work if the length of the input is odd. Most of the answers lose the last character. My previous answer throws an exception.
If you just want the last character tacked onto the end, you could do something like this:
print "".join(map(lambda a,b:(b or '')+a, s[::2], s[1::2]))
or in 2.6 and later:
print "".join(b+a for a,b in izip_longest(s[::2],s[1::2], fillvalue=''))
This is based on Anthony Towns's answer, but uses either map or izip_longest to make sure the last character in an odd-length string doesn't get discarded. The (b or '') bit in the map version is to convert the None that map pads with into ''.
A:
Since in Python, every string is also an iterable, itertools comes in handy here.
In addition to the functions itertools provides, the documentation also supplies lots of recipes.
from itertools import izip_longest
# From Python 2.6 docs
def grouper(n, iterable, fillvalue=None):
"grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
args = [iter(iterable)] * n
return izip_longest(fillvalue=fillvalue, *args)
Now you can use grouper to group the string by pairs, then reverse the pairs, then join them back into a string.
pairs = grouper(2, "abcdefgh")
reversed_pairs = [''.join(reversed(item)) for item in pairs]
print ''.join(reversed_pairs)
A:
This may look a little scary, but I think you'd learn a lot deciphering the following idiom:
s = "abcdefgh"
print ''.join(b+a for a,b in zip(*[iter(s)]*2))
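For reference, here is the same idiom unpacked step by step (same output); the trick is that both slots of the zip pull from one shared iterator:
s = "abcdefgh"
it = iter(s)                  # a single iterator over s
pairs = zip(it, it)           # zip reads it twice per step: ('a','b'), ('c','d'), ...
print ''.join(b + a for a, b in pairs)   # badcfehg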
|
Python switch order of elements
|
I am a newbie seeking the Zen of Python :) Today's koan was finding the most Pythonesque way to solve the following problem:
Permute the letters of a string pairwise, e.g.
input: 'abcdefgh'
output: 'badcfehg'
|
[
"I'd go for:\ns=\"abcdefgh\"\nprint \"\".join(b+a for a,b in zip(s[::2],s[1::2]))\n\ns[start:end:step] takes every step'th letter, zip matches them up pairwise, the loop swaps them, and the join gives you back a string.\n",
"my personal favorite to do stuff pairwise:\ndef pairwise( iterable ):\n it = iter(iterable)\n return zip(it, it) # zipping the same iterator twice produces pairs\n\noutput = ''.join( b+a for a,b in pairwise(input))\n\n",
"''.join(s[i+1] + s[i] for i in range(0,len(s),2))\n\nYes, I know it's less pythonic for using range, but it's short, and I probably don't have to explain it for you to figure out what it does.\n",
"I just noticed that none of the existing answers work if the length of the input is odd. Most of the answers lose the last character. My previous answer throws an exception.\nIf you just want the last character tacked onto the end, you could do something like this:\nprint \"\".join(map(lambda a,b:(b or '')+a, s[::2], s[1::2]))\n\nor in 2.6 and later:\nprint \"\".join(b+a for a,b in izip_longest(s[::2],s[1::2], fillvalue=''))\n\nThis is based on Anthony Towns's answer, but uses either map or izip_longest to make sure the last character in an odd-length string doesn't get discarded. The (b or '') bit in the map version is to convert the None that map pads with into ''.\n",
"Since in Python, every string is also an iterable, itertools comes in handy here.\nIn addition to the functions itertools provides, the documentation also supplies lots of recipes.\nfrom itertools import izip_longest\n\n# From Python 2.6 docs\ndef grouper(n, iterable, fillvalue=None):\n \"grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx\"\n args = [iter(iterable)] * n\n return izip_longest(fillvalue=fillvalue, *args)\n\nNow you can use grouper to group the string by pairs, then reverse the pairs, then join them back into a string.\npairs = grouper(2, \"abcdefgh\")\nreversed_pairs = [''.join(reversed(item)) for item in pairs]\nprint ''.join(reversed_pairs)\n\n",
"This may look a little scary, but I think you'd learn a lot deciphering the following idiom:\ns = \"abcdefgh\"\nprint ''.join(b+a for a,b in zip(*[iter(s)]*2))\n\n"
] |
[
13,
6,
5,
2,
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001428547_python.txt
|
Q:
lxml equivalent to BeautifulSoup "OR" syntax?
I'm converting some html parsing code from BeautifulSoup to lxml. I'm trying to figure out the lxml equivalent syntax for the following BeautifulSoup statement:
soup.find('a', {'class': ['current zzt', 'zzt']})
Basically I want to find all of the "a" tags in the document that have a class attribute of either "current zzt" or "zzt". BeautifulSoup allows one to pass in a list, dictionary, or even a regular expression to perform the match.
What is the lxml equivalent?
Thanks!
A:
No, lxml does not provide the "find first or return None" method you're looking for. Just use (select(soup) or [None])[0] if you need that, or write a function to do it for you.
#!/usr/bin/python
import lxml.html
import lxml.cssselect
soup = lxml.html.fromstring("""
<html>
<a href="foo" class="yyy zzz" />
<a href="bar" class="yyy" />
<a href="baz" class="zzz" />
<a href="quux" class="zzz yyy" />
<a href="warble" class="qqq" />
<p class="yyy zzz">Hello</p>
</html>""")
select = lxml.cssselect.CSSSelector("a.yyy.zzz, a.yyy")
print [lxml.html.tostring(s).strip() for s in select(soup)]
print (select(soup) or [None])[0]
Ok, so soup.find('a') would indeed find the first a element or None as you expect. Trouble is, it doesn't appear to support the rich XPath syntax needed for CSSSelector.
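Alternatively, since lxml trees support XPath directly, an untested sketch like this mirrors BeautifulSoup's list form by matching the class attribute exactly:
links = soup.xpath('//a[@class="current zzt" or @class="zzt"]')
first = links[0] if links else None   # "find first or None" by hand
Note this matches the whole attribute string, so an element with class="current zzt extra" would not be found; the CSSSelector approach above matches individual class tokens instead.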
|
lxml equivalent to BeautifulSoup "OR" syntax?
|
I'm converting some html parsing code from BeautifulSoup to lxml. I'm trying to figure out the lxml equivalent syntax for the following BeautifulSoup statement:
soup.find('a', {'class': ['current zzt', 'zzt']})
Basically I want to find all of the "a" tags in the document that have a class attribute of either "current zzt" or "zzt". BeautifulSoup allows one to pass in a list, dictionary, or even a regular expression to perform the match.
What is the lxml equivalent?
Thanks!
|
[
"No, lxml does not provide the \"find first or return None\" method you're looking for. Just use (select(soup) or [None])[0] if you need that, or write a function to do it for you.\n#!/usr/bin/python\nimport lxml.html\nimport lxml.cssselect\nsoup = lxml.html.fromstring(\"\"\"\n <html>\n <a href=\"foo\" class=\"yyy zzz\" />\n <a href=\"bar\" class=\"yyy\" />\n <a href=\"baz\" class=\"zzz\" />\n <a href=\"quux\" class=\"zzz yyy\" />\n <a href=\"warble\" class=\"qqq\" />\n <p class=\"yyy zzz\">Hello</p>\n </html>\"\"\")\n\nselect = lxml.cssselect.CSSSelector(\"a.yyy.zzz, a.yyy\")\nprint [lxml.html.tostring(s).strip() for s in select(soup)]\nprint (select(soup) or [None])[0]\n\nOk, so soup.find('a') would indeed find first a element or None as you expect. Trouble is, it doesn't appear to support the rich XPath syntax needed for CSSSelector.\n"
] |
[
3
] |
[] |
[] |
[
"beautifulsoup",
"lxml",
"python"
] |
stackoverflow_0001384470_beautifulsoup_lxml_python.txt
|
Q:
Managing object instances
I want to be able to create an unknown number of objects. I'm not sure if there is a better way to manage and reference them.
Lets use a standard OOP example... say every time a user enters a name for a pet in a text field and clicks a button a new pet object is created via the petFactory function.
function pet(name)
{
this.name= name;
}
function petFactory(textFieldInput)
{
var x = new pet(textFieldInput)
}
Since there's no way that I know of (besides an eval function) to dynamically use a new, unique variable name for the pet every time petFactory is called, x is reassigned to a new object and the last pet is lost. What I've been doing is pushing the pet object onto an array as soon as it's initialized.
petArray.push(this);
So if I wanted the pet named 'mrFuzzyBottoms' I could loop through an indexed array until I found the object with the instance variable 'mrFuzzyBottoms'
for (n in petArray)
{
if (petarray[n].name == 'mrFuzzyBottoms')
{
// Kill mrFuzzyBottoms
}
}
or I could use an associative array...whatever...but the array is the only method I know for doing this, besides using an eval function to create unique variable names from strings; but some languages don't have an eval function (ActionScript 3), and then there is the security risk.
Is there a better way to do this?
Edit:
Right now I'm working in Python, JavaScript and ActionScript.
A:
"I could use an associative array" Correct.
"the array is the only method I know for doing this."
Learn about "Mappings" or "Dictionaries" as soon as you can. You will find that it does exactly what you're asking for.
What language are you using? If you provide a specific language, we can provide specific links to the "Map" structure in that language.
A:
This is what collection classes (lists, dictionaries/maps, sets) are for: to collect a number of instances. For the above case I would probably use a map (class name varies depending on language) from name to object.
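For example, in Python a plain dict keyed by name replaces the linear scan (Pet here is the hypothetical class from the question):
pets = {}                        # name -> pet object

def pet_factory(name):
    pets[name] = Pet(name)       # register each new pet under its name

# later: constant-time lookup instead of looping over an array
if 'mrFuzzyBottoms' in pets:
    del pets['mrFuzzyBottoms']   # kill mrFuzzyBottoms
JavaScript plain objects and ActionScript 3 Dictionary/Object instances give you the same name-to-object mapping.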
A:
#!/usr/bin/env python
"""Playing with pets.
"""
class Pet(object):
"""A pet with a name."""
def __init__(self, name):
"""Create a pet with a `name`.
"""
self._name = name
@property
def name(self): # name is read-only
return self._name
def __repr__(self):
"""
>>> eval(repr(self)) == self
"""
klass = self.__class__.__name__
return "%s('%s')" % (klass, self.name.replace("'", r"\'"))
def __eq__(self, other):
return repr(self) == repr(other)
name2pet = {} # combined register of all pets
@classmethod
def get(cls, name):
"""Return a pet with `name`.
Try to get the pet registered by `name`
otherwise register a new pet and return it
"""
register = cls.name2pet
try: return register[name]
except KeyError:
pet = register[name] = cls(name) # no duplicates allowed
Pet.name2pet.setdefault(name, []).append(pet)
return pet
class Cat(Pet):
name2pet = {} # each class has its own registry
class Dog(Pet):
name2pet = {}
def test():
assert eval(repr(Cat("Teal'c"))) == Cat("Teal'c")
pets = [Pet('a'), Cat('a'), Dog('a'), Cat.get('a'), Dog.get('a')]
assert all(pet.name == 'a' for pet in pets)
cat, dog = Cat.get('pet'), Dog.get('pet')
assert repr(cat) == "Cat('pet')" and repr(dog) == "Dog('pet')"
assert dog is not cat
assert dog != cat
assert cat.name == dog.name
assert all(v == [Cat(k), Dog(k)]
for k, v in Pet.name2pet.items()), Pet.name2pet
try: cat.name = "cat" # .name is read-only
except AttributeError:
pass
try: assert 0
except AssertionError:
return "OK"
raise AssertionError("Assertions must be enabled during the test")
if __name__=="__main__":
print test()
pet.py
|
Managing object instances
|
I want to be able to create an unknown number of objects. I'm not sure if there is a better way to manage and reference them.
Lets use a standard OOP example... say every time a user enters a name for a pet in a text field and clicks a button a new pet object is created via the petFactory function.
function pet(name)
{
this.name= name;
}
function petFactory(textFieldInput)
{
var x = new pet(textFieldInput)
}
Since there's no way that I know of (besides an eval function) to dynamically use a new, unique variable name for the pet every time petFactory is called, x is reassigned to a new object and the last pet is lost. What I've been doing is pushing the pet object onto an array as soon as it's initialized.
petArray.push(this);
So if I wanted the pet named 'mrFuzzyBottoms' I could loop through an indexed array until I found the object with the instance variable 'mrFuzzyBottoms'
for (n in petArray)
{
if (petarray[n].name == 'mrFuzzyBottoms')
{
// Kill mrFuzzyBottoms
}
}
or I could use an associative array...whatever...but the array is the only method I know for doing this, besides using an eval function to create unique variable names from strings; but some languages don't have an eval function (ActionScript 3), and then there is the security risk.
Is there a better way to do this?
Edit:
Right now I'm working in Python, JavaScript and ActionScript.
|
[
"\"I could use an associative array\" Correct.\n\"the array is the only method I know for doing this.\"\nLearn about \"Mappings\" or \"Dictionaries\" as soon as you can. You will find that it does exactly what you're asking for.\nWhat language are you using? If you provide a specific language, we can provide specific links to the \"Map\" structure in that language.\n",
"This what collection classes (lists, dictionaries/maps, sets) are for; to collect a number of instances. For the above case I would probably use a map (class name varies depending on language) from name to object. \n",
"#!/usr/bin/env python\n\"\"\"Playing with pets. \n\n\"\"\"\n\n\nclass Pet(object):\n \"\"\"A pet with a name.\"\"\"\n def __init__(self, name):\n \"\"\"Create a pet with a `name`.\n\n \"\"\"\n self._name = name\n\n @property\n def name(self): # name is read-only\n return self._name\n\n def __repr__(self):\n \"\"\"\n >>> eval(repr(self)) == self\n \"\"\"\n klass = self.__class__.__name__\n return \"%s('%s')\" % (klass, self.name.replace(\"'\", r\"\\'\"))\n\n def __eq__(self, other):\n return repr(self) == repr(other)\n\n name2pet = {} # combined register of all pets\n\n @classmethod\n def get(cls, name):\n \"\"\"Return a pet with `name`.\n\n Try to get the pet registered by `name` \n otherwise register a new pet and return it\n \"\"\"\n register = cls.name2pet\n try: return register[name]\n except KeyError:\n pet = register[name] = cls(name) # no duplicates allowed\n Pet.name2pet.setdefault(name, []).append(pet)\n return pet\n\n\nclass Cat(Pet):\n name2pet = {} # each class has its own registry\n\n\nclass Dog(Pet):\n name2pet = {}\n\n\ndef test():\n assert eval(repr(Cat(\"Teal'c\"))) == Cat(\"Teal'c\")\n\n pets = [Pet('a'), Cat('a'), Dog('a'), Cat.get('a'), Dog.get('a')]\n assert all(pet.name == 'a' for pet in pets)\n\n cat, dog = Cat.get('pet'), Dog.get('pet')\n assert repr(cat) == \"Cat('pet')\" and repr(dog) == \"Dog('pet')\"\n assert dog is not cat\n assert dog != cat \n assert cat.name == dog.name\n\n assert all(v == [Cat(k), Dog(k)]\n for k, v in Pet.name2pet.items()), Pet.name2pet\n\n try: cat.name = \"cat\" # .name is read-only\n except AttributeError:\n pass\n\n try: assert 0\n except AssertionError:\n return \"OK\"\n raise AssertionError(\"Assertions must be enabled during the test\")\n\n\nif __name__==\"__main__\":\n print test()\n\npet.py\n"
] |
[
3,
2,
0
] |
[] |
[] |
[
"actionscript_3",
"javascript",
"oop",
"python"
] |
stackoverflow_0001427479_actionscript_3_javascript_oop_python.txt
|
Q:
Using GetExtendedTcpTable in Python
I am trying to use GetExtendedTcpTable via a Python program. Basically I am trying to convert "ActiveState Code Recipe 392572: Using the Win32 IPHelper API" to "Getting the active TCP/UDP connections using the GetExtendedTcpTable function".
My problem is that I cannot seem to get the Python script to recognize TCP_TABLE_CLASS.TCP_TABLE_OWNER_PID_ALL.
I have tried
ctypes.windll.iphlpapi.GetExtendedTcpTable(NULL, ctypes.byref(dwSize), bOrder, AF_INET, TCP_TABLE_CLASS.TCP_TABLE_OWNER_PID_ALL, 0)
but this always bails with "AttributeError: function 'TCP_TABLE_CLASS' not found"
I have also tried
ctypes.windll.iphlpapi.GetExtendedTcpTable(NULL, ctypes.byref(dwSize), bOrder, AF_INET, ctypes.windll.iphlpapi.TCP_TABLE_CLASS.TCP_TABLE_OWNER_PID_ALL, 0)
which receives the same results.
Any recommendations are appreciated.
Cutaway
A:
The TCP_TABLE_CLASS is an enum
typedef enum {
TCP_TABLE_BASIC_LISTENER,
TCP_TABLE_BASIC_CONNECTIONS,
TCP_TABLE_BASIC_ALL,
TCP_TABLE_OWNER_PID_LISTENER,
TCP_TABLE_OWNER_PID_CONNECTIONS,
TCP_TABLE_OWNER_PID_ALL,
TCP_TABLE_OWNER_MODULE_LISTENER,
TCP_TABLE_OWNER_MODULE_CONNECTIONS,
TCP_TABLE_OWNER_MODULE_ALL
} TCP_TABLE_CLASS, *PTCP_TABLE_CLASS;
You must define it in your Python script with some constants. This enum is not exported by the DLL.
TCP_TABLE_BASIC_LISTENER = 0
TCP_TABLE_BASIC_CONNECTIONS = 1
TCP_TABLE_BASIC_ALL = 2
TCP_TABLE_OWNER_PID_LISTENER = 3
TCP_TABLE_OWNER_PID_CONNECTIONS = 4
TCP_TABLE_OWNER_PID_ALL = 5
TCP_TABLE_OWNER_MODULE_LISTENER = 6
TCP_TABLE_OWNER_MODULE_CONNECTIONS = 7
TCP_TABLE_OWNER_MODULE_ALL = 8
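With the constants defined, a hedged sketch of the call itself (Windows-only; the two-call pattern first asks the API for the required buffer size):
import ctypes
import socket

TCP_TABLE_OWNER_PID_ALL = 5

dwSize = ctypes.c_ulong(0)
# First call with a NULL buffer only fills dwSize with the needed length
ctypes.windll.iphlpapi.GetExtendedTcpTable(
    None, ctypes.byref(dwSize), False, socket.AF_INET,
    TCP_TABLE_OWNER_PID_ALL, 0)

buf = ctypes.create_string_buffer(dwSize.value)
ret = ctypes.windll.iphlpapi.GetExtendedTcpTable(
    buf, ctypes.byref(dwSize), False, socket.AF_INET,
    TCP_TABLE_OWNER_PID_ALL, 0)
# ret == 0 (NO_ERROR) on success; buf then holds a MIB_TCPTABLE_OWNER_PID structure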
A:
In this case, since:
typedef enum {
TCP_TABLE_BASIC_LISTENER,
TCP_TABLE_BASIC_CONNECTIONS,
TCP_TABLE_BASIC_ALL,
TCP_TABLE_OWNER_PID_LISTENER,
TCP_TABLE_OWNER_PID_CONNECTIONS,
TCP_TABLE_OWNER_PID_ALL,
TCP_TABLE_OWNER_MODULE_LISTENER,
TCP_TABLE_OWNER_MODULE_CONNECTIONS,
TCP_TABLE_OWNER_MODULE_ALL
} TCP_TABLE_CLASS, *PTCP_TABLE_CLASS;
I used '5' and it worked.
Thank you,
Cutaway
|
Using GetExtendedTcpTable in Python
|
I am trying to use GetExtendedTcpTable via a Python program. Basically I am trying to convert "ActiveState Code Recipe 392572: Using the Win32 IPHelper API" to "Getting the active TCP/UDP connections using the GetExtendedTcpTable function".
My problem is that I cannot seem to get the Python script to recognize TCP_TABLE_CLASS.TCP_TABLE_OWNER_PID_ALL.
I have tried
ctypes.windll.iphlpapi.GetExtendedTcpTable(NULL, ctypes.byref(dwSize), bOrder, AF_INET, TCP_TABLE_CLASS.TCP_TABLE_OWNER_PID_ALL, 0)
but this always bails with "AttributeError: function 'TCP_TABLE_CLASS' not found"
I have also tried
ctypes.windll.iphlpapi.GetExtendedTcpTable(NULL, ctypes.byref(dwSize), bOrder, AF_INET, ctypes.windll.iphlpapi.TCP_TABLE_CLASS.TCP_TABLE_OWNER_PID_ALL, 0)
which receives the same results.
Any recommendations are appreciated.
Cutaway
|
[
"The TCP_TABLE_CLASS is an enum\n\ntypedef enum {\n TCP_TABLE_BASIC_LISTENER,\n TCP_TABLE_BASIC_CONNECTIONS,\n TCP_TABLE_BASIC_ALL,\n TCP_TABLE_OWNER_PID_LISTENER,\n TCP_TABLE_OWNER_PID_CONNECTIONS,\n TCP_TABLE_OWNER_PID_ALL,\n TCP_TABLE_OWNER_MODULE_LISTENER,\n TCP_TABLE_OWNER_MODULE_CONNECTIONS,\n TCP_TABLE_OWNER_MODULE_ALL \n} TCP_TABLE_CLASS, *PTCP_TABLE_CLASS;\n\nyou must define it in your python script with some constants. This is not exported by the dll.\n\n TCP_TABLE_BASIC_LISTENER = 0\n TCP_TABLE_BASIC_CONNECTIONS = 1\n TCP_TABLE_BASIC_ALL = 2\n TCP_TABLE_OWNER_PID_LISTENER = 3\n TCP_TABLE_OWNER_PID_CONNECTIONS = 4\n TCP_TABLE_OWNER_PID_ALL = 5\n TCP_TABLE_OWNER_MODULE_LISTENER = 6\n TCP_TABLE_OWNER_MODULE_CONNECTIONS = 7\n TCP_TABLE_OWNER_MODULE_ALL = 8\n\n\n",
"In this case, since: \n\ntypedef enum {\n TCP_TABLE_BASIC_LISTENER,\n TCP_TABLE_BASIC_CONNECTIONS,\n TCP_TABLE_BASIC_ALL,\n TCP_TABLE_OWNER_PID_LISTENER,\n TCP_TABLE_OWNER_PID_CONNECTIONS,\n TCP_TABLE_OWNER_PID_ALL,\n TCP_TABLE_OWNER_MODULE_LISTENER,\n TCP_TABLE_OWNER_MODULE_CONNECTIONS,\n TCP_TABLE_OWNER_MODULE_ALL \n } TCP_TABLE_CLASS, *PTCP_TABLE_CLASS;\n\nI used '5' and it worked.\nThank you,\nCutaway\n"
] |
[
1,
0
] |
[] |
[] |
[
"python",
"winapi"
] |
stackoverflow_0001429403_python_winapi.txt
|
Q:
Python - Windows Shutdown Events
When using win32api.setConsoleCtrlHandler(), I'm able to receive shutdown/logoff/etc events from Windows, and cleanly shut down my app.
However, this only works when running the app under python.exe (i.e., it has a console window), but not under pythonw.exe (no console window).
Is there an equivalent way in Windows to receive these events when you have no console and no window to receive them? Or, is there a programmatic way to hide the console window?
To be clear - my goal is to be able to successfully receive Windows shutdown/logoff/etc events, without having any kind of console window showing.
EDIT:
I've been playing around, and I've gotten quite a bit further. I wrote a piece of test code for this. When I do a taskkill /im pythonw.exe - it will receive the message.
However, when I do a shutdown, restart, or logoff on Windows, I do not get any messages.
Here's the whole thing:
""" Testing Windows shutdown events """
import win32con
import win32api
import win32gui
import sys
import time
def log_info(msg):
""" Prints """
print msg
f = open("c:\\test.log", "a")
f.write(msg + "\n")
f.close()
def wndproc(hwnd, msg, wparam, lparam):
log_info("wndproc: %s" % msg)
if __name__ == "__main__":
log_info("*** STARTING ***")
hinst = win32api.GetModuleHandle(None)
wndclass = win32gui.WNDCLASS()
wndclass.hInstance = hinst
wndclass.lpszClassName = "testWindowClass"
messageMap = { win32con.WM_QUERYENDSESSION : wndproc,
win32con.WM_ENDSESSION : wndproc,
win32con.WM_QUIT : wndproc,
win32con.WM_DESTROY : wndproc,
win32con.WM_CLOSE : wndproc }
wndclass.lpfnWndProc = messageMap
try:
myWindowClass = win32gui.RegisterClass(wndclass)
hwnd = win32gui.CreateWindowEx(win32con.WS_EX_LEFT,
myWindowClass,
"testMsgWindow",
0,
0,
0,
win32con.CW_USEDEFAULT,
win32con.CW_USEDEFAULT,
win32con.HWND_MESSAGE,
0,
hinst,
None)
except Exception, e:
log_info("Exception: %s" % str(e))
if hwnd is None:
log_info("hwnd is none!")
else:
log_info("hwnd: %s" % hwnd)
while True:
win32gui.PumpWaitingMessages()
time.sleep(1)
I feel like I'm pretty close here, but I'm definitely missing something!
A:
The problem here was that the HWND_MESSAGE window type doesn't actually receive broadcast messages - like the WM_QUERYENDSESSION and WM_ENDSESSION.
So instead of specifying win32con.HWND_MESSAGE for the "parent window" parameter of CreateWindowEx(), I just specified 0.
Basically, this creates an actual window, but I never show it, so it's effectively the same thing. Now, I can successfully receive those broadcast messages and shut down the app properly.
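In code, the only change to the snippet from the question is the parent-window argument; everything else stays as it was:
hwnd = win32gui.CreateWindowEx(win32con.WS_EX_LEFT,
                               myWindowClass,
                               "testMsgWindow",
                               0, 0, 0,
                               win32con.CW_USEDEFAULT,
                               win32con.CW_USEDEFAULT,
                               0,   # was win32con.HWND_MESSAGE; 0 gives a real, never-shown window
                               0, hinst, None)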
A:
If you don't have a console, setting a console handler of course can't work. You can receive system events on a GUI (non-console) program by making another window (doesn't have to be visible), making sure you have a normal "message pump" on it serving, and handling WM_QUERYENDSESSION -- that's the message telling your window about shutdown and logoff events (and your window can try to push back against the end-session by returning 0 for this message). ("Windows Services" are different from normal apps -- if that's what you're writing, see an example here).
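A minimal sketch of such a handler (the cleanup spot is a placeholder for your own shutdown code):
import win32con

def wndproc(hwnd, msg, wparam, lparam):
    if msg == win32con.WM_QUERYENDSESSION:
        # run your cleanup here; returning 0 instead asks Windows
        # not to end the session
        return 1
    return 0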
|
Python - Windows Shutdown Events
|
When using win32api.setConsoleCtrlHandler(), I'm able to receive shutdown/logoff/etc events from Windows, and cleanly shut down my app.
However, this only works when running the app under python.exe (i.e., it has a console window), but not under pythonw.exe (no console window).
Is there an equivalent way in Windows to receive these events when you have no console and no window to receive them? Or, is there a programmatic way to hide the console window?
To be clear - my goal is to be able to successfully receive Windows shutdown/logoff/etc events, without having any kind of console window showing.
EDIT:
I've been playing around, and I've gotten quite a bit further. I wrote a piece of test code for this. When I do a taskkill /im pythonw.exe - it will receive the message.
However, when I do a shutdown, restart, or logoff on Windows, I do not get any messages.
Here's the whole thing:
""" Testing Windows shutdown events """
import win32con
import win32api
import win32gui
import sys
import time
def log_info(msg):
""" Prints """
print msg
f = open("c:\\test.log", "a")
f.write(msg + "\n")
f.close()
def wndproc(hwnd, msg, wparam, lparam):
log_info("wndproc: %s" % msg)
if __name__ == "__main__":
log_info("*** STARTING ***")
hinst = win32api.GetModuleHandle(None)
wndclass = win32gui.WNDCLASS()
wndclass.hInstance = hinst
wndclass.lpszClassName = "testWindowClass"
messageMap = { win32con.WM_QUERYENDSESSION : wndproc,
win32con.WM_ENDSESSION : wndproc,
win32con.WM_QUIT : wndproc,
win32con.WM_DESTROY : wndproc,
win32con.WM_CLOSE : wndproc }
wndclass.lpfnWndProc = messageMap
try:
myWindowClass = win32gui.RegisterClass(wndclass)
hwnd = win32gui.CreateWindowEx(win32con.WS_EX_LEFT,
myWindowClass,
"testMsgWindow",
0,
0,
0,
win32con.CW_USEDEFAULT,
win32con.CW_USEDEFAULT,
win32con.HWND_MESSAGE,
0,
hinst,
None)
except Exception, e:
log_info("Exception: %s" % str(e))
if hwnd is None:
log_info("hwnd is none!")
else:
log_info("hwnd: %s" % hwnd)
while True:
win32gui.PumpWaitingMessages()
time.sleep(1)
I feel like I'm pretty close here, but I'm definitely missing something!
|
[
"The problem here was that the HWND_MESSAGE window type doesn't actually receive broadcast messages - like the WM_QUERYENDSESSION and WM_ENDSESSION.\nSo instead of specifying win32con.HWND_MESSAGE for the \"parent window\" parameter of CreateWindowEx(), I just specified 0.\nBasically, this creates an actual window, but I never show it, so it's effectively the same thing. Now, I can successfully receive those broadcast messages and shut down the app properly.\n",
"If you don't have a console, setting a console handler of course can't work. You can receive system events on a GUI (non-console) program by making another window (doesn't have to be visible), making sure you have a normal \"message pump\" on it serving, and handling WM_QUERYENDSESSION -- that's the message telling your window about shutdown and logoff events (and your window can try to push back against the end-session by returning 0 for this message). (\"Windows Services\" are different from normal apps -- if that's what you're writing, see an example here).\n"
] |
[
15,
5
] |
[] |
[] |
[
"python",
"windows"
] |
stackoverflow_0001411186_python_windows.txt
|
Q:
How to wait for a child that respawns itself with os.execv() on win32?
I have some code that uses pip to bootstrap a Python environment for our build process: this is a lovely way of ensuring we get proper isolation of the build requirements from the rest of the host system, and helping us get more consistent build results overall.
Anyway, the code I have that drives pip.py appears to have some problems on Windows. The problem is that I'm spawning the pip process from my bootstrapping scripts using subprocess.Popen() and then waiting for the process to complete, but this is happening too early due to the fact that pip uses execv to relaunch itself under the new virtualenv it creates. When this happens my parent sees that the child has exited with an exit code of 0 and carries on on its merry way.
So the question is simple: how can I cope with an os.execv() call from a child process on win32 in a manner where I can ascertain the return code of the newly executed child process?
A:
I can't think of any smooth way to handle this. This may sound dirty, but perhaps you could work with dropping flag files onto the filesystem while your script is running, and wait for those files to be cleaned up?
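A hedged sketch of that idea; the flag-file name and timeout are made up, and the convention assumed is that the child (or whatever it execs into) deletes the flag when it is really done:
import os
import time

FLAG = "bootstrap_running.flag"   # hypothetical: created before spawning pip

def wait_for_flag_cleanup(path, timeout=600):
    # Poll until the child, or its execv'd successor, removes the flag.
    deadline = time.time() + timeout
    while os.path.exists(path):
        if time.time() > deadline:
            raise RuntimeError("child never cleaned up %s" % path)
        time.sleep(0.5)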
|
How to wait for a child that respawns itself with os.execv() on win32?
|
I have some code that uses pip to bootstrap a Python environment for our build process: this is a lovely way of ensuring we get proper isolation of the build requirements from the rest of the host system, and helping us get more consistent build results overall.
Anyway, the code I have that drives pip.py appears to have some problems on Windows. The problem is that I'm spawning the pip process from my bootstrapping scripts using subprocess.Popen() and then waiting for the process to complete, but this is happening too early due to the fact that pip uses execv to relaunch itself under the new virtualenv it creates. When this happens my parent sees that the child has exited with an exit code of 0 and carries on on its merry way.
So the question is simple: how can I cope with an os.execv() call from a child process on win32 in a manner where I can ascertain the return code of the newly executed child process?
|
[
"I can't think of any smooth way to handle this. This may sound dirty, but perhaps you could work with dropping flag files onto the filesystem while your script is running, and wait for those files to be cleaned up?\n"
] |
[
0
] |
[] |
[] |
[
"execv",
"pip",
"python",
"windows"
] |
stackoverflow_0001194078_execv_pip_python_windows.txt
|
Q:
Pythonic way to split comma separated numbers into pairs
I'd like to split a comma separated value into pairs:
>>> s = '0,1,2,3,4,5,6,7,8,9'
>>> pairs = # something pythonic
>>> pairs
[(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]
What would # something pythonic look like?
How would you detect and handle a string with an odd set of numbers?
A:
Something like:
zip(t[::2], t[1::2])
Full example:
>>> s = ','.join(str(i) for i in range(10))
>>> s
'0,1,2,3,4,5,6,7,8,9'
>>> t = [int(i) for i in s.split(',')]
>>> t
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> p = zip(t[::2], t[1::2])
>>> p
[(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]
>>>
If the number of items is odd, the last element will be ignored. Only complete pairs will be included.
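If silently dropping the odd element out is not acceptable, a length check on t before pairing makes that case explicit:
if len(t) % 2:
    raise ValueError("odd number of values: %r" % t)
pairs = zip(t[::2], t[1::2])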
A:
How about this:
>>> x = '0,1,2,3,4,5,6,7,8,9'.split(',')
>>> def chunker(seq, size):
... return (tuple(seq[pos:pos + size]) for pos in xrange(0, len(seq), size))
...
>>> list(chunker(x, 2))
[('0', '1'), ('2', '3'), ('4', '5'), ('6', '7'), ('8', '9')]
This will also nicely handle uneven amounts:
>>> x = '0,1,2,3,4,5,6,7,8,9,10'.split(',')
>>> list(chunker(x, 2))
[('0', '1'), ('2', '3'), ('4', '5'), ('6', '7'), ('8', '9'), ('10',)]
P.S. I had this code stashed away and I just realized where I got it from. There's two very similar questions in stackoverflow about this:
What is the most “pythonic” way to iterate over a list in chunks?
How do you split a list into evenly sized chunks in Python?
There's also this gem from the Recipes section of itertools:
def grouper(n, iterable, fillvalue=None):
"grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
args = [iter(iterable)] * n
return izip_longest(fillvalue=fillvalue, *args)
A:
A more general option, that also works on iterators and allows for combining any number of items:
def n_wise(seq, n):
return zip(*([iter(seq)]*n))
Replace zip with itertools.izip if you want to get a lazy iterator instead of a list.
A:
A solution much like FogleBird's, but using an iterator (a generator expression) instead of a list comprehension.
s = '0,1,2,3,4,5,6,7,8,9'
# generator expression creating an iterator yielding numbers
iterator = (int(i) for i in s.split(','))
# use zip to create pairs
# (will ignore last item if odd number of items)
# Note that zip() returns a list in Python 2.x,
# in Python 3 it returns an iterator
pairs = zip(iterator, iterator)
Both list comprehensions and generator expressions would probably be considered quite "pythonic".
A:
This will ignore the last number in an odd list:
n = [int(x) for x in s.split(',')]
print zip(n[::2], n[1::2])
This will pad the shorter list by 0 in an odd list:
import itertools
n = [int(x) for x in s.split(',')]
print list(itertools.izip_longest(n[::2], n[1::2], fillvalue=0))
izip_longest is available in Python 2.6.
|
Pythonic way to split comma separated numbers into pairs
|
I'd like to split a comma separated value into pairs:
>>> s = '0,1,2,3,4,5,6,7,8,9'
>>> pairs = # something pythonic
>>> pairs
[(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]
What would # something pythonic look like?
How would you detect and handle a string with an odd set of numbers?
|
[
"Something like:\nzip(t[::2], t[1::2])\n\nFull example:\n>>> s = ','.join(str(i) for i in range(10))\n>>> s\n'0,1,2,3,4,5,6,7,8,9'\n>>> t = [int(i) for i in s.split(',')]\n>>> t\n[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n>>> p = zip(t[::2], t[1::2])\n>>> p\n[(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]\n>>>\n\nIf the number of items is odd, the last element will be ignored. Only complete pairs will be included.\n",
"How about this:\n>>> x = '0,1,2,3,4,5,6,7,8,9'.split(',')\n>>> def chunker(seq, size):\n... return (tuple(seq[pos:pos + size]) for pos in xrange(0, len(seq), size))\n...\n>>> list(chunker(x, 2))\n[('0', '1'), ('2', '3'), ('4', '5'), ('6', '7'), ('8', '9')]\n\nThis will also nicely handle uneven amounts:\n>>> x = '0,1,2,3,4,5,6,7,8,9,10'.split(',')\n>>> list(chunker(x, 2))\n[('0', '1'), ('2', '3'), ('4', '5'), ('6', '7'), ('8', '9'), ('10',)]\n\nP.S. I had this code stashed away and I just realized where I got it from. There's two very similar questions in stackoverflow about this:\n\nWhat is the most “pythonic” way to iterate over a list in chunks?\nHow do you split a list into evenly sized chunks in Python?\n\nThere's also this gem from the Recipes section of itertools: \ndef grouper(n, iterable, fillvalue=None):\n \"grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx\"\n args = [iter(iterable)] * n\n return izip_longest(fillvalue=fillvalue, *args)\n\n",
"A more general option, that also works on iterators and allows for combining any number of items:\n def n_wise(seq, n):\n return zip(*([iter(seq)]*n))\n\nReplace zip with itertools.izip if you want to get a lazy iterator instead of a list.\n",
"A solution much like FogleBirds, but using an iterator (a generator expression) instead of list comprehension.\ns = '0,1,2,3,4,5,6,7,8,9'\n# generator expression creating an iterator yielding numbers\niterator = (int(i) for i in s.split(','))\n\n# use zip to create pairs\n# (will ignore last item if odd number of items)\n# Note that zip() returns a list in Python 2.x, \n# in Python 3 it returns an iterator\npairs = zip(iterator, iterator)\n\nBoth list comprehensions and generator expressions would probably be considered quite \"pythonic\".\n",
"This will ignore the last number in an odd list:\nn = [int(x) for x in s.split(',')]\nprint zip(n[::2], n[1::2])\n\nThis will pad the shorter list by 0 in an odd list:\nimport itertools\nn = [int(x) for x in s.split(',')]\nprint list(itertools.izip_longest(n[::2], n[1::2], fillvalue=0))\n\nizip_longest is available in Python 2.6.\n"
] |
[
44,
8,
8,
4,
2
] |
[] |
[] |
[
"python",
"tuples"
] |
stackoverflow_0000870652_python_tuples.txt
|
Q:
Benefit cost analysis libraries
I was wondering if there are any open-source libraries that are geared towards transportation benefit/cost analysis.
I currently use microBENCOST and would like to build my own solution. I'm most comfortable with C/C++ and Python.
cheers
A:
My girlfriend works for a transportation planning firm, and they use a variety of models developed in SPSS, with a lot of data munging in Excel and visualization in ArcGIS. As far as turnkey solutions go, though, I think you're going to be more or less on your own.
Assuming you want to move on to something a bit newer/more maintainable than a DOS application like MicroBENCOST, though, I would second the recommendation to become comfortable with Scipy, and then start building up a toolbox of statistical models based on the original application. For other types of modeling, you may also find SimPy useful; it doesn't do the simplified cost/benefit analysis that MicroBENCOST does, but it may be applicable for more open-ended design problems where original discrete simulation models are called for.
A:
I don't think there are any alternatives, considering MicroBENCOST appears to be a special project developed for the state of California, and transportation analysis is about as niche as it gets.
If you are going to build your own solution, you will probably want to look into the various math libraries available to Python - particularly numpy and/or scipy.
A:
microBENCOST was purchased by the Californian department of transport and it really seems a suitable tool. What is missing? If you need something particular that is not yet implemented, maybe you should consider writing your own. It's really difficult to find something better.
|
Benefit cost analysis libraries
|
I was wondering if there are any open-source libraries that are geared towards transportation benefit/cost analysis.
I currently use microBENCOST and would like to build my own solution. I'm most comfortable with C/C++ and Python.
cheers
|
[
"My girlfriend works for a transportation planning firm, and they use a variety of models developed in SPSS, with a lot of data munging in Excel and visualization in ArcGIS. As far as turnkey solutions go, though, I think you're going to be more or less on your own.\nAssuming you want to move on to something a bit newer/more maintainable than a DOS application like MicroBENCOST, though, I would second the recommendation to become comfortable with Scipy, and then start building up a toolbox of statistical models based on the original application. For other types of modeling, you may also find SimPy useful; it doesn't do the simplified cost/benefit analysis that MicroBENCOST does, but it may be applicable for more open-ended design problems where original discrete simulation models are called for. \n",
"I don't think there are any alternatives, considering MicroBENCOST appears to be a special project developed for the state of California, and transportation analysis is about as niche as it gets.\nIf you are going to build your own solution, you will probably want to look into the various math libraries available to Python - particularly numpy and/or scipy.\n",
"microBENCOST is purchased by the californian department of transport and really it seems a suitable tool. What is missing? If you need something particular that is not yet implemented, maybe you should consider to write your own. It's really difficult to find something better \n"
] |
[
4,
2,
1
] |
[] |
[] |
[
"c++",
"economics",
"python",
"transport"
] |
stackoverflow_0001373902_c++_economics_python_transport.txt
|
Q:
How to programmatically set a global (module) variable?
I would like to define globals in a "programmatic" way. Something similar to what I want to do would be:
definitions = {'a': 1, 'b': 2, 'c': 123.4}
for definition in definitions.items():
exec("%s = %r" % definition) # a = 1, etc.
Specifically, I want to create a module fundamentalconstants that contains variables that can be accessed as fundamentalconstants.electron_mass, etc., where all values are obtained through parsing a file (hence the need to do the assignments in a "programmatic" way).
Now, the exec solution above would work. But I am a little bit uneasy with it, because I'm afraid that exec is not the cleanest way to achieve the goal of setting module globals.
A:
Here is a better way to do it:
import sys
definitions = {'a': 1, 'b': 2, 'c': 123.4}
module = sys.modules[__name__]
for name, value in definitions.iteritems():
setattr(module, name, value)
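For illustration, running the snippet above and then reading the names back (Python 2 print, matching the iteritems call):
print a, b, c          # -> 1 2 123.4
print module.c is c    # True: setattr made them real module attributes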
A:
You can set globals in the dictionary returned by globals():
definitions = {'a': 1, 'b': 2, 'c': 123.4}
for name, value in definitions.items():
globals()[name] = value
A:
You're right, exec is usually a bad idea and it certainly isn't needed in this case.
Ned's answer is fine. Another possible way to do it if you're a module is to import yourself:
fundamentalconstants.py:
import fundamentalconstants
fundamentalconstants.life_meaning= 42
for line in open('constants.dat'):
name, _, value= line.partition(':')
setattr(fundamentalconstants, name, value)
|
How to programmatically set a global (module) variable?
|
I would like to define globals in a "programmatic" way. Something similar to what I want to do would be:
definitions = {'a': 1, 'b': 2, 'c': 123.4}
for definition in definitions.items():
exec("%s = %r" % definition) # a = 1, etc.
Specifically, I want to create a module fundamentalconstants that contains variables that can be accessed as fundamentalconstants.electron_mass, etc., where all values are obtained through parsing a file (hence the need to do the assignments in a "programmatic" way).
Now, the exec solution above would work. But I am a little bit uneasy with it, because I'm afraid that exec is not the cleanest way to achieve the goal of setting module globals.
|
[
"Here is a better way to do it:\nimport sys\ndefinitions = {'a': 1, 'b': 2, 'c': 123.4}\nmodule = sys.modules[__name__]\nfor name, value in definitions.iteritems():\n setattr(module, name, value)\n\n",
"You can set globals in the dictionary returned by globals():\ndefinitions = {'a': 1, 'b': 2, 'c': 123.4}\nfor name, value in definitions.items():\n globals()[name] = value\n\n",
"You're right, exec is usually a bad idea and it certainly isn't needed in this case.\nNed's answer is fine. Another possible way to do it if you're a module is to import yourself:\nfundamentalconstants.py:\nimport fundamentalconstants\n\nfundamentalconstants.life_meaning= 42\n\nfor line in open('constants.dat'):\n name, _, value= line.partition(':')\n setattr(fundamentalconstants, name, value)\n\n"
] |
[
68,
49,
4
] |
[] |
[] |
[
"global_variables",
"module",
"python"
] |
stackoverflow_0001429814_global_variables_module_python.txt
|
Q:
Symmetrically addressable matrix
I'm looking to create a 2d matrix of integers with symmetric addressing ( i.e. matrix[2,3] and matrix[3,2] will return the same value ) in python. The integers will have addition and subtraction done on them, and be used for logical comparisons. My initial idea was to create the integer objects up front and try to fill a list of lists with some python equivalent of pointers. I'm not sure how to do it, though. What is the best way to implement this, and should I be using lists or another data structure?
A:
Golub and Van Loan's "Matrix Computations" book outlines a feasible addressing scheme:
You pack the data in to a vector and access as follows, assuming i >= j:
a_ij = A.vec((j-1)*n - j*(j-1)/2 + i)
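A hedged, 0-based Python translation of that scheme (the book's formula above is 1-based); only the lower triangle is stored, in a flat list of length n(n+1)/2:
class SymMatrix(object):
    def __init__(self, n):
        self.n = n
        self.vec = [0] * (n * (n + 1) // 2)   # packed lower triangle

    def _index(self, i, j):
        if i < j:            # symmetry: always address the lower triangle
            i, j = j, i
        return i * (i + 1) // 2 + j

    def __getitem__(self, ij):
        i, j = ij
        return self.vec[self._index(i, j)]

    def __setitem__(self, ij, value):
        i, j = ij
        self.vec[self._index(i, j)] = value

m = SymMatrix(4)
m[2, 3] = 7
assert m[3, 2] == 7    # symmetric addressing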
A:
You're probably better off using a full square numpy matrix. Yes, it wastes half the memory storing redundant values, but rolling your own symmetric matrix in Python will waste even more memory and CPU by storing and processing the integers as Python objects.
A:
You only need to store the lower triangle of the matrix. Typically this is done with one n(n+1)/2 length list. You'll need to overload the __getitem__ method to interpret what the entry means.
A:
A simpler and cleaner way is to just use a dictionary with sorted tuples as keys. The tuples correspond with your matrix index. Override __getitem__ and __setitem__ to access the dictionary by sorted tuples; here's an example class:
class Matrix(dict):
def __getitem__(self, index):
return super(Matrix, self).__getitem__(tuple(sorted(index)))
def __setitem__(self, index, value):
return super(Matrix, self).__setitem__(tuple(sorted(index)), value)
And then use it like this:
>>> matrix = Matrix()
>>> matrix[2,3] = 1066
>>> print matrix
{(2, 3): 1066}
>>> matrix[2,3]
1066
>>> matrix[3,2]
1066
>>> matrix[1,1]
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "z.py", line 3, in __getitem__
return super(Matrix, self).__getitem__(tuple(sorted(index)))
KeyError: (1, 1)
|
Symmetrically addressable matrix
|
I'm looking to create a 2d matrix of integers with symmetric addressing ( i.e. matrix[2,3] and matrix[3,2] will return the same value ) in python. The integers will have addition and subtraction done on them, and be used for logical comparisons. My initial idea was to create the integer objects up front and try to fill a list of lists with some python equivalent of pointers. I'm not sure how to do it, though. What is the best way to implement this, and should I be using lists or another data structure?
|
[
"Golub and Van Loan's \"Matrix Computations\" book outlines a feasible addressing scheme:\nYou pack the data in to a vector and access as follows, assuming i >= j:\na_ij = A.vec((j-1)n - j(j-1)/2 + i) \n\n",
"You're probably better off using a full square numpy matrix. Yes, it wastes half the memory storing redundant values, but rolling your own symmetric matrix in Python will waste even more memory and CPU by storing and processing the integers as Python objects. \n",
"You only need to store the lower triangle of the matrix. Typically this is done with one n(n+1)/2 length list. You'll need to overload the __getitem__ method to interpret what the entry means.\n",
"A simpler and cleaner way is to just use a dictionary with sorted tuples as keys. The tuples correspond with your matrix index. Override __getitem__ and __setitem__ to access the dictionary by sorted tuples; here's an example class:\nclass Matrix(dict):\n def __getitem__(self, index):\n return super(Matrix, self).__getitem__(tuple(sorted(index)))\n def __setitem__(self, index, value):\n return super(Matrix, self).__setitem__(tuple(sorted(index)), value)\n\nAnd then use it like this:\n>>> matrix = Matrix()\n>>> matrix[2,3] = 1066\n>>> print matrix\n{(2, 3): 1066}\n>>> matrix[2,3]\n1066\n>>> matrix[3,2]\n1066\n>>> matrix[1,1]\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in ?\n File \"z.py\", line 3, in __getitem__\n return super(Matrix, self).__getitem__(tuple(sorted(index)))\nKeyError: (1, 1)\n\n"
] |
[
3,
2,
1,
1
] |
[] |
[] |
[
"data_structures",
"matrix",
"python"
] |
stackoverflow_0001425162_data_structures_matrix_python.txt
|
Q:
Python 3 smtplib send with unicode characters
I'm having a problem emailing unicode characters using smtplib in Python 3. This fails in 3.1.1, but works in 2.5.4:
import smtplib
from email.mime.text import MIMEText
sender = to = '[email protected]'
server = 'smtp.DEF.com'
msg = MIMEText('€10')
msg['Subject'] = 'Hello'
msg['From'] = sender
msg['To'] = to
s = smtplib.SMTP(server)
s.sendmail(sender, [to], msg.as_string())
s.quit()
I tried an example from the docs, which also failed. http://docs.python.org/3.1/library/email-examples.html, the Send the contents of a directory as a MIME message example
Any suggestions?
A:
The key is in the docs:
class email.mime.text.MIMEText(_text, _subtype='plain', _charset='us-ascii')
A subclass of MIMENonMultipart, the
MIMEText class is used to create MIME
objects of major type text. _text is
the string for the payload. _subtype
is the minor type and defaults to
plain. _charset is the character set
of the text and is passed as a
parameter to the MIMENonMultipart
constructor; it defaults to us-ascii.
No guessing or encoding is performed
on the text data.
So what you need is clearly not msg = MIMEText('€10'), but rather:
msg = MIMEText('€10'.encode('utf-8'), _charset='utf-8')
While not all that clearly documented, sendmail needs a byte-string, not a Unicode one (that's what the SMTP protocol specifies); look to what msg.as_string() looks like for each of the two ways of building it -- given the "no guessing or encoding", your way still has that euro character in there (and no way for sendmail to turn it into a bytestring), mine doesn't (and utf-8 is clearly specified throughout).
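Putting that fix back into the question's script gives a minimal working sketch (addresses and server reused from the question):
import smtplib
from email.mime.text import MIMEText

sender = to = '[email protected]'
msg = MIMEText('€10'.encode('utf-8'), _charset='utf-8')  # the only changed line
msg['Subject'] = 'Hello'
msg['From'] = sender
msg['To'] = to

s = smtplib.SMTP('smtp.DEF.com')
s.sendmail(sender, [to], msg.as_string())
s.quit()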
A:
The _charset parameter of MIMEText defaults to us-ascii according to the docs. Since € is not in the us-ascii set, it isn't working.
The example in the docs that you've tried clearly states:
For this example, assume that the text file contains only ASCII characters.
You could use the .get_charset method on your message to investigate the charset; there is, incidentally, a .set_charset as well.
|
Python 3 smtplib send with unicode characters
|
I'm having a problem emailing unicode characters using smtplib in Python 3. This fails in 3.1.1, but works in 2.5.4:
import smtplib
from email.mime.text import MIMEText
sender = to = '[email protected]'
server = 'smtp.DEF.com'
msg = MIMEText('€10')
msg['Subject'] = 'Hello'
msg['From'] = sender
msg['To'] = to
s = smtplib.SMTP(server)
s.sendmail(sender, [to], msg.as_string())
s.quit()
I tried an example from the docs, which also failed. http://docs.python.org/3.1/library/email-examples.html, the Send the contents of a directory as a MIME message example
Any suggestions?
|
[
"The key is in the docs:\nclass email.mime.text.MIMEText(_text, _subtype='plain', _charset='us-ascii')\n\n\nA subclass of MIMENonMultipart, the\n MIMEText class is used to create MIME\n objects of major type text. _text is\n the string for the payload. _subtype\n is the minor type and defaults to\n plain. _charset is the character set\n of the text and is passed as a\n parameter to the MIMENonMultipart\n constructor; it defaults to us-ascii.\n No guessing or encoding is performed\n on the text data.\n\nSo what you need is clearly, not msg = MIMEText('€10'), but rather:\nmsg = MIMEText('€10'.encode('utf-8'), _charset='utf-8')\n\nWhile not all that clearly documented, sendmail needs a byte-string, not a Unicode one (that's what the SMTP protocol specifies); look to what msg.as_string() looks like for each of the two ways of building it -- given the \"no guessing or encoding\", your way still has that euro character in there (and no way for sendmail to turn it into a bytestring), mine doesn't (and utf-8 is clearly specified throughout).\n",
"_charset parameter of MIMEText defaults to us-ascii according to the docs. Since € is not from us-ascii set it isn't working.\nexample in the docs that you've tried clearly states: \n\nFor this example, assume that the text file contains only ASCII characters.\n\nYou could use .get_charset method on your message to investigate the charset, there is incidentally .set_charset as well.\n"
] |
[
14,
2
] |
[] |
[] |
[
"email",
"python",
"python_3.x",
"smtplib",
"unicode"
] |
stackoverflow_0001429147_email_python_python_3.x_smtplib_unicode.txt
|
Q:
Application to generate installers for Linux, Windows and MacOSX from a single configuration
Here's what I want:
Given a set of definitions (preferably in Python) on what files to install where and what post-install script to run, etc.. I would like this program to generate installers for the three major platforms:
MSI on Windows
dmg on MacOSX
Tarball w/ install.sh (and rpm/deb, if possible) on Linux
For example,
installconfig.py:
name = 'Foo'
version = '1.0'
components = {
  'core': {'recursive-include': 'image/',
           'target_dir': '$APPDIR'},
'plugins': {'recursive-include': 'contrib/plugins',
'target_dir': '$APPDIR/plugins'}
}
def post_install():
...
And I want this program to generate Foo-1.0.x86.msi, Foo-1.0.universal.dmg and Foo-1.0-linux-x86.tar.gz.
You get the idea. Does such a program exist? (It could, under the hood, make use of WiX on Windows).
NOTE 1: Foo can be an application written in any programming language. That should not matter.
NOTE 2: Platform-specific things should be possible. For example, I should be able to specify merge modules on Windows.
A:
Look into CPack. It works very well with CMake, if you use that for your build system, but it also works without it. This uses CMake-type syntax, not Python, but it can generate NSIS installers, ZIP archives, binary executables on Linux, RPMs, DEBs, and Mac OS X bundles
A:
Your requirements are probably such that hand-rolling a make script to do these things is the order of the day. Or write it in python if you don't like make. It will be more flexible and probably faster than trying to learn some proprietary scripting language from some installer creator. Anything with a fancy gui and checkboxes and so on is unlikely to be able to automatically do anything rational on linux.
A:
Perhaps Paver can be made to meet your needs? You'd have to add the MSI, DMG, TGZ, etc. parts as tasks using some external library, but I believe it can be done.
|
Application to generate installers for Linux, Windows and MacOSX from a single configuration
|
Here's what I want:
Given a set of definitions (preferably in Python) on what files to install where and what post-install script to run, etc.. I would like this program to generate installers for the three major platforms:
MSI on Windows
dmg on MacOSX
Tarball w/ install.sh (and rpm/deb, if possible) on Linux
For example,
installconfig.py:
name = 'Foo'
version = '1.0'
components = {
  'core': {'recursive-include': 'image/',
           'target_dir': '$APPDIR'},
'plugins': {'recursive-include': 'contrib/plugins',
'target_dir': '$APPDIR/plugins'}
}
def post_install():
...
And I want this program to generate Foo-1.0.x86.msi, Foo-1.0.universal.dmg and Foo-1.0-linux-x86.tar.gz.
You get the idea. Does such a program exist? (It could, under the hood, make use of WiX on Windows).
NOTE 1: Foo can be an application written in any programming language. That should not matter.
NOTE 2: Platform-specific things should be possible. For example, I should be able to specify merge modules on Windows.
|
[
"Look into CPack. It works very well with CMake, if you use that for your build system, but it also works without it. This uses CMake-type syntax, not Python, but it can generate NSIS installers, ZIP archives, binary executables on Linux, RPMs, DEBs, and Mac OS X bundles\n",
"Your requirements are probably such that hand-rolling a make script to do these things is the order of the day. Or write it in python if you don't like make. It will be more flexible and probably faster than trying to learn some proprietary scripting language from some installer creator. Anything with a fancy gui and checkboxes and so on is unlikely to be able to automatically do anything rational on linux.\n",
"perhaps paver can be made to meet your needs? you'd have to add the msi, dmg, tgz, etc parts as tasks using some external library, but i believe it can be done.\n"
] |
[
3,
0,
0
] |
[] |
[] |
[
"cross_platform",
"dmg",
"installation",
"python",
"wix"
] |
stackoverflow_0001430497_cross_platform_dmg_installation_python_wix.txt
|
Q:
Rescale intensities of a PIL Image
What is the simplest/cleanest way to rescale the intensities of a PIL Image?
Suppose that I have a 16-bit image from a 12-bit camera, so only the values 0–4095 are in use. I would like to rescale the intensities so that the entire range 0–65535 is used. What is the simplest/cleanest way to do this when the image is represented as PIL's Image type?
The best solution I have come up with so far is:
pixels = img.getdata()
img.putdata(pixels, 16)
That works, but always leaves the four least significant bits blank. Ideally, I would like to shift each value four bits to the left, then copy the four most significant bits to the four least significant bits. I don't know how to do that fast.
A:
Since you know that the pixel values are 0-4095, I can't find a faster way than this:
new_image= image.point(lambda value: value<<4 | value>>8)
According to the documentation, the lambda function will be called at most 4096 times, whatever the size of your image.
EDIT: Since the function given to point must be of the form argument * scale + offset for an "I" image, this is the best possible using the point function:
new_image= image.point(lambda argument: argument*16)
The maximum output pixel value will be 65520.
A second take:
A modified version of your own solution, using itertools for improved efficiency:
import itertools as it # for brevity
import operator
def scale_12to16(image):
new_image= image.copy()
new_image.putdata(
it.imap(operator.or_,
it.imap(operator.lshift, image.getdata(), it.repeat(4)),
it.imap(operator.rshift, image.getdata(), it.repeat(8))
)
)
return new_image
This avoids the limitation of the point function argument.
A:
Why would you want to copy the 4 msb back into the 4 lsb? You only have 12 significant bits of information per pixel. Nothing you do will improve that. If you are OK with only having 4K of intensities, which is fine for most applications, then your solution is correct and probably optimal. If you need more levels of shading, then as David posted, recompute using a histogram. But, this will be significantly slower.
But, copying the 4 msb into the 4 lsb is NOT the way to go :)
A:
You need to do a histogram stretch (link to a similar question I answered) not histogram equalization:
histogram stretch http://cct.rncan.gc.ca/resource/tutor/fundam/images/linstre.gif
Image source
In your case you only need to multiply all the pixel values by 16, which is the factor between the two dynamic ranges (65536/4096).
A:
What you need to do is Histogram Equalization.
For how to do it with python and pil:
LINK1
LINK2
EDIT:
Code to shift each value four bits to the left, then copy the four most significant bits to the four least significant bits...
def f(n):
return n<<4 + int(bin(n)[2:6],2)
print(f(0))
print(f(2**12))
# output
>>> 0
65664 # Oops > 2^16
A:
Maybe you should pass 16. (a float) instead of 16 (an int). I was trying to test it, but for some reason putdata does not multiply at all... So I hope it just works for you.
|
Rescale intensities of a PIL Image
|
What is the simplest/cleanest way to rescale the intensities of a PIL Image?
Suppose that I have a 16-bit image from a 12-bit camera, so only the values 0–4095 are in use. I would like to rescale the intensities so that the entire range 0–65535 is used. What is the simplest/cleanest way to do this when the image is represented as PIL's Image type?
The best solution I have come up with so far is:
pixels = img.getdata()
img.putdata(pixels, 16)
That works, but always leaves the four least significant bits blank. Ideally, I would like to shift each value four bits to the left, then copy the four most significant bits to the four least significant bits. I don't know how to do that fast.
|
[
"Since you know that the pixel values are 0-4095, I can't find a faster way than this:\nnew_image= image.point(lambda value: value<<4 | value>>8)\n\nAccording to the documentation, the lambda function will be called at most 4096 times, whatever the size of your image.\nEDIT: Since the function given to point must be of the form argument * scale + offset for in I image, then this is the best possible using the point function:\nnew_image= image.point(lambda argument: argument*16)\n\nThe maximum output pixel value will be 65520.\nA second take:\nA modified version of your own solution, using itertools for improved efficiency:\nimport itertools as it # for brevity\nimport operator\n\ndef scale_12to16(image):\n new_image= image.copy()\n new_image.putdata(\n it.imap(operator.or_,\n it.imap(operator.lshift, image.getdata(), it.repeat(4)),\n it.imap(operator.rshift, image.getdata(), it.repeat(8))\n )\n )\n return new_image\n\nThis avoids the limitation of the point function argument.\n",
"Why would you want to copy the 4 msb back into the 4 lsb? You only have 12 significant bits of information per pixel. Nothing you do will improve that. If you are OK with only having 4K of intensities, which is fine for most applications, then your solution is correct and probably optimal. If you need more levels of shading, then as David posted, recompute using a histogram. But, this will be significantly slower. \nBut, copying the 4 msb into the 4 lsb is NOT the way to go :)\n",
"You need to do a histogram stretch (link to a similar question I answered) not histogram equalization:\nhistogram stretch http://cct.rncan.gc.ca/resource/tutor/fundam/images/linstre.gif\nImage source\nIn your case you one need to multiply all the pixel values by 16, which is the factor between the two dynamic ranges (65536/4096).\n",
"What you need to do is Histogram Equalization.\nFor how to do it with python and pil:\n\nLINK1\nLINK2\n\nEDIT:\nCode to shift each value four bits to the left, then copy the four most significant bits to the four least significant bits...\ndef f(n):\n return n<<4 + int(bin(n)[2:6],2)\n\nprint(f(0))\nprint(f(2**12))\n\n# output\n>>> 0\n 65664 # Oops > 2^16\n\n",
"Maybe you should pass 16. (a float) instead of 16 (an int). I was trying to test it, but for some reason putdata does not multiply at all... So I hope it just works for you.\n"
] |
[
3,
2,
2,
1,
0
] |
[] |
[] |
[
"image_processing",
"python",
"python_imaging_library"
] |
stackoverflow_0001327954_image_processing_python_python_imaging_library.txt
|
Q:
Kill sub-threads when Django restarts?
I'm running Django, and I'm creating threads that run in parallel while Django runs. Those threads sometimes run external processes that block while waiting for external input.
When I restart Django, those threads that are blocking while awaiting external input sometimes persist through the restart; further, they keep port 8080 open, so Django can't restart.
If I knew when Django was restarting, I could kill those threads. How can I tell when Django is restarting so that I can kill those threads (and their spawn).
It wasn't obvious from django.utils.autoreload where any hooks may be to tell when a restart is occurring.
Is there an alternative way to kill these threads when Django starts up?
Thanks for reading.
Brian
A:
It's not easy for a Python process to kill its own threads -- even harder (nearly impossible) to kill the threads of another process, and I suspect the latter is the case you have... the "restart" is presumably happening on a different process, so those threads are more or less out of bounds for you!
What I suggest instead is "a stitch in time saves nine": when you create those threads, make sure you set their daemon property to True (see the docs -- it's the setDaemon method in Python <= 2.5). This way, when the main thread finishes, e.g. to restart in another process, so will the entire process (which should take all the daemon threads down, too, automatically!-)
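A minimal sketch of that stitch in time (the worker body is a placeholder for the loop that blocks on external input):
import threading

def worker():
    pass   # stands in for the blocking external-input loop

t = threading.Thread(target=worker)
t.setDaemon(True)   # Python <= 2.5 spelling; newer versions also accept t.daemon = True
t.start()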
A:
What are you using to restart Django? I'd put something in that script to look for process IDs in the socket file(s) and kill those before starting Django.
Alternatively, you could be very heavy handed and just run something like 'pkill -9 *django*' before your django startup sequence.
|
Kill sub-threads when Django restarts?
|
I'm running Django, and I'm creating threads that run in parallel while Django runs. Those threads sometimes run external processes that block while waiting for external input.
When I restart Django, those threads that are blocking while awaiting external input sometimes persist through the restart; further, they keep port 8080 open, so Django can't restart.
If I knew when Django was restarting, I could kill those threads. How can I tell when Django is restarting so that I can kill those threads (and their spawn).
It wasn't obvious from django.utils.autoreload where any hooks may be to tell when a restart is occurring.
Is there an alternative way to kill these threads when Django starts up?
Thanks for reading.
Brian
|
[
"It's not easy for a Python process to kill its own threads -- even harder (nearly impossible) to kill the threads of another process, and I suspect the latter is the case you have... the \"restart\" is presumably happening on a different process, so those threads are more or less out of bounds for you!\nWhat I suggest instead is \"a stitch in time saves nine\": when you create those threads, make sure you set their daemon property to True (see the docs -- it's the setDaemon method in Python <= 2.5). This way, when the main thread finishes, e.g. to restart in another process, so will the entire process (which should take all the daemon threads down, too, automatically!-)\n",
"What are you using to restart django? I'd put something in that script to look for process id's in the socket file(s) and kill those before starting django.\nAlternatively, you could be very heavy handed and just run something like 'pkill -9 *django*' before your django startup sequence.\n"
] |
[
2,
0
] |
[] |
[] |
[
"django",
"multithreading",
"python"
] |
stackoverflow_0001430517_django_multithreading_python.txt
|
Q:
Python: Traversing a string, checking its element, and inputting dictionary key-value pairs
I have a function that returns an 8 digit long binary string for given parameter:
def rule(x):
rule = bin(x)[2:].zfill(8)
return rule
I want to traverse each index of this string and check if it is a zero or a one. I tried to write a code like this:
def rule(x):
rule = bin(x)[2:].zfill(8)
while i < len(rule(x)):
if rule[i] == '0'
ruleList = {i:'OFF'}
elif rule[i] == '1'
ruleList = {i:'ON'}
i = i + 1
return ruleList
This code doesn't work. I am getting "Error: Object is unsubscriptable". What I am attempting to do is write a function that takes the following input, for example:
Input: 30
1. Converts to '00011110' (So far, so good).
2. Checks if rule(30)[i] is '0' or '1' ('0' in this case where i = 0)
3. Then puts the result in a key value pair, where the index of the
string is the key and the state (on
or off) is the value.
4. The end result would be 'ruleList', where print ruleList
would yield something like this:
{0:'Off',1:'Off',2:'Off',3:'On',4:'On',5:'On',6:'On',7:'Off'}
Can someone help me out? I am new to Python and programming in general, so this function has proven to be quite challenging. I would like to see some of the more experienced coders' solutions to this particular problem.
Thanks,
A:
Is this what you want?
def rule(x) :
rule = bin(x)[2:].zfill(8)
return dict((index, 'ON' if int(i) else 'OFF') for index, i in enumerate(rule))
A:
Here's a much more Pythonic version of the code you've written - hopefully the comments explain the code well enough to understand.
def rule(x):
rule = bin(x)[2:].zfill(8)
ruleDict = {} # create an empty dictionary
for i,c in enumerate(rule): # i = index, c = character at index, for each character in rule
# Leftmost bit of rule is key 0, increasing as you move right
ruleDict[i] = 'OFF' if c == '0' else 'ON'
# could have been written as:
# if c == '0':
# ruleDict[i] = 'OFF'
# else:
# ruleDict[i] = 'ON'
# To make it so ruleDict[0] is the LSB of the number:
#ruleDict[len(rule)-1-i] = 'OFF' if c == '0' else 'ON'
return ruleDict
print rule(30)
Output:
$ python rule.py
{0: 'OFF', 1: 'ON', 2: 'ON', 3: 'ON', 4: 'ON', 5: 'OFF', 6: 'OFF', 7: 'OFF'}
Note that the output shown above corresponds to the commented-out ruleDict[len(rule)-1-i] variant, which makes key 0 the least significant bit; with the plain ruleDict[i] assignment the keys follow the string left to right, giving {0: 'OFF', 1: 'OFF', 2: 'OFF', 3: 'ON', 4: 'ON', 5: 'ON', 6: 'ON', 7: 'OFF'}. Also keep in mind that a dictionary makes no guarantee about the order in which its keys are printed.
|
Python: Traversing a string, checking its element, and inputting dictionary key-value pairs
|
I have a function that returns an 8 digit long binary string for given parameter:
def rule(x):
rule = bin(x)[2:].zfill(8)
return rule
I want to traverse each index of this string and check if it is a zero or a one. I tried to write a code like this:
def rule(x):
rule = bin(x)[2:].zfill(8)
while i < len(rule(x)):
if rule[i] == '0'
ruleList = {i:'OFF'}
elif rule[i] == '1'
ruleList = {i:'ON'}
i = i + 1
return ruleList
This code doesn't work. I am getting "Error: Object is unsubscriptable". What I am attempting to do is write a function that takes the following input, for example:
Input: 30
1. Converts to '00011110' (So far, so good).
2. Checks if rule(30)[i] is '0' or '1' ('0' in this case where i = 0)
3. Then puts the result in a key value pair, where the index of the
string is the key and the state (on
or off) is the value.
4. The end result would be 'ruleList', where print ruleList
would yield something like this:
{0:'Off',1:'Off',2:'Off',3:'On',4:'On',5:'On',6:'On',7:'Off'}
Can someone help me out? I am new to Python and programming in general, so this function has proven to be quite challenging. I would like to see some of the more experienced coders' solutions to this particular problem.
Thanks,
|
[
"Is this what you want?\ndef rule(x) :\n rule = bin(x)[2:].zfill(8)\n return dict((index, 'ON' if int(i) else 'OFF') for index, i in enumerate(rule)) \n\n",
"Here's a much more Pythonic version of the code you've written - hopefully the comments explain the code well enough to understand.\ndef rule(x):\n rule = bin(x)[2:].zfill(8)\n ruleDict = {} # create an empty dictionary\n for i,c in enumerate(rule): # i = index, c = character at index, for each character in rule\n # Leftmost bit of rule is key 0, increasing as you move right\n ruleDict[i] = 'OFF' if c == '0' else 'ON' \n # could have been written as:\n # if c == '0':\n # ruleDict[i] = 'OFF'\n # else:\n # ruleDict[i] = 'ON'\n\n # To make it so ruleDict[0] is the LSB of the number:\n #ruleDict[len(rule)-1-i] = 'OFF' if c == '0' else 'ON' \n return ruleDict\n\nprint rule(30)\n\nOutput:\n$ python rule.py\n{0: 'OFF', 1: 'ON', 2: 'ON', 3: 'ON', 4: 'ON', 5: 'OFF', 6: 'OFF', 7: 'OFF'}\n\nThe output actually happens to be printed in reverse order, because there is no guarantee that a dictionary's keys will be printed in any particular order. However, you will notice that the numbers correspond where the largest number is the most significant bit. That's why we had to do the funny business of indexing ruleDict at len(rule)-1-i.\n"
] |
[
2,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001430957_python.txt
|
Q:
Configuring Roundup with Apache
I think I just need a bit more guidance than what the documentation
gives, and it's quite hard to find anything relating to Roundup and
Apache specifically.
All I'm trying to do currently is to have Apache display what the
stand-alone server does when running roundup-server
support=C:/Roundup/
Running Windows XP with Apache 2.2, Python 2.5 and Roundup 1.4.6.
I don't really have any further notes of interest, so if anyone has already
got this running, could you please show me your configuration and I'll
see how I go from there :) I don't expect anyone to analyse the 403 forbidden error I get before I'm sure my httpd.conf file is correct first.
Thanks in advance
A:
First, this requires the following modules to be enabled:
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule proxy_connect_module modules/mod_proxy_connect.so
LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
LoadModule proxy_http_module modules/mod_proxy_http.so
Then, the following lines are needed to ensure any requests to /issues/ are directed to the already running roundup-server thread. Apache doesn't actually deal with the code, it just passes it on! As below, I'm only worried about getting one tracker to run currently, let alone all of them, as it's running on a server with other modules and I'm really not sure how to make virtual hosts work on my domain.
<IfModule mod_proxy.c>
# proxy through one tracker
    ProxyPass /issues/ http://localhost:8080/issues/
    # proxy through all tracker(*)
    #ProxyPass /roundup/ http://localhost:8080/
</IfModule>
And that's it :) Just ensure you have roundup-server -p 8080 issues=C:/Roundup/ going on in the background and it should accept requests through Apache.
A:
It is fairly easy to run Roundup under Apache if you use mod_wsgi.
Unfortunately I've since moved away from Roundup and no longer have a copy of my wsgi script to show you, but you should be able to figure it out from this mod_wsgi mailing list thread.
|
Configuring Roundup with Apache
|
I think I just need a bit more guidance than what the documentation
gives, and it's quite hard to find anything relating to Roundup and
Apache specifically.
All I'm trying to do currently is to have Apache display what the
stand-alone server does when running roundup-server
support=C:/Roundup/
Running Windows XP with Apache 2.2, Python 2.5 and Roundup 1.4.6.
I don't really have any further notes of interest, so if anyone has already
got this running, could you please show me your configuration and I'll
see how I go from there :) I don't expect anyone to analyse the 403 forbidden error I get before I'm sure my httpd.conf file is correct first.
Thanks in advance
|
[
"First, requires the following modules enabled:\nLoadModule proxy_module modules/mod_proxy.so\nLoadModule proxy_ajp_module modules/mod_proxy_ajp.so\nLoadModule proxy_balancer_module modules/mod_proxy_balancer.so\nLoadModule proxy_connect_module modules/mod_proxy_connect.so\nLoadModule proxy_ftp_module modules/mod_proxy_ftp.so\nLoadModule proxy_http_module modules/mod_proxy_http.so\n\nThen, the following lines are needed to ensure any requests to /issues/ are directed to the already running roundup-server thread. Apache doesn't actually deal with the code it just passes it on! As below, i'm only worried about getting one tracker to run currently, let alone all of them as it's running on a server with other modules and I'm really not sure how to make virtual hosts work on my domain.\n<IfModule mod_proxy.c>\n # proxy through one tracker\n ProxyPass /issues/ http://localhost:80/issues/\n # proxy through all tracker(*)\n #ProxyPass /roundup/ http://localhost:80/\n</IfModule>\n\nAnd that's it :) Just ensure you have roundup-server -p 8080 issues=C:/Roundup/ going on in the background and it should accept requests through Apache.\n",
"It is fairly easy to run roundup under Apache if you use mod_wsgi.\nUnfortunately I've since moved away from roundup and no longer have a copy of my wsgi script to show you, but you should be able to figure it out from this mod_wgi mailing list thead.\n"
] |
[
3,
1
] |
[] |
[] |
[
"apache",
"python",
"roundup"
] |
stackoverflow_0001430364_apache_python_roundup.txt
|
Q:
What are the ways to run a server side script forever?
I need to run a server side script like Python "forever" (or as long as possible without losing state), so it can keep sockets open and asynchronously react to events like data received. For example if I use Twisted for socket communication.
How would I manage something like this?
Am I confused? Or are there better ways to implement asynchronous socket communication?
After starting the script once via Apache server, how do I stop it running?
A:
If you are using twisted then it has a whole infrastructure for starting and stopping daemons.
http://twistedmatrix.com/projects/core/documentation/howto/application.html
How would I manage something like this?
Twisted works well for this, read the link above
Am I confused? Or are there better ways to implement asynchronous socket communication?
Twisted is very good at asynchronous socket communications. It is hard on the brain until you get the hang of it though!
After starting the script once via Apache server, how do I stop it running?
The twisted tools assume command line access, so you'd have to write a cgi wrapper for starting / stopping them if I understand what you want to do.
A:
You can just write a script that sits in a while loop waiting for connections to happen and waits for a signal to close it.
http://docs.python.org/library/signal.html
Then to stop it you just need to run another script that sends that signal to it.
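A minimal sketch of that pattern, assuming SIGUSR1 as the shutdown signal and a placeholder for the real connection handling:
import signal
import time

running = True

def stop(signum, frame):
    # flip the flag so the main loop can exit cleanly
    global running
    running = False

signal.signal(signal.SIGUSR1, stop)

while running:
    # accept / handle a connection here
    time.sleep(1)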
A:
You can use a ‘double fork’ to run your code in a new background process unbound to the old one. See eg this recipe with more explanatory comments than you could possibly want.
I wouldn't recommend this as a primary way of running background tasks for a web site. If your Python is embedded in an Apache process, for example, you'll be forking more than you want. Better to invoke the daemon separately (just under a similar low-privilege user).
After starting the script once via Apache server, how do I stop it running?
You have your second fork write the process number (pid) of the daemon process to a file, and then read the pid from that file and send it a terminate signal (os.kill(pid, signal.SIGTERM)).
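A rough sketch of that pid-file handshake, assuming a hypothetical path /tmp/mydaemon.pid:
import os
import signal

PIDFILE = '/tmp/mydaemon.pid'

# in the daemon, after the second fork:
f = open(PIDFILE, 'w')
f.write(str(os.getpid()))
f.close()

# in the control script, to stop the daemon:
pid = int(open(PIDFILE).read().strip())
os.kill(pid, signal.SIGTERM)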
Am I confused?
That's the question! I'm assuming you are trying to have a background process that responds on a different port to the web interface for some sort of unusual net service. If you are merely talking about responding to normal web requests you shouldn't be doing this; you should rely on Apache to handle your sockets and service one request at a time.
A:
I think Comet is what you're looking for. Make sure to take a look at Tornado too.
A:
You may want to look at FastCGI, it sounds exactly like what you are looking for, but I'm not sure if it's under current development. It uses a CGI daemon and a special apache module to communicate with it. Since the daemon is long running, you don't have the fork/exec cost. But as a cost of managing your own resources (no automagic cleanup on every request)
One reason why this style of FastCGI isn't used much anymore is there are ways to embed interpreters into the Apache binary and have them run in the server. I'm not familiar with mod_python, but I know mod_perl has configuration to allow long running processes. Be careful here, since a long running process in the server can cause resource leaks.
A general question is: what do you want to do? Why do you need this second process, yet somehow controlled by Apache? Why can't you just build a daemon that talks to Apache; why does it have to be controlled by Apache?
|
What are the ways to run a server side script forever?
|
I need to run a server side script like Python "forever" (or as long as possible without losing state), so it can keep sockets open and asynchronously react to events like data received. For example if I use Twisted for socket communication.
How would I manage something like this?
Am I confused? Or are there better ways to implement asynchronous socket communication?
After starting the script once via Apache server, how do I stop it running?
|
[
"If you are using twisted then it has a whole infrastructure for starting and stopping daemons.\nhttp://twistedmatrix.com/projects/core/documentation/howto/application.html\n\nHow would I manage something like this?\n\nTwisted works well for this, read the link above\n\nAm I confused? or are there are better ways to implement asynchronous socket communication?\n\nTwisted is very good at asynchronous socket communications. It is hard on the brain until you get the hang of it though!\n\nAfter starting the script once via Apache server, how do I stop it running?\n\nThe twisted tools assume command line access, so you'd have to write a cgi wrapper for starting / stopping them if I understand what you want to do.\n",
"You can just write an script that is continuously in a while block waiting for the connection to happen and waits for a signal to close it.\nhttp://docs.python.org/library/signal.html\nThen to stop it you just need to run another script that sends that signal to him.\n",
"You can use a ‘double fork’ to run your code in a new background process unbound to the old one. See eg this recipe with more explanatory comments than you could possibly want.\nI wouldn't recommend this as a primary way of running background tasks for a web site. If your Python is embedded in an Apache process, for example, you'll be forking more than you want. Better to invoke the daemon separately (just under a similar low-privilege user).\n\nAfter starting the script once via Apache server, how do I stop it running?\n\nYou have your second fork write the process number (pid) of the daemon process to a file, and then read the pid from that file and send it a terminate signal (os.kill(pid, signal.SIG_TERM)).\n\nAm I confused?\n\nThat's the question! I'm assuming you are trying to have a background process that responds on a different port to the web interface for some sort of unusual net service. If you merely talking about responding to normal web requests you shoudn't be doing this, you should rely on Apache to handle your sockets and service one request at a time.\n",
"I think Comet is what you're looking for. Make sure to take a look at Tornado too.\n",
"You may want to look at FastCGI, it sounds exactly like what you are looking for, but I'm not sure if it's under current development. It uses a CGI daemon and a special apache module to communicate with it. Since the daemon is long running, you don't have the fork/exec cost. But as a cost of managing your own resources (no automagic cleanup on every request)\nOne reason why this style of FastCGI isn't used much anymore is there are ways to embed interpreters into the Apache binary and have them run in server. I'm not familiar with mod_python, but i know mod_perl has configuration to allow long running processes. Be careful here, since a long running process in the server can cause resource leaks.\nA general question is: what do you want to do? Why do you need this second process, but yet somehow controlled by apache? Why can'ty ou just build a daemon that talks to apache, why does it have to be controlled by apache?\n"
] |
[
3,
1,
1,
0,
0
] |
[] |
[] |
[
"apache",
"python",
"sockets",
"twisted",
"webserver"
] |
stackoverflow_0001427000_apache_python_sockets_twisted_webserver.txt
|
Q:
Pylint, PyChecker or PyFlakes?
I would like to get some feedback on these tools on:
features;
adaptability;
ease of use and learning curve.
A:
Well, I am a bit curious, so I just tested the three myself right after asking the question ;-)
Ok, this is not a very serious review, but here is what I can say:
I tried the tools with the default settings (it's important because you can pretty much choose your check rules) on the following script:
#!/usr/local/bin/python
# by Daniel Rosengren modified by e-satis
import sys, time
stdout = sys.stdout
BAILOUT = 16
MAX_ITERATIONS = 1000
class Iterator(object) :
def __init__(self):
print 'Rendering...'
for y in xrange(-39, 39):
stdout.write('\n')
for x in xrange(-39, 39):
if self.mandelbrot(x/40.0, y/40.0) :
stdout.write(' ')
else:
stdout.write('*')
def mandelbrot(self, x, y):
cr = y - 0.5
ci = x
zi = 0.0
zr = 0.0
for i in xrange(MAX_ITERATIONS) :
temp = zr * zi
zr2 = zr * zr
zi2 = zi * zi
zr = zr2 - zi2 + cr
zi = temp + temp + ci
if zi2 + zr2 > BAILOUT:
return i
return 0
t = time.time()
Iterator()
print '\nPython Elapsed %.02f' % (time.time() - t)
As a result:
PyChecker is troublesome because it compiles the module to analyze it. If you don't want your code to run (e.g., it performs a SQL query), that's bad.
PyFlakes is supposed to be light. Indeed, it decided that the code was perfect. I am looking for something quite severe so I don't think I'll go for it.
PyLint has been very talkative and rated the code 3/10 (OMG, I'm a dirty coder!).
Strong points of PyLint:
Very descriptive and accurate report.
Detect some code smells. Here it told me to drop my class to write something with functions because the OO approach was useless in this specific case. Something I knew, but never expected a computer to tell me :-p
The fully corrected code runs faster (no class, no reference binding...).
Made by a French team. OK, it's not a plus for everybody, but I like it ;-)
Cons of Pylint:
Some rules are really strict. I know that you can change it and that the default is to match PEP8, but is it such a crime to write 'for x in seq'? Apparently yes because you can't write a variable name with less than 3 letters. I will change that.
Very very talkative. Be ready to use your eyes.
Corrected script (with lazy doc strings and variable names):
#!/usr/local/bin/python
# by Daniel Rosengren, modified by e-satis
"""
Module doctring
"""
import time
from sys import stdout
BAILOUT = 16
MAX_ITERATIONS = 1000
def mandelbrot(dim_1, dim_2):
"""
function doc string
"""
cr1 = dim_1 - 0.5
ci1 = dim_2
zi1 = 0.0
zr1 = 0.0
for i in xrange(MAX_ITERATIONS) :
temp = zr1 * zi1
zr2 = zr1 * zr1
zi2 = zi1 * zi1
zr1 = zr2 - zi2 + cr1
zi1 = temp + temp + ci1
if zi2 + zr2 > BAILOUT:
return i
return 0
def execute() :
"""
func doc string
"""
print 'Rendering...'
for dim_1 in xrange(-39, 39):
stdout.write('\n')
for dim_2 in xrange(-39, 39):
if mandelbrot(dim_1/40.0, dim_2/40.0) :
stdout.write(' ')
else:
stdout.write('*')
START_TIME = time.time()
execute()
print '\nPython Elapsed %.02f' % (time.time() - START_TIME)
Thanks to Rudiger Wolf, I discovered pep8 that does exactly what its name suggests: matching PEP8. It has found several syntax no-nos that Pylint did not. But Pylint found stuff that was not specifically linked to PEP8 but interesting. Both tools are interesting and complementary.
Eventually I will use both since they are really easy to install (via packages or setuptools) and the output text is so easy to chain.
To give you a little idea of their output:
pep8:
./python_mandelbrot.py:4:11: E401 multiple imports on one line
./python_mandelbrot.py:10:1: E302 expected 2 blank lines, found 1
./python_mandelbrot.py:10:23: E203 whitespace before ':'
./python_mandelbrot.py:15:80: E501 line too long (108 characters)
./python_mandelbrot.py:23:1: W291 trailing whitespace
./python_mandelbrot.py:41:5: E301 expected 1 blank line, found 3
Pylint:
************* Module python_mandelbrot
C: 15: Line too long (108/80)
C: 61: Line too long (85/80)
C: 1: Missing docstring
C: 5: Invalid name "stdout" (should match (([A-Z_][A-Z0-9_]*)|(__.*__))$)
C: 10:Iterator: Missing docstring
C: 15:Iterator.__init__: Invalid name "y" (should match [a-z_][a-z0-9_]{2,30}$)
C: 17:Iterator.__init__: Invalid name "x" (should match [a-z_][a-z0-9_]{2,30}$)
[...] and a very long report with useful stats like :
Duplication
-----------
+-------------------------+------+---------+-----------+
| |now |previous |difference |
+=========================+======+=========+===========+
|nb duplicated lines |0 |0 |= |
+-------------------------+------+---------+-----------+
|percent duplicated lines |0.000 |0.000 |= |
+-------------------------+------+---------+-----------+
A:
pep8 was recently added to PyPi.
pep8 - Python style guide checker
pep8 is a tool to check your Python code against some of the style conventions in PEP 8.
It is now super easy to check your code against pep8.
See http://pypi.python.org/pypi/pep8
|
Pylint, PyChecker or PyFlakes?
|
I would like to get some feedback on these tools on:
features;
adaptability;
ease of use and learning curve.
|
[
"Well, I am a bit curious, so I just tested the three myself right after asking the question ;-)\nOk, this is not a very serious review, but here is what I can say:\nI tried the tools with the default settings (it's important because you can pretty much choose your check rules) on the following script:\n#!/usr/local/bin/python\n# by Daniel Rosengren modified by e-satis\n\nimport sys, time\nstdout = sys.stdout\n\nBAILOUT = 16\nMAX_ITERATIONS = 1000\n\nclass Iterator(object) :\n\n def __init__(self):\n\n print 'Rendering...'\n for y in xrange(-39, 39):\n stdout.write('\\n')\n for x in xrange(-39, 39):\n if self.mandelbrot(x/40.0, y/40.0) :\n stdout.write(' ')\n else:\n stdout.write('*')\n\n\n def mandelbrot(self, x, y):\n cr = y - 0.5\n ci = x\n zi = 0.0\n zr = 0.0\n\n for i in xrange(MAX_ITERATIONS) :\n temp = zr * zi\n zr2 = zr * zr\n zi2 = zi * zi\n zr = zr2 - zi2 + cr\n zi = temp + temp + ci\n\n if zi2 + zr2 > BAILOUT:\n return i\n\n return 0\n\nt = time.time()\nIterator()\nprint '\\nPython Elapsed %.02f' % (time.time() - t)\n\nAs a result:\n\nPyChecker is troublesome because it compiles the module to analyze it. If you don't want your code to run (e.g, it performs a SQL query), that's bad.\nPyFlakes is supposed to be light. Indeed, it decided that the code was perfect. I am looking for something quite severe so I don't think I'll go for it.\nPyLint has been very talkative and rated the code 3/10 (OMG, I'm a dirty coder !).\n\nStrong points of PyLint:\n\nVery descriptive and accurate report.\nDetect some code smells. Here it told me to drop my class to write something with functions because the OO approach was useless in this specific case. Something I knew, but never expected a computer to tell me :-p\nThe fully corrected code run faster (no class, no reference binding...).\nMade by a French team. OK, it's not a plus for everybody, but I like it ;-)\n\nCons of Pylint:\n\nSome rules are really strict. I know that you can change it and that the default is to match PEP8, but is it such a crime to write 'for x in seq'? Apparently yes because you can't write a variable name with less than 3 letters. I will change that.\nVery very talkative. Be ready to use your eyes.\n\nCorrected script (with lazy doc strings and variable names):\n#!/usr/local/bin/python\n# by Daniel Rosengren, modified by e-satis\n\"\"\"\nModule doctring\n\"\"\"\n\n\nimport time\nfrom sys import stdout\n\nBAILOUT = 16\nMAX_ITERATIONS = 1000\n\ndef mandelbrot(dim_1, dim_2):\n \"\"\"\n function doc string\n \"\"\"\n cr1 = dim_1 - 0.5\n ci1 = dim_2\n zi1 = 0.0\n zr1 = 0.0\n\n for i in xrange(MAX_ITERATIONS) :\n temp = zr1 * zi1\n zr2 = zr1 * zr1\n zi2 = zi1 * zi1\n zr1 = zr2 - zi2 + cr1\n zi1 = temp + temp + ci1\n\n if zi2 + zr2 > BAILOUT:\n return i\n\n return 0\n\ndef execute() :\n \"\"\"\n func doc string\n \"\"\"\n print 'Rendering...'\n for dim_1 in xrange(-39, 39):\n stdout.write('\\n')\n for dim_2 in xrange(-39, 39):\n if mandelbrot(dim_1/40.0, dim_2/40.0) :\n stdout.write(' ')\n else:\n stdout.write('*')\n\n\nSTART_TIME = time.time()\nexecute()\nprint '\\nPython Elapsed %.02f' % (time.time() - START_TIME)\n\nThanks to Rudiger Wolf, I discovered pep8 that does exactly what its name suggests: matching PEP8. It has found several syntax no-nos that Pylint did not. But Pylint found stuff that was not specifically linked to PEP8 but interesting. 
Both tools are interesting and complementary.\nEventually I will use both since there are really easy to install (via packages or setuptools) and the output text is so easy to chain.\nTo give you a little idea of their output:\npep8:\n./python_mandelbrot.py:4:11: E401 multiple imports on one line\n./python_mandelbrot.py:10:1: E302 expected 2 blank lines, found 1\n./python_mandelbrot.py:10:23: E203 whitespace before ':'\n./python_mandelbrot.py:15:80: E501 line too long (108 characters)\n./python_mandelbrot.py:23:1: W291 trailing whitespace\n./python_mandelbrot.py:41:5: E301 expected 1 blank line, found 3\n\nPylint:\n************* Module python_mandelbrot\nC: 15: Line too long (108/80)\nC: 61: Line too long (85/80)\nC: 1: Missing docstring\nC: 5: Invalid name \"stdout\" (should match (([A-Z_][A-Z0-9_]*)|(__.*__))$)\nC: 10:Iterator: Missing docstring\nC: 15:Iterator.__init__: Invalid name \"y\" (should match [a-z_][a-z0-9_]{2,30}$)\nC: 17:Iterator.__init__: Invalid name \"x\" (should match [a-z_][a-z0-9_]{2,30}$)\n\n[...] and a very long report with useful stats like :\n\nDuplication\n-----------\n\n+-------------------------+------+---------+-----------+\n| |now |previous |difference |\n+=========================+======+=========+===========+\n|nb duplicated lines |0 |0 |= |\n+-------------------------+------+---------+-----------+\n|percent duplicated lines |0.000 |0.000 |= |\n+-------------------------+------+---------+-----------+\n\n",
"pep8 was recently added to PyPi.\n\npep8 - Python style guide checker\npep8 is a tool to check your Python code against some of the style conventions in PEP 8.\n\nIt is now super easy to check your code against pep8.\nSee http://pypi.python.org/pypi/pep8\n"
] |
[
292,
95
] |
[] |
[] |
[
"pep8",
"pychecker",
"pyflakes",
"pylint",
"python"
] |
stackoverflow_0001428872_pep8_pychecker_pyflakes_pylint_python.txt
|
Q:
Accessing Plist items in a dict
I have a class in a module that reads a plist (XML) file and returns a dict. This is extremely convenient because I can say something like:
Data.ServerNow.Property().DefaultChart
This returns a property dictionary, specifically the value for DefaultChart. Very elegant.
However, assembling a dictionary this way fails:
dict={'Data': 'text1', 'Name':'text2', 'Place':'text3'}
dict looks exactly like the Plist dict.
But when I say
print TextNow.Data().Name
I get error
'dict' object has no attribute 'Name'
But if I say
print TextNow.Data()['Name']
suddenly it works!
Can someone explain this behavior? Is there a way to convert a dict to an XML-ish dict?
A:
It doesn't work because the dot operator is not proper accessor syntax for Python dictionaries. You're trying to treat it as an object and access a property, rather than accessing a data member of the data structure.
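A quick illustration of the difference (the second access is the one that fails):
d = {'Name': 'text2'}
print d['Name']   # prints 'text2' -- key lookup works
d.Name            # raises AttributeError: 'dict' object has no attribute 'Name'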
A:
You can override __getattr__ to treat dictionary keys as attributes, e.g.:
class xmldict(dict):
def __getattr__(self, attr):
try:
return object.__getattribute__(self, attr)
except AttributeError:
if attr in self:
return self[attr]
else:
raise
So, for example, if you have the following dict:
dict_ = {'a':'some text'}
You can do so:
>>> print xmldict(dict_).a
some text
>>> print xmldict(dict_).NonExistent
Traceback (most recent call last):
...
AttributeError: 'xmldict' object has no attribute 'NonExistent'
|
Accessing Plist items in a dict
|
I have a class in a module that reads a plist (XML) file and returns a dict. This is extremely convenient because I can say something like:
Data.ServerNow.Property().DefaultChart
This returns a property dictionary, specifically the value for DefaultChart. Very elegant.
However, assembling a dictionary this way fails:
dict={'Data': 'text1', 'Name':'text2', 'Place':'text3'}
dict looks exactly like the Plist dict.
But when I say
print TextNow.Data().Name
I get error
'dict' object has no attribute 'Name'
But if I say
print TextNow.Data()['Name']
suddenly it works!
Can someone explain this behavior? Is there a way to convert a dict to an XML-ish dict?
|
[
"It doesn't work because the dot operator is not proper accessor syntax for python dictionaries. You;re trying to treat it as an object and access a property, rather than accessing a data member of the data structure.\n",
"You can use getattr redefinition to treat dictionary keys as attributes, e.g.:\nclass xmldict(dict):\n def __getattr__(self, attr):\n try:\n return object.__getattribute__(self, attr)\n except AttributeError:\n if attr in self:\n return self[attr]\n else:\n raise\n\nSo, for example if you will have following dict:\ndict_ = {'a':'some text'}\n\nYou can do so:\n>> print xmldict(dict_).a\nsome text\n>> print xmldict(dict_).NonExistent\nTraceback (most recent call last):\n ...\nAttributeError: 'xmldict' object has no attribute 'NonExistent'\n\n"
] |
[
2,
1
] |
[] |
[] |
[
"namespaces",
"plist",
"python",
"xml"
] |
stackoverflow_0001431424_namespaces_plist_python_xml.txt
|
Q:
Casting regex arguments into a list
Greetings,
A script is working on one or more files. I want to pass the filenames (with regex in them) as arguments and put them in a list. What is the best way to do it?
For example I would accept the following arguments:
script.py file[1-3].nc #would create list [file1.nc, file2.nc, file3.nc] that I can work on
script.py file*.nc #would scan the folder for matching patterns and create a list
script.py file1.nc file15.nc booba[1-2].nc #creates [file1.nc, file15.nc, booba1.nc, booba2.nc]
A:
The glob module is exactly what you are looking for
Check the examples:
>>> import glob
>>> glob.glob('./[0-9].*')
['./1.gif', './2.txt']
>>> glob.glob('*.gif')
['1.gif', 'card.gif']
>>> glob.glob('?.gif')
['1.gif']
You can use optparse or just sys.argv to get the arguments, and pass them to glob.
A:
Updated:
Under Unix, the shell will do the example expansions you want.
Under Windows it won't, and then you need to use glob.glob().
But if you really do want regexps: then you will simply have to list the directory with listdir and match the filenames against the regexp pattern. You'll also have to pass the parameter in quotes (at least under Unix) so the shell doesn't expand it for you. :-)
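A minimal sketch of that regexp route, assuming the quoted pattern arrives as the first command line argument:
import os
import re
import sys

pattern = re.compile(sys.argv[1])
files = [name for name in os.listdir('.') if pattern.match(name)]
print files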
|
Casting regex arguments into a list
|
Greetings,
A script is working on one or more files. I want to pass the filenames (with regex in them) as arguments and put them in a list. What is the best way to do it?
For example I would accept the following arguments:
script.py file[1-3].nc #would create list [file1.nc, file2.nc, file3.nc] that I can work on
script.py file*.nc #would scan the folder for matching patterns and create a list
script.py file1.nc file15.nc booba[1-2].nc #creates [file1.nc, file15.nc, booba1.nc, booba2.nc]
|
[
"The glob module is exactly what you are looking for\nCheck the examples:\n>>> import glob\n>>> glob.glob('./[0-9].*')\n['./1.gif', './2.txt']\n>>> glob.glob('*.gif')\n['1.gif', 'card.gif']\n>>> glob.glob('?.gif')\n['1.gif']\n\nYou can use optparse or just sys.argv to get arguments. And pass them to glob.\n",
"Updated:\nUnder Unix, the shell will do the example expansions you want.\nUnder Windows it won't, and then you need to use glob.glob().\nBut if you really do want regexp: Then you will simply have to list the directory, with listdir, and match the filenames with the regexp pattern. You'll also have to pass the parameter in quotes (at least under unix) so it doesn't expand it for you. :-)\n"
] |
[
4,
1
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0001431968_python_regex.txt
|
Q:
In which module does the Timer class reside in python?
I am reading the following online Timer manual for Python 2.5, but am wondering which module the Timer class actually resides in.
Gath
A:
The way to figure this out: on the page you linked to, there's an Up link. It goes to the threading module.
A:
It's in the threading module.
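A quick usage example:
from threading import Timer

def hello():
    print "hello, world"

t = Timer(3.0, hello)
t.start()   # hello() is called after a 3 second delay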
|
In which module does the Timer class reside in python?
|
I am reading the following online Timer manual for Python 2.5, but am wondering which module the Timer class actually resides in.
Gath
|
[
"The way to figure this out: on the page you linked to, there's an Up link. It goes to the threading module.\n",
"It's in the threading module.\n"
] |
[
6,
1
] |
[] |
[] |
[
"python",
"timer"
] |
stackoverflow_0001432068_python_timer.txt
|
Q:
deleting files with python scripts
I want to delete some files with python scripts (while using Windows). I have tried the following code:
>>>import os
>>> os.remove ('D:\new.docx')
but I am getting the following error:
Traceback (most recent call last):
File "<pyshell#1>", line 1, in -toplevel-
os.remove ('D:\new.docx')
OSError: [Errno 22] Invalid argument: 'D:\new.docx'
Can anyone here help me with this?
Thanks.
Gillani
A:
\ is the escape character in Python. Try replacing it with \\ .
ex:
os.remove ('D:\\new.docx')
A:
A few options:
Escape the backslash:
>>> os.remove('D:\\new.docx')
The runtime library in Windows accepts a forward slash as a separator:
>>> os.remove('D:/new.docx')
Raw string:
>>> os.remove(r'D:\new.docx')
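Whichever spelling you pick, it can also help to guard against a missing file first; a small sketch:
import os

path = r'D:\new.docx'
if os.path.exists(path):
    os.remove(path)
else:
    print 'no such file:', path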
|
deleting files with python scripts
|
I want to delete some files with python scripts (while using Windows). I have tried the following code:
>>>import os
>>> os.remove ('D:\new.docx')
but I am getting the following error:
Traceback (most recent call last):
File "<pyshell#1>", line 1, in -toplevel-
os.remove ('D:\new.docx')
OSError: [Errno 22] Invalid argument: 'D:\new.docx'
Can anyone here help me with this?
Thanks.
Gillani
|
[
"\\ is the escape char for python. try replacing it with \\\\ .\nex:\nos.remove ('D:\\\\new.docx')\n\n",
"A few options:\nEscape the backslash:\n>>> os.remove('D:\\\\new.docx')\n\nThe runtime library in Windows accepts a forward slash as a separator:\n>>> os.remove('D:/new.docx')\n\nRaw string:\n>>> os.remove(r'D:\\new.docx')\n\n"
] |
[
6,
6
] |
[] |
[] |
[
"file",
"python"
] |
stackoverflow_0001432122_file_python.txt
|
Q:
Returning a list
I have the following code:
def foo(*args):
print len(args)
print args
now I'd like to know how to return that same args list. I guess it should be simple?
Thanks
A:
It is indeed simple:
return args
Here is the Python tutorial.
There are also many resources on beginner Python on the net. Some are listed in this question.
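Putting it together (note that args arrives as a tuple, not a list):
def foo(*args):
    print len(args)
    print args
    return args

result = foo(1, 2, 3)   # result is the tuple (1, 2, 3)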
|
Returning a list
|
I have the following code:
def foo(*args):
print len(args)
print args
now I'd like to know how to return that same args list. I guess it should be simple?
Thanks
|
[
"It is indeed simple:\nreturn args\n\nHere is the Python tutorial: \nThere are also many resources on beginners python on the net. Some are listed in this question.\n"
] |
[
6
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001432358_python.txt
|
Q:
Is it possible to run pydev connected to a virtualbox instance?
At the moment I'm developing using a simple editor, putty, and a VirtualBox instance of a linux server. I've heard good things about pydev and would like to try it, but I'd like to use the python install & terminal from my VirtualBox guest OS.
I'm already using a Shared Folder with VirtualBox so my Guest OS can see my local files.
Is it possible to tell pydev to use this "remote" host over SSH to execute its python-related commands?
UPDATE:
My main environment is windows, but I'd also like to be able to work this way on OS X.
A:
I assume your host box is windows.
I also assume that pydev will run under linux (since it's eclipse based). Are you ok installing the dev environment on your linux server?
In which case:
install and run xming on your windows box
Install eclipse & pydev on your linux box
Configure x forwarding in putty
Run pydev through putty and you'll have the UI appear on your windows machine like normal
Then pydev will be running on the linux box quite happily, and so using the python environment on there.
Downsides: you will need to install the X libraries & java on your server (installing eclipse using your normal package manager should be enough), although you won't need to run X itself, since that's what Xming is for.
A:
UPDATE:
Let me understand the situation: Windows is hosting VirtualBox, which hosts the Linux guest.
You connect to the linux using putty.
Python files are on the Linux machine and you wish to edit them from your Windows machine using pydev. So either do that using the sharing features of VirtualBox (which can work for you in both ways) or use SSH to edit the Linux files from Windows.
Both options would be valid for Mac OS X, AFAIK.
Below you can find the way to do so over SSH.
You map a network drive over SSH and then you can access the files via that drive letter.
See more at
http://www.neophob.com/serendipity/index.php?/archives/103-Map-a-Network-drive-net-use-over-SSH.html
and
http://smithii.com/map_a_network_drive_over_ssh_in_windows
|
Is it possible to run pydev connected to a virtualbox instance?
|
At the moment I'm developing using a simple editor, putty, and a VirtualBox instance of a linux server. I've heard good things about pydev and would like to try it, but I'd like to use the python install & terminal from my VirtualBox guest OS.
I'm already using a Shared Folder with VirtualBox so my Guest OS can see my local files.
Is it possible to tell pydev to use this "remote" host over SSH to execute its python-related commands?
UPDATE:
My main environment is windows, but I'd also like to be able to work this way on OS X.
|
[
"I assume your host box is windows.\nI also assume that pydev will run under linux (since it's eclipse based). Are you ok installing the dev environment on your linux server?\nIn which case:\n\ninstall and run xming on your windows box\nInstall eclipse & pydev on your linux box\nConfigure x forwarding in putty\nRun pydev through putty and you'll have the UI appear on your windows machine like normal\n\nThen pydev will be running on the linux box quite happily, and so using the python environment on there.\nDownsides: you will need to install the X libraries & java on your server (installing eclipse using your normal package manager should be enough), although you won't need to run X itself, since that's what Xming is for.\n",
"UPDATE:\nLet me understand the situation, Windows is hosting the virtualBox which host the linux.\nYou connect to the linux using putty.\nPython files are on the linux machine and you wish to edit them from your Windows using pydev. So either do that using the sharing features of virtual box (which can work for you in vboth ways) or use ssh to edit the linux files from windows.\nboth options would be valid for MacOSx AFAIK\nBelow you cna find the way to do so over SSH\nYou map a netwrok drive over SSH and then you can access the files via that drive letter\nsee more at \nhttp://www.neophob.com/serendipity/index.php?/archives/103-Map-a-Network-drive-net-use-over-SSH.html\nand\nhttp://smithii.com/map_a_network_drive_over_ssh_in_windows\n"
] |
[
1,
0
] |
[] |
[] |
[
"linux",
"pydev",
"python",
"virtualbox"
] |
stackoverflow_0001431936_linux_pydev_python_virtualbox.txt
|
Q:
Help with Admin forms validation error
I am quite new to Django; I'm having a few problems with validation
forms in the Admin module, more specifically with raising exceptions in the
ModelForm. I can validate and manipulate data in clean methods but
cannot seem to raise any errors. Whenever I include any raise
statement I get this error "'NoneType' object has no attribute
'ValidationError'". When I remove the raise part everything works
fine.
Then if I reimport django.forms (inside the clean method) with a different alias (e.g. from django import forms as blahblah) then I'm able to raise messages using blahblah.ValidationError.
Any tips or suggestions on doing such a thing properly ?
Here's an example of what I'm doing in Admin.py:
admin.py
from django import forms
from proj.models import *
from django.contrib import admin
class FontAdminForm(forms.ModelForm):
class Meta:
model = Font
def clean_name(self):
return self.cleaned_data["name"].upper()
def clean_description(self):
desc = self.cleaned_data['description']
if desc and len(desc) < 10:
raise forms.ValidationError('Description is too short.')
return desc
class FontAdmin(admin.ModelAdmin):
form = FontAdminForm
list_display = ['name', 'description']
admin.site.register(Font, FontAdmin)
--
Thanks,
A
A:
Your problem might be in the * import.
from proj.models import *
if proj.models contains any variable named forms (including some module import like "from django import forms"), it could trounce your initial import of:
from django import forms
I would explicitly import from proj.models, e.g.
from proj.models import Font
If that doesn't work, see if there are any other variables named "forms" that could be messing with your scope.
You can use introspection to see what "forms" is. Inside your clean_description method:
print forms.__package__
My guess is it is not going to be "django" (or will return an error, indicating that it is definitely not django.forms).
|
Help with Admin forms validation error
|
I am quite new to Django; I'm having a few problems with validation
forms in the Admin module, more specifically with raising exceptions in the
ModelForm. I can validate and manipulate data in clean methods but
cannot seem to raise any errors. Whenever I include any raise
statement I get this error "'NoneType' object has no attribute
'ValidationError'". When I remove the raise part everything works
fine.
Then if I reimport django.forms (inside the clean method) with a different alias (e.g. from django import forms as blahblah) then I'm able to raise messages using blahblah.ValidationError.
Any tips or suggestions on doing such a thing properly ?
Here's an example of what I'm doing in Admin.py:
admin.py
from django import forms
from proj.models import *
from django.contrib import admin
class FontAdminForm(forms.ModelForm):
class Meta:
model = Font
def clean_name(self):
return self.cleaned_data["name"].upper()
def clean_description(self):
desc = self.cleaned_data['description']
if desc and len(desc) < 10:
raise forms.ValidationError('Description is too short.')
return desc
class FontAdmin(admin.ModelAdmin):
form = FontAdminForm
list_display = ['name', 'description']
admin.site.register(Font, FontAdmin)
--
Thanks,
A
|
[
"You problem might be in the * import.\nfrom proj.models import * \n\nif proj.models contains any variable named forms (including some module import like \"from django import forms), it could trounce your initial import of:\nfrom django import forms\n\nI would explicitly import from proj.models, e.g.\nfrom proj.models import Font\n\nIf that doesn't work, see if there are any other variables name \"forms\" that could be messing with your scope.\nYou can use introspection to see what \"forms\" is. Inside your clean_description method:\nprint forms.__package__\n\nMy guess is it is not going to be \"django\" (or will return an error, indicating that it is definitely not django.forms).\n"
] |
[
4
] |
[] |
[] |
[
"django",
"django_admin",
"django_models",
"python"
] |
stackoverflow_0001432530_django_django_admin_django_models_python.txt
|
Q:
Load an existing many-to-many table relation with sqlalchemy
I'm using SqlAlchemy to interact with an existing PostgreSQL database.
I need to access data organized in a many-to-many relationship. The documentation describes how to create relationships, but I cannot find an example for neatly loading and query an existing one.
A:
Querying an existing relation is not really different than creating a new one. You pretty much write the same code but specify the table and column names that are already there, and of course you won't need SQLAlchemy to issue the CREATE TABLE statements.
See http://www.sqlalchemy.org/docs/05/mappers.html#many-to-many . All you need to do is specify the foreign key columns for your existing parent, child, and association tables as in the example, and specify autoload=True to fill out the other fields on your Tables. If your association table stores additional information, as they almost always do, you should just break your many-to-many relation into two many-to-one relations.
I learned SQLAlchemy while working with MySQL. With that database I always had to specify the foreign key relationships because they weren't explicit database constraints. You might get lucky and be able to reflect even more from your database, but you might prefer to use something like http://pypi.python.org/pypi/sqlautocode to just code the entire database schema and avoid the reflection delay.
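A minimal sketch of that setup for the 0.5-era API, assuming existing tables named parent, child and parent_child with integer id columns (all names and the connection string here are placeholders for your actual schema):
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, ForeignKey
from sqlalchemy.orm import mapper, relation, sessionmaker

engine = create_engine('postgres://user:password@localhost/mydb')
meta = MetaData(bind=engine)

# reflect the existing tables; autoload fills in the remaining columns
parent = Table('parent', meta, autoload=True)
child = Table('child', meta, autoload=True)
assoc = Table('parent_child', meta,
              Column('parent_id', Integer, ForeignKey('parent.id')),
              Column('child_id', Integer, ForeignKey('child.id')),
              autoload=True)

class Parent(object): pass
class Child(object): pass

mapper(Child, child)
mapper(Parent, parent, properties={
    'children': relation(Child, secondary=assoc),
})

session = sessionmaker(bind=engine)()
for p in session.query(Parent):
    print p.children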
|
Load an existing many-to-many table relation with sqlalchemy
|
I'm using SqlAlchemy to interact with an existing PostgreSQL database.
I need to access data organized in a many-to-many relationship. The documentation describes how to create relationships, but I cannot find an example for neatly loading and querying an existing one.
|
[
"Querying an existing relation is not really different than creating a new one. You pretty much write the same code but specify the table and column names that are already there, and of course you won't need SQLAlchemy to issue the CREATE TABLE statements.\nSee http://www.sqlalchemy.org/docs/05/mappers.html#many-to-many . All you need to do is specify the foreign key columns for your existing parent, child, and association tables as in the example, and specify autoload=True to fill out the other fields on your Tables. If your association table stores additional information, as they almost always do, you should just break your many-to-many relation into two many-to-one relations.\nI learned SQLAlchemy while working with MySQL. With that database I always had to specify the foreign key relationships because they weren't explicit database constraints. You might get lucky and be able to reflect even more from your database, but you might prefer to use something like http://pypi.python.org/pypi/sqlautocode to just code the entire database schema and avoid the reflection delay.\n"
] |
[
1
] |
[] |
[] |
[
"python",
"sqlalchemy"
] |
stackoverflow_0001430584_python_sqlalchemy.txt
|
Q:
Generate and parse Python code from C# application
I need to generate Python code, to be more specific IronPython. I also need to be able to parse the code and load it into an AST. I just started looking at some tools. I played with "Oslo" and decided that it's not the right tool for me. I just looked very briefly at Coco/R and it looks promising.
Does anyone use Coco/R?
If you did, what's your experience with the tool?
Can you recommend some other tool?
A:
The IronPython implementation itself includes a parser and an AST representation of Python programs which can be walked with a PythonWalker.
A:
Not really my area of expertise but you might want to try ANTLR 4. It has support for generating Python 2 and Python 3.
A:
I think you should look at the Dynamic Language Runtime. This will be a standard part of some later version of .Net and C# (.Net 4 from memory).
I've used it to compile and execute Python code generated at runtime, but I haven't played with all the AST stuff yet.
|
Generate and parse Python code from C# application
|
I need to generate Python code, to be more specific IronPython. I also need to be able to parse the code and load it into an AST. I just started looking at some tools. I played with "Oslo" and decided that it's not the right tool for me. I just looked very briefly at Coco/R and it looks promising.
Does anyone use Coco/R?
If you did, what's your experience with the tool?
Can you recommend some other tool?
|
[
"The IronPython implementation itself includes a parser and an AST representation of Python programs which can be walked with a PythonWalker.\n",
"Not really my area of expertise but you might want to try ANTLR 4. It has support for generating Python 2 and Python 3.\n",
"I think you should look at the Dynamic Language Runtime. This will be a standard part of some later version of .Net and C# (.Net 4 from memory).\nI've used it to compile and execute Python code generated at runtime, but I haven't played with all the AST stuff yet.\n"
] |
[
11,
2,
0
] |
[] |
[] |
[
"c#",
"cocor",
"code_generation",
"ironpython",
"python"
] |
stackoverflow_0001432998_c#_cocor_code_generation_ironpython_python.txt
|
Q:
Getting text values from XML in Python
from xml.dom.minidom import parseString
dom = parseString(data)
data = dom.getElementsByTagName('data')
the 'data' variable returns as an element object but I can't for the life of me see in the documentation how to grab the text value of the element.
For example:
<something><data>I WANT THIS</data></something>
Anyone have any ideas?
A:
So the way to look at it is that "I WANT THIS" is actually another node. It's a text child of "data".
from xml.dom.minidom import parseString
dom = parseString(data)
nodes = dom.getElementsByTagName('data')
At this point, "nodes" is a NodeList and in your example, it has one item in it which is the "data" element. Correspondingly the "data" element also only has one child which is a text node "I WANT THIS".
So you could just do something like this:
print nodes[0].firstChild.nodeValue
Note that in the case where you have more than one tag called "data" in your input, you should use some sort of iteration technique on "nodes" rather than index it directly.
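For instance, with two data tags:
from xml.dom.minidom import parseString

dom = parseString('<something><data>one</data><data>two</data></something>')
for node in dom.getElementsByTagName('data'):
    print node.firstChild.nodeValue   # prints "one" then "two"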
A:
This should do the trick:
dom = parseString('<something><data>I WANT THIS</data></something>')
data = dom.getElementsByTagName('data')[0].childNodes[0].data
i.e. you need to wade deeper into the DOM structure to get at the text child node and then access its value.
|
Getting text values from XML in Python
|
from xml.dom.minidom import parseString
dom = parseString(data)
data = dom.getElementsByTagName('data')
the 'data' variable returns as an element object but I can't for the life of me see in the documentation how to grab the text value of the element.
For example:
<something><data>I WANT THIS</data></something>
Anyone have any ideas?
|
[
"So the way to look at it is that \"I WANT THIS\" is actually another node. It's a text child of \"data\".\nfrom xml.dom.minidom import parseString\ndom = parseString(data)\nnodes = dom.getElementsByTagName('data')\n\nAt this point, \"nodes\" is a NodeList and in your example, it has one item in it which is the \"data\" element. Correspondingly the \"data\" element also only has one child which is a text node \"I WANT THIS\".\nSo you could just do something like this:\nprint nodes[0].firstChild.nodeValue\n\nNote that in the case where you have more than one tag called \"data\" in your input, you should use some sort of iteration technique on \"nodes\" rather than index it directly.\n",
"This should do the trick:\ndom = parseString('<something><data>I WANT THIS</data></something>')\ndata = dom.getElementsByTagName('data')[0].childNodes[0].data\n\ni.e. you need to wade deeper into the DOM structure to get at the text child node and then access its value. \n"
] |
[
4,
3
] |
[] |
[] |
[
"parsing",
"python",
"xml"
] |
stackoverflow_0001433907_parsing_python_xml.txt
|
Q:
python help needed
import os
import sys, urllib2, urllib
import re
import time
from threading import Thread
class testit(Thread):
def __init__ (self):
Thread.__init__(self)
def run(self):
url = 'http://games.espnstar.asia/the-greatest-odi/post_brackets.php'
data = urllib.urlencode([('id',"btn_13_9_13"), ('matchNo',"13")])
req = urllib2.Request(url)
fd = urllib2.urlopen(req, data)
<TAB>fd.close()
<TAB>"""while 1:
data = fd.read(1024)
if not len(data):
break
sys.stdout.write(data)"""
url2 = 'http://games.espnstar.asia/the-greatest-odi/post_perc.php'
data2 = urllib.urlencode([('id',"btn_13_9_13"), ('matchNo',"13")])
req2 = urllib2.Request(url2)
fd2 = urllib2.urlopen(req2, data2)
<TAB>#prints current votes
while 1:
data2 = fd2.read(1024)
if not len(data2):
break
sys.stdout.write(data2)
<TAB>fd2.close()
print time.ctime()
print " ending thread\n"
i=-1
while i<0:
current = testit()
time.sleep(0.001) #decrease this like 0.0001 for more loops
current.start()
Hey can anybody help me finding out the error in the code
It's saying inconsistent use of tabs and spaces in indentation.
A:
I edited your post to replace all the tabs with <TAB>. You need to delete the indentation on those lines and line it back up with spaces. Some editors can do that for you, but I don't know which editor you are using.
If you get serious about Python, you should reconfigure your editor to always insert 4 spaces when the tab key is pressed. You can also try changing the amount of indentation provided by the tab character or, in some editors, print a visible symbol for the tab character so you can see where the problem is. Running the script with python -tt will also report inconsistent tab/space usage as an error.
A:
Unfortunately, it looks like the code formatter here on Stack Overflow turns everything into spaces. But the error is quite self-explanatory. Python, unlike the curly-brace languages (like C, C++, and Java) uses indentation to mark blocks of code. The error means that a block is improperly indented.
|
python help needed
|
import os
import sys, urllib2, urllib
import re
import time
from threading import Thread
class testit(Thread):
def __init__ (self):
Thread.__init__(self)
def run(self):
url = 'http://games.espnstar.asia/the-greatest-odi/post_brackets.php'
data = urllib.urlencode([('id',"btn_13_9_13"), ('matchNo',"13")])
req = urllib2.Request(url)
fd = urllib2.urlopen(req, data)
<TAB>fd.close()
<TAB>"""while 1:
data = fd.read(1024)
if not len(data):
break
sys.stdout.write(data)"""
url2 = 'http://games.espnstar.asia/the-greatest-odi/post_perc.php'
data2 = urllib.urlencode([('id',"btn_13_9_13"), ('matchNo',"13")])
req2 = urllib2.Request(url2)
fd2 = urllib2.urlopen(req2, data2)
<TAB>#prints current votes
while 1:
data2 = fd2.read(1024)
if not len(data2):
break
sys.stdout.write(data2)
<TAB>fd2.close()
print time.ctime()
print " ending thread\n"
i=-1
while i<0:
current = testit()
time.sleep(0.001) #decrease this like 0.0001 for more loops
current.start()
Hey can anybody help me finding out the error in the code
It's saying inconsistent use of tabs and spaces in indentation.
|
[
"I edited your post to replace all the tabs with <TAB>. You need to delete the indentation on those lines and line it back up with spaces. Some editors can do that for you, but I don't know which editor you are using.\nIf you get serious about Python, you should reconfigure your editor to always insert 4 spaces when the tab key is pressed. You can also try changing the amount of indentation provided by the tab character or in some editors print a visible symbol for the tab character so you can see where the problem is.\n",
"Unfortunately, it looks like the code formatter here on Stack Overflow turns everything into spaces. But the error is quite self-explanatory. Python, unlike the curly-brace languages (like C, C++, and Java) uses indentation to mark blocks of code. The error means that a block is improperly indented.\n"
] |
[
5,
4
] |
[] |
[] |
[
"indentation",
"python"
] |
stackoverflow_0001434276_indentation_python.txt
|
Q:
PyQt4 - Image Watermark
I'm trying to open a PNG image and write some text to it (a watermark) via QImage and QPainter. The code works 100% on Linux but when I run it on Windows XP (haven't tested with any other versions of Windows) the text is never written to the image. I have the code in a try/except block, but no errors are returned.
image = QtGui.QImage('demo.png')
painter = QtGui.QPainter()
painter.begin(image)
painter.setOpacity(0.8)
painter.setPen(QtCore.Qt.blue)
painter.setFont(QtGui.QFont('arial', 12))
painter.drawText(image.rect(), QtCore.Qt.AlignCenter, 'Watermark')
painter.end()
image.save('demo.png')
Using Python 2.6.2, PyQt 4.5.4
Any ideas?
A:
First thing that comes to my mind is maybe it isn't finding the specified font on Windows.
A:
My guess would be that whatever PNG lib you are using on Windows doesn't do transparency (properly).
|
PyQt4 - Image Watermark
|
I'm trying to open a PNG image and write some text to it (a watermark) via QImage and QPainter. The code works 100% on Linux but when I run it on Windows XP (haven't tested with any other versions of Windows) the text is never written to the image. I have the code in a try/except block, but no errors are returned.
image = QtGui.QImage('demo.png')
painter = QtGui.QPainter()
painter.begin(image)
painter.setOpacity(0.8)
painter.setPen(QtCore.Qt.blue)
painter.setFont(QtGui.QFont('arial', 12))
painter.drawText(image.rect(), QtCore.Qt.AlignCenter, 'Watermark')
painter.end()
image.save('demo.png')
Using Python 2.6.2, PyQt 4.5.4
Any ideas?
|
[
"First thing that comes to my mind is maybe it isn't finding the specified font on Windows.\n",
"My guess would be that whatever png lib you are using on Windows doesn't do tranparency (properly)\n"
] |
[
0,
0
] |
[] |
[] |
[
"pyqt4",
"python",
"qimage",
"qpainter"
] |
stackoverflow_0001434582_pyqt4_python_qimage_qpainter.txt
|
Q:
Invalid syntax error for "print expr"?
import os
import sys, urllib2, urllib
import re
import time
from threading import Thread
class testit(Thread):
def __init__ (self):
Thread.__init__(self)
def run(self):
url = 'http://games.espnstar.asia/the-greatest-odi/post_brackets.php'
data = urllib.urlencode([('id',"btn_13_9_13"), ('matchNo',"13")])
req = urllib2.Request(url)
fd = urllib2.urlopen(req, data)
"""while 1:
data = fd.read(1024)
if not len(data):
break
sys.stdout.write(data)"""
fd.close();
url2 = 'http://games.espnstar.asia/the-greatest-odi/post_perc.php'
data2 = urllib.urlencode([('id',"btn_13_9_13"), ('matchNo',"13")])
req2 = urllib2.Request(url2)
fd2 = urllib2.urlopen(req2, data2)
while 1:
data2 = fd2.read(1024)
if not len(data2):
break
sys.stdout.write(data2)
fd2.close()
print time.ctime()
print " ending thread\n"
i=-1
while i<0:
current = testit()
time.sleep(0.001)
current.start()
I'm getting an error stating invalid syntax for the line:
print time.ctime()
Please help me out.
A:
This is because (in Python 3.0 onwards at least), print is a function.
Use:
print (time.ctime())
and it should be fine.
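If the same source also has to run on Python 2.6+, you can opt in to the function form explicitly:
from __future__ import print_function
import time

print(time.ctime())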
|
Invalid syntax error for "print expr"?
|
import os
import sys, urllib2, urllib
import re
import time
from threading import Thread
class testit(Thread):
def __init__ (self):
Thread.__init__(self)
def run(self):
url = 'http://games.espnstar.asia/the-greatest-odi/post_brackets.php'
data = urllib.urlencode([('id',"btn_13_9_13"), ('matchNo',"13")])
req = urllib2.Request(url)
fd = urllib2.urlopen(req, data)
"""while 1:
data = fd.read(1024)
if not len(data):
break
sys.stdout.write(data)"""
fd.close();
url2 = 'http://games.espnstar.asia/the-greatest-odi/post_perc.php'
data2 = urllib.urlencode([('id',"btn_13_9_13"), ('matchNo',"13")])
req2 = urllib2.Request(url2)
fd2 = urllib2.urlopen(req2, data2)
while 1:
data2 = fd2.read(1024)
if not len(data2):
break
sys.stdout.write(data2)
fd2.close()
print time.ctime()
print " ending thread\n"
i=-1
while i<0:
current = testit()
time.sleep(0.001)
current.start()
I'm getting an error stating invalid syntax for the line:
print time.ctime()
Please help me out.
|
[
"This is because (in Python 3.0 onwards at least), print is a function.\nUse:\nprint (time.ctime())\n\nand it should be fine.\n"
] |
[
4
] |
[
"From this page:\n\nctime(...)\nctime(seconds) -> string\nConvert a time in seconds since the Epoch to a string in local time.\nThis is equivalent to asctime(localtime(seconds)).\n\nctime requires an argument and you aren't giving it one. If you're trying to get the current time, try time.time() instead. Or, if you're trying to convert the current time in seconds to a string in local time, you should try this:\ntime.ctime(time.time())\n\n"
] |
[
-2
] |
[
"python",
"syntax_error"
] |
stackoverflow_0001434751_python_syntax_error.txt
|
Q:
Python Multiprocessing exit error
I am seeing this when I press Ctrl-C to exit my app
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/usr/lib/python2.6/atexit.py", line 24, in _run_exitfuncs
func(*targs, **kargs)
File "/usr/lib/python2.6/multiprocessing/util.py", line 269, in _exit_function
p.join()
File "/usr/lib/python2.6/multiprocessing/process.py", line 119, in join
res = self._popen.wait(timeout)
File "/usr/lib/python2.6/multiprocessing/forking.py", line 117, in wait
return self.poll(0)
File "/usr/lib/python2.6/multiprocessing/forking.py", line 106, in poll
pid, sts = os.waitpid(self.pid, flag)
OSError: [Errno 4] Interrupted system call
Error in sys.exitfunc:
Traceback (most recent call last):
File "/usr/lib/python2.6/atexit.py", line 24, in _run_exitfuncs
func(*targs, **kargs)
File "/usr/lib/python2.6/multiprocessing/util.py", line 269, in _exit_function
p.join()
File "/usr/lib/python2.6/multiprocessing/process.py", line 119, in join
res = self._popen.wait(timeout)
File "/usr/lib/python2.6/multiprocessing/forking.py", line 117, in wait
return self.poll(0)
File "/usr/lib/python2.6/multiprocessing/forking.py", line 106, in poll
pid, sts = os.waitpid(self.pid, flag)
OSError: [Errno 4] Interrupted system call
I am using Twisted on top of my own stuff.
I registered a handler for the Ctrl-C signal with the following code:
def sigHandler(self, arg1, arg2):
if not self.backuped:
self.stopAll()
else:
out('central', 'backuped ALREADY, now FORCE exiting')
exit()
def stopAll(self):
self.parserM.shutdown()
for each in self.crawlM:
each.shutdown()
self.backup()
reactor.stop()
and when it signals the others to shut down, it tries to tell them to shut down nicely
through
exit = multiprocessing.Event()
def shutdown(self):
self.exit.set()
where all my processes run a loop of the form:
def run(self):
while not self.exit.is_set():
do something
out('crawler', 'crawler exited successfully')
Any idea what this error is? I only get it when I have more than one instance of a particular thread.
A:
This is related to interactions between OS system calls, signals, and how they're handled in the multiprocessing module. I'm not really sure if it's a bug or a feature, but it's in somewhat tricky territory as it's where Python meets the OS.
The problem is that multiprocessing is blocking on waitpid until the child it's waiting for has terminated. However, since you've installed a signal-handler for SIGINT and your program gets this signal, it interrupts the system call to execute your signal handler, and waitpid exits indicating that it was interrupted by a signal. The way python handles this case is by exceptions.
As a workaround, you can enclose the offending section(s) in a while-loop and try/catch blocks like this, either around where you wait for threads to finish, or subclass multiprocessing.Popen:
import errno
from multiprocessing import Process
p = Process( target=func, args=stuff )
p.start()
notintr = False
while not notintr:
try:
p.join() # "Offending code"
notintr = True
except OSError, ose:
if ose.errno != errno.EINTR:
raise ose
For mucking about with multiprocessing.forking.Popen you'd have to do something like this:
import errno
from multiprocessing import Process
from multiprocessing.forking import Popen
import os
# see /path/to/python/libs/multiprocessing/forking.py
class MyPopen(Popen):
def poll(self, flag=os.WNOHANG): # from forking.py
if self.returncode is None: # from forking.py
notintr = False
while not notintr:
try:
pid, sts = os.waitpid(self.pid, flag) # from forking.py
notintr = True
except OSError, ose:
if ose.errno != errno.EINTR:
raise ose
# Rest of Popen.poll from forking.py goes here
p = Process( target=func, args=stuff )
p._Popen = p
p.start()
p.join()
A:
I was seeing this, but it went away when I overrode signal handlers with my own. Use reactor.run(installSignalHandlers=False) and define your own functions for SIGINT, SIGTERM, etc.
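A rough sketch of that approach, assuming your own cleanup happens inside the handler:
import signal
from twisted.internet import reactor

def handle_shutdown(signum, frame):
    # do your own cleanup here, then stop the reactor safely
    reactor.callFromThread(reactor.stop)

signal.signal(signal.SIGINT, handle_shutdown)
signal.signal(signal.SIGTERM, handle_shutdown)

reactor.run(installSignalHandlers=False)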
|
Python Multiprocessing exit error
|
I am seeing this when I press Ctrl-C to exit my app
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/usr/lib/python2.6/atexit.py", line 24, in _run_exitfuncs
func(*targs, **kargs)
File "/usr/lib/python2.6/multiprocessing/util.py", line 269, in _exit_function
p.join()
File "/usr/lib/python2.6/multiprocessing/process.py", line 119, in join
res = self._popen.wait(timeout)
File "/usr/lib/python2.6/multiprocessing/forking.py", line 117, in wait
return self.poll(0)
File "/usr/lib/python2.6/multiprocessing/forking.py", line 106, in poll
pid, sts = os.waitpid(self.pid, flag)
OSError: [Errno 4] Interrupted system call
Error in sys.exitfunc:
Traceback (most recent call last):
File "/usr/lib/python2.6/atexit.py", line 24, in _run_exitfuncs
func(*targs, **kargs)
File "/usr/lib/python2.6/multiprocessing/util.py", line 269, in _exit_function
p.join()
File "/usr/lib/python2.6/multiprocessing/process.py", line 119, in join
res = self._popen.wait(timeout)
File "/usr/lib/python2.6/multiprocessing/forking.py", line 117, in wait
return self.poll(0)
File "/usr/lib/python2.6/multiprocessing/forking.py", line 106, in poll
pid, sts = os.waitpid(self.pid, flag)
OSError: [Errno 4] Interrupted system call
I am using Twisted on top of my own stuff.
I registered a handler for the Ctrl-C signal with the following code:
def sigHandler(self, arg1, arg2):
if not self.backuped:
self.stopAll()
else:
out('central', 'backuped ALREADY, now FORCE exiting')
exit()
def stopAll(self):
self.parserM.shutdown()
for each in self.crawlM:
each.shutdown()
self.backup()
reactor.stop()
and when it signals the others to shut down, it tries to tell them to shut down nicely
through
exit = multiprocessing.Event()
def shutdown(self):
self.exit.set()
where all my processes are in some form,
def run(self):
while not self.exit.is_set():
do something
out('crawler', 'crawler exited successfully')
Any idea what this error is? I only get it when I have more than one instance of a particular thread.
|
[
"This is related to interactions OS system calls, signals and how it's handled in the multiprocessing module. I'm not really sure if it's a bug or a feature, but it's in somewhat tricky territory as it's where python meets the os.\nThe problem is that multiprocessing is blocking on waitpid until the child it's waiting for has terminated. However, since you've installed a signal-handler for SIGINT and your program gets this signal, it interrupts the system call to execute your signal handler, and waitpid exits indicating that it was interrupted by a signal. The way python handles this case is by exceptions.\nAs a workaround, you can enclose the offending section(s) in a while-loop and try/catch blocks like this, either around where you wait for threads to finish, or subclass multiprocessing.Popen:\nimport errno\nfrom multiprocessing import Process\n\np = Process( target=func, args=stuff )\np.start()\nnotintr = False\nwhile not notintr:\n try:\n p.join() # \"Offending code\"\n notintr = True\n except OSError, ose:\n if ose.errno != errno.EINTR:\n raise ose\n\nFor mucking about with multiprocessing.forking.Popen you'd have to do something like this:\nimport errno\nfrom multiprocessing import Process\nfrom multiprocessing.forking import Popen\nimport os\n\n# see /path/to/python/libs/multiprocessing/forking.py\nclass MyPopen(Popen):\n def poll(self, flag=os.WNOHANG): # from forking.py\n if self.returncode is None: # from forking.py\n notintr = False\n while not notintr:\n try:\n pid, sts = os.waitpid(self.pid, flag) # from forking.py\n notintr = True\n except OSError, ose:\n if ose.errno != errno.EINTR:\n raise ose\n # Rest of Popen.poll from forking.py goes here\n\np = Process( target=func args=stuff )\np._Popen = p\np.start()\np.join()\n\n",
"I was seeing this, but it went away when I overrode signal handlers with my own. Use reactor.run(installSignalHandlers=False) and define your own functions for SIGINT, SIGTERM, etc.\n"
] |
[
6,
0
] |
[] |
[] |
[
"exception",
"multiprocessing",
"python"
] |
stackoverflow_0001238349_exception_multiprocessing_python.txt
|
Q:
Using UTM with geodjango
I'm looking into using the UTM coordinate system with geodjango.
And I can't figure out how to get the data in properly.
I've been browsing the documentation and it seems that the "GEOSGeometry(geo_input, srid=None)" or "OGRGeometry" could be used with an EWKT, but I can't figure out how to format the data.
It looks like the UTM SRID is: 2029
From the wikipedia article the format is written like this:
[UTMZone][N or S] [easting] [northing]
17N 630084 4833438
So I tried the following with no luck:
>>> from django.contrib.gis.geos import *
>>> pnt = GEOSGeometry('SRID=2029;POINT(17N 630084 4833438)')
GEOS_ERROR: ParseException: Expected number but encountered word: '17N'
>>>
>>> from django.contrib.gis.gdal import OGRGeometry
>>> pnt = OGRGeometry('SRID=2029;POINT(17N 630084 4833438)')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python26\lib\site-packages\django\contrib\gis\gdal\geometries.py", line 106, in __init__
ogr_t = OGRGeomType(geom_input)
File "C:\Python26\lib\site-packages\django\contrib\gis\gdal\geomtype.py", line 31, in __init__
raise OGRException('Invalid OGR String Type "%s"' % type_input)
django.contrib.gis.gdal.error.OGRException: Invalid OGR String Type "srid=2029;point(17n 630084 4833438)"
Are there any examples available to show how this is done?
Maybe I should just do any necessary calculations in UTM and convert to decimal degrees?
In that case, does GEOS or other tools in geodjango provide conversion utilities?
A:
The UTM zone (17N) is already specified by the spatial reference system -- SRID 2029, so you don't need to include it in the WKT you pass to the GEOSGeometry constructor.
>>> from django.contrib.gis.geos import *
>>> pnt = GEOSGeometry('SRID=2029;POINT(630084 4833438)')
>>> (pnt.x, pnt.y)
(630084.0, 4833438.0)
>>> pnt.srid
2029
Then, for example:
>>> pnt.transform(4326) # Transform to WGS84
>>> (pnt.x, pnt.y)
(-79.387137066054038, 43.644504290860461)
>>> pnt.srid
4326
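For the reverse direction, a hedged example (transform() needs GDAL support; the coordinates are approximate):
>>> pnt = GEOSGeometry('SRID=4326;POINT(-79.3871 43.6445)')
>>> pnt.transform(2029)   # project back to UTM zone 17N
>>> (pnt.x, pnt.y)        # easting/northing in metres, roughly (630084, 4833438)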
|
Using UTM with geodjango
|
I'm looking into using the UTM coordinate system with geodjango.
And I can't figure out how to get the data in properly.
I've been browsing the documentation and it seems that the "GEOSGeometry(geo_input, srid=None)" or "OGRGeometry" could be used with an EWKT, but I can't figure out how to format the data.
It looks like the UTM SRID is: 2029
From the wikipedia article the format is written like this:
[UTMZone][N or S] [easting] [northing]
17N 630084 4833438
So I tried the following with no luck:
>>> from django.contrib.gis.geos import *
>>> pnt = GEOSGeometry('SRID=2029;POINT(17N 630084 4833438)')
GEOS_ERROR: ParseException: Expected number but encountered word: '17N'
>>>
>>> from django.contrib.gis.gdal import OGRGeometry
>>> pnt = OGRGeometry('SRID=2029;POINT(17N 630084 4833438)')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python26\lib\site-packages\django\contrib\gis\gdal\geometries.py", line 106, in __init__
ogr_t = OGRGeomType(geom_input)
File "C:\Python26\lib\site-packages\django\contrib\gis\gdal\geomtype.py", line 31, in __init__
raise OGRException('Invalid OGR String Type "%s"' % type_input)
django.contrib.gis.gdal.error.OGRException: Invalid OGR String Type "srid=2029;point(17n 630084 4833438)"
Are there any examples available to show how this is done?
Maybe I should just do any necessary calculations in UTM and convert to decimal degrees?
In that case, does GEOS or other tools in geodjango provide conversion utilities?
|
[
"The UTM zone (17N) is already specified by the spatial reference system -- SRID 2029, so you don't need to include it in the WKT you pass to the GEOSGeometry constructor.\n>>> from django.contrib.gis.geos import *\n>>> pnt = GEOSGeometry('SRID=2029;POINT(630084 4833438)')\n>>> (pnt.x, pnt.y)\n(630084.0, 4833438.0)\n>>> pnt.srid\n2029\n\nThen, for example:\n>>> pnt.transform(4326) # Transform to WGS84\n>>> (pnt.x, pnt.y)\n(-79.387137066054038, 43.644504290860461)\n>>> pnt.srid\n4326\n\n"
] |
[
6
] |
[] |
[] |
[
"django",
"gdal",
"geodjango",
"geos",
"python"
] |
stackoverflow_0001332376_django_gdal_geodjango_geos_python.txt
|
Q:
Matplotlib Legend for Scatter with custom colours
I'm a bit of a newbie at this and am trying to create a scatter chart with custom bubble sizes and colours. The chart displays fine, but how do I get a legend saying what the colours refer to? This is as far as I've got:
inc = []
out = []
bal = []
col = []
fig=Figure()
ax=fig.add_subplot(111)
inc = (30000,20000,70000)
out = (80000,30000,40000)
bal = (12000,10000,6000)
col = (1,2,3)
leg = ('proj1','proj2','proj3')
ax.scatter(inc, out, s=bal, c=col)
ax.axis([0, 100000, 0, 100000])
ax.set_xlabel('income', fontsize=20)
ax.set_ylabel('Expenditure', fontsize=20)
ax.set_title('Project Financial Positions %s' % dt)
ax.grid(True)
canvas=FigureCanvas(fig)
response=HttpResponse(content_type='image/png')
canvas.print_png(response)
This thread was helpful, but I couldn't get it to solve my problem: Matplotlib: Legend not displayed properly
A:
Maybe this example is helpful.
In general, the items in the legend are related to some kind of plotted object. The scatter function/method treats all circles as a single object, see:
print type(ax.scatter(...))
Thus the solution is to create multiple objects, hence calling scatter multiple times.
Unfortunately, newer versions of matplotlib seem not to use a rectangle in the legend. Thus the legend will contain very large circles, since you increased the size of your scatter plot objects.
The legend function has a markerscale keyword argument to control the size of legend markers, but it seems to be broken.
Update:
The Legend guide recommends using Proxy Artist in similar cases. The Color API explains valid fc values.
from pylab import legend
from matplotlib.patches import Rectangle

p1 = Rectangle((0, 0), 1, 1, fc="b")
p2 = Rectangle((0, 0), 1, 1, fc="g")
p3 = Rectangle((0, 0), 1, 1, fc="r")
legend((p1, p2, p3), ('proj1','proj2','proj3'))
To get the colors used previously in a plot, use the above example like:
pl1, = plot(x1, y1, '.', alpha=0.1, label='plot1')
pl2, = plot(x2, y2, '.', alpha=0.1, label='plot2')
p1 = Rectangle((0, 0), 1, 1, fc=pl1.get_color())
p2 = Rectangle((0, 0), 1, 1, fc=pl2.get_color())
legend((p1, p2), (pl1.get_label(), pl2.get_label()), loc='best')
This example will make a plot whose legend entries are the proxy rectangles, coloured to match the plotted points.
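Adapting the proxy-artist approach to the question's Figure/FigureCanvas setup might look roughly like this (a hedged sketch; the colour letters are illustrative):
from matplotlib.figure import Figure
from matplotlib.patches import Rectangle

fig = Figure()
ax = fig.add_subplot(111)
colours = ('b', 'g', 'r')
ax.scatter((30000, 20000, 70000), (80000, 30000, 40000),
           s=(12000, 10000, 6000), c=colours)
proxies = [Rectangle((0, 0), 1, 1, fc=c) for c in colours]
ax.legend(proxies, ('proj1', 'proj2', 'proj3'), loc='best')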
|
Matplotlib Legend for Scatter with custom colours
|
I'm a bit of a newbie at this and am trying to create a scatter chart with custom bubble sizes and colours. The chart displays fine, but how do I get a legend saying what the colours refer to? This is as far as I've got:
inc = []
out = []
bal = []
col = []
fig=Figure()
ax=fig.add_subplot(111)
inc = (30000,20000,70000)
out = (80000,30000,40000)
bal = (12000,10000,6000)
col = (1,2,3)
leg = ('proj1','proj2','proj3')
ax.scatter(inc, out, s=bal, c=col)
ax.axis([0, 100000, 0, 100000])
ax.set_xlabel('income', fontsize=20)
ax.set_ylabel('Expenditure', fontsize=20)
ax.set_title('Project Financial Positions %s' % dt)
ax.grid(True)
canvas=FigureCanvas(fig)
response=HttpResponse(content_type='image/png')
canvas.print_png(response)
This thread was helpful, but I couldn't get it to solve my problem: Matplotlib: Legend not displayed properly
|
[
"Maybe this example is helpful.\nIn general, the items in the legend are related with some kind of plotted object. The scatter function/method treats all circles as a single object, see:\nprint type(ax.scatter(...))\n\nThus the solution is to create multiple objects. Hence, calling scatter multiple times.\nUnfortunately, newer version of matplotlib seem not to use a rectangle in the legend. Thus the legend will contain very large circles, since you increased the size of your scatter plot objects.\nThe legend function as a markerscale keyword argument to control the size of legend markers, but it seems to be broken.\nUpdate:\nThe Legend guide recommends using Proxy Artist in similar cases. The Color API explains valid fc values.\np1 = Rectangle((0, 0), 1, 1, fc=\"b\")\np2 = Rectangle((0, 0), 1, 1, fc=\"g\")\np3 = Rectangle((0, 0), 1, 1, fc=\"r\")\nlegend((p1, p2, p3), ('proj1','proj2','proj3'))\n\nTo get the colors used previously in a plot, use the above example like:\npl1, = plot(x1, y1, '.', alpha=0.1, label='plot1')\npl2, = plot(x2, y2, '.', alpha=0.1, label='plot2')\np1 = Rectangle((0, 0), 1, 1, fc=pl1.get_color())\np2 = Rectangle((0, 0), 1, 1, fc=pl2.get_color())\nlegend((p1, p2), (pl1.get_label(), pl2.get_label()), loc='best')\n\nThis example will make a plot like:\n\n"
] |
[
10
] |
[] |
[] |
[
"charts",
"matplotlib",
"python"
] |
stackoverflow_0001435535_charts_matplotlib_python.txt
|
Q:
Getting Started with Tornado
After installing the necessary packages through apt (python 2.5, simplejson etc) I get an error when I try to run the demos.
: Request instance has no attribute 'responseHeaders'
/usr/lib/python2.5/site-packages/tornado/web.py, line 404 in flush
402 for k,v in self._generate_headers():
403 if isinstance(v, list):
404 self.request.responseHeaders.setRawHeaders(k, v)
405 else:
Self
request
twisted.web.server.Request instance @ 0x85da24c
Locals
self
k 'Set-Cookie'
v
List instance @ 0x85da46c
Here is proof that the necessary packages are installed
/web/tmp/tornado/demos/helloworld# dpkg -l | grep python2.5
ii python2.5 2.5.2-2ubuntu6 An interactive high-level object-oriented la
ii python2.5-dev 2.5.2-2ubuntu6 Header files and a static library for Python
ii python2.5-minimal 2.5.2-2ubuntu6 A minimal subset of the Python language
# dpkg -l | grep simplejson
ii python-simplejson 1.7.3-1
# dpkg -l | grep pycurl
ii python-pycurl 7.16.4-1
Seems that not too many people have been trying out this Tornado thing from friendfeed. Anyone have any suggestions/hints to help me get up and running with it?
A:
I was under the impression tornado didn't depend on twisted. Have you tried the "official" version? Line 404 there is completely different.
http://github.com/facebook/tornado/blob/master/tornado/web.py
def flush(self, include_footers=False):
"""Flushes the current output buffer to the nextwork."""
if self.application._wsgi:
raise Exception("WSGI applications do not support flush()") #line 404
if not self._headers_written:
self._headers_written = True
headers = self._generate_headers()
else:
headers = ""
Other than that, I'd try installing twisted and see what happens
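Either way, a quick sanity check (illustrative) to see which tornado you are actually importing:
import tornado.web
print tornado.web.__file__  # should point at the official package, not a patched fork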
|
Getting Started with Tornado
|
After installing the necessary packages through apt (python 2.5, simplejson etc) I get an error when I try to run the demos.
: Request instance has no attribute 'responseHeaders'
/usr/lib/python2.5/site-packages/tornado/web.py, line 404 in flush
402 for k,v in self._generate_headers():
403 if isinstance(v, list):
404 self.request.responseHeaders.setRawHeaders(k, v)
405 else:
Self
request
twisted.web.server.Request instance @ 0x85da24c
Locals
self
k 'Set-Cookie'
v
List instance @ 0x85da46c
Here is proof that the necessary packages are installed
/web/tmp/tornado/demos/helloworld# dpkg -l | grep python2.5
ii python2.5 2.5.2-2ubuntu6 An interactive high-level object-oriented la
ii python2.5-dev 2.5.2-2ubuntu6 Header files and a static library for Python
ii python2.5-minimal 2.5.2-2ubuntu6 A minimal subset of the Python language
# dpkg -l | grep simplejson
ii python-simplejson 1.7.3-1
# dpkg -l | grep pycurl
ii python-pycurl 7.16.4-1
Seems that not too many people have been trying out this Tornado thing from friendfeed. Anyone have any suggestions/hints to help me get up and running with it?
|
[
"I was under the impression tornado didn't depend on twisted. Have you tried the \"official\" version? line 404 is completely different.\nhttp://github.com/facebook/tornado/blob/master/tornado/web.py\ndef flush(self, include_footers=False):\n \"\"\"Flushes the current output buffer to the nextwork.\"\"\"\n if self.application._wsgi:\n raise Exception(\"WSGI applications do not support flush()\") #line 404\n if not self._headers_written:\n self._headers_written = True\n headers = self._generate_headers()\n else:\n headers = \"\"\n\nOther than that, I'd try installing twisted and see what happens\n"
] |
[
2
] |
[] |
[] |
[
"python",
"tornado",
"twisted"
] |
stackoverflow_0001435896_python_tornado_twisted.txt
|
Q:
manage.py syncdb doesn't add tables for some models
My second not-so-adept question of the day: I have a django project with four installed apps. When I run manage.py syncdb, it only creates tables for two of them. To my knowledge, there are no problems in any of my models files, and all the apps are specified in INSTALLED_APPS in my settings file. manage.py syncdb just seems to ignore two of my apps.
One thing that is unique about the two "ignored" apps is that their models files import models from the other two apps and use them as foreign keys (I don't know if this is good or bad practice, but it helps me stay organized). I don't think that's the problem though, because I commented out the foreign-key-having models and the tables still weren't created. I'm stumped.
UPDATE: When I comment out the lines importing models files from other apps, syncdb creates my tables. Perhaps I'm not understanding something about how models files in separate apps relate to each other. I thought it was OK to use a model from another app as a foreign key by simply importing it. Not true?
A:
I think I ran across something similar.
I had an issue where a model wasn't being reset.
In this case it turned out that there was an error in my models that wasn't being spit out.
Although I think syncdb, when run, spit out some kind of error.
In any case try to import your models file from the shell and see if you can.
$ manage.py shell
>>> from myapp import models
>>>
If there's an error in the file, this should point it out.
According to your update, it sounds like you may have a cross-import issue.
Instead of:
from app1.models import X
class ModelA(models.Model):
fk = models.ForeignKey(X)
Try:
class ModelA(models.Model):
fk = models.ForeignKey("app1.X")
... although I think you should get an error on syncdb.
A:
Unfortunately, manage.py silently fails to load an app where there's an import error in its models.py (ticket #10706). Chances are that there's a typo in one of your models.py files... check all of the import statements closely (or use pylint).
Recently syncdb stopped loading a couple of my apps, and sqlall gave me the error "App with label foo could not be found". Not knowing that this sometimes means "App with label foo was found but could not be loaded due to ImportError being raised", it took me half an hour to realise that I was trying to import 'haslib' instead of 'hashlib' in one of my models.py files.
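If you'd rather not probe each app by hand in the shell, a minimal hedged loop (app names are placeholders) will surface the hidden ImportError directly:
for app in ('app1', 'app2', 'app3', 'app4'):
    try:
        __import__(app + '.models')   # re-raises the import error models.py hides
    except ImportError, e:
        print app, 'failed to import:', e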
|
manage.py syncdb doesn't add tables for some models
|
My second not-so-adept question of the day: I have a django project with four installed apps. When I run manage.py syncdb, it only creates tables for two of them. To my knowledge, there are no problems in any of my models files, and all the apps are specified in INSTALLED_APPS in my settings file. manage.py syncdb just seems to ignore two of my apps.
One thing that is unique about the two "ignored" apps is that their models files import models from the other two apps and use them as foreign keys (I don't know if this is good or bad practice, but it helps me stay organized). I don't think that's the problem though, because I commented out the foreign-key-having models and the tables still weren't created. I'm stumped.
UPDATE: When I comment out the lines importing models files from other apps, syncdb creates my tables. Perhaps I'm not understanding something about how models files in separate apps relate to each other. I thought it was OK to use a model from another app as a foreign key by simply importing it. Not true?
|
[
"I think I ran across something similar.\nI had an issue where a model wasn't being reset.\nIn this case it turned out that there was an error in my models that wasn't being spit out.\nAlthough I think syncdb, when run, spit out some kind of error.\nIn any case try to import your models file from the shell and see if you can.\n$ manage.py shell\n>>> from myapp import models\n>>>\n\nIf theres an error in the file this should point it out.\nAccording to your update, it sounds like you may have a cross-import issue.\nInstead of:\nfrom app1.models import X\n\nclass ModelA(models.Model):\n fk = models.ForeignKey(X)\n\nTry:\nclass ModelA(models.Model):\n fk = models.ForeignKey(\"app1.X\")\n\n... although I think you should get an error on syncdb.\n",
"Unfortunately, manage.py silently fails to load an app where there's an import error in its models.py (ticket #10706). Chances are that there's a typo in one of your models.py files... check all of the import statements closely (or use pylint). \nRecently syncdb stoped loading a couple of my apps, and sqlall gave me the error \"App with label foo could not be found\". Not knowing that this sometimes means \"App with label foo was found but could not be loaded due to ImportError being raised\", it took me half an hour to realise that I was trying to import 'haslib' instead of 'hashlib' in one of my models.py files.\n"
] |
[
8,
6
] |
[] |
[] |
[
"django",
"django_models",
"django_syncdb",
"python"
] |
stackoverflow_0001435523_django_django_models_django_syncdb_python.txt
|
Q:
Socket in use error when reusing sockets
I am writing an XMLRPC client in c++ that is intended to talk to a python XMLRPC server.
Unfortunately, at this time, the python XMLRPC server is only capable of fielding one request on a connection before it shuts down; I discovered this thanks to mhawke's response to my previous query about a related subject.
Because of this, I have to create a new socket connection to my python server every time I want to make an XMLRPC request. This means the creation and deletion of a lot of sockets. Everything works fine until I approach ~4000 requests. At this point I get socket error 10048, "Socket in use".
I've tried sleeping the thread to let winsock fix its file descriptors, a trick that worked when a python client of mine had an identical issue, to no avail.
I've tried the following
int err = setsockopt(s_,SOL_SOCKET,SO_REUSEADDR,(char*)TRUE,sizeof(BOOL));
with no success.
I'm using winsock 2.0, so WSADATA::iMaxSockets shouldn't come into play, and either way, I checked and it's set to 0 (I assume that means infinity).
4000 requests doesn't seem like an outlandish number of requests to make during the run of an application. Is there some way to use SO_KEEPALIVE on the client side while the server continually closes and reopens?
Am I totally missing something?
A:
The problem is being caused by sockets hanging around in the TIME_WAIT state which is entered once you close the client's socket. By default the socket will remain in this state for 4 minutes before it is available for reuse. Your client (possibly helped by other processes) is consuming them all within a 4 minute period. See this answer for a good explanation and a possible non-code solution.
Windows dynamically allocates port numbers in the range 1024-5000 (3977 ports) when you do not explicitly bind the socket address. This Python code demonstrates the problem:
import socket
sockets = []
while True:
s = socket.socket()
s.connect(('some_host', 80))
sockets.append(s.getsockname())
s.close()
print len(sockets)
sockets.sort()
print "Lowest port: ", sockets[0][1], " Highest port: ", sockets[-1][1]
# on Windows you should see something like this...
3960
Lowest port: 1025 Highest port: 5000
If you try to run this immediately again, it should fail very quickly since all dynamic ports are in the TIME_WAIT state.
There are a few ways around this:

Manage your own port assignments and use bind() to explicitly bind your client socket to a specific port that you increment each time you create a socket. You'll still have to handle the case where a port is already in use, but you will not be limited to dynamic ports. e.g.

port = 5000
while True:
    s = socket.socket()
    s.bind(('your_host', port))
    s.connect(('some_host', 80))
    s.close()
    port += 1

Fiddle with the SO_LINGER socket option. I have found that this sometimes works in Windows (although not exactly sure why):

s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, 1)

I don't know if this will help in your particular application, however, it is possible to send multiple XMLRPC requests over the same connection using the multicall method (see the sketch below). Basically this allows you to accumulate several requests and then send them all at once. You will not get any responses until you actually send the accumulated requests, so you can essentially think of this as batch processing - does this fit in with your application design?
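For the multicall option, a hedged client-side sketch with xmlrpclib (the server must support system.multicall; the URL and method names are placeholders):
import xmlrpclib

proxy = xmlrpclib.ServerProxy('http://localhost:8000')
batch = xmlrpclib.MultiCall(proxy)
batch.method_one(1)        # calls are queued locally, nothing is sent yet
batch.method_two('abc')
results = list(batch())    # one request carries the whole batch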
A:
Update:
I tossed this into the code and it seems to be working now.
BOOL x = TRUE; // value for SO_REUSEADDR (was previously undeclared)
if(::connect(s_, (sockaddr *) &addr, sizeof(sockaddr)))
{
    int err = WSAGetLastError();
    if(err == 10048) // if socket-in-use error, force kill and reopen socket
    {
        closesocket(s_);
        WSACleanup();
        WSADATA info;
        WSAStartup(MAKEWORD(2,0), &info);
        s_ = socket(AF_INET,SOCK_STREAM,0);
        setsockopt(s_,SOL_SOCKET,SO_REUSEADDR,(char*)&x,sizeof(BOOL));
    }
}
Basically, if you encounter the 10048 error (socket in use), you can simply close the socket, call WSACleanup, restart WSA with WSAStartup, then reset the socket and its sockopt
(the last sockopt may not be necessary).
I must have been missing the WSACleanup/WSAStartup calls before, because closesocket() and socket() were definitely being called.
This error only occurs once every ~4000 calls.
I am curious as to why this may be, even though this seems to fix it.
If anyone has any input on the subject I would be very curious to hear it.
A:
Do you close the sockets after using them?
|
Socket in use error when reusing sockets
|
I am writing an XMLRPC client in c++ that is intended to talk to a python XMLRPC server.
Unfortunately, at this time, the python XMLRPC server is only capable of fielding one request on a connection before it shuts down; I discovered this thanks to mhawke's response to my previous query about a related subject.
Because of this, I have to create a new socket connection to my python server every time I want to make an XMLRPC request. This means the creation and deletion of a lot of sockets. Everything works fine until I approach ~4000 requests. At this point I get socket error 10048, "Socket in use".
I've tried sleeping the thread to let winsock fix its file descriptors, a trick that worked when a python client of mine had an identical issue, to no avail.
I've tried the following
int err = setsockopt(s_,SOL_SOCKET,SO_REUSEADDR,(char*)TRUE,sizeof(BOOL));
with no success.
I'm using winsock 2.0, so WSADATA::iMaxSockets shouldn't come into play, and either way, I checked and it's set to 0 (I assume that means infinity).
4000 requests doesn't seem like an outlandish number of requests to make during the run of an application. Is there some way to use SO_KEEPALIVE on the client side while the server continually closes and reopens?
Am I totally missing something?
|
[
"The problem is being caused by sockets hanging around in the TIME_WAIT state which is entered once you close the client's socket. By default the socket will remain in this state for 4 minutes before it is available for reuse. Your client (possibly helped by other processes) is consuming them all within a 4 minute period. See this answer for a good explanation and a possible non-code solution.\nWindows dynamically allocates port numbers in the range 1024-5000 (3977 ports) when you do not explicitly bind the socket address. This Python code demonstrates the problem:\nimport socket\nsockets = []\nwhile True:\n s = socket.socket()\n s.connect(('some_host', 80))\n sockets.append(s.getsockname())\n s.close()\n\nprint len(sockets) \nsockets.sort()\nprint \"Lowest port: \", sockets[0][1], \" Highest port: \", sockets[-1][1]\n# on Windows you should see something like this...\n3960\nLowest port: 1025 Highest port: 5000\n\nIf you try to run this immeditaely again, it should fail very quickly since all dynamic ports are in the TIME_WAIT state.\nThere are a few ways around this:\n\nManage your own port assignments and\nuse bind() to explicitly bind your\nclient socket to a specific port\nthat you increment each time your\ncreate a socket. You'll still have\nto handle the case where a port is\nalready in use, but you will not be\nlimited to dynamic ports. e.g.\nport = 5000\nwhile True:\n s = socket.socket()\n s.bind(('your_host', port))\n s.connect(('some_host', 80))\n s.close()\n port += 1\n\nFiddle with the SO_LINGER socket\noption. I have found that this\nsometimes works in Windows (although\nnot exactly sure why):\ns.setsockopt(socket.SOL_SOCKET,\nsocket.SO_LINGER, 1)\nI don't know if this will help in\nyour particular application,\nhowever, it is possible to send\nmultiple XMLRPC requests over the\nsame connection using the\nmulticall method. Basically\nthis allows you to accumulate\nseveral requests and then send them\nall at once. You will not get any\nresponses until you actually send\nthe accumulated requests, so you can\nessentially think of this as batch\nprocessing - does this fit in with\nyour application design?\n\n",
"Update:\nI tossed this into the code and it seems to be working now.\nif(::connect(s_, (sockaddr *) &addr, sizeof(sockaddr))) \n {\n int err = WSAGetLastError();\n if(err == 10048) //if socket in user error, force kill and reopen socket\n {\n closesocket(s_);\n WSACleanup();\n WSADATA info;\n WSAStartup(MAKEWORD(2,0), &info);\n s_ = socket(AF_INET,SOCK_STREAM,0);\n setsockopt(s_,SOL_SOCKET,SO_REUSEADDR,(char*)&x,sizeof(BOOL));\n }\n }\n\nBasically, if you encounter the 10048 error (socket in use), you can simply close the socket, call cleanup, and restart WSA, the reset the socket and its sockopt\n(the last sockopt may not be necessary)\ni must have been missing the WSACleanup/WSAStartup calls before, because closesocket() and socket() were definitely being called\nthis error only occurs once every 4000ish calls.\nI am curious as to why this may be, even though this seems to fix it.\nIf anyone has any input on the subject i would be very curious to hear it\n",
"Do you close the sockets after using it?\n"
] |
[
11,
1,
0
] |
[] |
[] |
[
"c++",
"python",
"sockets",
"xml_rpc"
] |
stackoverflow_0001434914_c++_python_sockets_xml_rpc.txt
|
Q:
state of HTML after onload javascript
Many webpages use onload JavaScript to manipulate their DOM. Is there a way I can automate accessing the state of the HTML after these JavaScript operations?
A tool like wget is not useful here because it just downloads the original source.
Is there perhaps a way to use a web browser rendering engine?
Ideally I am after a solution that I can interface with from Python.
thanks!
A:
The only good way I know to do such things is to automate a browser, for example via Selenium RC. If you have no idea of how to deduce that the page has finished running the relevant javascript, then, just like a real live user visiting that page, you'll just have to wait a while, grab a snapshot, wait some more, grab another, and check there was no change between them to convince yourself that it's really finished.
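A hedged sketch with Selenium RC's Python client (assumes the Selenium server is running on port 4444; the URL and path are placeholders):
from selenium import selenium

s = selenium('localhost', 4444, '*firefox', 'http://example.com/')
s.start()
s.open('/page-with-onload.html')
s.wait_for_page_to_load(30000)   # then poll/compare snapshots as described above
html = s.get_html_source()       # DOM state after the onload scripts have run
s.stop()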
A:
Please see related info at stackoverflow:
screen-scraping
Screen Scraping from a web page with a lot of Javascript
|
state of HTML after onload javascript
|
Many webpages use onload JavaScript to manipulate their DOM. Is there a way I can automate accessing the state of the HTML after these JavaScript operations?
A tool like wget is not useful here because it just downloads the original source.
Is there perhaps a way to use a web browser rendering engine?
Ideally I am after a solution that I can interface with from Python.
thanks!
|
[
"The only good way I know to do such things is to automate a browser, for example via Selenium RC. If you have no idea of how to deduce that the page has finished running the relevant javascript, then, just a real live user visiting that page, you'll just have to wait a while, grab a snapshot, wait some more, grab another, and check there was no change between them to convince yourself that it's really finished.\n",
"Please see related info at stackoverflow:\n\nscreen-scraping \nScreen Scraping from a web page with a lot of Javascript\n\n"
] |
[
2,
1
] |
[] |
[] |
[
"html",
"javascript",
"python",
"screen_scraping"
] |
stackoverflow_0001436211_html_javascript_python_screen_scraping.txt
|
Q:
Python: bind child class at run time
Can anyone tell me how to bind to a specific child class at run time in the following code? I want the mCar instance in the following example to resolve to class Truck or Compact according to command-line options.
class Car(object):
pass
class Truck(Car):
pass
class Compact(Car):
pass
and a instance of Car
mCar = Car()
A:
You mean like this?
import sys

car_classes = {
    'car' : Car,
    'truck' : Truck,
    'compact' : Compact
}

if __name__ == '__main__':
    option = sys.argv[1]
    mCar = car_classes[option]()
    print 'I am a', mCar.__class__.__name__
A:
As a side note, while not particularly recommended, it IS possible to assign a different value to self.__class__ -- be that in __init__ or anywhere else. Do notice that this will change the lookups for class-level names (such as methods), but per se it will not alter the instance's state (nor implicitly invoke any kind of initialization -- you'll have to do it explicitly if you need that to happen). These subtleties are part of why such tricks are not particularly recommended (along with the general cultural bias of Pythonistas against "black magic";-), and a "factory function" (which in especially simple cases can be reduced to a dict lookup, as in GHZ's answer) is the recommended approach.
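A hedged illustration of that self.__class__ trick (again, the factory lookup is the cleaner route):
class Car(object):
    def __init__(self, kind):
        if kind == 'truck':
            self.__class__ = Truck    # changes method lookup only;
                                      # it does NOT re-run any __init__
        elif kind == 'compact':
            self.__class__ = Compact

class Truck(Car): pass
class Compact(Car): pass

mCar = Car('truck')
print type(mCar).__name__   # -> 'Truck'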
|
Python: bind child class at run time
|
Can anyone tell me how to bind to a specific child class at run time in the following code? I want the mCar instance in the following example to resolve to class Truck or Compact according to command-line options.
class Car(object):
pass
class Truck(Car):
pass
class Compact(Car):
pass
and a instance of Car
mCar = Car()
|
[
"You mean like this?\ncar_classes = {\n'car' : Car,\n'truck' : Truck,\n'compact' : Compact\n}\n\nif __name__ == '__main__':\n option = sys.argv[1]\n mCar = car_classes[option]()\n print 'I am a', mCar.__class__.__name__\n\n",
"As a side note, while not particularly recommended, it IS possible to assign a different value to self.__class__ -- be that in __init__ or anywhere else. Do notice that this will change the lookups for class-level names (such as methods), but per se it will not alter the instance's state (nor implcitly invoke any kind of initialization -- you'll have to do it explicitly if you need that to happen)... these subtleties are part of why such tricks are not particularly recommended (along with the general cultural bias of Pythonistas against \"black magic\";-) and a \"factory function\" (which in especially simple cases can be reduce to a dict lookup, as in GHZ's answer) is the recommended approach.\n"
] |
[
4,
1
] |
[] |
[] |
[
"class",
"python"
] |
stackoverflow_0001434949_class_python.txt
|
Q:
Thread error in Python & PyQt
I noticed that when the function setModel is executed in a parallel thread (I tried threading.Timer and threading.Thread), I get this:
QObject: Cannot create children for a parent that is in a different thread.
(Parent is QHeaderView(0x1c93ed0), parent's thread is QThread(0xb179c0), current thread is QThread(0x23dce38)
QObject::startTimer: timers cannot be started from another thread
QObject: Cannot create children for a parent that is in a different thread.
(Parent is QTreeView(0xc65060), parent's thread is QThread(0xb179c0), current thread is QThread(0x23dce38)
QObject::startTimer: timers cannot be started from another thread
Is there any way to solve this?
A:
It is indeed a fact of life that multithreaded use of Qt (and other rich frameworks) is a delicate and difficult job, requiring explicit attention and care -- see Qt's docs for an excellent coverage of the subject (for readers experienced in threading in general, with suggested readings for those who yet aren't).
If you possibly can, I would suggest what I always suggest as the soundest architecture for threading in Python: let each subsystem be owned and used by a single dedicated thread; communicate among threads via instances of Queue.Queue, i.e., by message passing. This approach can be a bit restrictive, but it provides a good foundation on which specifically identified and carefully architected exceptions (based on thread pools, occasional new threads being spawned, locks, condition variables, and other such finicky things;-). In the latter category I would also classify Qt-specific things such as cross-thread signal/slot communication via queued connections.
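As a hedged PyQt4-style sketch of the queued signal/slot route (class and signal names are illustrative; new-style signals need PyQt >= 4.5):
from PyQt4 import QtCore

class Worker(QtCore.QThread):
    dataReady = QtCore.pyqtSignal(object)

    def run(self):
        data = build_model_data()   # placeholder for the slow, non-GUI work
        self.dataReady.emit(data)   # cross-thread emit uses a queued connection

# In the GUI thread: connect before starting, so only the slot
# (which runs in the GUI thread) ever touches the view or model.
# worker = Worker()
# worker.dataReady.connect(lambda data: view.setModel(make_model(data)))
# worker.start()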
A:
Looks like you've stumbled on a Qt limitation there. Try using signals or events if you need objects to communicate across threads.
Or ask the Qt folk about this. It doesn't seem specific to PyQt.
|
Thread error in Python & PyQt
|
I noticed that when the function setModel is executed in a parallel thread (I tried threading.Timer and threading.Thread), I get this:
QObject: Cannot create children for a parent that is in a different thread.
(Parent is QHeaderView(0x1c93ed0), parent's thread is QThread(0xb179c0), current thread is QThread(0x23dce38)
QObject::startTimer: timers cannot be started from another thread
QObject: Cannot create children for a parent that is in a different thread.
(Parent is QTreeView(0xc65060), parent's thread is QThread(0xb179c0), current thread is QThread(0x23dce38)
QObject::startTimer: timers cannot be started from another thread
Is there any way to solve this?
|
[
"It is indeed a fact of life that multithreaded use of Qt (and other rich frameworks) is a delicate and difficult job, requiring explicit attention and care -- see Qt's docs for an excellent coverage of the subject (for readers experienced in threading in general, with suggested readings for those who yet aren't).\nIf you possibly can, I would suggest what I always suggest as the soundest architecture for threading in Python: let each subsystem be owned and used by a single dedicated thread; communicate among threads via instances of Queue.Queue, i.e., by message passing. This approach can be a bit restrictive, but it provides a good foundation on which specifically identified and carefully architected exceptions (based on thread pools, occasional new threads being spawned, locks, condition variables, and other such finicky things;-). In the latter category I would also classify Qt-specific things such as cross-thread signal/slot communication via queued connections.\n",
"Looks like you've stumped on a Qt limitation there. Try using signals or events if you need objects to communicate across threads.\nOr ask the Qt folk about this. It doesn't seem specific to PyQt.\n"
] |
[
5,
0
] |
[] |
[] |
[
"multithreading",
"pyqt",
"python"
] |
stackoverflow_0001434831_multithreading_pyqt_python.txt
|
Q:
Python - Iterate over all classes
How can I iterate over a list of all classes loaded in memory?
I'm thinking of doing it for a backup, looking for all classes inheriting from db.Model (Google App Engine).
Thanks,
Neal Walters
A:
In "normal" Python, you can reach all objects via the gc.getobjects() function of the gc standard library module; it's then very easy to loop on them, checking which one are classes (rather than instances or anything else -- I do believe you mean instances of classes, but you can very easily get the classes themselves too if that's really what you want), etc.
Unfortunately, the gc module in App Engine does NOT implement getobjects -- which makes it extremely difficult to reach ALL classes. For example, a class created by calling:
def makeaclass():
class sic(object): pass
return sic
and hidden into a list somewhere, IS going to be very difficult to reach.
But fortunately, since you say in your question's text that you only care about subclasses of db.Model, that's even easier than gc would allow:
for amodel in db.Model.__subclasses__():
...
Just make sure you explicitly ignore such classes you don't care about, such as Expando;-).
Note that this DOES give you only and exactly the CLASSES, not the instances -- there is no similarly easy shortcut if those are what you're really after!
A:
Classes are defined in modules. Modules are created by an import statement.
Modules are simply dictionaries. If you want, you can use the dir(x) function on a module named x
Or you can use x.__dict__ on a module named x.
A:
Based on S.Lott's response:
This works if I omit the "if issubclass", except that then I get classes I don't want.
import dbModels
self.response.out.write("<br/><br/>Class Names:</br/>")
for item in dbModels.__dict__:
if issubclass(item, db.Model):
self.response.out.write("<br/>" + item)
The above gives the error:
TypeError: issubclass() arg 1 must be a class
So issubclass() wants a class object as its first argument -- and iterating over a module's __dict__ yields the key strings (names), not the class objects themselves.
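A hedged fix for that module-scanning variant, looking the names up and type-checking the values first (inside the same RequestHandler; module name as in the snippet above):
import dbModels
from google.appengine.ext import db

for name, obj in dbModels.__dict__.items():
    if isinstance(obj, type) and issubclass(obj, db.Model):
        self.response.out.write("<br/>" + name)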
Based on Alex's answer, this worked great:
self.response.out.write("<br/><br/>Class Names Inheriting from db.Model:</br/>")
for item in db.Model.__subclasses__():
self.response.out.write("<br/>" + item.__name__)
Thanks to both!
Neal
|
Python - Iterate over all classes
|
How can I iterate over a list of all classes loaded in memory?
I'm thinking of doing it for a backup, looking for all classes inheriting from db.Model (Google App Engine).
Thanks,
Neal Walters
|
[
"In \"normal\" Python, you can reach all objects via the gc.getobjects() function of the gc standard library module; it's then very easy to loop on them, checking which one are classes (rather than instances or anything else -- I do believe you mean instances of classes, but you can very easily get the classes themselves too if that's really what you want), etc.\nUnfortunately, the gc module in App Engine does NOT implement getobjects -- which makes it extremely difficult to reach ALL classes. For example, a class created by calling:\ndef makeaclass():\n class sic(object): pass\n return sic\n\nand hidden into a list somewhere, IS going to be very difficult to reach.\nBut fortunately, since you say in your question's text that you only care about subclasses of db.Model, that's even easier than gc would allow:\nfor amodel in db.Model.__subclasses__():\n ...\n\nJust make sure you explicitly ignore such classes you don't care about, such as Expando;-).\nNote that this DOES give you only and exactly the CLASSES, not the instances -- there is no similarly easy shortcut if those are what you're really after!\n",
"Classes are defined in modules. Modules are created by an import statement.\nModules are simply dictionaries. If you want, you can use the dir(x) function on a module named x\nOr you can use x.__dict__ on a module named x.\n",
"Based on S.Lott's response: \nThis works if I omit the \"if issubclass\" except then I get classes I don't want.\n import dbModels \n self.response.out.write(\"<br/><br/>Class Names:</br/>\")\n for item in dbModels.__dict__:\n if issubclass(item, db.Model):\n self.response.out.write(\"<br/>\" + item) \n\nThe above gives error: \n\nTypeError: issubclass() arg 1 must be\n a class\n\nSo it wants a classname as a parm, not an object name apparently.\nBased on Alex's answer, this worked great: \n self.response.out.write(\"<br/><br/>Class Names Inheriting from db.Model:</br/>\")\n for item in db.Model.__subclasses__():\n self.response.out.write(\"<br/>\" + item.__name__)\n\nThanks to both!\nNeal \n"
] |
[
9,
2,
0
] |
[] |
[] |
[
"google_app_engine",
"loops",
"python"
] |
stackoverflow_0001436384_google_app_engine_loops_python.txt
|
Q:
Python web development - with or without a framework
I am planning on porting a PHP application over to Python. The application is mostly about data collection and processing. The main application runs as a stand alone command line application. There is a web interface to the application which is basically a very light weight reporting interface.
I did not use a framework in the PHP version, but being new to Python, I am wondering if it would be advantageous to use something like Django or at the very least Genshi. The caveat is I do not want my application distribution to be overwhelmed by the framework parts I would need to distribute with the application.
Is using only the cgi import in Python the best way to go in this circumstance? I would tend to think a framework is too much overhead, but perhaps I'm not thinking in a very "python" way about them. What suggestions do you have in this scenario?
A:
The command-line Python, IMO, definitely comes first. Get that to work, since that's the core of what you're doing.
The issue is that using a web framework's ORM from a command line application isn't obvious. Django provides specific instructions for using their ORM from a command-line app. Those are annoying at first, but I think they're a life-saver in the long run. I use it heavily for giant uploads of customer-supplied files.
Don't use bare CGI. It's not impossible, but too many things can go wrong, and they've all been solved by the frameworks. Why reinvent something? Just use someone else's code.
Frameworks involve learning, but no real "overhead". They're not slow. They're code you don't have to write or debug.
1. Learn some Python.
2. Do the Django tutorial.
3. Start to build a web app.
   a. Start a Django project. Build a small application in that project.
   b. Build your new model using the Django ORM. Create a Django unit test for the model. Be sure that it works. You'll be able to use the default admin pages and do a lot of playing around. Just don't build the entire web site yet.
4. Get your command-line app to work using Django ORM. Essentially, you have to finesse the settings file for this app to work nicely. See the settings/configuration section.
5. Once you've got your command line and the default admin running, you can finish the web app.
Here's the golden rule of frameworks: It's code you don't have to write, debug or maintain. Use them.
A:
You might consider using something like web.py, which would be easy to distribute (since it's small), and it would also be easy to adapt your other tools to it since it doesn't require you to submit to the framework as much as Django does.
Be forewarned, however, it's not the most loved framework in the Python community, but it might be just the thing for you. You might also check out web2py, but I know less about that.
A:
Depends on the size of the project. If you had only a few previous php-scripts which called your stand alone application then I'd probably go for a cgi-app.
If you have use for databases, url rewriting, templating, user management and such, then using a framework is a good idea.
And of course, before you port it, consider if it's worth it just to switch the language or if there are specific Python features you need.
Good luck!
A:
I recently ported a PHP app to Python using web.py. As frameworks go it is extremely lightweight with minimal dependencies, and it tends to stay out of your way, so it might be the compromise you're looking for.
It all depends on your initial application though, because with a large application the advantages of having a full-featured framework handling the plumbing tend to outweigh the disadvantages involved in having to drag around all the framework code.
A:
Django makes it possible to whip out a website rapidly, that's for sure. You don't need to be a Python master to use it, and since it's very pythonic in its design, and there is not really any "magic" going on, it will help you learn Python along the way.
Start with the examples, check out some django screencasts from TwiD and you'll be on your way.
Start slow, tweaking the admin, and playing with it via shell is the way to start. Once you have a handle on the ORM and get how things work, start building the real stuff!
The framework isn't going to cause any performance problems, like S. Lott said, it's code you don't have to maintain, and that's the best kind.
A:
Go for a framework. Basic stuff like session handling is a nightmare if you don't use one, because Python is not web-specialized like PHP.
If you think django is too much, you can try a lighter one like the very small but still handy web.py.
A:
For the love of pete, use a framework! There are literally dozens of frameworks out there, from cherrypy to django to albatross to ... well.. you name it. In fact, the huge number of web frameworks are what people point to when they whine about the popularity of Rails.
The Python web development community is divided up with no single voice. But that's another topic altogether! The point is, there are "web toolkits" (e.g. albatross) that are fairly lightweight but powerful enough to get you through the day (e.g. auto-verifying a bot didn't do a simple form submission fake, or helping with keeping MVC clean).
If you want something that's not "too much framework" look here:
http://wiki.python.org/moin/WebFrameworks
Look under "Basic Frameworks Providing Templating". They're all lightweight and do all the "don't reinvent the wheel" stuff without forcing a Mac truck on you.
A:
It depends on the way you are going to distribute your application.
If it will only be used internally, go for django. It's a joy to work with it.
However, django really falls short at the distribution task; django applications are a pain to set up.
|
Python web development - with or without a framework
|
I am planning on porting a PHP application over to Python. The application is mostly about data collection and processing. The main application runs as a stand alone command line application. There is a web interface to the application which is basically a very light weight reporting interface.
I did not use a framework in the PHP version, but being new to Python, I am wondering if it would be advantageous to use something like Django or at the very least Genshi. The caveat is I do not want my application distribution to be overwhelmed by the framework parts I would need to distribute with the application.
Is using only the cgi import in Python the best way to go in this circumstance? I would tend to think a framework is too much overhead, but perhaps I'm not thinking in a very "python" way about them. What suggestions do you have in this scenario?
|
[
"The command-line Python, IMO, definitely comes first. Get that to work, since that's the core of what you're doing.\nThe issue is that using a web framework's ORM from a command line application isn't obvious. Django provides specific instructions for using their ORM from a command-line app. Those are annoying at first, but I think they're a life-saver in the long run. I use it heavily for giant uploads of customer-supplied files.\nDon't use bare CGI. It's not impossible, but too many things can go wrong, and they've all been solved by the frameworks. Why reinvent something? Just use someone else's code.\nFrameworks involve learning, but no real \"overhead\". They're not slow. They're code you don't have to write or debug.\n\nLearn some Python.\nDo the Django tutorial.\nStart to build a web app.\na. Start a Django project. Build a small application in that project.\nb. Build your new model using the Django ORM. Create a Django unit test for the model. Be sure that it works. You'll be able to use the default admin pages and do a lot of playing around. Just don't build the entire web site yet.\nGet your command-line app to work using Django ORM. Essentially, you have to finesse the settings file for this app to work nicely. See the settings/configuration section. \nOnce you've got your command line and the default admin running, you can finish\nthe web app.\n\nHere's the golden rule of frameworks: It's code you don't have to write, debug or maintain. Use them.\n",
"You might consider using something like web.py which would be easy to distribute (since it's small) and it would also be easy to adapt your other tools to it since it doesn't require you to submit to the framework so much like Django does. \nBe forewarned, however, it's not the most loved framework in the Python community, but it might be just the thing for you. You might also check out web2py, but I know less about that.\n",
"Depends on the size of the project. If you had only a few previous php-scripts which called your stand alone application then I'd probably go for a cgi-app.\nIf you have use for databases, url rewriting, templating, user management and such, then using a framework is a good idea.\nAnd of course, before you port it, consider if it's worth it just to switch the language or if there are specific Python features you need.\nGood luck!\n",
"I recently ported a PHP app to Python using web.py. As frameworks go it is extremely lightweight with minimal dependencies, and it tends to stay out of your way, so it might be the compromise you're looking for. \nIt all depends on your initial application though, because with a large application the advantages of having a full-featured framework handling the plumbing tend to outweigh the disadvantages involved in having to drag around all the framework code.\n",
"Django makes it possible to whip out a website rapidly, that's for sure. You don't need to be a Python master to use it, and since it's very pythonic in it's design, and there is not really any \"magic\" going on, it will help you learn Python along the way.\nStart with the examples, check out some django screencasts from TwiD and you'll be on your way.\nStart slow, tweaking the admin, and playing with it via shell is the way to start. Once you have a handle on the ORM and get how things work, start building the real stuff!\nThe framework isn't going to cause any performance problems, like S. Lott said, it's code you don't have to maintain, and that's the best kind.\n",
"Go for a framework. Basic stuffs like session handling are a nightmare if you don't use a one because Python is not web specialized like PHP.\nIf you think django is too much, you can try a lighter one like the very small but still handy web.py.\n",
"For the love of pete, use a framework! There are literally dozens of frameworks out there, from cherrypy to django to albatross to ... well.. you name it. In fact, the huge number of web frameworks are what people point to when they whine about the popularity of Rails. \nThe Python web development community is divided up with no single voice. But that's another topic alltogether! The point is, there are \"web toolkits\" (e.g. albatross) that are fairly lightweight but powerful enough to get you through the day (e.g. auto-verifying a bot didn't do a simple form submission fake, or helping with keeping MVC clean).\nIf you want something that's not \"too much framework\" look here:\nhttp://wiki.python.org/moin/WebFrameworks\nLook under \"Basic Frameworks Providing Templating\". They're all lightweight and do all the \"don't reinvent the wheel\" stuff without forcing a Mac truck on you.\n",
"It depends on the way you are going to distribute your application.\nIf it will only be used internally, go for django. It's a joy to work with it.\nHowever, django really falls short at the distribution-task; django-applications are a pain to set up.\n"
] |
[
15,
11,
4,
3,
2,
2,
2,
0
] |
[] |
[] |
[
"frameworks",
"python"
] |
stackoverflow_0000136069_frameworks_python.txt
|
Q:
Fixing broken urls
Does anyone know of a library for fixing "broken" urls? When I try to open a url such as
http://www.domain.com/../page.html
http://www.domain.com//page.html
http://www.domain.com/page.html#stuff
urllib2.urlopen chokes and gives me an HTTPError traceback. Does anyone know of a library that can fix these sorts of things?
A:
What about something like...:
import re
import urlparse
urls = '''
http://www.domain.com/../page.html
http://www.domain.com//page.html
http://www.domain.com/page.html#stuff
'''.split()
def main():
for u in urls:
pieces = list(urlparse.urlparse(u))
pieces[2] = re.sub(r'^[./]*', '/', pieces[2])
pieces[-1] = ''
print urlparse.urlunparse(pieces)
main()
it does emit, as you desire:
http://www.domain.com/page.html
http://www.domain.com/page.html
http://www.domain.com/page.html
and would appear to roughly match your needs, if I understood them correctly.
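To apply the same logic to a single URL, a hedged refactor into a helper might look like this:
import re
import urlparse

def fix_url(u):
    pieces = list(urlparse.urlparse(u))
    pieces[2] = re.sub(r'^[./]*', '/', pieces[2])  # collapse leading './', '../', '//'
    pieces[-1] = ''                                # drop the #fragment
    return urlparse.urlunparse(pieces)

print fix_url('http://www.domain.com//page.html')  # -> http://www.domain.com/page.html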
|
Fixing broken urls
|
Does anyone know of a library for fixing "broken" urls? When I try to open a url such as
http://www.domain.com/../page.html
http://www.domain.com//page.html
http://www.domain.com/page.html#stuff
urllib2.urlopen chokes and gives me an HTTPError traceback. Does anyone know of a library that can fix these sorts of things?
|
[
"What about something like...:\nimport re\nimport urlparse\n\nurls = '''\nhttp://www.domain.com/../page.html\nhttp://www.domain.com//page.html\nhttp://www.domain.com/page.html#stuff\n'''.split()\n\ndef main():\n for u in urls:\n pieces = list(urlparse.urlparse(u))\n pieces[2] = re.sub(r'^[./]*', '/', pieces[2])\n pieces[-1] = ''\n print urlparse.urlunparse(pieces)\n\nmain()\n\nit does emit, as you desire:\nhttp://www.domain.com/page.html\nhttp://www.domain.com/page.html\nhttp://www.domain.com/page.html\n\nand would appear to roughly match your needs, if I understood them correctly.\n"
] |
[
1
] |
[] |
[] |
[
"python",
"url",
"urllib2"
] |
stackoverflow_0001436382_python_url_urllib2.txt
|
Q:
Python: Passing a class name as a parameter to a function?
class TestSpeedRetrieval(webapp.RequestHandler):
"""
Test retrieval times of various important records in the BigTable database
"""
def get(self):
commandValidated = True
beginTime = time()
itemList = Subscriber.all().fetch(1000)
for item in itemList:
pass
endTime = time()
self.response.out.write("<br/>Subscribers count=" + str(len(itemList)) +
" Duration=" + duration(beginTime,endTime))
How can I turn the above into a function where I pass the name of the class?
In the above example, Subscriber (in the Subscriber.all().fetch statement) is a class name, which is how you define data tables in Google BigTable with Python.
I want to do something like this:
TestRetrievalOfClass(Subscriber)
or TestRetrievalOfClass("Subscriber")
Thanks,
Neal Walters
A:
class TestSpeedRetrieval(webapp.RequestHandler):
"""
Test retrieval times of various important records in the BigTable database
"""
def __init__(self, cls):
self.cls = cls
def get(self):
commandValidated = True
beginTime = time()
itemList = self.cls.all().fetch(1000)
for item in itemList:
pass
endTime = time()
self.response.out.write("<br/>%s count=%d Duration=%s" % (self.cls.__name__, len(itemList), duration(beginTime,endTime))
TestRetrievalOfClass(Subscriber)
A:
If you pass the class object directly, as in your code between "like this" and "or",
you can get its name as the __name__ attribute.
Starting with the name (as in your code after "or") makes it REALLY hard (and ambiguous) to retrieve the class object unless you have some indication about where the class object may be contained -- so why not pass the class object instead?!
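A minimal sketch of both directions (the names here are hypothetical, not from the answer above):
class Subscriber(object):
    pass

def test_retrieval_of_class(cls):
    print cls.__name__               # class object -> name is trivial

test_retrieval_of_class(Subscriber)  # prints 'Subscriber'

# name -> class object only works if you know where the class lives,
# e.g. in the current module's namespace:
cls = globals()['Subscriber']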
A:
A slight variation of Ned's code that I used. This is a web application,
so I start it by running the get routine via a URL: http://localhost:8080/TestSpeedRetrieval. I didn't see the need for the __init__.
class TestSpeedRetrieval(webapp.RequestHandler):
    """
    Test retrieval times of various important records in the BigTable database
    """
    def speedTestForRecordType(self, recordTypeClassname):
        beginTime = time()
        itemList = recordTypeClassname.all().fetch(1000)
        for item in itemList:
            pass  # just because we almost always loop through the records to put them somewhere
        endTime = time()
        self.response.out.write("<br/>%s count=%d Duration=%s" %
                                (recordTypeClassname.__name__, len(itemList), duration(beginTime, endTime)))

    def get(self):
        self.speedTestForRecordType(Subscriber)
        self.speedTestForRecordType(_AppEngineUtilities_SessionData)
        self.speedTestForRecordType(CustomLog)
Output:
Subscriber count=11 Duration=0:2
_AppEngineUtilities_SessionData count=14 Duration=0:1
CustomLog count=5 Duration=0:2
|
Python: Passing a class name as a parameter to a function?
|
class TestSpeedRetrieval(webapp.RequestHandler):
    """
    Test retrieval times of various important records in the BigTable database
    """
    def get(self):
        commandValidated = True
        beginTime = time()
        itemList = Subscriber.all().fetch(1000)
        for item in itemList:
            pass
        endTime = time()
        self.response.out.write("<br/>Subscribers count=" + str(len(itemList)) +
                                " Duration=" + duration(beginTime, endTime))
How can I turn the above into a function where I pass the name of the class?
In the above example, Subscriber (in the Subscriber.all().fetch statement) is a class name, which is how you define data tables in Google BigTable with Python.
I want to do something like this:
TestRetrievalOfClass(Subscriber)
or TestRetrievalOfClass("Subscriber")
Thanks,
Neal Walters
|
[
"class TestSpeedRetrieval(webapp.RequestHandler):\n \"\"\"\n Test retrieval times of various important records in the BigTable database \n \"\"\"\n def __init__(self, cls):\n self.cls = cls\n\n def get(self):\n commandValidated = True \n beginTime = time()\n itemList = self.cls.all().fetch(1000) \n\n for item in itemList: \n pass \n endTime = time()\n self.response.out.write(\"<br/>%s count=%d Duration=%s\" % (self.cls.__name__, len(itemList), duration(beginTime,endTime))\n\nTestRetrievalOfClass(Subscriber) \n\n",
"If you pass the class object directly, as in your code between \"like this\" and \"or\",\nyou can get its name as the __name__ attribute.\nStarting with the name (as in your code after \"or\") makes it REALLY hard (and not unambiguous) to retrieve the class object unless you have some indication about where the class object may be contained -- so why not pass the class object instead?!\n",
"A slight variation of Ned's code that I used. This is a web application, \nso I start it by running the get routine via a URL: http://localhost:8080/TestSpeedRetrieval. I didn't see the need of the init. \nclass TestSpeedRetrieval(webapp.RequestHandler):\n \"\"\"\n Test retrieval times of various important records in the BigTable database \n \"\"\"\n def speedTestForRecordType(self, recordTypeClassname):\n beginTime = time()\n itemList = recordTypeClassname.all().fetch(1000) \n for item in itemList: \n pass # just because we almost always loop through the records to put them somewhere \n endTime = time() \n self.response.out.write(\"<br/>%s count=%d Duration=%s\" % \n (recordTypeClassname.__name__, len(itemList), duration(beginTime,endTime)))\n\n def get(self):\n\n self.speedTestForRecordType(Subscriber) \n self.speedTestForRecordType(_AppEngineUtilities_SessionData) \n self.speedTestForRecordType(CustomLog) \n\nOutput: \nSubscriber count=11 Duration=0:2\n_AppEngineUtilities_SessionData count=14 Duration=0:1 \nCustomLog count=5 Duration=0:2\n\n"
] |
[
30,
12,
3
] |
[] |
[] |
[
"function",
"python"
] |
stackoverflow_0001436444_function_python.txt
|
Q:
unique pin generator
The task is to generate a given number of numeric pins of a given length.
Here's the code I came up with for a particular case of numeric pins that don't start with 0:
def generate_pins(length, count):
    return random.sample(range(int('1' + '0' * (length - 1)), int('9' * length)), count)
How would you implement it?
EDIT: Pins shouldn't repeat.
EDIT2: Let's extend this example so that a pin can contain any alphanumeric symbol.
A:
random.sample guarantees no repetition ("sampling without replacement", per the docs); is this condition part of your specs?
As expressed (without any word "distinct" to indicate lack of repetition), I'd do:
import random
import string
def generate_pins(length, count):
    return [''.join(random.choice(string.digits) for x in xrange(length))
            for x in xrange(count)]
With an additional condition that all the pins returned be unique:
def generate_pins(length, count, alphabet=string.digits):
    alphabet = ''.join(set(alphabet))
    if count > len(alphabet)**length:
        raise ValueError("Can't generate more than %s > %s pins of length %d out of %r" %
                         (count, len(alphabet)**length, length, alphabet))
    def onepin(length):
        return ''.join(random.choice(alphabet) for x in xrange(length))
    result = set(onepin(length) for x in xrange(count))
    while len(result) < count:
        result.add(onepin(length))
    return list(result)
assuming that the specs require you to return a list.
Edit: given the OP's late clarification and spec changes, the second answer looks good, except string.ascii_lowercase + string.digits (or some variants thereof e.g. if both lowercase and uppercase ASCII letters are desired) should be used in onepin. You should specify better exactly what "alphabet" string you want to draw characters from (maybe pass it to generate_pins as an argument, with None indicating generate_pins should pick a default alphabet such as e.g. string.digits).
Further edit: added optional alphabet argument and checks about number of distinct pins that can be generated given length and that alphabet.
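For small search spaces there is also a brute-force route to guaranteed-unique random pins -- a sketch, not from the answers above; it materializes every possible pin, so it is only viable when len(alphabet)**length is small:
import itertools
import random
import string

def generate_pins_small(length, count, alphabet=string.digits):
    # enumerate the whole space, then sample without replacement:
    # uniqueness is guaranteed by construction, and random.sample
    # raises ValueError if count exceeds the size of the space
    space = [''.join(p) for p in itertools.product(alphabet, repeat=length)]
    return random.sample(space, count)

print generate_pins_small(2, 5)   # e.g. ['07', '42', '13', '88', '61']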
A:
As the OP hasn't said the PINs must be random, the only criterion seems to be that they are unique;
here is the fastest way:
def generate_pins(length, count):
    start = 10**(length - 1)
    return range(start, start + count)

Also, you cannot always guarantee uniqueness, a given length, and a given count at the same time;
e.g. try generate_pins(1, 11) with Alex's answer.
|
unique pin generator
|
The task is to generate a given number of numeric pins of a given length.
Here's the code I came up with for a particular case of numeric pins that don't start with 0:
def generate_pins(length, count):
    return random.sample(range(int('1' + '0' * (length - 1)), int('9' * length)), count)
How would you implement it?
EDIT: Pins shouldn't repeat.
EDIT2: Let's extend this example so that a pin can contain any alphanumeric symbol.
|
[
"random.sample guarantees no repetition (\"sampling without replacement\", per the docs); is this condition part of your specs?\nAs expressed (without any word \"distinct\" to indicate lack of repetition), I'd do:\nimport random\nimport string\n\ndef generate_pins(length, count):\n return [''.join(random.choice(string.digits) for x in xrange(length))\n for x in xrange(count)]\n\nWith an additional condition that all the pins returned be unique:\ndef generate_pins(length, count, alphabet=string.digits):\n alphabet = ''.join(set(alphabet))\n if count > len(alphabet)**length:\n raise ValueError(\"Can't generate more than %s > %s pins of length %d out of %r\" %\n count, len(alphabet)**length, length, alphabet)\n def onepin(length):\n return ''.join(random.choice(alphabet) for x in xrange(length))\n result = set(onepin(length) for x in xrange(count))\n while len(result) < count:\n result.add(onepin(length))\n return list(result)\n\nassuming that the specs require you to return a list.\nEdit: given the OP's late clarification and spec changes, the second answer looks good, except string.ascii_lowercase + string.digits (or some variants thereof e.g. if both lowercase and uppercase ASCII letters are desired) should be used in onepin. You should specify better exactly what \"alphabet\" string you want to draw characters from (maybe pass it to generate_pins as an argument, with None indicating generate_pins should pick a default alphabet such as e.g. string.digits).\nFurther edit: added optional alphabet argument and checks about number of distinct pins that can be generated given length and that alphabet.\n",
"As OP haven't said random PINs, only criteria seems to be unique pins\nhere is the fastest way\ndef generate_pins(length, count):\n start=10**length\n return range(start,start+count,1)\n\nalso you can not always guarantee uniqeness, same length and count at same time\ne.g. try generate_pins(1,11) for Alex's answer.\n"
] |
[
7,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001436552_python.txt
|
Q:
Python sort() method on list vs builtin sorted() function
I know that the __builtin__ sorted() function works on any iterable. But can someone explain this huge (10x) performance difference between anylist.sort() vs sorted(anylist)? Also, please point out if I am doing anything wrong with the way this is measured.
"""
Example Output:
$ python list_sort_timeit.py
Using sort method: 20.0662879944
Using sorted builtin method: 259.009809017
"""
import random
import timeit
print 'Using sort method:',
x = min(timeit.Timer("test_list1.sort()","import random;test_list1=random.sample(xrange(1000),1000)").repeat())
print x
print 'Using sorted builtin method:',
x = min(timeit.Timer("sorted(test_list2)","import random;test_list2=random.sample(xrange(1000),1000)").repeat())
print x
As the title says, I was interested in comparing list.sort() vs sorted(list). The above snippet showed something interesting: Python's sort function behaves very well on already-sorted data. As pointed out by Anurag, in the first case the sort method is working on already-sorted data, while in the second case sorted() is doing the full work on an unsorted list again and again.
So I wrote this one to test and yes, they are very close.
"""
Example Output:
$ python list_sort_timeit.py
Using sort method: 19.0166599751
Using sorted builtin method: 23.203567028
"""
import random
import timeit
print 'Using sort method:',
x = min(timeit.Timer("test_list1.sort()","import random;test_list1=random.sample(xrange(1000),1000);test_list1.sort()").repeat())
print x
print 'Using sorted builtin method:',
x = min(timeit.Timer("sorted(test_list2)","import random;test_list2=random.sample(xrange(1000),1000);test_list2.sort()").repeat())
print x
Oh, I see Alex Martelli with a response, as I was typing this one... (I shall leave the edit, as it might be useful).
A:
Your error in measurement is as follows: after your first call of test_list1.sort(), that list object IS sorted -- and Python's sort, aka timsort, is wickedly fast on already sorted lists!!! That's the most frequent error in using timeit -- inadvertently getting side effects and not accounting for them.
Here's a good set of measurements, using timeit from the command line as it's best used:
$ python -mtimeit -s'import random; x=range(1000); random.shuffle(x)' '
y=list(x); y.sort()'
1000 loops, best of 3: 452 usec per loop
$ python -mtimeit -s'import random; x=range(1000); random.shuffle(x)' '
x.sort()'
10000 loops, best of 3: 37.4 usec per loop
$ python -mtimeit -s'import random; x=range(1000); random.shuffle(x)' '
sorted(x)'
1000 loops, best of 3: 462 usec per loop
As you see, y.sort() and sorted(x) are neck and neck, but x.sort(), thanks to the side effects, gains over an order of magnitude's advantage -- just because of your measurement error, though: this tells you nothing about sort vs sorted per se!-)
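The same side-effect-free measurement can be written inside a script, too -- a sketch; copying the list in the timed statement keeps every repetition working on unsorted data:
import timeit

setup = "import random; x = random.sample(xrange(1000), 1000)"
print 'list.sort:', min(timeit.Timer("y = list(x); y.sort()", setup).repeat())
print 'sorted:   ', min(timeit.Timer("sorted(x)", setup).repeat())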
A:
Because list.sort does in-place sorting: the first time it sorts, but the next time you are sorting an already-sorted list.
E.g. try this and you will get the same results.
In the timeit case most of the time is spent copying, and sorted also does one more copy.
import time
import random

test_list1 = random.sample(xrange(1000), 1000)
test_list2 = random.sample(xrange(1000), 1000)

s = time.time()
for i in range(100):
    test_list1.sort()
print time.time() - s

s = time.time()
for i in range(100):
    test_list2 = sorted(test_list2)
print time.time() - s
A:
Well, the .sort() method of lists sorts the list in place, while sorted() creates a new list. So if you have a large list, part of your performance difference will be due to copying.
Still, an order of magnitude difference seems larger than I'd expect. Perhaps list.sort() has some special-cased optimization that sorted() can't make use of. For example, since the list class already has an internal PyObject*[] array of the right size, perhaps it can perform swaps more efficiently.
Edit: Alex and Anurag are right, the order of magnitude difference is due to you accidentally sorting an already-sorted list in your test case. However, as Alex's benchmarks show, list.sort() is about 2% faster than sorted(), which would make sense due to the copying overhead.
|
Python sort() method on list vs builtin sorted() function
|
I know that the __builtin__ sorted() function works on any iterable. But can someone explain this huge (10x) performance difference between anylist.sort() vs sorted(anylist)? Also, please point out if I am doing anything wrong with the way this is measured.
"""
Example Output:
$ python list_sort_timeit.py
Using sort method: 20.0662879944
Using sorted builtin method: 259.009809017
"""
import random
import timeit
print 'Using sort method:',
x = min(timeit.Timer("test_list1.sort()","import random;test_list1=random.sample(xrange(1000),1000)").repeat())
print x
print 'Using sorted builtin method:',
x = min(timeit.Timer("sorted(test_list2)","import random;test_list2=random.sample(xrange(1000),1000)").repeat())
print x
As the title says, I was interested in comparing list.sort() vs sorted(list). The above snippet showed something interesting: Python's sort function behaves very well on already-sorted data. As pointed out by Anurag, in the first case the sort method is working on already-sorted data, while in the second case sorted() is doing the full work on an unsorted list again and again.
So I wrote this one to test and yes, they are very close.
"""
Example Output:
$ python list_sort_timeit.py
Using sort method: 19.0166599751
Using sorted builtin method: 23.203567028
"""
import random
import timeit
print 'Using sort method:',
x = min(timeit.Timer("test_list1.sort()","import random;test_list1=random.sample(xrange(1000),1000);test_list1.sort()").repeat())
print x
print 'Using sorted builtin method:',
x = min(timeit.Timer("sorted(test_list2)","import random;test_list2=random.sample(xrange(1000),1000);test_list2.sort()").repeat())
print x
Oh, I see Alex Martelli with a response, as I was typing this one... (I shall leave the edit, as it might be useful).
|
[
"Your error in measurement is as follows: after your first call of test_list1.sort(), that list object IS sorted -- and Python's sort, aka timsort, is wickedly fast on already sorted lists!!! That's the most frequent error in using timeit -- inadvertently getting side effects and not accounting for them.\nHere's a good set of measurements, using timeit from the command line as it's best used:\n$ python -mtimeit -s'import random; x=range(1000); random.shuffle(x)' '\ny=list(x); y.sort()'\n1000 loops, best of 3: 452 usec per loop\n$ python -mtimeit -s'import random; x=range(1000); random.shuffle(x)' '\nx.sort()'\n10000 loops, best of 3: 37.4 usec per loop\n$ python -mtimeit -s'import random; x=range(1000); random.shuffle(x)' '\nsorted(x)'\n1000 loops, best of 3: 462 usec per loop\n\nAs you see, y.sort() and sorted(x) are neck and neck, but x.sort() thanks to the side effects gains over an order of magnitude's advantage -- just because of your measurement error, though: this tells you nothing about sort vs sorted per se! -)\n",
"Because list.sort does in place sorting, so first time it sorts but next time you are sorting the sorted list.\ne.g. try this and you will get same results\nin timeit case most of the time is spent is copying and sorted also does one more copy\nimport time\nimport random\ntest_list1=random.sample(xrange(1000),1000)\ntest_list2=random.sample(xrange(1000),1000)\n\ns=time.time()\nfor i in range(100):\n test_list1.sort()\nprint time.time()-s\n\ns=time.time()\nfor i in range(100):\n test_list2=sorted(test_list2)\nprint time.time()-s\n\n",
"Well, the .sort() method of lists sorts the list in place, while sorted() creates a new list. So if you have a large list, part of your performance difference will be due to copying.\nStill, an order of magnitude difference seems larger than I'd expect. Perhaps list.sort() has some special-cased optimization that sorted() can't make use of. For example, since the list class already has an internal Py_Object*[] array of the right size, perhaps it can perform swaps more efficiently.\nEdit: Alex and Anurag are right, the order of magnitude difference is due to you accidentally sorting an already-sorted list in your test case. However, as Alex's benchmarks show, list.sort() is about 2% faster than sorted(), which would make sense due to the copying overhead.\n"
] |
[
60,
14,
11
] |
[] |
[] |
[
"python",
"sorting"
] |
stackoverflow_0001436962_python_sorting.txt
|
Q:
Better to install MySQL 32bit or 64bit on my 64bit Intel-based Mac (Perl/Python user)?
I have had numerous headaches trying to get the MySQL APIs for Perl and Python working on my 64-bit MacBook Pro (Leopard). I installed the 64-bit version of MySQL, but Googling around now I have the impression that this could be the source of my pain. None of the various blogs and SO answers quite seem to work (for example here on SO).
Could the 64 bit MySQL install be the culprit? Can anyone confirm that they have MySQL access via Perl and/or Python on a 64 bit Mac using 64 bit MySQL? Did you do anything special or face some similar problems?
A:
32-bit and 64-bit libraries don't play nice together. So, it depends whether you're using 32-bit Perl/Python or not.
If you are, you'll need 32-bit MySQL. Chances are your Python, at least, is 32-bit, since both the Apple-shipped Python and the binaries from python.org are 32-bit only. You can build 64-bit Python (or- gasp- a 4-way i386/x86_64/ppc/ppc64 Universal Binary) from source, but unless you really need to work with absolutely huge disk files/amounts of memory (I'm talking multi-gigabyte memory maps, for example), chances are you do not need 64-bit anything right now.
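A quick way to check which flavour of Python you are actually running (not from the original answer; the pointer size tells you the bitness):
import struct
print struct.calcsize("P") * 8   # prints 32 or 64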
A:
You will need the 32-bit vs 64-bit MySQL client libraries to match the application you're trying to link them with. However, this does not prevent you from connecting to a 64-bit install of the MySQL server.
In Unix, you can have multiple versions of the MySQL client libraries lying around; you just have to make sure the application is loading the correct one. This should be true on the Mac too.
|
Better to install MySQL 32bit or 64bit on my 64bit Intel-based Mac (Perl/Python user)?
|
I have had numerous headaches trying to get the MySQL APIs for Perl and Python working on my 64-bit MacBook Pro (Leopard). I installed the 64-bit version of MySQL, but Googling around now I have the impression that this could be the source of my pain. None of the various blogs and SO answers quite seem to work (for example here on SO).
Could the 64 bit MySQL install be the culprit? Can anyone confirm that they have MySQL access via Perl and/or Python on a 64 bit Mac using 64 bit MySQL? Did you do anything special or face some similar problems?
|
[
"32-bit and 64-bit libraries don't play nice together. So, it depends whether you're using 32-bit Perl/Python or not.\nIf you are, you'll need 32-bit MySQL. Chances are your Python, at least, is 32-bit, since both the Apple-shipped Python and the binaries from python.org are 32-bit only. You can build 64-bit Python (or- gasp- a 4-way i386/x86_64/ppc/ppc64 Universal Binary) from source, but unless you really need to work with absolutely huge disk files/amounts of memory (I'm talking multi-gigabyte memory maps, for example), chances are you do not need 64-bit anything right now.\n",
"You will need the 32vit vs 64bit of the MySQL client libraries to match the application you're trying to link them with. However, this not prevent you from connecting to a 64-bit install of MySQL server.\nIn Unix, the MySQL client libraries support multiple versions of themselves lying around; you just have to make sure the application is loading the correct one. This should be true to the Mac.\n"
] |
[
4,
2
] |
[] |
[] |
[
"mysql",
"perl",
"python"
] |
stackoverflow_0001436422_mysql_perl_python.txt
|
Q:
Lightweight pickle for basic types in python?
All I want to do is serialize and unserialize tuples of strings or ints.
I looked at pickle.dumps() but the byte overhead is significant. Basically it looks like it takes up about 4x as much space as it needs to. Besides, all I need is basic types and have no need to serialize objects.
marshal is a little better in terms of space but the result is full of nasty \x00 bytes. Ideally I would like the result to be human readable.
I thought of just using repr() and eval(), but is there a simple way I could accomplish this without using eval()?
This is getting stored in a db, not a file. Byte overhead matters because it could make the difference between requiring a TEXT column versus a varchar, and generally data compactness affects all areas of db performance.
A:
Take a look at json; at least the generated dumps are readable by many other languages.
JSON (JavaScript Object Notation) http://json.org is a subset of JavaScript syntax (ECMA-262 3rd edition) used as a lightweight data interchange format.
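A minimal sketch of what that looks like for the tuples in the question (json is in the stdlib from Python 2.6; use the simplejson package on older versions). One caveat: tuples come back as lists:
import json

s = json.dumps((1, 2, "three"))   # '[1, 2, "three"]' -- compact and human readable
print json.loads(s)               # [1, 2, u'three']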
A:
Personally I would use yaml. It's on par with json for encoding size, but it can represent some more complex things (e.g. classes, recursive structures) when necessary.
In [1]: import yaml
In [2]: x = [1, 2, 3, 'pants']
In [3]: print(yaml.dump(x))
[1, 2, 3, pants]
In [4]: y = yaml.load('[1, 2, 3, pants]')
In [5]: y
Out[5]: [1, 2, 3, 'pants']
A:
Maybe you're not using the right protocol:
>>> import pickle
>>> a = range(1, 100)
>>> len(pickle.dumps(a))
492
>>> len(pickle.dumps(a, pickle.HIGHEST_PROTOCOL))
206
See the documentation for pickle data formats.
A:
If you need a space efficient solution you can use Google Protocol buffers.
Protocol buffers - Encoding
Protocol buffers - Python Tutorial
A:
There are some persistence builtins mentioned in the python documentation but I don't think any of these is remarkably smaller in the produced filesize.
You could always use the ConfigParser module, but there you only get string, int, float and bool.
A:
"the byte overhead is significant"
Why does this matter? It does the job. If you're running low on disk space, I'd be glad to sell you a 1Tb for $500.
Have you run it? Is performance a problem? Can you demonstrate that the performance of serialization is the problem?
"I thought of just using repr() and eval(), but is there a simple way I could accomplish this without using eval()?"
Nothing simpler than repr and eval.
What's wrong with eval?
Is it the "someone could insert malicious code into the file where I serialized my lists" issue?
Who -- specifically -- is going to find and edit this file to put in malicious code? Anything you do to secure this (i.e., encryption) removes "simple" from it.
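If the worry about eval is untrusted input, note (this is not from the answers above) that Python 2.6's ast.literal_eval safely evaluates strings containing only Python literals, which fits the repr()/eval() approach for tuples of strings and ints:
import ast

s = repr((1, 2, 'three'))     # "(1, 2, 'three')"
print ast.literal_eval(s)     # (1, 2, 'three') -- tuples survive the round trip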
|
Lightweight pickle for basic types in python?
|
All I want to do is serialize and unserialize tuples of strings or ints.
I looked at pickle.dumps() but the byte overhead is significant. Basically it looks like it takes up about 4x as much space as it needs to. Besides, all I need is basic types and have no need to serialize objects.
marshal is a little better in terms of space but the result is full of nasty \x00 bytes. Ideally I would like the result to be human readable.
I thought of just using repr() and eval(), but is there a simple way I could accomplish this without using eval()?
This is getting stored in a db, not a file. Byte overhead matters because it could make the difference between requiring a TEXT column versus a varchar, and generally data compactness affects all areas of db performance.
|
[
"Take a look at json, at least the generated dumps are readable with many other languages.\n\nJSON (JavaScript Object Notation) http://json.org is a subset of JavaScript syntax (ECMA-262 3rd edition) used as a lightweight data interchange format.\n\n",
"personally i would use yaml. it's on par with json for encoding size, but it can represent some more complex things (e.g. classes, recursive structures) when necessary. \nIn [1]: import yaml\nIn [2]: x = [1, 2, 3, 'pants']\nIn [3]: print(yaml.dump(x))\n[1, 2, 3, pants]\n\nIn [4]: y = yaml.load('[1, 2, 3, pants]')\nIn [5]: y\nOut[5]: [1, 2, 3, 'pants']\n\n",
"Maybe you're not using the right protocol:\n>>> import pickle\n>>> a = range(1, 100)\n>>> len(pickle.dumps(a))\n492\n>>> len(pickle.dumps(a, pickle.HIGHEST_PROTOCOL))\n206\n\nSee the documentation for pickle data formats.\n",
"If you need a space efficient solution you can use Google Protocol buffers.\nProtocol buffers - Encoding\nProtocol buffers - Python Tutorial\n",
"There are some persistence builtins mentioned in the python documentation but I don't think any of these is remarkable smaller in the produced filesize.\nYou could alway use the configparser but there you only get string, int, float, bool.\n",
"\"the byte overhead is significant\"\nWhy does this matter? It does the job. If you're running low on disk space, I'd be glad to sell you a 1Tb for $500. \nHave you run it? Is performance a problem? Can you demonstrate that the performance of serialization is the problem?\n\"I thought of just using repr() and eval(), but is there a simple way I could accomplish this without using eval()?\"\nNothing simpler than repr and eval.\nWhat's wrong with eval?\nIs is the \"someone could insert malicious code into the file where I serialized my lists\" issue?\nWho -- specifically -- is going to find and edit this file to put in malicious code? Anything you do to secure this (i.e., encryption) removes \"simple\" from it. \n"
] |
[
13,
8,
8,
6,
1,
0
] |
[
"Luckily there is solution which uses COMPRESSION, and solves \nthe general problem involving any arbitrary Python object \nincluding new classes. Rather than micro-manage mere \ntuples sometimes it's better to use a DRY tool.\nYour code will be more crisp and readily refactored \nin similar future situations. \ny_serial.py module :: warehouse Python objects with SQLite\n\"Serialization + persistance :: in a few lines of code, compress and annotate Python objects into SQLite; then later retrieve them chronologically by keywords without any SQL. Most useful \"standard\" module for a database to store schema-less data.\"\nhttp://yserial.sourceforge.net\n[If you are still concerned, why not stick those tuples in \na dictionary, then apply y_serial to the dictionary. \nProbably any overhead will vanish due to the transparent \ncompression in the background by zlib.]\nAs to readability, the documentation also gives details on \nwhy cPickle was selected over json.\n"
] |
[
-1
] |
[
"pickle",
"python",
"serialization"
] |
stackoverflow_0000532934_pickle_python_serialization.txt
|
Q:
What are the risks (if any) of mixing Psyco into my project?
I work on a large financial pricing application in which some calculations are long-running. We have identified some functions which can be sped up by the selective application of psyco. My management have requested an assessment of the costs & benefits of adding psyco into our stack.
Given the critical nature of my project, it's not acceptable if a "performance enhancement" can potentially reduce reliability. I've read that using psyco gets additional performance at the cost of more memory used. I'm worried that this could be a problem.
I'm doing it like this:
@psyco.proxy
def my_slow_function(xxx):
In all, we expect to apply psyco to no more than 15 functions - these are used very heavily. There are thousands of functions in this library, so this is only affecting a tiny subset of our code. All of the functions are small, mathematical and stateless.
Is there likely to be a risk that this will use substantially more memory?
Are there any other problems we might encounter when adding this component to our long established library?
FYI, platform is Python 2.4.4 on Windows 32bit XP
UPDATE: It seems that the main potential risk is due to a program requiring more memory to run than before psyco was added, so ideally I'd like to find a way to see if adding psyco dramatically changes the memory requirements of the system.
A:
Why not try profiling it? Psyco has a pretty detailed logging facility:
memory usage: x+ kb
Psyco's current notion of how much memory it consumes for the emitted machine code and supporting data structures. This is a rough estimation of the memory overhead (the + sign is supposed to remind you that this figure is highly underestimated). Use this info to tune the memory limits (section 3.2.2).
Note also that the memory usage is configurable:
memorymax
Stop when the memory consumed by Psyco reaches the limit (in kilobytes). This limit includes the memory consumed before this profiler started.
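A minimal sketch of turning that logging on around the question's decorator usage (assuming the no-argument form of psyco.log(), per the docs quoted above):
import psyco
psyco.log()   # writes the detailed stats, including memory usage, to a log file

@psyco.proxy
def my_slow_function(x):
    return x * x   # stand-in for the real calculation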
A:
Psyco is a JIT compiler. If your functions are stateless, then there should be almost no drawback except more memory.
|
What are the risks (if any) of mixing Psyco into my project?
|
I work on a large financial pricing application in which some calculations are long-running. We have identified some functions which can be sped up by the selective application of psyco. My management have requested an assessment of the costs & benefits of adding psyco into our stack.
Given the critical nature of my project, it's not acceptable if a "performance enhancement" can potentially reduce reliability. I've read that using psyco gets additional performance at the cost of more memory used. I'm worried that this could be a problem.
I'm doing it like this:
@psyco.proxy
def my_slow_function(xxx):
In all, we expect to apply psyco to no more than 15 functions - these are used very heavily. There are thousands of functions in this library, so this is only affecting a tiny subset of our code. All of the functions are small, mathematical and stateless.
Is there likely to be a risk that this will use substantially more memory?
Are there any other problems we might encounter when adding this component to our long established library?
FYI, platform is Python 2.4.4 on Windows 32bit XP
UPDATE: It seems that the main potential risk is due to a program requiring more memory to run than before psyco was added, so ideally I'd like to find a way to see if adding psyco dramatically changes the memory requirements of the system.
|
[
"Why not try profiling it? Psyco has a pretty detailed logging facility:\n\nmemory usage: x+ kb\nPsyco's current notion of how much memory is consumes for the emitted machine code and supporting data structures. This is a rouch estimation of the memory overhead (the + sign is supposed to remind you that this figure is highly underestimated). Use this info to tune the memory limits (section 3.2.2). \n\nNote also that the memory usage is configurable:\n\nmemorymax\nStop when the memory consumed by Psyco reached the limit (in kilobytes). This limit includes the memory consumed before this profiler started. \n\n",
"Psyco is a JIT compiler. If your function are stateless, then there should be almost no draw back except more memory.\n"
] |
[
3,
2
] |
[] |
[] |
[
"psyco",
"python"
] |
stackoverflow_0001437403_psyco_python.txt
|
Q:
Sorting by a field of another table referenced by a foreign key in SQLObject
Is it possible to sort results returned by SQLObject by a value of another table?
I have two tables:
class Foo(SQLObject):
    bar = ForeignKey('Bar')

class Bar(SQLObject):
    name = StringCol()
    foos = MultipleJoin('Foo')
I'd like to get foos sorted by the name of a bar they are related to.
Doing:
foos = Foo.select().orderBy(Foo.q.bar)
...would sort the output by bar's ids, but how do I sort them by bar's name?
A:
Below is the answer of a SQLObject maintainer (he has trouble posting it himself because captcha is not displayed):
Do an explicit join:
foos = Foo.select(Foo.q.barID==Bar.q.id, orderBy=Bar.q.name)
This generates a query:
SELECT foo.id, foo.bar_id FROM foo, bar WHERE ((foo.bar_id) = (bar.id)) ORDER BY bar.name
PS. I am the current maintainer of SQLObject. I don't visit stackoverflow.com; a friend of mine pointed me to the question. If you have more questions about SQLObject I invite you to the SQLObject mailing list.
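A small usage sketch of that query with the models from the question (not part of the maintainer's reply):
foos = Foo.select(Foo.q.barID == Bar.q.id, orderBy=Bar.q.name)
for foo in foos:
    print foo.id, foo.bar.name   # foos come out ordered by the related Bar's name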
|
Sorting by a field of another table referenced by a foreign key in SQLObject
|
Is it possible to sort results returned by SQLObject by a value of another table?
I have two tables:
class Foo(SQLObject):
    bar = ForeignKey('Bar')

class Bar(SQLObject):
    name = StringCol()
    foos = MultipleJoin('Foo')
I'd like to get foos sorted by the name of a bar they are related to.
Doing:
foos = Foo.select().orderBy(Foo.q.bar)
...would sort the output by bar's ids, but how do I sort them by bar's name?
|
[
"Below is the answer of a SQLObject maintainer (he has trouble posting it himself because captcha is not displayed):\nDo an explicit join:\nfoos = Foo.select(Foo.q.barID==Bar.q.id, orderBy=Bar.q.name)\n\nThis generates a query:\nSELECT foo.id, foo.bar_id FROM foo, bar WHERE ((foo.bar_id) = (bar.id)) ORDER BY bar.name\n\nPS. I am the current maintainer of SQLObject. I don't visit stackoverflow.com; a friend of mine pointed me to the question. If you have more questions about SQLObject I invite you to the SQLObject mailing list .\n"
] |
[
3
] |
[] |
[] |
[
"python",
"sorting",
"sqlobject"
] |
stackoverflow_0001413101_python_sorting_sqlobject.txt
|
Q:
Encoding problems in PyQt
My program stores a file index in a file packed by cPickle. There are non-English filenames.
When I just do this
print f[0]
where f[0] is "\xc2\xe8\xf1\xee\xea\xee\xf1\xed\xfb\xe9 \xe3\xee\xe4" ("Високосный год" in normal view), it prints the string in the proper way — in Russian.
When the program manually adds the string u'Високосный год' to QTreeView, everything is fine.
But when the program puts this string ("\xe3\xee\xe4" etc.) straight from unpickled file to QTreeView, it becomes like that:
(screenshot) http://img170.imageshack.us/img170/9226/encoding.png
Is there any way to solve that?
A:
Have you run decode on the unpickled string using the correct encoding ("cp1251" by the look of it)? If not, you need to do this to make sure you're passing a Unicode string to the GUI.
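A minimal sketch of that decode step (the byte string from the question is cp1251-encoded):
raw = "\xc2\xe8\xf1\xee\xea\xee\xf1\xed\xfb\xe9 \xe3\xee\xe4"
name = raw.decode('cp1251')   # now a unicode object: u'Високосный год'
# hand `name` (unicode), not `raw` (bytes), to the QTreeView model code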
|
Encoding problems in PyQt
|
My program stores a file index in a file packed by cPickle. There are non-English filenames.
When I just do this
print f[0]
where f[0] is "\xc2\xe8\xf1\xee\xea\xee\xf1\xed\xfb\xe9 \xe3\xee\xe4" ("Високосный год" in normal view), it prints the string in the proper way — in Russian.
When the program manually adds the string u'Високосный год' to QTreeView, everything is fine.
But when the program puts this string ("\xe3\xee\xe4" etc.) straight from unpickled file to QTreeView, it becomes like that:
(screenshot) http://img170.imageshack.us/img170/9226/encoding.png
Is there any way to solve that?
|
[
"Have you run decode on the unpickled string using the correct encoding (\"cp1251\" by the look of it)? If not, you need to do this to make sure you're passing a Unicode string to the GUI.\n"
] |
[
2
] |
[] |
[] |
[
"encoding",
"pyqt",
"python"
] |
stackoverflow_0001437838_encoding_pyqt_python.txt
|
Q:
Retrieving values from 2 different tables with Django's QuerySet
For the following models:
class Topping(models.Model):
    name = models.CharField(max_length=100)

class Pizza(models.Model):
    name = models.CharField(max_length=100)
    toppings = models.ManyToManyField(Topping)
My data looks like the following:
Pizza and Topping tables joined:
ID    NAME        TOPPINGS
---------------------------------------
1     deluxe      topping_1, topping_2
2     deluxe      topping_3, topping_4
3     hawaiian    topping_1
I want to get the pizza ids along with their corresponding toppings for all pizzas named deluxe. My expected result is:
1 topping_1
1 topping_2
2 topping_3
2 topping_4
The junction table is:
pizza_toppings
--------------
id
pizza_id
topping_id
Here's the SQL equivalent of what I want to achieve:
SELECT p.id, t.name
FROM pizza_toppings AS pt
INNER JOIN pizza AS p ON p.id = pt.pizza_id
INNER JOIN topping AS t ON t.id = pt.topping_id
WHERE p.name = 'deluxe'
Any ideas on what the corresponding Django QuerySet looks like? I also want to sort the resulting toppings by name, if the above is not challenging enough.
A:
I don't think there is a clean solution to this, since you want data from two different models. Depending on your data structure you might want to use select_related to avoid hitting the database for all the toppings. Going for your desired result, I would do:
result = []
pizzas = Pizza.objects.select_related().filter(name='deluxe')
for pizza in pizzas:
    for topping in pizza.toppings.all():
        result.append((pizza.pk, topping.name))
This would generate:
[
(1, topping_1),
(1, topping_2),
(2, topping_3),
(2, topping_4),
]
Now there are different ways to set up the data, using lists, tuples and dictionaries, but I think you get the idea of how you could do it.
A:
There is no direct way to get a pizza when you have a topping from the model above. You could do
pizzas = topping.pizza_set.all()
for all pizzas with a topping or probably (in case the topping exists on only one pizza with the name "deluxe")
pizza = topping.pizza_set.get(name="deluxe")
once you have a topping. Or, you could store the Pizza and the Topping in a list of tuples or a dictionary (if there are no duplicate toppings):
toppings = {}
pizzas = Pizza.objects.filter(name="deluxe")
for pizza in pizzas:
    for topping in pizza.toppings.all():
        toppings[topping.name] = pizza.name
sorted_toppings = toppings.keys()
sorted_toppings.sort()
Then you can fetch the pizza for a topping with the dictionary.
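For completeness, the ORM can also produce the (id, name) pairs directly, without Python-side loops -- a sketch, not from the answers above, using the reverse ManyToMany lookup (values_list needs Django 1.0+):
rows = Topping.objects.filter(pizza__name='deluxe') \
                      .order_by('name') \
                      .values_list('pizza__id', 'name')
# -> [(1, u'topping_1'), (1, u'topping_2'), (2, u'topping_3'), (2, u'topping_4')]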
|
Retrieving values from 2 different tables with Django's QuerySet
|
For the following models:
class Topping(models.Model):
    name = models.CharField(max_length=100)

class Pizza(models.Model):
    name = models.CharField(max_length=100)
    toppings = models.ManyToManyField(Topping)
My data looks like the following:
Pizza and Topping tables joined:
ID    NAME        TOPPINGS
---------------------------------------
1     deluxe      topping_1, topping_2
2     deluxe      topping_3, topping_4
3     hawaiian    topping_1
I want to get the pizza ids along with their corresponding toppings for all pizzas named deluxe. My expected result is:
1 topping_1
1 topping_2
2 topping_3
2 topping_4
The junction table is:
pizza_toppings
--------------
id
pizza_id
topping_id
Here's the SQL equivalent of what I want to achieve:
SELECT p.id, t.name
FROM pizza_toppings AS pt
INNER JOIN pizza AS p ON p.id = pt.pizza_id
INNER JOIN topping AS t ON t.id = pt.topping_id
WHERE p.name = 'deluxe'
Any ideas on what the corresponding Django QuerySet looks like? I also want to sort the resulting toppings by name, if the above is not challenging enough.
|
[
"I don't think there is a clean solution to this, since you want data from two different models. Depending on your data structure you might want to use select_related to avoid hitting the database for all the toppings. Going for your desired result, I would do:\nresult = []\npizzas = Pizza.objects.select_related().filter(name='deluxe')\nfor pizza in pizzas:\n for toppings in pizza.toppings.all():\n result.append((pizza.pk, topping.name))\n\nThis would generate:\n[\n (1, topping_1),\n (1, topping_2),\n (2, topping_3),\n (2, topping_4),\n]\n\nNow there are different ways to setup the data, using lists, tuples and dictionaries, but I think you get the idea of how you could do it.\n",
"There is no direct way to get a pizza when you have a topping from the model above. You could do\npizzas = topping.pizza_set.all()\n\nfor all pizzas with a topping or probably (in case the topping exists on only one pizza with the name \"deluxe\")\npizza = topping.pizza_set.get(name=\"deluxe\")\n\nonce you have a topping. Or, you could store the Pizza and the Topping in a list of tuples or a dictionary (if there are no duplicate toppings):\ntoppings = {}\npizzas = Pizza.objects.filter(name=\"deluxe\")\nfor pizza in pizzas:\n for topping in pizza.toppings.all():\n toppings[topping.name] = pizza.name\nsorted_toppings = toppings.keys()\nsorted_toppings.sort()\n\nThen you can fetch the pizza for a topping with the dictionary.\n"
] |
[
2,
0
] |
[] |
[] |
[
"django",
"django_models",
"python"
] |
stackoverflow_0001435438_django_django_models_python.txt
|
Q:
What's the right way to use Unicode metadata in setup.py?
I was writing a setup.py for a Python package using setuptools and wanted to include a non-ASCII character in the long_description field:
#!/usr/bin/env python
from setuptools import setup
setup(...
long_description=u"...", # in real code this value is read from a text file
...)
Unfortunately, passing a unicode object to setup() breaks either of the following two commands with a UnicodeEncodeError
python setup.py --long-description | rst2html
python setup.py upload
If I use a raw UTF-8 string for the long_description field, then the following command breaks with a UnicodeDecodeError:
python setup.py register
I generally release software by running 'python setup.py sdist register upload', which means ugly hacks that look into sys.argv and pass the right object type are right out.
In the end I gave up and implemented a different ugly hack:
class UltraMagicString(object):
    # Catch-22:
    # - if I return Unicode, python setup.py --long-description as well
    #   as python setup.py upload fail with a UnicodeEncodeError
    # - if I return UTF-8 string, python setup.py sdist register
    #   fails with an UnicodeDecodeError
    def __init__(self, value):
        self.value = value

    def __str__(self):
        return self.value

    def __unicode__(self):
        return self.value.decode('UTF-8')

    def __add__(self, other):
        return UltraMagicString(self.value + str(other))

    def split(self, *args, **kw):
        return self.value.split(*args, **kw)
...
setup(...
long_description=UltraMagicString("..."),
...)
Isn't there a better way?
A:
It is apparently a distutils bug that has been fixed in python 2.6: http://mail.python.org/pipermail/distutils-sig/2009-September/013275.html
Tarek suggests to patch post_to_server. The patch should pre-process all values in the
"data" argument and turn them into unicode and then call the original method. See http://mail.python.org/pipermail/distutils-sig/2009-September/013277.html
A:
#!/usr/bin/env python
# -*- coding: utf-8 -*-

from setuptools import setup
setup(name="fudz",
      description="fudzily",
      version="0.1",
      long_description=u"bläh bläh".encode("UTF-8"),  # in real code this value is read from a text file
      py_modules=["fudz"],
      author="David Fraser",
      author_email="[email protected]",
      url="http://en.wikipedia.org/wiki/Fudz",
      )
I'm testing with the above code - there is no error from --long-description, only from rst2html; upload seems to work OK (although I cancel actually uploading) and register asks me for my username which I don't have. But the traceback in your comment is helpful - it's the automatic conversion to unicode in the register command that causes the problem.
See the illusive setdefaultencoding for more information on this - basically you want the default encoding in Python to be able to convert your encoded string back to unicode, but it's tricky to set this up. In this case I think it's worth the effort:
import sys
reload(sys).setdefaultencoding("UTF-8")
Or even to be correct you can get it from the locale - there's code commented out in /usr/lib/python2.6/site.py that you can find that does this but I'll leave that discussion for now.
A:
You need to change your unicode long description u"bläh bläh bläh" to a normal string "bläh bläh bläh" and add an encoding header as the second line of your file:
#!/usr/bin/env python
# encoding: utf-8
...
...
Obviously, you need to save the file with UTF-8 encoding, too.
|
What's the right way to use Unicode metadata in setup.py?
|
I was writing a setup.py for a Python package using setuptools and wanted to include a non-ASCII character in the long_description field:
#!/usr/bin/env python
from setuptools import setup
setup(...
long_description=u"...", # in real code this value is read from a text file
...)
Unfortunately, passing a unicode object to setup() breaks either of the following two commands with a UnicodeEncodeError
python setup.py --long-description | rst2html
python setup.py upload
If I use a raw UTF-8 string for the long_description field, then the following command breaks with a UnicodeDecodeError:
python setup.py register
I generally release software by running 'python setup.py sdist register upload', which means ugly hacks that look into sys.argv and pass the right object type are right out.
In the end I gave up and implemented a different ugly hack:
class UltraMagicString(object):
    # Catch-22:
    # - if I return Unicode, python setup.py --long-description as well
    #   as python setup.py upload fail with a UnicodeEncodeError
    # - if I return UTF-8 string, python setup.py sdist register
    #   fails with an UnicodeDecodeError
    def __init__(self, value):
        self.value = value

    def __str__(self):
        return self.value

    def __unicode__(self):
        return self.value.decode('UTF-8')

    def __add__(self, other):
        return UltraMagicString(self.value + str(other))

    def split(self, *args, **kw):
        return self.value.split(*args, **kw)
...
setup(...
long_description=UltraMagicString("..."),
...)
Isn't there a better way?
|
[
"It is apparently a distutils bug that has been fixed in python 2.6: http://mail.python.org/pipermail/distutils-sig/2009-September/013275.html\nTarek suggests to patch post_to_server. The patch should pre-process all values in the\n\"data\" argument and turn them into unicode and then call the original method. See http://mail.python.org/pipermail/distutils-sig/2009-September/013277.html\n",
"#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom setuptools import setup\nsetup(name=\"fudz\",\n description=\"fudzily\",\n version=\"0.1\",\n long_description=u\"bläh bläh\".encode(\"UTF-8\"), # in real code this value is read from a text file\n py_modules=[\"fudz\"],\n author=\"David Fraser\",\n author_email=\"[email protected]\",\n url=\"http://en.wikipedia.org/wiki/Fudz\",\n )\n\nI'm testing with the above code - there is no error from --long-description, only from rst2html; upload seems to work OK (although I cancel actually uploading) and register asks me for my username which I don't have. But the traceback in your comment is helpful - it's the automatic conversion to unicode in the register command that causes the problem.\nSee the illusive setdefaultencoding for more information on this - basically you want the default encoding in Python to be able to convert your encoded string back to unicode, but it's tricky to set this up. In this case I think it's worth the effort:\nimport sys\nreload(sys).setdefaultencoding(\"UTF-8\")\n\nOr even to be correct you can get it from the locale - there's code commented out in /usr/lib/python2.6/site.py that you can find that does this but I'll leave that discussion for now.\n",
"You need to change your unicode long description u\"bläh bläh bläh\" to a normal string \"bläh bläh bläh\" and add an encoding header as the second line of your file:\n#!/usr/bin/env python\n# encoding: utf-8\n...\n...\n\nObviously, you need to save the file with UTF-8 encoding, too.\n"
] |
[
6,
4,
1
] |
[] |
[] |
[
"python",
"setuptools",
"unicode"
] |
stackoverflow_0001162338_python_setuptools_unicode.txt
|
Q:
Error while using multiprocessing module in a python daemon
I'm getting the following error when using the multiprocessing module within a python daemon process (using python-daemon):
Traceback (most recent call last):
  File "/usr/local/lib/python2.6/atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "/usr/local/lib/python2.6/multiprocessing/util.py", line 262, in _exit_function
    for p in active_children():
  File "/usr/local/lib/python2.6/multiprocessing/process.py", line 43, in active_children
    _cleanup()
  File "/usr/local/lib/python2.6/multiprocessing/process.py", line 53, in _cleanup
    if p._popen.poll() is not None:
  File "/usr/local/lib/python2.6/multiprocessing/forking.py", line 106, in poll
    pid, sts = os.waitpid(self.pid, flag)
OSError: [Errno 10] No child processes
The daemon process (parent) spawns a number of processes (children) and then periodically polls the processes to see if they have completed. If the parent detects that one of the processes has completed, it then attempts to restart that process. It is at this point that the above exception is raised. It seems that once one of the processes completes, any operation involving the multiprocessing module will generate this exception. If I run the identical code in a non-daemon python script, it executes with no errors whatsoever.
EDIT:
Sample script
from daemon import runner

class DaemonApp(object):
    def __init__(self, pidfile_path, run):
        self.pidfile_path = pidfile_path
        self.run = run
        self.stdin_path = '/dev/null'
        self.stdout_path = '/dev/tty'
        self.stderr_path = '/dev/tty'

def run():
    import multiprocessing as processing
    import time
    import os
    import sys
    import signal

    def func():
        print 'pid: ', os.getpid()
        for i in range(5):
            print i
            time.sleep(1)

    process = processing.Process(target=func)
    process.start()

    while True:
        print 'checking process'
        if not process.is_alive():
            print 'process dead'
            process = processing.Process(target=func)
            process.start()
        time.sleep(1)

# uncomment to run as daemon
app = DaemonApp('/root/bugtest.pid', run)
daemon_runner = runner.DaemonRunner(app)
daemon_runner.do_action()

#uncomment to run as regular script
#run()
A:
Your problem is a conflict between the daemon and multiprocessing modules, in particular in its handling of the SIGCLD (child process terminated) signal. daemon sets SIGCLD to SIG_IGN when launching, which, at least on Linux, causes terminated children to immediately be reaped (rather than becoming a zombie until the parent invokes wait()). But multiprocessing's is_alive test invokes wait() to see if the process is alive, which fails if the process has already been reaped.
Simplest solution is just to set SIGCLD back to SIG_DFL (default behaviour -- ignore the signal and let the parent wait() for the terminated child process):
def run():
    # ...

    signal.signal(signal.SIGCLD, signal.SIG_DFL)

    process = processing.Process(target=func)
    process.start()

    while True:
        # ...
A:
Ignoring SIGCLD also causes problems with the subprocess module, because of a bug in that module (issue 1731717, still open as of 2011-09-21).
This behaviour is addressed in version 1.4.8 of the python-daemon library; it now omits the default fiddling with SIGCLD, so no longer has this unpleasant interaction with other standard library modules.
A:
I think there was a fix put into trunk and 2.6-maint a little while ago which should help with this. Can you try running your script in python-trunk or the latest 2.6-maint svn? I'm failing to pull up the bug information.
A:
Looks like your error is coming at the very end of your process -- your clue's at the very start of your traceback, and I quote...:
File "/usr/local/lib/python2.6/atexit.py", line 24, in _run_exitfuncs
func(*targs, **kargs)
if atexit._run_exitfuncs is running, this clearly shows that your own process is terminating. So, the error itself is a minor issue in a sense -- just from some function that the multiprocessing module registered to run "at-exit" from your process. The really interesting issue is, WHY is your main process exiting? I think this may be due to some uncaught exception: try setting the exception hook and showing rich diagnostic info before it gets lost by the OTHER exception caused by whatever it is that multiprocessing's registered for at-exit running...
A:
I'm running into this also using the celery distributed task manager under RHEL 5.3 with Python 2.6. My traceback looks a little different but the error the same:
File "/usr/local/lib/python2.6/multiprocessing/pool.py", line 334, in terminate
self._terminate()
File "/usr/local/lib/python2.6/multiprocessing/util.py", line 174, in __call__
res = self._callback(*self._args, **self._kwargs)
File "/usr/local/lib/python2.6/multiprocessing/pool.py", line 373, in _terminate_pool
p.terminate()
File "/usr/local/lib/python2.6/multiprocessing/process.py", line 111, in terminate
self._popen.terminate()
File "/usr/local/lib/python2.6/multiprocessing/forking.py", line 136, in terminate
if self.wait(timeout=0.1) is None:
File "/usr/local/lib/python2.6/multiprocessing/forking.py", line 121, in wait
res = self.poll()
File "/usr/local/lib/python2.6/multiprocessing/forking.py", line 106, in poll
pid, sts = os.waitpid(self.pid, flag)
OSError: [Errno 10] No child processes
Quite frustrating... I'm running the code through pdb now, but haven't spotted anything yet.
A:
The original sample script has "import signal" but no use of signals. However, I had a script causing this error message and it was due to my signal handling, so I'll explain here in case it's what is happening for others. Within a signal handler, I was doing stuff with processes (e.g. creating a new process). Apparently this doesn't work, so I stopped doing that within the handler and fixed the error. (Note: sleep() functions wake up after signal handling, so that can be an alternative approach to acting upon signals if you need to do things with processes.)
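A sketch of that pattern -- record the signal in the handler, and do the process work in the main loop (the signal choice here is hypothetical):
import signal
import time

restart_requested = [False]

def on_signal(signum, frame):
    restart_requested[0] = True   # do as little as possible inside the handler

signal.signal(signal.SIGUSR1, on_signal)

while True:
    if restart_requested[0]:
        restart_requested[0] = False
        # safe to create/start multiprocessing.Process objects here,
        # outside the signal handler
    time.sleep(1)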
|
Error while using multiprocessing module in a python daemon
|
I'm getting the following error when using the multiprocessing module within a python daemon process (using python-daemon):
Traceback (most recent call last):
  File "/usr/local/lib/python2.6/atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "/usr/local/lib/python2.6/multiprocessing/util.py", line 262, in _exit_function
    for p in active_children():
  File "/usr/local/lib/python2.6/multiprocessing/process.py", line 43, in active_children
    _cleanup()
  File "/usr/local/lib/python2.6/multiprocessing/process.py", line 53, in _cleanup
    if p._popen.poll() is not None:
  File "/usr/local/lib/python2.6/multiprocessing/forking.py", line 106, in poll
    pid, sts = os.waitpid(self.pid, flag)
OSError: [Errno 10] No child processes
The daemon process (parent) spawns a number of processes (children) and then periodically polls the processes to see if they have completed. If the parent detects that one of the processes has completed, it then attempts to restart that process. It is at this point that the above exception is raised. It seems that once one of the processes completes, any operation involving the multiprocessing module will generate this exception. If I run the identical code in a non-daemon python script, it executes with no errors whatsoever.
EDIT:
Sample script
from daemon import runner

class DaemonApp(object):
    def __init__(self, pidfile_path, run):
        self.pidfile_path = pidfile_path
        self.run = run
        self.stdin_path = '/dev/null'
        self.stdout_path = '/dev/tty'
        self.stderr_path = '/dev/tty'

def run():
    import multiprocessing as processing
    import time
    import os
    import sys
    import signal

    def func():
        print 'pid: ', os.getpid()
        for i in range(5):
            print i
            time.sleep(1)

    process = processing.Process(target=func)
    process.start()

    while True:
        print 'checking process'
        if not process.is_alive():
            print 'process dead'
            process = processing.Process(target=func)
            process.start()
        time.sleep(1)

# uncomment to run as daemon
app = DaemonApp('/root/bugtest.pid', run)
daemon_runner = runner.DaemonRunner(app)
daemon_runner.do_action()

#uncomment to run as regular script
#run()
|
[
"Your problem is a conflict between the daemon and multiprocessing modules, in particular in its handling of the SIGCLD (child process terminated) signal. daemon sets SIGCLD to SIG_IGN when launching, which, at least on Linux, causes terminated children to immediately be reaped (rather than becoming a zombie until the parent invokes wait()). But multiprocessing's is_alive test invokes wait() to see if the process is alive, which fails if the process has already been reaped.\nSimplest solution is just to set SIGCLD back to SIG_DFL (default behaviour -- ignore the signal and let the parent wait() for the terminated child process):\ndef run():\n # ...\n\n signal.signal(signal.SIGCLD, signal.SIG_DFL)\n\n process = processing.Process(target=func)\n process.start()\n\n while True:\n # ...\n\n",
"Ignoring SIGCLD also causes problems with the subprocess module, because of a bug in that module (issue 1731717, still open as of 2011-09-21).\nThis behaviour is addressed in version 1.4.8 of the python-daemon library; it now omits the default fiddling with SIGCLD, so no longer has this unpleasant interaction with other standard library modules.\n",
"I think there was a fix put into trunk and 2.6 maint a little while ago which should help with this can you try running your script in python-trunk or the latest 2.6-maint svn? I'm failing to pull up the bug information\n",
"Looks like your error is coming at the very end of your process -- your clue's at the very start of your traceback, and I quote...:\nFile \"/usr/local/lib/python2.6/atexit.py\", line 24, in _run_exitfuncs\n func(*targs, **kargs)\n\nif atexit._run_exitfuncs is running, this clearly shows that your own process is terminating. So, the error itself is a minor issue in a sense -- just from some function that the multiprocessing module registered to run \"at-exit\" from your process. The really interesting issue is, WHY is your main process exiting? I think this may be due to some uncaught exception: try setting the exception hook and showing rich diagnostic info before it gets lost by the OTHER exception caused by whatever it is that multiprocessing's registered for at-exit running...\n",
"I'm running into this also using the celery distributed task manager under RHEL 5.3 with Python 2.6. My traceback looks a little different but the error the same:\n File \"/usr/local/lib/python2.6/multiprocessing/pool.py\", line 334, in terminate\n self._terminate()\n File \"/usr/local/lib/python2.6/multiprocessing/util.py\", line 174, in __call__\n res = self._callback(*self._args, **self._kwargs)\n File \"/usr/local/lib/python2.6/multiprocessing/pool.py\", line 373, in _terminate_pool\n p.terminate()\n File \"/usr/local/lib/python2.6/multiprocessing/process.py\", line 111, in terminate\n self._popen.terminate()\n File \"/usr/local/lib/python2.6/multiprocessing/forking.py\", line 136, in terminate\n if self.wait(timeout=0.1) is None:\n File \"/usr/local/lib/python2.6/multiprocessing/forking.py\", line 121, in wait\n res = self.poll()\n File \"/usr/local/lib/python2.6/multiprocessing/forking.py\", line 106, in poll\n pid, sts = os.waitpid(self.pid, flag)\nOSError: [Errno 10] No child processes\n\nQuite frustrating.. I'm running the code through pdb now, but haven't spotted anything yet.\n",
"The original sample script has \"import signal\" but no use of signals. However, I had a script causing this error message and it was due to my signal handling, so I'll explain here in case its what is happening for others. Within a signal handler, I was doing stuff with processes (e.g. creating a new process). Apparently this doesn't work, so I stopped doing that within the handler and fixed the error. (Note: sleep() functions wake up after signal handling so that can be an alternative approach to acting upon signals if you need to do things with processes)\n"
] |
[
7,
5,
0,
0,
0,
0
] |
[] |
[] |
[
"daemon",
"multiprocessing",
"python"
] |
stackoverflow_0001359795_daemon_multiprocessing_python.txt
|
Q:
Finding closest match in collection of strings representing numbers
I have a sorted list of datetimes in text format. The format of each entry is '2009-09-10T12:00:00'.
I want to find the entry closest to a target. There are many more entries than the number of searches I would have to do.
I could change each entry to a number, then search numerically (for example these approaches), but that seems like excess effort.
Is there a better way than this:
def mid(res, target):
    #res is a list of entries, sorted by dt (dateTtime)
    #each entry is a dict with a dt and some other info
    n = len(res)
    low = 0
    high = n-1
    # find the first res greater than target
    while low < high:
        mid = (low + high)/2
        t = res[int(mid)]['dt']
        if t < target:
            low = mid + 1
        else:
            high = mid
    # check if the prior value is closer
    i = max(0, int(low)-1)
    a = dttosecs(res[i]['dt'])
    b = dttosecs(res[int(low)]['dt'])
    t = dttosecs(target)
    if abs(a-t) < abs(b-t):
        return int(low-1)
    else:
        return int(low)
import time

def dttosecs(dt):
    # string to seconds since the beginning
    date,tim = dt.split('T')
    y,m,d = date.split('-')
    h,mn,s = tim.split(':')
    y = int(y)
    m = int(m)
    d = int(d)
    h = int(h)
    mn = int(mn)
    s = min(59,int(float(s)+0.5)) # round to nearest second
    s = int(s)
    secs = time.mktime((y,m,d,h,mn,s,0,0,-1))
    return secs
A:
You want the bisect module from the standard library. It will do a binary search and tell you the correct insertion point for a new value into an already sorted list. Here's an example that will print the place in the list where target would be inserted:
from bisect import bisect
dates = ['2009-09-10T12:00:00', '2009-09-11T12:32:00', '2009-09-11T12:43:00']
target = '2009-09-11T12:40:00'
print bisect(dates, target)
From there you can just compare to the thing before and after your insertion point, which in this case would be dates[i-1] and dates[i] to see which one is closest to your target.
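A sketch of that final comparison wrapped into one function (assuming ISO-8601 strings, which sort correctly as text; closest and tosecs are my names, not part of bisect):
import time
from bisect import bisect

FMT = '%Y-%m-%dT%H:%M:%S'

def tosecs(s):
    return time.mktime(time.strptime(s, FMT))

def closest(dates, target):
    i = bisect(dates, target)
    if i == 0:             # target sorts before every entry
        return dates[0]
    if i == len(dates):    # target sorts after every entry
        return dates[-1]
    before, after = dates[i-1], dates[i]
    t = tosecs(target)
    if t - tosecs(before) <= tosecs(after) - t:
        return before
    return after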
A:
"Copy and paste coding" (getting bisect's sources into your code) is not recommended as it carries all sorts of costs down the road (lot of extra source code for you to test and maintain, difficulties dealing with upgrades in the upstream code you've copied, etc, etc); the best way to reuse standard library modules is simply to import them and use them.
However, to do one pass transforming the dictionaries into meaningfully comparable entries is O(N), which (even though each step of the pass is simple) will eventually swamp the O(log N) time of the search proper. Since bisect can't support a key= extractor like sort does, what's the solution to this dilemma -- how can you reuse bisect by import and call, without a preliminary O(N) step...?
As quoted here, the solution is in David Wheeler's famous saying, "All problems in computer science can be solved by another level of indirection". Consider e.g....:
import bisect

listofdicts = [
    {'dt': '2009-%2.2d-%2.2dT12:00:00' % (m,d) }
    for m in range(4,9) for d in range(1,30)
    ]

class Indexer(object):
    def __init__(self, lod, key):
        self.lod = lod
        self.key = key
    def __len__(self):
        return len(self.lod)
    def __getitem__(self, idx):
        return self.lod[idx][self.key]

lookfor = listofdicts[len(listofdicts)//2]['dt']

def mid(res=listofdicts, target=lookfor):
    keys = [r['dt'] for r in res]
    return res[bisect.bisect_left(keys, target)]

def midi(res=listofdicts, target=lookfor):
    wrap = Indexer(res, 'dt')
    return res[bisect.bisect_left(wrap, target)]

if __name__ == '__main__':
    print '%d dicts on the list' % len(listofdicts)
    print 'Looking for', lookfor
    print mid(), midi()
    assert mid() == midi()
The output (just running this indexer.py as a check, then with timeit, two ways):
$ python indexer.py
145 dicts on the list
Looking for 2009-06-15T12:00:00
{'dt': '2009-06-15T12:00:00'} {'dt': '2009-06-15T12:00:00'}
$ python -mtimeit -s'import indexer' 'indexer.mid()'
10000 loops, best of 3: 27.2 usec per loop
$ python -mtimeit -s'import indexer' 'indexer.midi()'
100000 loops, best of 3: 9.43 usec per loop
As you see, even in a modest task with 145 entries in the list, the indirection approach can have a performance that's three times better than the "key-extraction pass" approach. Since we're comparing O(N) vs O(log N), the advantage of the indirection approach grows without bounds as N increases. (For very small N, the higher multiplicative constants due to the indirection make the key-extraction approach faster, but this is soon surpassed by the big-O difference). Admittedly, the Indexer class is extra code -- however, it's reusable over ALL tasks of binary searching a list of dicts sorted by one entry in each dict, so having it in your "container-utilities bag of tricks" offers good return on that investment.
So much for the main search loop. For the secondary task of converting two entries (the one just below and the one just above the target) and the target to a number of seconds, consider, again, a higher-reuse approach, namely:
import time

adt = '2009-09-10T12:00:00'

def dttosecs(dt=adt):
    # string to seconds since the beginning
    date,tim = dt.split('T')
    y,m,d = date.split('-')
    h,mn,s = tim.split(':')
    y = int(y)
    m = int(m)
    d = int(d)
    h = int(h)
    mn = int(mn)
    s = min(59,int(float(s)+0.5)) # round to nearest second
    s = int(s)
    secs = time.mktime((y,m,d,h,mn,s,0,0,-1))
    return secs

def simpler(dt=adt):
    return time.mktime(time.strptime(dt, '%Y-%m-%dT%H:%M:%S'))

if __name__ == '__main__':
    print adt, dttosecs(), simpler()
    assert dttosecs() == simpler()
Here, there is no performance advantage to the reuse approach (indeed, and on the contrary, dttosecs is faster) -- but then, you only need to perform three conversions per search, no matter how many entries are on your list of dicts, so it's not clear whether that performance issue is germane. Meanwhile, with simpler you only have to write, test and maintain one simple line of code, while dttosecs is a dozen lines; given this ratio, in most situations (i.e., excluding absolute bottlenecks), I would prefer simpler. The important thing is to be aware of both approaches and of the tradeoffs between them so as to ensure the choice is made wisely.
A:
import bisect

def mid(res, target):
    keys = [r['dt'] for r in res]
    return res[bisect.bisect_left(keys, target)]
A:
First, change to this.
import datetime

def parse_dt(dt):
    return datetime.datetime.strptime( dt, "%Y-%m-%dT%H:%M:%S" )
This removes much of the "effort".
Consider this as the search.
from bisect import bisect

def mid( res, target ):
    """res is a list of entries, sorted by dt (dateTtime)
    each entry is a dict with a dt and some other info
    """
    times = [ parse_dt(r['dt']) for r in res ]
    index = bisect( times, parse_dt(target) )
    return times[index]
This doesn't seem like very much "effort". This also doesn't depend on your timestamp strings sorting correctly as text: you can change to any timestamp format and be assured that this will still work.
|
Finding closest match in collection of strings representing numbers
|
I have a sorted list of datetimes in text format. The format of each entry is '2009-09-10T12:00:00'.
I want to find the entry closest to a target. There are many more entries than the number of searches I would have to do.
I could change each entry to a number, then search numerically (for example these approaches), but that would seem excess effort.
Is there a better way than this:
def mid(res, target):
#res is a list of entries, sorted by dt (dateTtime)
#each entry is a dict with a dt and some other info
n = len(res)
low = 0
high = n-1
# find the first res greater than target
while low < high:
mid = (low + high)/2
t = res[int(mid)]['dt']
if t < target:
low = mid + 1
else:
high = mid
# check if the prior value is closer
i = max(0, int(low)-1)
a = dttosecs(res[i]['dt'])
b = dttosecs(res[int(low)]['dt'])
t = dttosecs(target)
if abs(a-t) < abs(b-t):
return int(low-1)
else:
return int(low)
import time
def dttosecs(dt):
# string to seconds since the beginning
date,tim = dt.split('T')
y,m,d = date.split('-')
h,mn,s = tim.split(':')
y = int(y)
m = int(m)
d = int(d)
h = int(h)
mn = int(mn)
s = min(59,int(float(s)+0.5)) # round to neatest second
s = int(s)
secs = time.mktime((y,m,d,h,mn,s,0,0,-1))
return secs
|
[
"You want the bisect module from the standard library. It will do a binary search and tell you the correct insertion point for a new value into an already sorted list. Here's an example that will print the place in the list where target would be inserted:\nfrom bisect import bisect\ndates = ['2009-09-10T12:00:00', '2009-09-11T12:32:00', '2009-09-11T12:43:00']\ntarget = '2009-09-11T12:40:00'\nprint bisect(dates, target)\n\nFrom there you can just compare to the thing before and after your insertion point, which in this case would be dates[i-1] and dates[i] to see which one is closest to your target.\n",
"\"Copy and paste coding\" (getting bisect's sources into your code) is not recommended as it carries all sorts of costs down the road (lot of extra source code for you to test and maintain, difficulties dealing with upgrades in the upstream code you've copied, etc, etc); the best way to reuse standard library modules is simply to import them and use them.\nHowever, to do one pass transforming the dictionaries into meaningfully comparable entries is O(N), which (even though each step of the pass is simple) will eventually swamp the O(log N) time of the search proper. Since bisect can't support a key= key extractor like sort does, what the solution to this dilemma -- how can you reuse bisect by import and call, without a preliminary O(N) step...?\nAs quoted here, the solution is in David Wheeler's famous saying, \"All problems in computer science can be solved by another level of indirection\". Consider e.g....:\nimport bisect\n\nlistofdicts = [\n {'dt': '2009-%2.2d-%2.2dT12:00:00' % (m,d) }\n for m in range(4,9) for d in range(1,30)\n ]\n\nclass Indexer(object):\n def __init__(self, lod, key):\n self.lod = lod\n self.key = key\n def __len__(self):\n return len(self.lod)\n def __getitem__(self, idx):\n return self.lod[idx][self.key]\n\n\nlookfor = listofdicts[len(listofdicts)//2]['dt']\n\ndef mid(res=listofdicts, target=lookfor):\n keys = [r['dt'] for r in res]\n return res[bisect.bisect_left(keys, target)]\n\ndef midi(res=listofdicts, target=lookfor):\n wrap = Indexer(res, 'dt')\n return res[bisect.bisect_left(wrap, target)]\n\nif __name__ == '__main__':\n print '%d dicts on the list' % len(listofdicts)\n print 'Looking for', lookfor\n print mid(), midi()\nassert mid() == midi()\n\nThe output (just running this indexer.py as a check, then with timeit, two ways):\n$ python indexer.py \n145 dicts on the list\nLooking for 2009-06-15T12:00:00\n{'dt': '2009-06-15T12:00:00'} {'dt': '2009-06-15T12:00:00'}\n$ python -mtimeit -s'import indexer' 'indexer.mid()'\n10000 loops, best of 3: 27.2 usec per loop\n$ python -mtimeit -s'import indexer' 'indexer.midi()'\n100000 loops, best of 3: 9.43 usec per loop\n\nAs you see, even in a modest task with 145 entries in the list, the indirection approach can have a performance that's three times better than the \"key-extraction pass\" approach. Since we're comparing O(N) vs O(log N), the advantage of the indirection approach grows without bounds as N increases. (For very small N, the higher multiplicative constants due to the indirection make the key-extraction approach faster, but this is soon surpassed by the big-O difference). Admittedly, the Indexer class is extra code -- however, it's reusable over ALL tasks of binary searching a list of dicts sorted by one entry in each dict, so having it in your \"container-utilities back of tricks\" offers good return on that investment.\nSo much for the main search loop. 
For the secondary task of converting two entries (the one just below and the one just above the target) and the target to a number of seconds, consider, again, a higher-reuse approach, namely:\nimport time\n\nadt = '2009-09-10T12:00:00'\n\ndef dttosecs(dt=adt):\n # string to seconds since the beginning\n date,tim = dt.split('T')\n y,m,d = date.split('-')\n h,mn,s = tim.split(':')\n y = int(y)\n m = int(m)\n d = int(d)\n h = int(h)\n mn = int(mn)\n s = min(59,int(float(s)+0.5)) # round to neatest second\n s = int(s)\n secs = time.mktime((y,m,d,h,mn,s,0,0,-1))\n return secs\n\ndef simpler(dt=adt):\n return time.mktime(time.strptime(dt, '%Y-%m-%dT%H:%M:%S'))\n\nif __name__ == '__main__':\n print adt, dttosecs(), simpler()\nassert dttosecs() == simpler()\n\nHere, there is no performance advantage to the reuse approach (indeed, and on the contrary, dttosecs is faster) -- but then, you only need to perform three conversions per search, no matter how many entries are on your list of dicts, so it's not clear whether that performance issue is germane. Meanwhile, with simpler you only have to write, test and maintain one simple line of code, while dttosecs is a dozen lines; given this ratio, in most situations (i.e., excluding absolute bottlenecks), I would prefer simpler. The important thing is to be aware of both approaches and of the tradeoffs between them so as to ensure the choice is made wisely.\n",
"import bisect\n\ndef mid(res, target):\n keys = [r['dt'] for r in res]\n return res[bisect.bisect_left(keys, target)]\n\n",
"First, change to this.\nimport datetime\ndef parse_dt(dt):\n return datetime.strptime( dt, \"%Y-%m-%dT%H:%M:%S\" )\n\nThis removes much of the \"effort\".\nConsider this as the search.\ndef mid( res, target ):\n \"\"\"res is a list of entries, sorted by dt (dateTtime) \n each entry is a dict with a dt and some other info\n \"\"\"\n times = [ parse_dt(r['dt']) for r in res ]\n index= bisect( times, parse_dt(target) )\n return times[index]\n\nThis doesn't seem like very much \"effort\". This does not depend on your timestamps being formatted properly, either. You can change to any timestamp format and be assured that this will always work.\n"
] |
[
4,
4,
2,
1
] |
[] |
[] |
[
"python",
"search"
] |
stackoverflow_0001438924_python_search.txt
|
Q:
have you seen? _mysql_exceptions.OperationalError "Lost connection to MySQL server during query" being ignored
I am just starting out with the MySQLdb module for python, and upon running some SELECT and UPDATE queries, the following gets output:
Exception _mysql_exceptions.OperationalError: (2013, 'Lost connection to MySQL
server during query') in <bound method Cursor.__del__ of
<MySQLdb.cursors.Cursor object at 0x8c0188c>> ignored
The exception is apparently getting caught (and "ignored") by MySQLdb itself, so I guess this is not a major issue. Also, the SELECTs generate results and the table gets modified by UPDATE.
But, since I am just getting my feet wet with this, I want to ask: does this message suggest I am doing something wrong? Or have you seen these warnings before in harmless situations?
Thanks for any insight,
lara
A:
Ha! Just realized I was trying to use the cursor after having closed the connection! In any case, it was nice writing! : )
l
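For anyone who hits the same message, a minimal sketch of the fix (connection parameters are placeholders): finish with the cursor before closing the connection, not the other way around.
import MySQLdb

conn = MySQLdb.connect(host='localhost', user='me', passwd='secret', db='test')
cursor = conn.cursor()
cursor.execute("SELECT 1")
print cursor.fetchall()
cursor.close()  # close the cursor first...
conn.close()    # ...then the connection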
|
have you seen? _mysql_exceptions.OperationalError "Lost connection to MySQL server during query" being ignored
|
I am just starting out with the MySQLdb module for python, and upon running some SELECT and UPDATE queries, the following gets output:
Exception
_mysql_exceptions.OperationalError: (2013, 'Lost connection to MySQL
server during query') in bound method Cursor.del of
MySQLdb.cursors.Cursor object at 0x8c0188c ignored
The exception is apparently getting caught (and "ignored") by MySQLdb itself, so I guess this is not a major issue. Also, the SELECTs generate results and the table gets modified by UPDATE.
But, since I am just getting my feet wet with this, I want to ask: does this message suggest I am doing something wrong? Or have you seen these warnings before in harmless situations?
Thanks for any insight,
lara
|
[
"Ha! Just realized I was trying to use the cursor after having closed the connection! In any case, it was nice writing! : )\nl\n"
] |
[
0
] |
[] |
[] |
[
"mysql",
"python"
] |
stackoverflow_0001439616_mysql_python.txt
|
Q:
How can I measure the overall memory requirements of a Python program
I have a financial pricing application written in Python 2.4.4 which runs as an Excel plugin. Excel has a 1GB memory limit for all addins, so if any addin process tries to allocate more than 1Gb in total it will cause Excel to crash.
I've recently made a change to the program which may have changed the overall memory requirements of the program. I'd like to work out if anything has changed significantly, and if not I can reassure my management that there's no chance of a failure due to an increase in memory usage.
Incidentally, it's possible to run the same function which runs in Excel from a command-line:
Effectively I can provide all the arguments that would be generated from Excel from the regular Windows prompt. This means that if I have a method to discover the memory requirements of the process on its own, I can safely infer that it will use a similar amount when run from Excel.
I'm not interested in the detail that a memory profiler might give: I do not need to know how much memory each function tries to allocate. What I do need to know is the smallest amount of memory that the program will require in order to run and I need to guarantee that if the program is run within a limit of 1Gb it will run OK.
Any suggestions as to how I can do this?
Platform is Windows XP 32bit, python 2.4.4
A:
I don't believe XP keeps track of the peak memory requirements of a process (the way Linux does in /proc/pid/status for example). You can use third-party utilities such as this one and set it to "poll" the process very frequently to get a good chance of grabbing the correct peak.
A better approach, though it doesn't answer your title question, is to try running the process under a job object, using SetInformationJobObject with a JOBOBJECT_EXTENDED_LIMIT_INFORMATION structure having a suitable ProcessMemoryLimit (e.g. 1GB max virtual memory) and the right flag in JOBOBJECT_BASIC_LIMIT_INFORMATION to activate that limit. This way, if the process runs correctly, you will KNOW it has never used more than 1 GB of virtual memory -- otherwise, it will crash with an out-of-memory error.
This programmatic approach also allows measurement of peak values, but it's complicated enough, compared with using already packaged tools, that I don't think it can be recommended for that purpose. I do not know of packaged tools that allow you to use job objects to specifically limit the resources available to processes (though I wouldn't be surprised to hear that some are available, be that freely or commercially, I've never come upon one myself).
A:
Simplest solution might just be to run your app from the command line and use Task Manager to see how much memory it's using.
Edit:
For something more complicated have a look at this question / answer How to get memory usage under Windows in C++
Edit: Another option. Get Process Explorer. It's basically Task Manager on steroids. It'll record peak usage of a process for you. (it's free so you don't have to worry about cost)
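As a rough in-process alternative, you could ask Windows for your own process's peaks at the end of a run. This is a sketch only: it assumes ctypes (a separate install on Python 2.4; in the standard library from 2.5) and the documented psapi GetProcessMemoryInfo call.
import ctypes
from ctypes import wintypes

class PROCESS_MEMORY_COUNTERS(ctypes.Structure):
    _fields_ = [
        ("cb", wintypes.DWORD),
        ("PageFaultCount", wintypes.DWORD),
        ("PeakWorkingSetSize", ctypes.c_size_t),
        ("WorkingSetSize", ctypes.c_size_t),
        ("QuotaPeakPagedPoolUsage", ctypes.c_size_t),
        ("QuotaPagedPoolUsage", ctypes.c_size_t),
        ("QuotaPeakNonPagedPoolUsage", ctypes.c_size_t),
        ("QuotaNonPagedPoolUsage", ctypes.c_size_t),
        ("PagefileUsage", ctypes.c_size_t),
        ("PeakPagefileUsage", ctypes.c_size_t),
    ]

def peak_memory():
    pmc = PROCESS_MEMORY_COUNTERS()
    pmc.cb = ctypes.sizeof(pmc)
    handle = ctypes.windll.kernel32.GetCurrentProcess()  # pseudo-handle
    ctypes.windll.psapi.GetProcessMemoryInfo(handle, ctypes.byref(pmc), pmc.cb)
    # PeakPagefileUsage is closer to peak committed (virtual) memory,
    # which is what a 1GB virtual-memory limit would be measured against
    return pmc.PeakWorkingSetSize, pmc.PeakPagefileUsage

# call peak_memory() just before the script exits (e.g. via atexit)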
|
How can I measure the overall memory requirements of a Python program
|
I have a financial pricing application written in Python 2.4.4 which runs as an Excel plugin. Excel has a 1GB memory limit for all addins, so if any addin process tries to allocate more than 1Gb in total it will cause Excel to crash.
I've recently made a change to the program which may have changed the overall memory requirements of the program. I'd like to work out if anything has changed sigifnicantly, and if not I can reassure my management that there's no chance of a failure due to an increase in memory usage.
Incidentally, it's possible to run the same function which runs in Excel from a command-line:
Effectively I can provide all the arguments that would be generated from Excel from the regular windows prompt. This means that if I have a method to discover the memory requirements of the process on it's own I can safely infer that it will use a similar amount when run from Excel.
I'm not interested in the detail that a memory profiler might give: I do not need to know how much memory each function tries to allocate. What I do need to know is the smallest amount of memory that the program will require in order to run and I need to guarantee that if the program is run within a limit of 1Gb it will run OK.
Any suggestions as to how I can do this?
Platform is Windows XP 32bit, python 2.4.4
|
[
"I don't believe XP keeps track of the peak memory requirements of a process (the way Linux does in /proc/pid/status for example). You can use third-party utilities such as this one and set it to \"poll\" the process very frequently to get a good chance of grabbing the correct peak.\nA better approach, though it doesn't answer your title question, is to try running the process under a job object with a SetInformationJobObject with a JOBOBJECT_EXTENDED_LIMIT_INFORMATION structure having a suitable ProcessMemoryLimit (e.g 1GB max virtual memory) and the right flag in JOBOBJECT_BASIC_LIMIT_INFORMATION to activate that limit. This way, if the process runs correctly, you will KNOW it has never used more than 1 GB of virtual memory -- otherwise, it will crash with an out-of-memory error. \nThis programmatic approach also allows measurement of peak values, but it's complicated enough, compared with using already packaged tools, that I don't think it can be recommended for that purpose. I do not know of packaged tools that allow you to use job objects to specifically limit the resources available to processes (though I wouldn't be surprised to hear that some are available, be that freely or commercially, I've never come upon one myself).\n",
"Simplest solution might just be to run your app from the command line and use Task Manager to see how much memory it's using.\nEdit:\nFor something more complicated have a look at this question / answer How to get memory usage under Windows in C++\nEdit: Another option. Get Process Explorer. It's basically Task Manager on steroids. It'll record peak usage of a process for you. (it's free so you don't have to worry about cost)\n"
] |
[
2,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001438773_python.txt
|
Q:
How do i return a quoted string from a tuple?
I have a tuple of strings and I want to join its contents into a single string of quoted items, i.e.
tup=('string1', 'string2', 'string3')
when i do this
main_str = ",".join(tup)
#i get
main_str = 'string1, string2, string3'
#I want the main_str to have something like this
main_str = '"string1", "string2", "string3"'
Gath
A:
", ".join('"{0}"'.format(i) for i in tup)
or
", ".join('"%s"' % i for i in tup)
A:
Well, one answer would be:
', '.join([repr(x) for x in tup])
or
repr(tup)[1:-1]
But that's not really nice. ;)
Updated:
Although, noted, you will not be able to control if resulting string starts with '" or '". If that matters, you need to be more explicit, like the other answers here are:
', '.join(['"%s"' % x for x in tup])
A:
Here's one way to do it:
>>> t = ('s1', 's2', 's3')
>>> ", ".join( s.join(['"','"']) for s in t)
'"s1", "s2", "s3"'
|
How do i return a quoted string from a tuple?
|
I have a tuple of strings that i would want to extract the contents as a quoted string, i.e.
tup=('string1', 'string2', 'string3')
when i do this
main_str = ",".join(tup)
#i get
main_str = 'string1, string2, string3'
#I want the main_str to have something like this
main_str = '"string1", "string2", "string3"'
Gath
|
[
"\", \".join('\"{0}\"'.format(i) for i in tup)\n\nor\n\", \".join('\"%s\"' % i for i in tup)\n\n",
"Well, one answer would be:\n', '.join([repr(x) for x in tup])\n\nor\nrepr(tup)[1:-1]\n\nBut that's not really nice. ;)\nUpdated:\nAlthough, noted, you will not be able to control if resulting string starts with '\" or '\". If that matters, you need to be more explicit, like the other answers here are:\n', '.join(['\"%s\"' % x for x in tup])\n\n",
"Here's one way to do it:\n>>> t = ('s1', 's2', 's3')\n>>> \", \".join( s.join(['\"','\"']) for s in t)\n'\"s1\", \"s2\", \"s3\"'\n\n"
] |
[
10,
2,
0
] |
[] |
[] |
[
"python",
"tuples"
] |
stackoverflow_0001439862_python_tuples.txt
|
Q:
What is the builtin name of the 'type' of functions, in Python?
What Python builtin returns <type 'function'>?
>>> type(lambda: None)
<type 'function'>
Is there way of avoiding creating this lambda function, in order to get the type of functions in general?
See http://www.finalcog.com/python-memoise-memoize-function-type for more details.
Thanks,
Chris.
A:
You should be able to use types.FunctionType to do what you want:
Python 2.6.1 (r261:67515, Jul 7 2009, 23:51:51)
[GCC 4.2.1 (Apple Inc. build 5646)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import types
>>> help(types.FunctionType)
Help on class function in module __builtin__:
class function(object)
| function(code, globals[, name[, argdefs[, closure]]])
|
| Create a function object from a code object and a dictionary.
| The optional name string overrides the name from the code object.
| The optional argdefs tuple specifies the default argument values.
| The optional closure tuple supplies the bindings for free variables.
But generally, def is considered the default constructor for the function type.
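A quick usage sketch of that type check (names are illustrative):
import types

def f():
    pass

print type(f) is types.FunctionType        # True
print isinstance(f, types.FunctionType)    # True
print isinstance(len, types.FunctionType)  # False -- len is a builtin, not a plain function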
A:
You should get away from the idea of 'types' in Python. Most of the time you don't want to check the 'type' of something. Explicitly checking types is prone to breakage, for example:
>>> s1 = 'hello'
>>> s2 = u'hello'
>>> type(s1) == type(s2)
False
What you want to do is check if the object supports whatever operation you're trying to perform on it.
If you want to see if a given object is a function, do this:
>>> func = lambda x: x*2
>>> something_else = 'not callable'
>>> callable(func)
True
>>> callable(something_else)
False
Or just try calling it, and catch the exception!
A:
"What Python builtin returns <type 'function'>?"
Functions.
"Is there way of avoiding creating this lambda function, in order to get the type of functions in general?"
Yes, types.FunctionType.
or just type(anyfunction)
If you are asking how to get rid of lambdas (but a reread tells me you probably are not), you can: define a function instead of the lambda.
So instead of:
>>> somemethod(lambda x: x+x)
You do
>>> def thefunction(x):
...     return x+x
>>> somemethod(thefunction)
A:
Built-ins are not functions; they are builtin_function_or_method. Isn't that the whole point of the naming?
You can see this by doing something like:
>>> type(len)
<class 'builtin_function_or_method'>
|
What is the builtin name of the 'type' of functions, in Python?
|
What Python builtin returns <type 'function'>?
>>> type(lambda: None)
<type 'function'>
Is there way of avoiding creating this lambda function, in order to get the type of functions in general?
See http://www.finalcog.com/python-memoise-memoize-function-type for more details.
Thanks,
Chris.
|
[
"You should be able to use types.FunctionType to do what you want:\n\n Python 2.6.1 (r261:67515, Jul 7 2009, 23:51:51) \n [GCC 4.2.1 (Apple Inc. build 5646)] on darwin\n Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n >>> import types\n >>> help(types.FunctionType)\n\n Help on class function in module __builtin__:\n\n class function(object)\n | function(code, globals[, name[, argdefs[, closure]]])\n | \n | Create a function object from a code object and a dictionary.\n | The optional name string overrides the name from the code object.\n | The optional argdefs tuple specifies the default argument values.\n | The optional closure tuple supplies the bindings for free variables.\n\nBut generally, def is considered the default constructor for the function type.\n",
"You should get away from the idea of 'types' in Python. Most of the time you don't want to check the 'type' of something. Explicitly checking types is prone to breakage, for example:\n>>> s1 = 'hello'\n>>> s2 = u'hello'\n>>> type(s1) == type(s2)\nFalse\n\nWhat you want to do is check if the object supports whatever operation you're trying to perform on it.\nIf you want to see if a given object is a function, do this:\n>>> func = lambda x: x*2\n>>> something_else = 'not callable'\n>>> callable(func)\nTrue\n>>> callable(something_else)\nFalse\n\nOr just try calling it, and catch the exception!\n",
"\"What Python builtin returns <type 'function'>?\"\nFunctions.\n\"Is there way of avoiding creating this lambda function, in order to get the type of functions in general?\"\nYes, types.FunctionType.\nor just type(anyfunction)\nIf you are asking how to get rid of lambdas (but a reread tells me you probably are not) you can if define a function instead of the lambda.\nSo instead of:\n>>> somemethod(lambda x: x+x)\n\nYou do\n>>> def thefunction(x):\n... return x+x\n>>> somemethod(thefunction)\n\n",
"built-ins are not functions they are: builtin_function_or_method. Isn't it the whole point of naming?\nyou can get by doing something like:\n>>> type(len)\n<class 'builtin_function_or_method'>\n\n"
] |
[
6,
3,
1,
0
] |
[] |
[] |
[
"python",
"types"
] |
stackoverflow_0001439815_python_types.txt
|
Q:
Google App Engine urlfetch to POST files
I am trying to send a file to torrage.com from an app in GAE.
The file is stored in memory after being received from a user upload.
I would like to be able to post this file using the API available here:
http://torrage.com/automation.php but I am having some problems understanding how the body of the post should be encoded; the most I got from the API is a "file empty" message.
A:
I find torrage's API docs on the POST interface (as opposed to the SOAP one) pretty confusing and conflicting with the sample C code they also supply. It seems to me that in their online example of PHP post they are not sending the file's contents (just like @kender's answer above is not sending it) while they ARE sending it in the SOAP examples and in the example C code.
The relevant part of the C sample (how they compute the headers that you'd be passing to urlfetch.fetch) is:
snprintf(formdata_header, sizeof(formdata_header) - 1,
    "Content-Disposition: form-data; name=\"torrent\"; filename=\"%s\"\n"
    "Content-Type: " HTTP_UPLOAD_CONTENT_TYPE "\n"
    "\n",
    torrent_file);
http_content_len = 2 + strlen(content_boundary) + 1 + strlen(formdata_header) + st.st_size + 1 + 2 + strlen(content_boundary) + 3;
LTdebug("http content len %u\n", http_content_len);
snprintf(http_req, sizeof(http_req) - 1,
    "POST /%s HTTP/1.1\n"
    "Host: %s\n"
    "User-Agent: libtorrage/" LTVERSION "\n"
    "Connection: close\n"
    "Content-Type: multipart/form-data; boundary=%s\n"
    "Content-Length: %u\n"
    "\n",
    cache_uri, cache_host, content_boundary, http_content_len);
"application/x-bittorrent" is the HTTP_UPLOAD_CONTENT_TYPE. st.st_size is the number of bytes in the memory buffer with all the file's data (the C sample reads that data from file, but it doesn't matter how you got it into memory, as long as you know its size). content_boundary is a string that's NOT present in the file's contents, they build it as "---------------------------%u%uLT" with each %u substituted by a random number (repeating until that string hits upon two random numbers that make it not present in the file). Finally, the post body (after opening the HTTP socket and sending the other headers) they write as follows:
if (write_error == 0) if (write(sock, "--", 2) <= 0) write_error = 1;
if (write_error == 0) if (write(sock, content_boundary, strlen(content_boundary)) <= 0) write_error = 1;
if (write_error == 0) if (write(sock, "\n", 1) <= 0) write_error = 1;
if (write_error == 0) if (write(sock, formdata_header, strlen(formdata_header)) <= 0) write_error = 1;
if (write_error == 0) if (write(sock, filebuf, st.st_size) <= 0) write_error = 1;
if (write_error == 0) if (write(sock, "\n--", 3) <= 0) write_error = 1;
if (write_error == 0) if (write(sock, content_boundary, strlen(content_boundary)) <= 0) write_error = 1;
if (write_error == 0) if (write(sock, "--\n", 3) <= 0) write_error = 1;
where filebuf is the buffer with the file's contents.
Hardly sharp and simple, but I hope there's enough info here to work out a way to build the arguments for a urlfetch.fetch (building them for a urllib.urlopen would be just as hard, since the problem is the scarcity of documentation about exactly what headers and what content and how encoded you need -- and that not-well-documented info needs to be reverse engineered from what I'm presenting here, I think).
Alternatively, it may be possible to hack a SOAP request via urlfetch; see here for Carson's long post of his attempts, difficulties and success in the matter. And, good luck!
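If you'd rather not hand-roll socket writes, a rough translation of that C recipe into urlfetch arguments might look like this. This is a sketch only, untested against torrage: the boundary is hard-coded rather than randomized as the C sample does, and I use CRLF line ends where the C sample uses bare LF.
from google.appengine.api import urlfetch

def post_torrent(file_bytes, filename):
    # the boundary must not occur anywhere in file_bytes (the C sample
    # keeps randomizing it until that holds; hard-coded here for brevity)
    boundary = '---------------------------7d44e178b0434LT'
    body = ('--%s\r\n'
            'Content-Disposition: form-data; name="torrent"; filename="%s"\r\n'
            'Content-Type: application/x-bittorrent\r\n'
            '\r\n' % (boundary, filename))
    body += file_bytes
    body += '\r\n--%s--\r\n' % boundary
    return urlfetch.fetch(
        url='http://torrage.com/autoupload.php',
        payload=body,
        method=urlfetch.POST,
        headers={'Content-Type': 'multipart/form-data; boundary=%s' % boundary})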
A:
Why not just use Python's urllib2 module to create a POST request, like they show in an example for PHP. It would be something like this:
import urllib, urllib2
data = (
('name', 'torrent'),
('type', 'application/x-bittorrent'),
('file', '/path/to/your/file.torrent'),
)
request = urllib2.urlopen('http://torrage.com/autoupload.php', urllib.urlencode(data))
A:
Judging from the C code, it's using the "multipart/form-data" format, which is very complex and it's very easy to get something wrong. I wouldn't hand-code the post body like that.
I used the function from this blog and it worked for me from stand-alone program. You might want give it a try in app engine,
http://peerit.blogspot.com/2007/07/multipartposthandler-doesnt-work-for.html
|
Google App Engine urlfetch to POST files
|
I am trying to send a file to torrage.com from an app in GAE.
the file is stored in memory after being received from a user upload.
I would like to be able to post this file using the API available here:
http://torrage.com/automation.php but i am having some problems undestanding how the body of the post should be encoded, the most i got from the API is a "file empty" message.
|
[
"I find torrage's API docs on the POST interface (as opposed to the SOAP one) pretty confusing and conflicting with the sample C code they also supply. It seems to me that in their online example of PHP post they are not sending the file's contents (just like @kender's answer above is not sending it) while they ARE sending it in the SOAP examples and in the example C code.\nThe relevant part of the C sample (how they compute the headers that you'd be passing to urlfetch.fetch) is:\n snprintf(formdata_header, sizeof(formdata_header) - 1,\n \"Content-Disposition: form-data; name=\\\"torrent\\\"; filename=\\\"%s\\\"\\n\"\n \"Content-Type: \" HTTP_UPLOAD_CONTENT_TYPE \"\\n\"\n \"\\n\",\n torrent_file);\n http_content_len = 2 + strlen(content_boundary) + 1 + strlen(formdata_header) + st.st_size + 1 + 2 + strlen(content_boundary) + 3;\n LTdebug(\"http content len %u\\n\", http_content_len);\n snprintf(http_req, sizeof(http_req) - 1, \n \"POST /%s HTTP/1.1\\n\"\n \"Host: %s\\n\"\n \"User-Agent: libtorrage/\" LTVERSION \"\\n\"\n \"Connection: close\\n\"\n \"Content-Type: multipart/form-data; boundary=%s\\n\"\n \"Content-Length: %u\\n\"\n \"\\n\",\n cache_uri, cache_host, content_boundary, http_content_len);\n\n\"application/x-bittorrent\" is the HTTP_UPLOAD_CONTENT_TYPE. st.st_size is the number of bytes in the memory buffer with all the file's data (the C sample reads that data from file, but it doesn't matter how you got it into memory, as long as you know its size). content_boundary is a string that's NOT present in the file's contents, they build it as \"---------------------------%u%uLT\" with each %u substituted by a random number (repeating until that string hits upon two random numbers that make it not present in the file). Finally, the post body (after opening the HTTP socket and sending the other headers) they write as follows:\n if (write_error == 0) if (write(sock, \"--\", 2) <= 0) write_error = 1;\n if (write_error == 0) if (write(sock, content_boundary, strlen(content_boundary)) <= 0) write_error = 1;\n if (write_error == 0) if (write(sock, \"\\n\", 1) <= 0) write_error = 1;\n if (write_error == 0) if (write(sock, formdata_header, strlen(formdata_header)) <= 0) write_error = 1;\n if (write_error == 0) if (write(sock, filebuf, st.st_size) <= 0) write_error = 1;\n if (write_error == 0) if (write(sock, \"\\n--\", 3) <= 0) write_error = 1;\n if (write_error == 0) if (write(sock, content_boundary, strlen(content_boundary)) <= 0) write_error = 1;\n if (write_error == 0) if (write(sock, \"--\\n\", 3) <= 0) write_error = 1;\n\nwhere filebuf is the buffer with the file's contents.\nHardly sharp and simple, but I hope there's enough info here to work out a way to build the arguments for a urlfetch.fetch (building them for a urllib.urlopen would be just as hard, since the problem is the scarcity of documentation about exactly what headers and what content and how encoded you need -- and that not-well-documented info needs to be reverse engineered from what I'm presenting here, I think).\nAlternatively, it may be possible to hack a SOAP request via urlfetch; see here for Carson's long post of his attempts, difficulties and success in the matter. And, good luck!\n",
"Why not just use Python's urllib2 module to create a POST request, like they show in an example for PHP. It would be something like this:\nimport urrlib, urllib2\ndata = (\n ('name', 'torrent'), \n ('type', 'application/x-bittorrent'),\n ('file', '/path/to/your/file.torrent'),\n)\nrequest = urllib2.urlopen('http://torrage.com/autoupload.php', urllib.urlencode(data))\n\n",
"Judging from the C code, it's using the \"multipart/form-data\" format, which is very complex and it's very easy to get something wrong. I wouldn't hand-code the post body like that. \nI used the function from this blog and it worked for me from stand-alone program. You might want give it a try in app engine,\nhttp://peerit.blogspot.com/2007/07/multipartposthandler-doesnt-work-for.html\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"google_app_engine",
"http",
"python"
] |
stackoverflow_0001438542_google_app_engine_http_python.txt
|
Q:
How to create simplest p2p remote desktop OR any Robot(Java) equivalent in python
I want to create the simplest possible remote desktop application using p2p communication.
I did create one small p2p program in python.
My Idea is-
Transmit screenshots of remote computer periodically
Transmit keyboard and mouse events wrapped in xml to remote desktop.
Problem-
I can transmit information for keyboard and mouse events to the remote computer and it will be received. But how should the remote program replay those events on the remote machine? That is, how should the remote program communicate with the operating system?
OS: windows xp
OK, is there any equivalent of Robot (Java) in Python to control mouse and keyboard events?
A:
You can control the keyboard and the mouse with python on Windows by calling the win32 apis: keybd_event and mouse_event thanks to ctypes
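A minimal sketch of those calls via ctypes (Windows only; the virtual-key code and coordinates are illustrative):
import ctypes

user32 = ctypes.windll.user32

KEYEVENTF_KEYUP = 0x0002
MOUSEEVENTF_LEFTDOWN = 0x0002
MOUSEEVENTF_LEFTUP = 0x0004

def press_key(vk_code):
    user32.keybd_event(vk_code, 0, 0, 0)                # key down
    user32.keybd_event(vk_code, 0, KEYEVENTF_KEYUP, 0)  # key up

def click_at(x, y):
    user32.SetCursorPos(x, y)
    user32.mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, 0)
    user32.mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, 0)

press_key(0x41)    # virtual-key code for 'A'
click_at(100, 100)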
|
How to create simplest p2p remote desktop OR any Robot(Java) equivalent in python
|
I want to create a simplest remote desktop application using p2p communication.
I did created one small p2p program in python.
My Idea is-
Transmit screenshots of remote computer periodically
Transmit keyboard and mouse events wrapped in xml to remote desktop.
Problem-
I could transmit a information for keyboard and mouse events to remote computer and it will be received. But how should remote program reflect those events to remote machine. I mean how should remote program communicate with operating system.
OS: windows xp
Ok is there any equivalent of Robot(Java) in python to control mouse and keyboard events
|
[
"You can control the keyboard and the mouse with python on Windows by calling the win32 apis: keybd_event and mouse_event thanks to ctypes \n"
] |
[
1
] |
[] |
[] |
[
"operating_system",
"p2p",
"python",
"remote_desktop"
] |
stackoverflow_0001440323_operating_system_p2p_python_remote_desktop.txt
|
Q:
Should my python web app use unicode for all strings?
I see some frameworks like Django using unicode all over the place so it seems like it might be a good idea.
On the other hand, it seems like a big pain to have all these extra 'u's floating around everywhere.
What will be a problem if I don't do this?
Are there any issues that will come up if I do do this?
I'm using Pylons right now as my framework.
A:
You can avoid the u'' in python 2.6 by doing:
from __future__ import unicode_literals
That will make 'string literals' unicode objects, just as they are in Python 3.
A:
In Python 3, all strings are Unicode. So, you can prepare for this by using u'' strings everywhere you need to, and then when you eventually upgrade to Python 3 using the 2to3 tool all the us will disappear. And you'll be in a better position because you will have already tested your code with Unicode strings.
See Text Vs. Data Instead Of Unicode Vs. 8-bit for more information.
A:
What will be a problem if I don't do this?
I'm a westerner living in Japan, so I've seen first-hand what is needed to work with non-ASCII characters. The problem if you don't use Unicode strings is that your code will be a frustration to the parts of the world that use anything other than A-Z. Our company has had a great deal of frustration getting certain web software to do Japanese characters without making a total mess of it.
It takes a little effort for English speakers to appreciate how great Unicode is, but it really is a terrific bit of work to make computers accessible to all cultures and languages.
"Gotchas":
Make sure your output web pages state the encoding in use properly (e.g. via the charset in the Content-Type header), and then encode all Unicode strings properly at output. Python 3's Unicode strings are a great improvement for doing this right.
Do everything with Unicode strings, and only convert to a specific encoding at the last moment, when doing output. Other languages, such as PHP, are prone to bugs when manipulating Unicode in e.g. UTF-8 form. Say you have to truncate a Unicode string. If it's in UTF-8 form internally, there's a risk you could chop off a multi-byte character half-way through, resulting in rubbish output. Python's use of Unicode strings internally makes it harder to make these mistakes.
A:
Using Unicode internally is a good way to avoid problems with non-ASCII characters. Convert at the boundaries of your application (incoming data to unicode, outgoing data to UTF-8 or whatever). Pylons can do the conversion for you in many cases: e.g. controllers can safely return unicode strings; SQLAlchemy models may declare Unicode columns.
Regarding string literals in your source code: the u prefix is usually not necessary. You can safely mix str objects containing ASCII with unicode objects. Just make sure all your string literals are either pure ASCII or are u"unicode".
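A tiny sketch of that convert-at-the-boundaries rule (encodings are the usual defaults; raw_bytes stands in for whatever your framework hands you):
# incoming: decode to unicode as early as possible
raw_bytes = 'caf\xc3\xa9'          # e.g. UTF-8 bytes from the wire
text = raw_bytes.decode('utf-8')   # now a unicode object

# work with unicode internally; mixing in ASCII str literals is safe
greeting = 'hello ' + text

# outgoing: encode only at the last moment
out = greeting.encode('utf-8')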
|
Should my python web app use unicode for all strings?
|
I see some frameworks like Django using unicode all over the place so it seems like it might be a good idea.
On the other hand, it seems like a big pain to have all these extra 'u's floating around everywhere.
What will be a problem if I don't do this?
Are there any issues that will come up if I do do this?
I'm using Pylons right now as my framework.
|
[
"You can avoid the u'' in python 2.6 by doing:\nfrom __future__ import unicode_literals\n\nThat will make 'string literals' to be unicode objects, just like it is in python 3;\n",
"In Python 3, all strings are Unicode. So, you can prepare for this by using u'' strings everywhere you need to, and then when you eventually upgrade to Python 3 using the 2to3 tool all the us will disappear. And you'll be in a better position because you will have already tested your code with Unicode strings.\nSee Text Vs. Data Instead Of Unicode Vs. 8-bit for more information.\n",
"\nWhat will be a problem if I don't do this?\n\nI'm a westerner living in Japan, so I've seen first-hand what is needed to work with non-ASCII characters. The problem if you don't use Unicode strings is that your code will be a frustration to the parts of the world that use anything other than A-Z. Our company has had a great deal of frustration getting certain web software to do Japanese characters without making a total mess of it.\nIt takes a little effort for English speakers to appreciate how great Unicode is, but it really is a terrific bit of work to make computers accessible to all cultures and languages.\n\"Gotchas\":\n\nMake sure your output web pages state the encoding in use properly (e.g. using content-encoding header), and then encode all Unicode strings properly at output. Python 3 Unicode strings is a great improvement to do this right.\nDo everything with Unicode strings, and only convert to a specific encoding at the last moment, when doing output. Other languages, such as PHP, are prone to bugs when manipulating Unicode in e.g. UTF-8 form. Say you have to truncate a Unicode string. If it's in UTF-8 form internally, there's a risk you could chop off a multi-byte character half-way through, resulting in rubbish output. Python's use of Unicode strings internally makes it harder to make these mistakes.\n\n",
"Using Unicode internally is a good way to avoid problems with non-ASCII characters. Convert at the boundaries of your application (incoming data to unicode, outgoing data to UTF-8 or whatever). Pylons can do the conversion for you in many cases: e.g. controllers can safely return unicode strings; SQLAlchemy models may declare Unicode columns.\nRegarding string literals in your source code: the u prefix is usually not necessary. You can safely mix str objects containing ASCII with unicode objects. Just make sure all your string literals are either pure ASCII or are u\"unicode\".\n"
] |
[
20,
10,
3,
1
] |
[] |
[] |
[
"django",
"pylons",
"python",
"unicode",
"web_applications"
] |
stackoverflow_0000827415_django_pylons_python_unicode_web_applications.txt
|
Q:
How do I return a CSV from a Pylons app?
I'm trying to return a CSV from an action in my webapp, and give the user a prompt to download the file or open it from a spreadsheet app. I can get the CSV to spit out onto the screen, but how do I change the type of the file so that the browser recognizes that this isn't supposed to be displayed as HTML? Can I use the csv module for this?
import csv

def results_csv(self):
    data = ['895', '898', '897']
    return data
A:
To tell the browser the type of content you're giving it, you need to set the Content-type header to 'text/csv'. In your Pylons function, the following should do the job:
response.headers['Content-type'] = 'text/csv'
A:
PAG is correct, but furthermore if you want to suggest a name for the downloaded file you can also set response.headers['Content-disposition'] = 'attachment; filename=suggest.csv'
A:
Yes, you can use the csv module for this:
import csv
from cStringIO import StringIO

...

def results_csv(self):
    response.headers['Content-Type'] = 'text/csv'
    s = StringIO()
    writer = csv.writer(s)
    writer.writerow(['header', 'header', 'header'])
    writer.writerow([123, 456, 789])
    return s.getvalue()
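Putting the pieces from all three answers together (a sketch in the same Pylons style; the filename and column values are just placeholders):
import csv
from cStringIO import StringIO

def results_csv(self):
    response.headers['Content-Type'] = 'text/csv'
    # suggest a download filename to the browser
    response.headers['Content-Disposition'] = 'attachment; filename=results.csv'
    s = StringIO()
    writer = csv.writer(s)
    writer.writerow(['col_a', 'col_b', 'col_c'])
    writer.writerow(['895', '898', '897'])
    return s.getvalue()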
|
How do I return a CSV from a Pylons app?
|
I'm trying to return a CSV from an action in my webapp, and give the user a prompt to download the file or open it from a spreadsheet app. I can get the CSV to spit out onto the screen, but how do I change the type of the file so that the browser recognizes that this isn't supposed to be displayed as HTML? Can I use the csv module for this?
import csv
def results_csv(self):
data = ['895', '898', '897']
return data
|
[
"To tell the browser the type of content you're giving it, you need to set the Content-type header to 'text/csv'. In your Pylons function, the following should do the job:\nresponse.headers['Content-type'] = 'text/csv'\n",
"PAG is correct, but furthermore if you want to suggest a name for the downloaded file you can also set response.headers['Content-disposition'] = 'attachment; filename=suggest.csv'\n",
"Yes, you can use the csv module for this:\nimport csv\nfrom cStringIO import StringIO\n\n...\ndef results_csv(self):\n response.headers['Content-Type'] = 'text/csv'\n s = StringIO()\n writer = csv.writer(s)\n writer.writerow(['header', 'header', 'header'])\n writer.writerow([123, 456, 789])\n return s.getvalue()\n\n"
] |
[
12,
9,
8
] |
[] |
[] |
[
"csv",
"pylons",
"python"
] |
stackoverflow_0000790019_csv_pylons_python.txt
|
Q:
pycurl fails but curl (from bash) works in ubuntu
I'm using curl and pycurl to connect to a secure 3rd party api and when I use pycurl I'm getting authentication errors back from the server, but when I use curl on the command line and do the same thing it works. I set both to verbose mode and am seeing some differences in the request, but I can't seem to figure out what the error is.
They seem to be using different encryption methods, perhaps that is the problem? If anyone has ideas on different options to try with pycurl or suggestions for recompiling pycurl to work like curl, that would be awesome. Thanks.
Here are my pycurl settings, fyi:
buffer = cStringIO.StringIO()
curl = pycurl.Curl()
curl.setopt(pycurl.VERBOSE,1)
curl.setopt(pycurl.POST, 1)
curl.setopt(pycurl.POSTFIELDS, post_data)
curl.setopt(pycurl.TIMEOUT_MS, self.HTTP_TIMEOUT)
curl.setopt(pycurl.URL, url)
curl.setopt(pycurl.FOLLOWLOCATION, self.HTTP_FOLLOW_REDIRECTS)
curl.setopt(pycurl.MAXREDIRS, self.HTTP_MAX_REDIRECTS)
curl.setopt(pycurl.WRITEFUNCTION, buffer.write)
curl.setopt(pycurl.NOSIGNAL, 1)
curl.setopt(pycurl.SSLCERT, self.path_to_ssl_cert)
curl.setopt(pycurl.SSL_VERIFYPEER, 0)
# 1/0
try:
    curl.perform()
...
Oh, last thing: the same python script I'm using works on my Mac laptop but doesn't work on the ubuntu server I'm trying to set up.
python test.py
18:09:13,299 root INFO fetching: https://secure.....
* About to connect() to secure.... 1129 (#0)
* Trying 216....... * connected
* Connected to secure.... port 1129 (#0)
* found 102 certificates in /etc/ssl/certs/ca-certificates.crt
* server certificate verification OK
* common name: secure.... (matched)
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #3
* subject: .......
* start date: Sat, 14 Feb 2009 22:45:27 GMT
* expire date: Mon, 15 Feb 2010 22:45:27 GMT
* issuer: ...
* compression: NULL
* cipher: AES 128 CBC
* MAC: SHA
> User-Agent: PycURL/7.16.4
> Host: secure....
> Accept: */*
> Content-Length: 387
> Content-Type: application/x-www-form-urlencoded
< HTTP/1.1 200 OK
< Content-Length: 291
<
* Connection #0 to host secure.... left intact
* Closing connection #0
curl -v -d '...' --cert cert.pem https://secure....
* About to connect() to secure.... port 1129 (#0)
* Trying 216....... connected
* Connected to secure.... port 1129 (#0)
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: none
* SSLv2, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Request CERT (13):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS handshake, CERT verify (15):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using AES256-SHA
* Server certificate:
* subject: .......
* start date: 2009-02-14 22:45:27 GMT
* expire date: 2010-02-15 22:45:27 GMT
* common name: secure.... (matched)
* issuer: ... Certificate Authority
* SSL certificate verify ok.
> User-Agent: curl/7.16.4 (i486-pc-linux-gnu) libcurl/7.16.4 OpenSSL/0.9.8e zlib/1.2.3.3 libidn/1.0
> Host: secure....:1129
> Accept: */*
> Content-Length: 387
> Content-Type: application/x-www-form-urlencoded
>
< HTTP/1.1 200 OK
< Content-Length: 342
A:
Ubuntu pycurl uses GnuTLS while ubuntu curl command line uses OpenSSL.
There are differences e.g. in supported certificate formats.
I for one cannot comprehend this decision taken by ubuntu devs/packagers. I stumbled on this once and could not work around it; luckily there are other distributions than ubuntu :-)
You could always try to complain to "humanity towards others."
A:
I have a little trouble understanding the code/output fragments you have posted. Is the actual error message included?
Problems with SSL/TLS are often because of the X.509 certificate infrastructure. There are "Certificate Authorities" (CA) like Verisign, RapidSSL etc. which digitally "sign" the certificates of servers. To check these signatures, you need the so called "root certificate" of the CA who signed the certificate of the server ("issuer") you are connecting to.
Usually operating systems come with a fair amount of certificates pre-installed. And often Browsers, the OS and certain libraries all have their own list of certificates. On a Mac you can see them, if you start the program "Keychain Access" and open the "System Roots" keychain.
So I suggest you check if the cert is missing from Ubuntu and if so to add it there. (Maybe that is all saved in /etc/ssl/certs/)
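If it does turn out to be a certificate issue, these are the pycurl options worth checking. A sketch only: the bundle path is the usual Ubuntu one, the URL is a placeholder, and converting the client cert to PEM may be needed since GnuTLS is pickier about formats than OpenSSL.
import pycurl

curl = pycurl.Curl()
curl.setopt(pycurl.URL, 'https://secure.example.com:1129/')
# point the GnuTLS-built libcurl at the system CA bundle explicitly
curl.setopt(pycurl.CAINFO, '/etc/ssl/certs/ca-certificates.crt')
# client certificate: state the format explicitly
curl.setopt(pycurl.SSLCERT, '/path/to/cert.pem')
curl.setopt(pycurl.SSLCERTTYPE, 'PEM')
curl.perform()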
|
pycurl fails but curl (from bash) works in ubuntu
|
I'm using curl and pycurl to connect to a secure 3rd party api and when I use pycurl I'm getting authentication errors back from the server, but when I use curl on the command line and do the same thing it works. I set both to verbose mode and am seeing some differences in the request, but I can't seem to figure out what the error is.
They seem to be using different encryption methods, perhaps that is the problem? If anyone has ideas on different options to try with pycurl or suggestions for recompiling pycurl to work like curl, that would be awesome. Thanks.
Here are my pycurl settings, fyi:
buffer = cStringIO.StringIO()
curl = pycurl.Curl()
curl.setopt(pycurl.VERBOSE,1)
curl.setopt(pycurl.POST, 1)
curl.setopt(pycurl.POSTFIELDS, post_data)
curl.setopt(pycurl.TIMEOUT_MS, self.HTTP_TIMEOUT)
curl.setopt(pycurl.URL, url)
curl.setopt(pycurl.FOLLOWLOCATION, self.HTTP_FOLLOW_REDIRECTS)
curl.setopt(pycurl.MAXREDIRS, self.HTTP_MAX_REDIRECTS)
curl.setopt(pycurl.WRITEFUNCTION, buffer.write)
curl.setopt(pycurl.NOSIGNAL, 1)
curl.setopt(pycurl.SSLCERT, self.path_to_ssl_cert)
curl.setopt(pycurl.SSL_VERIFYPEER, 0)
# 1/0
try:
curl.perform()
...
Oh, last thing: the same python script I'm using works on my Mac laptop but doesn't work on the ubuntu server I'm trying to setup.
python test.py
18:09:13,299 root INFO fetching: https://secure.....
* About to connect() to secure.... 1129 (#0)
* Trying 216....... * connected
* Connected to secure.... port 1129 (#0)
* found 102 certificates in /etc/ssl/certs/ca-certificates.crt
* server certificate verification OK
* common name: secure.... (matched)
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #3
* subject: .......
* start date: Sat, 14 Feb 2009 22:45:27 GMT
* expire date: Mon, 15 Feb 2010 22:45:27 GMT
* issuer: ...
* compression: NULL
* cipher: AES 128 CBC
* MAC: SHA
User-Agent: PycURL/7.16.4
Host: secure....
Accept: */*
Content-Length: 387
Content-Type: application/x-www-form-urlencoded
< HTTP/1.1 200 OK
< Content-Length: 291
<
* Connection #0 to host secure.... left intact
* Closing connection #0
curl -v -d '...' --cert cert.pem https://secure....
* About to connect() to secure.... port 1129 (#0)
* Trying 216....... connected
* Connected to secure.... port 1129 (#0)
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: none
* SSLv2, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Request CERT (13):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS handshake, CERT verify (15):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using AES256-SHA
* Server certificate:
* subject: .......
* start date: 2009-02-14 22:45:27 GMT
* expire date: 2010-02-15 22:45:27 GMT
* common name: secure.... (matched)
* issuer: ... Certificate Authority
* SSL certificate verify ok.
> User-Agent: curl/7.16.4 (i486-pc-linux-gnu) libcurl/7.16.4 OpenSSL/0.9.8e zlib/1.2.3.3 libidn/1.0
> Host: secure....:1129
> Accept: */*
> Content-Length: 387
> Content-Type: application/x-www-form-urlencoded
>
< HTTP/1.1 200 OK
< Content-Length: 342
|
[
"Ubuntu pycurl uses GnuTLS while ubuntu curl command line uses OpenSSL.\nThere are differences e.g. in supported certificate formats.\nI for one cannot comprehend this decision taken by ubuntu devs/packagers. I stumbled on this once and could not work around it, luckily there are other distributions than ubuntu :-)\nYou could always try to complain to \"humanity towards others.\"\n",
"I have a little trouble understanding the code/output fragments you have posted. Is the actual error message included?\nProblems with SSL/TLS are often because of the X.509 certificate infrastructure. There are \"Certificate Authorities\" (CA) like Verisign, RapidSSL etc. which digitally \"sign\" the certificates of servers. To check these signatures, you need the so called \"root certificate\" of the CA who signed the certificate of the server (\"issuer\") you are connecting to.\nUsually operating systems come with a fair amount of certificates pre-installed. And often Browsers, the OS and certain libraries all have their own list of certificates. On a Mac you can see them, if you start the program \"Keychain Access\" and open the \"System Roots\" keychain.\nSo I suggest you check if the cert is missing from Ubuntu and if so to add it there. (Maybe that is all saved in /etc/ssl/certs/)\n"
] |
[
3,
1
] |
[] |
[] |
[
"curl",
"https",
"pycurl",
"python",
"ssl"
] |
stackoverflow_0000568247_curl_https_pycurl_python_ssl.txt
|
Q:
Why do I get error, KeyError: 'wsgi.input'?
I'm using WSGI and trying to access the get/post data, using this code:
import os
import cgi
from traceback import format_exception
from sys import exc_info
def application(environ, start_response):
try:
f = cgi.FieldStorage(fp=os.environ['wsgi.input'], environ=os.environ)
output = 'Test: %s' % f['test'].value
except:
output = ''.join(format_exception(*exc_info()))
status = '200 OK'
response_headers = [('Content-type', 'text/plain'),
('Content-Length', str(len(output)))]
start_response(status, response_headers)
return [output]
However I get the following error:
Traceback (most recent call last):
File "/srv/www/vm/custom/gettest.wsgi", line 9, in application
f = cgi.FieldStorage(fp=os.environ['wsgi.input'], environ=os.environ)
File "/usr/lib64/python2.4/UserDict.py", line 17, in __getitem__
def __getitem__(self, key): return self.data[key]
KeyError: 'wsgi.input'
Is it because wsgi.input does not exist in my version?
A:
You're misusing the WSGI API.
Please create a minimal ("hello world") function that shows this error so we can comment on your code. [Don't post your entire application, it may be too big and unwieldy for us to comment on.]
The os.environ is not what you should be using. WSGI replaces this with an enriched environment. A WSGI application gets two arguments: one is a dictionary that includes 'wsgi.input'.
In your code...
def application(environ, start_response):
try:
f = cgi.FieldStorage(fp=os.environ['wsgi.input'], environ=os.environ)
Per the WSGI API specification (http://www.python.org/dev/peps/pep-0333/#specification-details), don't use os.environ. Use environ, the first positional parameter to your application.
The environ parameter is a dictionary
object, containing CGI-style
environment variables. This object
must be a builtin Python dictionary
(not a subclass, UserDict or other
dictionary emulation), and the
application is allowed to modify the
dictionary in any way it desires. The
dictionary must also include certain
WSGI-required variables (described in
a later section), and may also include
server-specific extension variables,
named according to a convention that
will be described below.
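For concreteness, a minimal corrected version of the handler (an untested sketch) would read the body from the environ dictionary instead:

import cgi

def application(environ, start_response):
    f = cgi.FieldStorage(fp=environ['wsgi.input'], environ=environ)
    output = 'Test: %s' % f.getvalue('test', '')
    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', str(len(output)))]
    start_response('200 OK', response_headers)
    return [output]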
|
Why do I get error, KeyError: 'wsgi.input'?
|
I'm using WSGI and trying to access the get/post data, using this code:
import os
import cgi
from traceback import format_exception
from sys import exc_info
def application(environ, start_response):
try:
f = cgi.FieldStorage(fp=os.environ['wsgi.input'], environ=os.environ)
output = 'Test: %s' % f['test'].value
except:
output = ''.join(format_exception(*exc_info()))
status = '200 OK'
response_headers = [('Content-type', 'text/plain'),
('Content-Length', str(len(output)))]
start_response(status, response_headers)
return [output]
However I get the following error:
Traceback (most recent call last):
File "/srv/www/vm/custom/gettest.wsgi", line 9, in application
f = cgi.FieldStorage(fp=os.environ['wsgi.input'], environ=os.environ)
File "/usr/lib64/python2.4/UserDict.py", line 17, in __getitem__
def __getitem__(self, key): return self.data[key]
KeyError: 'wsgi.input'
Is it because wsgi.input does not exist in my version?
|
[
"You're misusing the WSGI API. \nPlease create a minimal (\"hello world\") function that shows this error so we can comment on your code. [Don't post your entire application, it may be too big and unwieldy for us to comment on.]\nThe os.environ is not what you should be using. WSGI replaces this with an enriched environment. A WSGI application gets two arguments: one is a dictionary that includes 'wsgi.input'.\n\nIn your code...\ndef application(environ, start_response):\n\n try:\n f = cgi.FieldStorage(fp=os.environ['wsgi.input'], environ=os.environ)\n\nPer the WSGI API specification (http://www.python.org/dev/peps/pep-0333/#specification-details), don't use os.environ. Use environ, the first positional parameter to your application.\n\nThe environ parameter is a dictionary\n object, containing CGI-style\n environment variables. This object\n must be a builtin Python dictionary\n (not a subclass, UserDict or other\n dictionary emulation), and the\n application is allowed to modify the\n dictionary in any way it desires. The\n dictionary must also include certain\n WSGI-required variables (described in\n a later section), and may also include\n server-specific extension variables,\n named according to a convention that\n will be described below.\n\n"
] |
[
7
] |
[] |
[] |
[
"mod_wsgi",
"python"
] |
stackoverflow_0001441038_mod_wsgi_python.txt
|
Q:
Trailing slashes in Pylons Routes
What is the best way to make trailing slashes not matter in the latest version of Routes (1.10)? I currently am using the clearly non-DRY:
map.connect('/logs/', controller='logs', action='logs')
map.connect('/logs', controller='logs', action='logs')
I think that turning minimization on would do the trick, but am under the impression that it was disabled in the newer versions of Routes for a reason. Unfortunately documentation doesn't seem to have caught up with Routes development, so I can't find any good resources to go to. Any ideas?
A:
The following snippet added as the very last route worked for me:
map.redirect('/*(url)/', '/{url}',
_redirect_code='301 Moved Permanently')
A:
There are two possible ways to solve this:
Do it entirely in pylons.
Add an htaccess rule to rewrite the trailing slash.
Personally I don't like the trailing slash, because if you have a uri like:
http://example.com/people
You should be able to get the same data in xml format by going to:
http://example.com/people.xml
A:
http://www.siafoo.net/snippet/275 has a basic piece of middleware which removes a trailing slash from requests. Clever idea, and I understood the concept of middleware in WSGI applications much better after I realised what this does.
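For reference, such middleware can be very small. A minimal sketch (the class name is made up; it wraps any standard WSGI callable) might look like:

class StripTrailingSlash(object):
    def __init__(self, app):
        self.app = app
    def __call__(self, environ, start_response):
        path = environ.get('PATH_INFO', '')
        if len(path) > 1 and path.endswith('/'):
            environ['PATH_INFO'] = path.rstrip('/')
        return self.app(environ, start_response)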
|
Trailing slashes in Pylons Routes
|
What is the best way to make trailing slashes not matter in the latest version of Routes (1.10)? I currently am using the clearly non-DRY:
map.connect('/logs/', controller='logs', action='logs')
map.connect('/logs', controller='logs', action='logs')
I think that turning minimization on would do the trick, but am under the impression that it was disabled in the newer versions of Routes for a reason. Unfortunately documentation doesn't seem to have caught up with Routes development, so I can't find any good resources to go to. Any ideas?
|
[
"The following snippet added as the very last route worked for me:\nmap.redirect('/*(url)/', '/{url}',\n _redirect_code='301 Moved Permanently')\n\n",
"There are two possible ways to solve this:\n\nDo it entirely in pylons.\nAdd an htaccess rule to rewrite the trailing slash.\n\nPersonally I don't like the trailing slash, because if you have a uri like:\nhttp://example.com/people\nYou should be able to get the same data in xml format by going to:\nhttp://example.com/people.xml\n",
"http://www.siafoo.net/snippet/275 has a basic piece of middleware which removes a trailing slash from requests. Clever idea, and I understood the concept of middleware in WSGI applications much better after I realised what this does.\n"
] |
[
16,
7,
2
] |
[] |
[] |
[
"pylons",
"python",
"routes"
] |
stackoverflow_0000235191_pylons_python_routes.txt
|
Q:
Python accessing web service protected by PKI/SSL
I need to use Python to access data from a RESTful web service that requires certificate-based client authentication (PKI) over SSL/HTTPS. What is the recommended way of doing this?
A:
The suggestion by stribika using httplib.HTTPSConnection should work for you provided that you do not need to verify the server's certificate. If you do want/need to verify the server, you'll need to look at a 3rd party module such as pyOpenSSL (which is a Python wrapper around a subset of the OpenSSL library).
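For the simple case, a minimal httplib sketch (host and file names are placeholders; note this sends a client certificate but does not verify the server):

import httplib

conn = httplib.HTTPSConnection('api.example.com', 443,
                               key_file='client.key',
                               cert_file='client.crt')
conn.request('GET', '/resource')
response = conn.getresponse()
print response.status
print response.read()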
A:
I found this: http://code.activestate.com/recipes/117004/
I did not try it so it may not work.
A:
I would recommend using M2Crypto. If you are a Twisted guy, M2Crypto integrates with Twisted so you can let Twisted handle the networking stuff and M2Crypto the SSL/verification/validation stuff.
|
Python accessing web service protected by PKI/SSL
|
I need to use Python to access data from a RESTful web service that requires certificate-based client authentication (PKI) over SSL/HTTPS. What is the recommended way of doing this?
|
[
"The suggestion by stribika using httplib.HTTPSConnection should work for you provided that you do not need to verify the server's certificate. If you do want/need to verify the server, you'll need to look at a 3rd party module such as pyOpenSSL (which is a Python wrapper around a subset of the OpenSSL library).\n",
"I found this: http://code.activestate.com/recipes/117004/\nI did not try it so it may not work.\n",
"I would recommend using M2Crypto. If you are a Twisted guy, M2Crypto integrates with Twisted so you can let Twisted handle the networking stuff and M2Crypto the SSL/verification/validation stuff.\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"certificate",
"pki",
"python",
"ssl",
"web_services"
] |
stackoverflow_0001134565_certificate_pki_python_ssl_web_services.txt
|
Q:
java to python conversion: x509 / dsa / sha1withdsa crypto howto?
I've got the following Java code that I'm trying to convert to python, and I'm not really sure how to do this:
import java.security.spec.X509EncodedKeySpec;
import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.Signature;
byte[] key = KeyReader.read(filestream)
//KeyReader.read(inputstream) just reads in the bytes 1 at a time from the filestream
X509EncodedKeySpec pubKey = new X509EncodedKeySpec(key);
KeyFactory keyFactory = KeyFactory.getInstance("DSA");
PublicKey pub = keyFactory.generatePublic(pubKey);
Signature signature = Signature.getInstance("SHA1withDSA");
signature.initVerify(pub)
signature.update(a_byte_string) //doesn't matter
I'm kinda lost as to how to do this in python. Specifically, the SHA1withDSA part. I just don't know enough about the python crypto libs (m2crypto to be exact) to map the functions (nor could I find any decent writeups on how to do this).
A:
I don't exactly understand the Java code, but is this what you are trying to do?
from M2Crypto import X509
x509 = X509.load_cert(filename)
assert x509.verify() == 1
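If the goal is really to verify a detached SHA1-with-DSA signature (as the Java snippet does), a rough M2Crypto sketch would pull the public key out of a certificate and feed it the data and signature; the file name and the signature variable here are assumptions:

from M2Crypto import X509

cert = X509.load_cert('pub.pem')     # certificate carrying the DSA public key
pubkey = cert.get_pubkey()           # an EVP.PKey (default digest is SHA1)
pubkey.verify_init()
pubkey.verify_update(a_byte_string)  # the signed data
assert pubkey.verify_final(signature) == 1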
|
java to python conversion: x509 / dsa / sha1withdsa crypto howto?
|
I've got the following Java code that I'm trying to convert to python, and I'm not really sure how to do this:
import java.security.spec.X509EncodedKeySpec;
import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.Signature;
byte[] key = KeyReader.read(filestream)
//KeyReader.read(inputstream) just reads in the bytes 1 at a time from the filestream
X509EncodedKeySpec pubKey = new X509EncodedKeySpec(key);
KeyFactory keyFactory = KeyFactory.getInstance("DSA");
PublicKey pub = keyFactory.generatePublic(pubKey);
Signature signature = Signature.getInstance("SHA1withDSA");
signature.initVerify(pub)
signature.update(a_byte_string) //doesn't matter
I'm kinda lost as to how to do this in python. Specifically, the SHA1withDSA part. I just don't know enough about the python crypto libs (m2crypto to be exact) to map the functions (nor could I find any decent writeups on how to do this).
|
[
"I don't exactly understand the Java code, but is this what you are trying to do?\nfrom M2Crypto import X509\n\nx509 = X509.load_cert(filename)\nassert x509.verify() == 1\n\n"
] |
[
1
] |
[] |
[] |
[
"cryptography",
"java",
"m2crypto",
"python",
"sha1"
] |
stackoverflow_0001338546_cryptography_java_m2crypto_python_sha1.txt
|
Q:
Is it possible to have a python app authentication with a remote linux server?
The idea here is to have a python app that, when started, asks for a user/password combination. This user/password combination should be the same as the user/password of the remote linux server or, in such a case, an authentication system.
Is this possible? How? Which APIs can I use?
Thanks a lot.
A:
I would recommend looking into LDAP and python-ldap
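As a minimal sketch of that approach (the server address and DN layout are assumptions about your directory), the app could simply try to bind as the user:

import ldap

def check_credentials(username, password):
    conn = ldap.initialize('ldap://auth.example.com')
    try:
        dn = 'uid=%s,ou=people,dc=example,dc=com' % username
        conn.simple_bind_s(dn, password)  # raises on bad credentials
        return True
    except ldap.INVALID_CREDENTIALS:
        return False
    finally:
        conn.unbind_s()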
|
Is it possible to have a python app authentication with a remote linux server?
|
The idea here is to have a python app that, when started, asks for a user/password combination. This user/password combination should be the same as the user/password of the remote linux server or, in such a case, an authentication system.
Is this possible? How? Which APIs can I use?
Thanks a lot.
|
[
"I would recommend looking into LDAP and python-ldap\n"
] |
[
0
] |
[] |
[] |
[
"authentication",
"python"
] |
stackoverflow_0001441875_authentication_python.txt
|
Q:
Subdomains and Logins
If you have multiple subdomains, e.g.:
sub1.domain_name.com
sub2.domain_name.com
Is there a way to have a user be able to log into both of these without issues such as a double-login problem?
The platform is Python, Django.
A:
Without information regarding what platform you are using, it is difficult to say. If you use cookies to store authentication information, and you are using subdomains as you describe, then you can force the cookie to be issued for the highest level domain, e.g. domain_name.com.
This will be accessible by both sub1 and sub2, and they could each use that for their authentication.
EDIT:
In the settings.py for each application running under the subdomains, you need to put
SESSION_COOKIE_DOMAIN = ".domain_name.com" as per the django docs
A:
Yes. Just set the cookie on the domain ".domain_name.com" and the cookie will be available to sub1.domain_name.com, and sub2.domain_name.com.
As long as you maintain your session information on both domains, you should be fine.
This is a very common practice, and is why you can log into your Google Account at http://www.google.com/ and still be logged in at http://mail.google.com.
|
Subdomains and Logins
|
If you have multiple subdomains, e.g.:
sub1.domain_name.com
sub2.domain_name.com
Is there a way to have a user be able to log into both of these without issues such as a double-login problem?
The platform is Python, Django.
|
[
"Without information regarding what platform you are using, it is difficult to say. If you use cookies to store authentication information, and you are using subdomains as you describe, then you can force the cookie to be issued for the highest level domain, e.g. domain_name.com.\nThis will be accessable by both sub1 and sub2, and they could each use that for their authentication.\nEDIT:\nIn the settings.py for each application running under the subdomains, you need to put \nSESSION_COOKIE_DOMAIN = \".domain_name.com\" as per the django docs\n",
"Yes. Just set the cookie on the domain \".domain_name.com\" and the cookie will be available to sub1.domain_name.com, and sub2.domain_name.com.\nAs long as you maintain your session information on both domains, you should be fine.\nThis is a very common practice, and is why you can log into your Google Account at http://www.google.com/ and still be logged in at http://mail.google.com.\n"
] |
[
12,
6
] |
[] |
[] |
[
"authentication",
"django",
"login_control",
"python",
"subdomain"
] |
stackoverflow_0001442017_authentication_django_login_control_python_subdomain.txt
|
Q:
transforming Jython's source / ast
I've got a problem to solve in Jython. The function I've got looks like this:
ok = whatever1(x, ...)
self.assertTrue("whatever1 failed: "+x...(), ok)
ok = whatever2(x, ...)
self.assertTrue("whatever2 failed: "+x...(), ok)
[ many many lines ] ...
There are many tests that look like this, they contain mostly ok=... tests, but there are some other things done too. I know which functions are testable, because they come from only one namespace (or I can leave the "ok = " part). The question is - how to transform the source automatically, so that I write only:
ok = whatever1(x, ...) # this is transformed
ok = whatever2(x, ...) # this too
something_else(...) # this one isn't
and the rest is generated automatically?
I know about unparse and ast - is there any better way to approach this problem? (yeah, I know - Maybe-like monad) I'm looking at the rope library too and cannot decide... which way is the best one to choose here? The transformation I described is the only one I need and I don't mind creating a temporary file that will get included in the real code.
A:
Are you sure you need an AST? If the only lines of interest are the ones starting with "ok = ", then maybe simple string work on the source files would be enough?
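As a sketch of that string-based approach (names invented for illustration, and the assertion message simplified relative to the original), a small filter over the source lines could emit the assertTrue calls:

import re

OK_LINE = re.compile(r'^(\s*)ok = (\w+)\(')

def expand(source_lines):
    out = []
    for line in source_lines:
        out.append(line)
        m = OK_LINE.match(line)
        if m:  # only lines assigning to ok get an assertion added
            indent, func = m.group(1), m.group(2)
            out.append('%sself.assertTrue("%s failed", ok)\n' % (indent, func))
    return out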
|
transforming Jython's source / ast
|
I've got a problem to solve in Jython. The function I've got looks like this:
ok = whatever1(x, ...)
self.assertTrue("whatever1 failed: "+x...(), ok)
ok = whatever2(x, ...)
self.assertTrue("whatever2 failed: "+x...(), ok)
[ many many lines ] ...
There are many tests that look like this, they contain mostly ok=... tests, but there are some other things done too. I know which functions are testable, because they come from only one namespace (or I can leave the "ok = " part). The question is - how to transform the source automatically, so that I write only:
ok = whatever1(x, ...) # this is transformed
ok = whatever2(x, ...) # this too
something_else(...) # this one isn't
and the rest is generated automatically?
I know about unparse and ast - is there any better way to approach this problem? (yeah, I know - Maybe-like monad) I'm looking at the rope library too and cannot decide... which way is the best one to choose here? The transformation I described is the only one I need and I don't mind creating a temporary file that will get included in the real code.
|
[
"Are you sure you need an AST? If the only lines of interest are the ones starting with \"ok = \", then maybe simple string work on the source files would be enough?\n"
] |
[
2
] |
[] |
[] |
[
"abstract_syntax_tree",
"jython",
"python"
] |
stackoverflow_0001442084_abstract_syntax_tree_jython_python.txt
|
Q:
Whats the error in this python code?
What do I do to solve it?
Terminal output is:
abhi@abhi-desktop:~/Desktop/sslstrip-0.1$ python sslstrip.py --listen=3130
Traceback (most recent call last):
File "sslstrip.py", line 254, in
main(sys.argv[1:])
File "sslstrip.py", line 246, in main
server = ThreadingHTTPServer(('', listenPort), StripProxy)
File "/usr/lib/python2.6/SocketServer.py", line 400, in init
self.server_bind()
File "/usr/lib/python2.6/BaseHTTPServer.py", line 108, in server_bind
SocketServer.TCPServer.server_bind(self)
File "/usr/lib/python2.6/SocketServer.py", line 411, in server_bind
self.socket.bind(self.server_address)
File "", line 1, in bind
TypeError: an integer is required
abhi@abhi-desktop:~/Desktop/sslstrip-0.1$
Here is a 21kb code given...
Download link
A:
Does it fail when you don't specify a port?
My guess is that listenPort is coming out of the option parsing as a string and needs to be cast to an int in sslstrip.py on line 77.
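If so, the fix would be a one-line cast where the server is constructed (a sketch based on the traceback above):

server = ThreadingHTTPServer(('', int(listenPort)), StripProxy)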
A:
The provided link is to sslstrip-0.5. You are using sslstrip-0.1. These are very different (sslstrip-0.5 uses twisted). This bug was fixed in sslstrip-0.2. If you don't have twisted or don't want to install twisted, I suggest that you get sslstrip-0.4.
|
Whats the error in this python code?
|
What do I do to solve it?
Terminal output is:
abhi@abhi-desktop:~/Desktop/sslstrip-0.1$ python sslstrip.py --listen=3130
Traceback (most recent call last):
File "sslstrip.py", line 254, in
main(sys.argv[1:])
File "sslstrip.py", line 246, in main
server = ThreadingHTTPServer(('', listenPort), StripProxy)
File "/usr/lib/python2.6/SocketServer.py", line 400, in init
self.server_bind()
File "/usr/lib/python2.6/BaseHTTPServer.py", line 108, in server_bind
SocketServer.TCPServer.server_bind(self)
File "/usr/lib/python2.6/SocketServer.py", line 411, in server_bind
self.socket.bind(self.server_address)
File "", line 1, in bind
TypeError: an integer is required
abhi@abhi-desktop:~/Desktop/sslstrip-0.1$
Here is a 21kb code given...
Download link
|
[
"Does it fail when you don't specify a port?\nMy guess is that listenPort is coming out of the option parsing as a string and needs to be cast to an in sslstrip.py on line 77.\n",
"The provided link is to sslstrip-0.5. You are using sslstrip-0.1. These are very different (sslstrip-0.5 uses twisted). This bug was fixed in sslstrip-0.2. If you don't have twisted or don't want to install twisted, I suggest that you get sslstrip-0.4.\n"
] |
[
2,
2
] |
[] |
[] |
[
"python",
"session_hijacking"
] |
stackoverflow_0001441979_python_session_hijacking.txt
|
Q:
Python Web-Scrape Loop via CSV list of URLs?
Hi, I've got a list of 10 websites in CSV. All of the sites have the same general format, including a large table. I only want the data in the 7th column. I am able to extract the html and filter the 7th column data (via RegEx) on an individual basis, but I can't figure out how to loop through the CSV. I think I'm close but my script won't run. I would really appreciate it if someone could help me figure out how to do this. Here's what I've got:
#Python v2.6.2
import csv
import urllib2
import re
urls = csv.reader(open('list.csv'))
n =0
while n <=10:
for url in urls:
response = urllib2.urlopen(url[n])
html = response.read()
print re.findall('td7.*?td',html)
n +=1
A:
When I copied your routine, I did get a white space / tab error. Check your tabs. You were also indexing into the url row incorrectly using your loop counter. This would have messed you up as well.
Also, you don't really need to control the loop with a counter. This will loop for each line entry in your CSV file.
#Python v2.6.2
import csv
import urllib2
import re
urls = csv.reader(open('list.csv'))
for url in urls:
response = urllib2.urlopen(url[0])
html = response.read()
print re.findall('td7.*?td',html)
Lastly, be sure that your URLs are properly formed:
http://www.cnn.com
http://www.fark.com
http://www.cbc.ca
|
Python Web-Scrape Loop via CSV list of URLs?
|
Hi, I've got a list of 10 websites in CSV. All of the sites have the same general format, including a large table. I only want the data in the 7th column. I am able to extract the html and filter the 7th column data (via RegEx) on an individual basis, but I can't figure out how to loop through the CSV. I think I'm close but my script won't run. I would really appreciate it if someone could help me figure out how to do this. Here's what I've got:
#Python v2.6.2
import csv
import urllib2
import re
urls = csv.reader(open('list.csv'))
n =0
while n <=10:
for url in urls:
response = urllib2.urlopen(url[n])
html = response.read()
print re.findall('td7.*?td',html)
n +=1
|
[
"When I copied your routine, I did get a white space / tab error error. Check your tabs. You were indexing into the URL string incorrectly using your loop counter. This would have also messed you up.\nAlso, you don't really need to control the loop with a counter. This will loop for each line entry in your CSV file.\n#Python v2.6.2\n\nimport csv \nimport urllib2\nimport re\n\nurls = csv.reader(open('list.csv'))\nfor url in urls:\n response = urllib2.urlopen(url[0])\n html = response.read()\n print re.findall('td7.*?td',html)\n\nLastly, be sure that your URLs are properly formed:\nhttp://www.cnn.com\nhttp://www.fark.com\nhttp://www.cbc.ca\n\n"
] |
[
2
] |
[] |
[] |
[
"csv",
"list",
"loops",
"python"
] |
stackoverflow_0001442097_csv_list_loops_python.txt
|
Q:
GAE - How Do i edit / update the datastore in python
I have this datastore model
class Project(db.Model)
projectname = db.StringProperty()
projecturl = db.StringProperty()
class Task(db.Model)
project = db.ReferenceProperty(Project)
taskname= db.StringProperty()
taskdesc = db.StringProperty()
How do I edit the value of taskname? Say I have task1 and I want to change it to task1-project.
A:
oops sorry, Here is the formatted code:
taskkey = self.request.get("taskkey")
taskid = Task.get(taskkey)
query = db.GqlQuery("SELECT * FROM Task WHERE key = :taskid", taskid=taskid)
if query.count() > 0:
task = Task()
task.taskname = "task1-project"
task.put()
by the way, I get it now. I changed the task=Task() into task = query.get() and it worked.
Thanks for helping by the way.
A:
Given an instance t of Task (e.g. from some get operation on the db) you can perform the alteration you want, e.g. by t.taskname = t.taskname + '-project' (if what you want is to append '-project' to whatever was there before). Eventually, you also probably need to .put() t back into the store, of course (but if you make multiple changes you don't need to put it back after each and every change -- only when you're done changing it!).
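Putting that together, a minimal sketch (assuming you look the task up by its name) would be:

task = Task.gql("WHERE taskname = :1", "task1").get()
if task is not None:
    task.taskname = task.taskname + '-project'
    task.put()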
|
GAE - How Do i edit / update the datastore in python
|
I have this datastore model
class Project(db.Model)
projectname = db.StringProperty()
projecturl = db.StringProperty()
class Task(db.Model)
project = db.ReferenceProperty(Project)
taskname= db.StringProperty()
taskdesc = db.StringProperty()
How do I edit the value of taskname? Say I have task1 and I want to change it to task1-project.
|
[
"oops sorry, Here is the formatted code:\ntaskkey = self.request.get(\"taskkey\")\ntaskid = Task.get(taskkey)\nquery = db.GqlQuery(\"SELECt * FROM Task WHERE key =:taskid\", taskid=taskid)\n\nif query.count() > 0:\n task = Task()\n task.taskname = \"task1-project\"\n task.put()\n\nby the way, I get it now. I changed the task=Task() into task = query.get() and it worked. \nThanks for helping by the way.\n",
"Given an instance t of Task (e.g. from some get operation on the db) you can perform the alteration you want e.g. by t.taskname = t.taskname + '-project' (if what you want is to \"append '-project' to whatever was there before). Eventually, you also probably need to .put t back into the store, of course (but if you make multiple changes you don't need to put it back after each and every change -- only when you're done changing it!-).\n"
] |
[
2,
1
] |
[
"Probably the easiest way is to use the admin console. Locally it's:\nhttp://localhost:8080/_ah/admin\n\nand if you've uploaded it, it's the dashboard:\nhttp://appengine.google.com/dashboard?&app_id=******\n\nHere's a link:\n"
] |
[
-1
] |
[
"google_app_engine",
"gql",
"gqlquery",
"python"
] |
stackoverflow_0001436545_google_app_engine_gql_gqlquery_python.txt
|
Q:
Hiding Vertical Scrollbar in wx.TextCtrl
I have a wx.TextCtrl that I am using to represent a display with a fixed number of character rows and columns. I would like to hide the vertical scrollbar that is displayed to the right of the text pane since it is entirely unnecessary in my application. Is there a way to achieve this?
Also...I would like to hide the blinking cursor that is displayed in the pane. Unfortunately, wx.TextCtrl.GetCaret() is returning None so I cannot call wx.Caret.Hide().
Environment info:
Windows XP
Python 2.5
wxPython 2.8
A:
How about setting the style wx.TE_NO_VSCROLL for the wx.TextCtrl?
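That is, pass the flag in when you construct the control (a sketch; wx.TE_NO_VSCROLL only takes effect on multiline controls):

text = wx.TextCtrl(parent, style=wx.TE_MULTILINE | wx.TE_NO_VSCROLL)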
|
Hiding Vertical Scrollbar in wx.TextCtrl
|
I have a wx.TextCtrl that I am using to represent a display with a fixed number of character rows and columns. I would like to hide the vertical scrollbar that is displayed to the right of the text pane since it is entirely unnecessary in my application. Is there a way to achieve this?
Also...I would like to hide the blinking cursor that is displayed in the pane. Unfortunately, wx.TextCtrl.GetCaret() is returning None so I cannot call wx.Caret.Hide().
Environment info:
Windows XP
Python 2.5
wxPython 2.8
|
[
"How about setting the style wx.TE_NO_VSCROLL for the wx.TxtCtrl?\n"
] |
[
4
] |
[] |
[] |
[
"python",
"wxpython",
"wxtextctrl"
] |
stackoverflow_0001441502_python_wxpython_wxtextctrl.txt
|
Q:
Accessing a Python variable in a list
I think this is probably something really simple, but I'd appreciate a hint:
I am using a python list to hold some some database insert statements:
list = [ "table_to_insert_to" ],["column1","column2"],[getValue.value1],["value2"]]
The problem is one of the values isn't evaluated until runtime-- so before the page even gets run, it breaks when it tries to import the function.
How do you handle this?
A:
You've just pointed out one (out of a zillion) problems with global variables: not using global variables is the best solution to this problem and many others. If you still mistakenly believe you must use a global variable, put a placeholder (e.g. None) in the place where the value you don't yet know will go, and assign the right value there when it's finally known.
A:
Just wrap it in a function and call the function when you have the data to initialize it.
# module.py
def setstatement(value1):
global sql
sql = ['select', 'column1', 'column2', value1]
# some other file
import module
module.setstatement(1)
# now you can use it.
>>> module.sql
['select', 'column1', 'column2', 1]
A:
Maybe put functions in place of the values; these functions will be called at run time and will give the correct results, e.g.
def getValue1Func():
return getValue.value1
my_list = [ "table_to_insert_to" ],["column1","column2"],[getValue1Func],["value2"]]
Now, I do not know how you use this list (there may be better alternatives if you state the whole problem), so when using the list just check whether a value is callable and, if so, call it to get the actual value,
e.g.
if isinstance(val, collections.Callable):
val = val()
edit: for python < 2.6, you should use operator.isCallable
|
Accessing a Python variable in a list
|
I think this is probably something really simple, but I'd appreciate a hint:
I am using a python list to hold some some database insert statements:
list = [ "table_to_insert_to" ],["column1","column2"],[getValue.value1],["value2"]]
The problem is one of the values isn't evaluated until runtime-- so before the page even gets run, it breaks when it tries to import the function.
How do you handle this?
|
[
"You've just pointed out one (out of a zillion) problems with global variables: not using global variables is the best solution to this problem and many others. If you still mistakenly believe you must use a global variable, put a placeholder (e.g. None) in the place where the value you don't yet know will go, and assign the right value there when it's finally known.\n",
"Just wrap it in a function and call the function when you have the data to initialize it.\n# module.py\n\ndef setstatement(value1):\n global sql\n sql = ['select', 'column1', 'column2', value1]\n\n# some other file\nimport module\nmodule.setstatement(1)\n\n# now you can use it.\n>>> module.sql\n['select', 'column1', 'column2', 1]\n\n",
"May be put functions instead of value, these functions should be called at run time and will give correct results e.g.\ndef getValue1Func():\n return getValue.value1\n\nmy_list = [ \"table_to_insert_to\" ],[\"column1\",\"column2\"],[getValue1Func],[\"value2\"]]\n\nnow I do not know how you use this list(I think there will be better alternatives if you state the whole problem), so while using list just check if value is callable and call it to get value\ne.g.\nif isinstance(val, collections.Callable):\n val = val()\n\nedit: for python < 2.6, you should use operator.isCallable\n"
] |
[
3,
2,
1
] |
[] |
[] |
[
"list",
"python",
"variables"
] |
stackoverflow_0001442250_list_python_variables.txt
|
Q:
Radix 64 and encryption
I need to share my problem, which is:
A PGP public key server gives me the key in Radix64 format,
and I am searching for a method which can encrypt my message using this Radix64-format public key.
Any alternate suggestions or documents are welcome.
A:
ezPyCrypto looks good.
This previous SO question addresses Radix64 format specifically for public keys.
To convert the actual base/radix64 encoded characters, see this question:
import base64
decoded_bytes = base64.b64decode(ascii_chars)
A:
You can decode the key by using the base64 module and then encrypt the message.
|
Radix 64 and encryption
|
I need to share my problem, which is:
A PGP public key server gives me the key in Radix64 format,
and I am searching for a method which can encrypt my message using this Radix64-format public key.
Any alternate suggestions or documents are welcome.
|
[
"exPyCrypto looks good.\nThis previous SO question addresses Radix64 format specifically for public keys.\nTo convert the actual base/radix64 encoded characters, see this question:\nimport base64\ndecoded_bytes = base64.b64decode(ascii_chars)\n\n",
"You can decode the key by using the base64 module and then encrypt the message.\n"
] |
[
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001442896_python.txt
|
Q:
Failing to insert a record in sqlite using python
I am getting an error when I attempt to insert a record in SQLite using Python.
This is my code:
import sqlite3
db = sqlite3.connect('mydb')
ins_str = 'insert into filer_filer(number, ms_date, ms_time, mp_code, Amount, recipient_name,recipient_number, Tran_date, Tran_time, balance, userid_id ) values ('752098','09/09/16','17:54:19','K79NN251','5,000',' GWENDA WULFRIC','416','11/9/09','4:23 PM','396', -1)'
try:
db.execute(ins_str)
except:
db.close()
...
I get the following error
str: Traceback (most recent call last):
File "C:\eclipse\plugins\org.python.pydev.debug_1.4.8.2881\pysrc\pydevd_vars.py", line 340, in
evaluateExpression
result = eval(expression, updated_globals, frame.f_locals)
File "", line 1
except
^
SyntaxError: unexpected EOF while parsing
Gath
Edit
My new insert statement looks like this
"insert into filer_filer(number, sms_date, sms_time, mp_code, Amount, recipient_name,recipient_number, Tran_date, Tran_time, balance, userid_id ) values ('+254722752098','09/09/16','17:54:19','K79NN251','5,000',' GEOFFREY NZIOKA','254720425416','11/9/09','4:23 PM','396', -1)"
A:
It may be an artifact of SO's code block display but you seem to be missing quotes around the SQL string values.
If that is the case, you may resolve the issue by simply using double quotes for the ins_str variable.
Edit:
My explanation was confusing. I apologize if I misled you. Now in more detail:
Python string literals can use either double quotes or single quotes; the following assignments are equivalent.
myString = 'Hello World'
myString = "Hello World"
The SQL syntax (unrelated to Python) requires single quotes (no choice) for its string variables. It is therefore a good idea to use double quotes for the Python string itself, because the single quotes for the SQL stuff won't interfere.
You can therefore use this (as suggested by Alex; I second this choice too)
ins_str = "insert into filer_filer(number, ms_date, ms_time, mp_code, Amount, recipient_name,recipient_number, Tran_date, Tran_time, balance, userid_id ) values ('752098','09/09/16','17:54:19','K79NN251','5,000',' GWENDA WULFRIC','416','11/9/09','4:23 PM','396', -1)"
or you should otherwise escape each quote before and after the SQL values with a backslash. (Doubling a single quote is SQL's own escaping convention, not Python's: in Python source, adjacent string literals are simply concatenated, so doubled quotes would silently drop the quotes from the resulting string.)
 ins_str = 'insert into blah, blah.... values(\'752098\', \'09/09/16\', etc... '
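Independent of the quoting question, the safer route is to let sqlite3 do the quoting for you with a parameterized query (a sketch using the same values):

ins_str = ("insert into filer_filer(number, ms_date, ms_time, mp_code, Amount, "
           "recipient_name, recipient_number, Tran_date, Tran_time, balance, userid_id) "
           "values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)")
db.execute(ins_str, ('752098', '09/09/16', '17:54:19', 'K79NN251', '5,000',
                     ' GWENDA WULFRIC', '416', '11/9/09', '4:23 PM', '396', -1))
db.commit()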
A:
In the assignment to ins_str, you appear to be using single-quotes incorrectly: one at the start, another after values ( and right before 752098, and so on, and so forth. You seem to be using Eclipse (even though you don't mention that, the error messages suggests it), so the error you're getting as the Eclipse plug-in tries to make sense of that syntax is peculiar, but Python proper would give you a good plain old SyntaxError -- whatever you think you're doing with that use of single quotes, it's not going to work. What about using double quotes instead of single ones, " instead of ', at the start and end of string, so that the single quotes inside them get preserved and passed on to sqlite...?
A:
Are you sure that you are getting the same error? Unless you have also changed your schema, your new insert statement will fail simply because the fields "sms_date" and "sms_time" should be "ms_date" and "ms_time". Might help if you show us your schema, which you can do from the command line by:
sqlite3 mydb '.schema filer_filer'
A:
Sorry guys, I fixed it! There was a typo in my code: excute instead of execute.
My original code looked like this
try:
db.excute(ins_str) # Notice the typo on the method execute. written as excute instead of execute
except:
db.close()
thanks.
|
Failing to insert a record in sqlite using python
|
I am getting an error when I attempt to insert a record in SQLite using Python.
This is my code:
import sqlite3
db = sqlite3.connect('mydb')
ins_str = 'insert into filer_filer(number, ms_date, ms_time, mp_code, Amount, recipient_name,recipient_number, Tran_date, Tran_time, balance, userid_id ) values ('752098','09/09/16','17:54:19','K79NN251','5,000',' GWENDA WULFRIC','416','11/9/09','4:23 PM','396', -1)'
try:
db.execute(ins_str)
except:
db.close()
...
I get the following error
str: Traceback (most recent call last):
File "C:\eclipse\plugins\org.python.pydev.debug_1.4.8.2881\pysrc\pydevd_vars.py", line 340, in
evaluateExpression
result = eval(expression, updated_globals, frame.f_locals)
File "", line 1
except
^
SyntaxError: unexpected EOF while parsing
Gath
Edit
My new insert statement looks like this
"insert into filer_filer(number, sms_date, sms_time, mp_code, Amount, recipient_name,recipient_number, Tran_date, Tran_time, balance, userid_id ) values ('+254722752098','09/09/16','17:54:19','K79NN251','5,000',' GEOFFREY NZIOKA','254720425416','11/9/09','4:23 PM','396', -1)"
|
[
"It may be an artifact of SO's code block display but you seem to be missing quotes around the SQL string values.\nIf that is the case, you may resolve the issue by simply using double quotes for the ins_str variable.\nEdit:\nMy explanation was confusing. I apologize if I misled you. Now in more detail:\nPython string literals can either use double quotes of single quotes; the following assignments are equivalent.\n myString = 'Hello World'\n myString = \"Hello World\"\n\nThe SQL syntax (unrelated to Python) requires single quotes (no choice) for its string variables. It is therefore a good idea to use double quotes for the Python string itself, because the single quotes for the SQL stuff won't interfere.\nYou can therefore use this (as suggested by Alex; I second this choice too)\nins_str = \"insert into filer_filer(number, ms_date, ms_time, mp_code, Amount, recipient_name,recipient_number, Tran_date, Tran_time, balance, userid_id ) values ('752098','09/09/16','17:54:19','K79NN251','5,000',' GWENDA WULFRIC','416','11/9/09','4:23 PM','396', -1)\"\n\nor you should otherwise use two single quotes for each quote before and after the SQL variables. Having two single quotes is an escape sequence, interpreted as one single quote within the string by Python.\n ins_str = 'insert into blah, blah.... values(''752098'', ''09/09/16'', etc... '\n\n",
"In the assignment to ins_str, you appear to be using single-quotes incorrectly: one at the start, another after values ( and right before 752098, and so on, and so forth. You seem to be using Eclipse (even though you don't mention that, the error messages suggests it), so the error you're getting as the Eclipse plug-in tries to make sense of that syntax is peculiar, but Python proper would give you a good plain old SyntaxError -- whatever you think you're doing with that use of single quotes, it's not going to work. What about using double quotes instead of single ones, \" instead of ', at the start and end of string, so that the single quotes inside them get preserved and passed on to sqlite...?\n",
"Are you sure that you are getting the same error? Unless you have also changed your schema, your new insert statement will fail simply because the fields \"sms_date\" and \"sms_time\" should be \"ms_date\" and \"ms_time\". Might help if you show us yor schema, which you can do from the command line by:\nsqlite3 mydb '.schema filer_filer'\n",
"Sorry guys, i fixed it!, there was a typo error on my code- excute instead of execute;\nMy original code looked like this\ntry:\n db.excute(ins_str) # Notice the typo on the method execute. written as excute instead of execute\nexcept:\n db.close()\n\nthanks.\n"
] |
[
1,
1,
0,
0
] |
[] |
[] |
[
"python",
"sqlite"
] |
stackoverflow_0001442675_python_sqlite.txt
|
Q:
Problem with building Boost Graph Library Python bindings under Leopard
I've inherited some Python code which is importing boost.graph and I'm having an issue setting up the following under Mac OS X Leopard (I believe this is what I need to install to get it working):
http://osl.iu.edu/~dgregor/bgl-python/
According to the readme I need to build with bjam, but I see the following error:
[matt@imac ~/Downloads/bgl-python-0.9]$ bjam
error: Could not find parent for project at '.'
error: Did not find Jamfile or project-root.jam in any parent directory.
I'm running a full Macports stack of python25, boost, boost-jam, boost-build.
I don't have any experience with building using bjam. Can anyone offer any help?
A:
This error suggests the project is not standalone, and is meant to be put inside the Boost source tree.
|
Problem with building Boost Graph Library Python bindings under Leopard
|
I've inherited some Python code which is importing boost.graph and I'm having an issue setting up the following under Mac OS X Leopard (I believe this is what I need to install to get it working):
http://osl.iu.edu/~dgregor/bgl-python/
According to the readme I need to build with bjam, but I see the following error:
[matt@imac ~/Downloads/bgl-python-0.9]$ bjam
error: Could not find parent for project at '.'
error: Did not find Jamfile or project-root.jam in any parent directory.
I'm running a full Macports stack of python25, boost, boost-jam, boost-build.
I don't have any experience with building using bjam. Can anyone offer any help?
|
[
"This error suggests the project is not a standalone, and is meant to be put inside the Boost source tree.\n"
] |
[
0
] |
[] |
[] |
[
"binding",
"boost",
"graph",
"osx_leopard",
"python"
] |
stackoverflow_0001436182_binding_boost_graph_osx_leopard_python.txt
|
Q:
Multiple Windows in PyQt4
I have a PyQt program used to visualize some python objects. I would like to display multiple objects, each in its own window.
What is the best way to achieve multi-window applications in PyQt4?
Currently I have the following:
from PyQt4 import QtGui
class MainWindow(QtGui.QMainWindow):
windowList = []
def __init__(self, animal):
pass
    def addWindow(self, animal):
        win = MainWindow(animal)
        self.windowList.append(win)
if __name__=="__main__":
import sys
app = QtGui.QApplication(sys.argv)
    win = MainWindow(dog)
win.addWindow(fish)
win.addWindow(cat)
app.exec_()
However, this approach is not satisfactory, as I am facing problems when I try to factor out the MultipleWindows part into its own class. For example:
class MultiWindows(QtGui.QMainWindow):
windowList = []
def __init__(self, param):
raise NotImplementedError()
    def addWindow(self, param):
        win = MainWindow(param) # How to call the initializer of the subclass from here?
        self.windowList.append(win)
class PlanetApp(MultiWindows):
def __init__(self, planet):
pass
class AnimalApp(MultiWindows):
def __init__(self, planet):
pass
if __name__=="__main__":
import sys
app = QtGui.QApplication(sys.argv)
win = PlanetApp(mercury)
win.addWindow(venus)
win.addWindow(jupiter)
app.exec_()
The above code will call the initializer of the MainWindow class, rather than that of the appropriate subclass, and will thus throw an exception.
How can I call the initializer of the subclass? Is there a more elegant way to do this?
A:
Why not using dialogs? In Qt you do not need to use the main window unless you want to use docks etc.. Using dialogs will have the same effect.
I can also see a problem in your logic regarding the fact that you want your super class to be calling the constructor of its children, which of course can be any type. I recommend you rewrite it like the following:
class MultiWindows(QtGui.QMainWindow):

    def __init__(self):
        QtGui.QMainWindow.__init__(self)
        self.__windows = []

    def addWindow(self, window):
        self.__windows.append(window)

    def show(self):
        for current_child_window in self.__windows:
            current_child_window.show()

class PlanetApp(QtGui.QDialog):
    def __init__(self, parent, planet):
        QtGui.QDialog.__init__(self, parent)
        # do cool stuff here

class AnimalApp(QtGui.QDialog):
    def __init__(self, parent, animal):
        QtGui.QDialog.__init__(self, parent)
        # do cool stuff here

if __name__=="__main__":
    import sys # really need this here??

    app = QtGui.QApplication(sys.argv)

    jupiter = PlanetApp(None, "jupiter")
    venus = PlanetApp(None, "venus")
    windows = MultiWindows()
    windows.addWindow(jupiter)
    windows.addWindow(venus)

    windows.show()
    app.exec_()
It is not a nice idea to expect the super class to know the parameter to be used in the init of its subclasses since it is really hard to ensure that all the constructor will be the same (maybe the animal dialog/window takes diff parameters).
Hope it helps.
A:
In order to reference the subclass that is inheriting the super-class from inside the super-class, I am using self.__class__(), so the MultiWindows class now reads:
class MultiWindows(QtGui.QMainWindow):
    windowList = []

    def __init__(self, param):
        raise NotImplementedError()

    def addwindow(self, param):
        win = self.__class__(param)
        self.windowList.append(win)
|
Multiple Windows in PyQt4
|
I have a PyQt program used to visualize some python objects. I would like to display multiple objects, each in its own window.
What is the best way to achieve multi-window applications in PyQt4?
Currently I have the following:
from PyQt4 import QtGui
class MainWindow(QtGui.QMainWindow):
windowList = []
def __init__(self, animal):
pass
    def addWindow(self, animal):
        win = MainWindow(animal)
        self.windowList.append(win)
if __name__=="__main__":
import sys
app = QtGui.QApplication(sys.argv)
    win = MainWindow(dog)
win.addWindow(fish)
win.addWindow(cat)
app.exec_()
However, this approach is not satisfactory, as I am facing problems when I try to factor out the MultipleWindows part into its own class. For example:
class MultiWindows(QtGui.QMainWindow):
windowList = []
def __init__(self, param):
raise NotImplementedError()
    def addWindow(self, param):
        win = MainWindow(param) # How to call the initializer of the subclass from here?
        self.windowList.append(win)
class PlanetApp(MultiWindows):
def __init__(self, planet):
pass
class AnimalApp(MultiWindows):
def __init__(self, planet):
pass
if __name__=="__main__":
import sys
app = QtGui.QApplication(sys.argv)
win = PlanetApp(mercury)
win.addWindow(venus)
win.addWindow(jupiter)
app.exec_()
The above code will call the initializer of the MainWindow class, rather than that of the appropriate subclass, and will thus throw an exception.
How can I call the initializer of the subclass? Is there a more elegant way to do this?
|
[
"Why not using dialogs? In Qt you do not need to use the main window unless you want to use docks etc.. Using dialogs will have the same effect. \nI can also see a problem in your logic regarding the fact that you want your super class to be calling the constructor of its children, which of course can be any type. I recommend you rewrite it like the following:\n\nclass MultiWindows(QtGui.QMainWindow):\n\n def __init__(self, param):\n self.__windows = []\n\n def addwindow(self, window):\n self.__windows.append(window)\n\n def show():\n for current_child_window in self.__windows:\n current_child_window.exec_() # probably show will do the same trick\n\nclass PlanetApp(QtGui.QDialog):\n def __init__(self, parent, planet):\n QtGui.QDialog.__init__(self, parent)\n # do cool stuff here\n\nclass AnimalApp(QtGui.QDialog):\n def __init__(self, parent, animal):\n QtGui.QDialog.__init__(self, parent)\n # do cool stuff here\n\nif __name__==\"__main__\":\n import sys # really need this here??\n\n app = QtGui.QApplication(sys.argv)\n\n jupiter = PlanetApp(None, \"jupiter\")\n venus = PlanetApp(None, \"venus\")\n windows = MultiWindows()\n windows.addWindow(jupiter)\n windows.addWindow(venus)\n\n windows.show()\n app.exec_()\n\n\nIt is not a nice idea to expect the super class to know the parameter to be used in the init of its subclasses since it is really hard to ensure that all the constructor will be the same (maybe the animal dialog/window takes diff parameters).\nHope it helps.\n",
"In order to reference the subclass that is inheriting the super-class from inside the super-class, I am using self.__class__(), so the MultiWindows class now reads:\nclass MultiWindows(QtGui.QMainWindow):\nwindowList = []\n\ndef __init__(self, param):\n raise NotImplementedError()\n\ndef addwindow(self, param)\n win = self.__class__(param)\n windowList.append(win)\n\n"
] |
[
6,
0
] |
[] |
[] |
[
"inheritance",
"pyqt4",
"python"
] |
stackoverflow_0001442128_inheritance_pyqt4_python.txt
|
Q:
Is it possible to divert a module in python? (ResourceX diverted to ResourceXSimulated)
I want to simulate MyApp that imports a module (ResourceX) which requires a resource that is not available at the time and will not work.
A solution for this is to make and import a mock module of ResourceX (named ResourceXSimulated) and divert it to MyApp as ResourceX. I want to do this in order to avoid breaking a lot of code and get all kinds of exception from MyApp.
I am using Python and It should be something like:
"Import ResourceXSimulated as ResourceX"
"ResourceX.getData()", actually calls ResourceXSimultated.getData()
Looking forward to find out if Python supports this kind of redirection.
Cheers.
ADDITIONAL INFO: I have access to the source files.
UPDATE: I am thinking of adding as little code as possible to MyApp regarding using the fake module and add this code near the import statements.
A:
Just change all lines import ResourceX in MyApp to import ResourceXSimulated as ResourceX, and lines like from ResourceX import Y to from ResourceXSimulated import Y.
However if don't have access to MyApp source or there are other reasons not to change it, you can put your module into sys.modules before MyApp is loaded itself:
import sys
import ResourceXSimulated
sys.modules['ResourceX'] = ResourceXSimulated
Note: if ResourceX is a package, it might require more effort.
A:
This is called monkey-patching, and it's a fairly widely-used technique in dynamic languages like Python.
So presumably you have a class:
class MyOriginal(object):
def method_x(self):
do_something_expensive_you_dont_want_in_testing()
obj = MyOriginal()
obj.method_x()
so in testing you want to do something else instead of method_x, but it should be transparent. So you just take advantage of Python's dynamic language:
def new_method_x(self):
    pretend_were_doing_something_expensive()

MyOriginal.method_x = new_method_x # here's the monkeypatch (patch the class so self gets bound)
test_obj = MyOriginal()
test_obj.method_x() # calls the new method
A:
Yes, it's possible. Some starters:
You can "divert" modules by manipulating sys.modules. It keeps a list of imported modules, and there you can make your module appear under the same name as the original one. You must do this manipulating before any module that imports the module you want to fake though.
You can also make a package called a different name, but in that package actually use the original module name, for your completely different module. This works well as long as the original module isn't installed.
In none of these cases you can use both modules at the same time. For that you need to monkey-patch the original module.
And of course: It' perfectly possible to just call the new module with the old name. But it might be confusing.
A:
It's possible with the sys.modules hack, as already said.
Note that if you have control over module ResourceX it's certainly better that it takes care of it itself. This is actually a common pattern when writing modules that work better when some resource is present, e.g.:
# foo.py
'''A module that provides interface to foo.
Falls back to a dummy interface if foo is not available.
'''
try:
from _foo import *
except ImportError:
from _foo_dummy import *
Sometimes people do it in a more object-oriented way:
# foo.py
'''A module that provides interface to foo if it exists or to a dummy interface.
Provides:
frobnicate() self-explanatory
...
'''
class DummyFoo:
def frobnicate(self):
pass
...
class UnixFoo(DummyFoo):
def frobnicate(self):
a_posix_call()
...
class GenericFoo(DummyFoo):
def frobnicate(self):
do_something_complicated()
...
# Create a default instance.
try:
    if system == UNIX:
        instance = UnixFoo()
else:
instance = GenericFoo()
except Exception:
instance = DummyFoo()
# Now export the public interface.
frobnicate = instance.frobnicate
A:
Yes, Python can do that, and so long as the methods exposed in the ResourceXSimulated module "look and smell" like those of the original module, the application should not see much of any difference (other than, I'm assuming, bogus data fillers, different response times and such).
Q:
Django : Iterate over a query set without cache
I have a dumb simple loop
for alias in models.Alias.objects.all():
    alias.update_points()

but looking into the Django QuerySet source, it seems to keep around a _result_cache of all the previous results. This is eating gigabytes of memory on my machine, and eventually everything blows up.
How can I throw away all the stuff that I won't ever care about?
A:
Use the queryset's iterator() method to return the models in chunks, without populating the result cache:
for alias in models.Alias.objects.iterator():
    alias.update_points()
A:
You should consider saving your changes back to the database.
for alias in models.Alias.objects.all():
    alias.update_points()
    alias.save()
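The two answers combine naturally: stream the rows without caching and persist each change as you go (a sketch, assuming update_points() only mutates the instance in memory):
for alias in models.Alias.objects.iterator():
    alias.update_points()
    alias.save()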
Q:
regular expression to parse network interface config
I am wondering if the problem below can be solved with one regular expression, or whether I should write a standard loop and evaluate the text line by line.
When I run the included code I get ['Ethernet0/22', 'Ethernet0/24'],
but the result should be ['Ethernet0/23', 'Ethernet0/25'].
Any advice on this?
import re
txt='''#
interface Ethernet0/22
stp disable
broadcast-suppression 5
mac-address max-mac-count 1
port access vlan 452
#
interface Ethernet0/23
stp disable
description BTO
broadcast-suppression 5
port access vlan 2421
#
interface Ethernet0/24
stp disable
description Avaya G700
broadcast-suppression 5
port access vlan 452
#
interface Ethernet0/25
stp disable
description BTO
broadcast-suppression 5
port access vlan 2421
#
'''
re1 = '''^interface (.*?$).*?BTO.*?^#$'''
rg = re.compile(re1,re.IGNORECASE|re.DOTALL|re.MULTILINE)
m = rg.findall(txt)
if m:
print m
A:
Here is a little pyparsing parser for your file. Not only does this show a solution to your immediate problem, but the parser gives you a nice set of objects that you can use to easily access the data in each interface.
Here is the parser:
from pyparsing import *
# set up the parser
comment = "#" + Optional(restOfLine)
keyname = Word(alphas,alphanums+'-')
value = Combine(empty + SkipTo(LineEnd() | comment))
INTERFACE = Keyword("interface")
interfaceDef = Group(INTERFACE + value("name") + \
Dict(OneOrMore(Group(~INTERFACE + keyname + value))))
# ignore comments (could be anywhere)
interfaceDef.ignore(comment)
# parse the source text
ifcdata = OneOrMore(interfaceDef).parseString(txt)
Now how to use it:
# use dump() to list all of the named fields created at parse time
for ifc in ifcdata:
print ifc.dump()
# first the answer to the OP's question
print [ifc.name for ifc in ifcdata if ifc.description == "BTO"]
# how to access fields that are not legal Python identifiers
print [(ifc.name,ifc['broadcast-suppression']) for ifc in ifcdata
if 'broadcast-suppression' in ifc]
# using names to index into a mapping with string interpolation
print ', '.join(["(%(name)s, '%(port)s')" % ifc for ifc in ifcdata ])
Prints out:
['interface', 'Ethernet0/22', ['stp', 'disable'], ['broadcast-suppression', '5'], ['mac-address', 'max-mac-count 1'], ['port', 'access vlan 452']]
- broadcast-suppression: 5
- mac-address: max-mac-count 1
- name: Ethernet0/22
- port: access vlan 452
- stp: disable
['interface', 'Ethernet0/23', ['stp', 'disable'], ['description', 'BTO'], ['broadcast-suppression', '5'], ['port', 'access vlan 2421']]
- broadcast-suppression: 5
- description: BTO
- name: Ethernet0/23
- port: access vlan 2421
- stp: disable
['interface', 'Ethernet0/24', ['stp', 'disable'], ['description', 'Avaya G700'], ['broadcast-suppression', '5'], ['port', 'access vlan 452']]
- broadcast-suppression: 5
- description: Avaya G700
- name: Ethernet0/24
- port: access vlan 452
- stp: disable
['interface', 'Ethernet0/25', ['stp', 'disable'], ['description', 'BTO'], ['broadcast-suppression', '5'], ['port', 'access vlan 2421']]
- broadcast-suppression: 5
- description: BTO
- name: Ethernet0/25
- port: access vlan 2421
- stp: disable
['Ethernet0/23', 'Ethernet0/25']
[('Ethernet0/22', '5'), ('Ethernet0/23', '5'), ('Ethernet0/24', '5'), ('Ethernet0/25', '5')]
(Ethernet0/22, 'access vlan 452'), (Ethernet0/23, 'access vlan 2421'), (Ethernet0/24, 'access vlan 452'), (Ethernet0/25, 'access vlan 2421')
A:
Your problem is that .*? keeps matching across block boundaries until it finds a BTO in a later record. As a quick workaround, you could prohibit the "#" character inside the match (assuming "#" isn't valid within records and only separates them), so a match can never span more than one block:
re1 = '''^interface ([^#]*?$)[^#]*?BTO.*?^#$'''
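Dropping that pattern into the original script (same txt and flags) should give the expected result:
rg = re.compile(re1, re.IGNORECASE | re.DOTALL | re.MULTILINE)
print rg.findall(txt)  # ['Ethernet0/23', 'Ethernet0/25']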
A:
An example without regular expressions:
print [ stanza.split()[0]
for stanza in txt.split("interface ")
if stanza.lower().startswith( "ethernet" )
and stanza.lower().find("bto") > -1 ]
Explanation:
I find compositions are best read "inside-out":
for stanza in txt.split("interface ")
Split the text on each occurrence of "interface " (including the following space). A resulting stanza will look like this:
Ethernet0/22
stp disable
broadcast-suppression 5
mac-address max-mac-count 1
port access vlan 452
#
Next, filter the stanzas:
if stanza.lower().startswith( "ethernet" ) and stanza.lower().find("bto") > -1
This should be self-explanatory.
stanza.split()[0]
Split the matching stanzas on whitespace, and take the first element into the resulting list. This, in tandem with the startswith filter, will prevent IndexErrors.
A:
Rather than trying to make a pattern between the ^ and $ anchors and relying on the #, you could use the newlines to break down the 'sublines' inside a single block match,
e.g. identifying each clause as a sequence of literal not-newlines leading up to a newline.
something like
re1 = '''\ninterface ([^\n]+?)\n[^\n]+?\n[^\n]+BTO\n'''
will produce the result you are after, from the source text provided.