Web Development (int64, 0 to 1) | Data Science and Machine Learning (int64, 0 to 1) | Question (string, lengths 28 to 6.1k) | is_accepted (bool, 2 classes) | Q_Id (int64, 337 to 51.9M) | Score (float64, -1 to 1.2) | Other (int64, 0 to 1) | Database and SQL (int64, 0 to 1) | Users Score (int64, -8 to 412) | Answer (string, lengths 14 to 7k) | Python Basics and Environment (int64, 0 to 1) | ViewCount (int64, 13 to 1.34M) | System Administration and DevOps (int64, 0 to 1) | Q_Score (int64, 0 to 1.53k) | CreationDate (string, length 23) | Tags (string, lengths 6 to 90) | Title (string, lengths 15 to 149) | Networking and APIs (int64, 1 to 1) | Available Count (int64, 1 to 12) | AnswerCount (int64, 1 to 28) | A_Id (int64, 635 to 72.5M) | GUI and Desktop Applications (int64, 0 to 1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0 | Does anyone know of a memory efficient way to generate very large xml files (e.g. 100-500 MiB) in Python?
I've been utilizing lxml, but memory usage is through the roof. | false | 3,049,188 | 0.099668 | 1 | 0 | 2 | Obviously, you've got to avoid having to build the entire tree ( whether DOM or etree or whatever ) in memory. But the best way depends on the source of your data and how complicated and interlinked the structure of your output is.
If it's big because it's got thousands of instances of fairly independent items, then you can generate the outer wrapper, and then build trees for each item and then serialize each fragment to the output.
If the fragments aren't so independent, then you'll need to do some extra bookkeeping -- like maybe manage a database of generated ids & idrefs.
I would break it into 2 or 3 parts: a sax event producer, an output serializer
eating sax events, and optionally, if it seems easier to work with some independent pieces as objects or trees, something to build those objects and then turn them into sax events for the serializer.
Maybe you could just manage it all as direct text output, instead of dealing with sax events: that depends on how complicated it is.
This may also be a good place to use python generators as a way of streaming the output without having to build large structures in memory. | 0 | 4,751 | 0 | 11 | 2010-06-15T21:27:00.000 | python,xml,lxml | Generating very large XML files in Python? | 1 | 2 | 4 | 3,050,007 | 0 |
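As a hedged illustration of the streaming approach described in that answer, here is a minimal sketch using the standard library's xml.sax.saxutils.XMLGenerator; the element names and the item generator are made up for the example.

```python
# Minimal sketch: stream a large XML document without building a tree in memory.
# The <records>/<record> element names and the item source are hypothetical.
from xml.sax.saxutils import XMLGenerator

def write_large_xml(out, items):
    gen = XMLGenerator(out, encoding="utf-8")
    gen.startDocument()
    gen.startElement("records", {})                 # outer wrapper
    for item in items:                              # items can be a lazy generator
        gen.startElement("record", {"id": str(item["id"])})
        gen.characters(item["text"])
        gen.endElement("record")
    gen.endElement("records")
    gen.endDocument()

with open("big.xml", "w") as f:
    write_large_xml(f, ({"id": i, "text": "payload %d" % i} for i in range(1000)))
```

Because the items are produced by a generator and serialized one at a time, memory usage stays flat regardless of how large the output file grows.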
1 | 0 | I just wonder how those air ticket booking websites redirect the user to the airline booking website and then fill in (I suppose by doing a POST) the required information, so that the user lands on the booking page with origin/destination/date already selected?
Is the technique to open up a new browser window and do an AJAX POST from there?
Thanks. | true | 3,050,477 | 1.2 | 0 | 0 | 0 | It can work like this:
On the air ticket booking system you have an HTML form pointing at a certain airline booking website (via its action parameter). If the user submits the data, it lands on the airline booking website and that website processes the request.
Usually people want to get back to the first site. This can be done by sending a return URL along with the request data. Of course there must be an API on the airline booking site to handle such a URL.
This is common mechanism when you do online payments, all kind of reservations, etc.
I am not sure about your idea to use AJAX calls; a simple HTML form is enough here. Note also that AJAX calls between different domains are restricted by the browser's same-origin policy. | 0 | 81 | 0 | 0 | 2010-06-16T03:07:00.000 | javascript,python | redirection follow by post | 1 | 1 | 1 | 3,051,343 | 0 |
0 | 0 | I am able to establish the initial telnet session. But from this session I need to create a second. Basically I can not telnet directly to the device I need to access. Interactively this is not an issue but I am attempting to setup an automated test using python.
Does anyone know how to accomplish this? | false | 3,054,086 | 0 | 0 | 0 | 0 | If you log in from A to B to C, do you need the console input from A to go to C?
If not, it is fairly straightforward, as you can execute commands on the second server to connect to the third.
I do something like that using SSH, where I have paramiko and scripts installed on both A and B. A logs in to B and executes a command to start a python script on B which then connects to C and does whatever. | 0 | 797 | 0 | 0 | 2010-06-16T14:18:00.000 | python,telnet,telnetlib | Using Python: How can I telnet into a server and then from that connection telnet into a second server? | 1 | 2 | 2 | 3,054,195 | 0 |
0 | 0 | I am able to establish the initial telnet session. But from this session I need to create a second. Basically I can not telnet directly to the device I need to access. Interactively this is not an issue but I am attempting to setup an automated test using python.
Does anyone know how to accomplish this? | false | 3,054,086 | 0.099668 | 0 | 0 | 1 | After establishing the first connection, just write the same telnet command you use manually to that connection. | 0 | 797 | 0 | 0 | 2010-06-16T14:18:00.000 | python,telnet,telnetlib | Using Python: How can I telnet into a server and then from that connection telnet into a second server? | 1 | 2 | 2 | 3,054,153 | 0 |
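A rough sketch of that nested-telnet approach with the standard library's telnetlib; the hostnames, prompts and credentials below are placeholders.

```python
# Sketch: telnet to gateway host A, then from A's shell telnet on to device B.
import telnetlib

tn = telnetlib.Telnet("gateway.example.com", 23, timeout=30)
tn.read_until(b"login: ")
tn.write(b"user\n")
tn.read_until(b"Password: ")
tn.write(b"secret\n")
tn.read_until(b"$ ")                         # gateway shell prompt

tn.write(b"telnet device.example.com\n")     # second hop, typed into the first session
tn.read_until(b"login: ")
tn.write(b"deviceuser\n")
tn.read_until(b"Password: ")
tn.write(b"devicesecret\n")
tn.read_until(b"# ")                         # device prompt; now send device commands
tn.write(b"show version\n")
print(tn.read_until(b"# ", timeout=10))
tn.close()
```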
1 | 0 | I have two projects in Eclipse with Java and Python code, using Jython. Also I'm using PyDev. One project can import and use the xml module just fine, and the other gives the error ImportError: No module named xml. As far as I can tell, all the project properties are set identically. The working project was created from scratch and the other comes from code checked out of an svn repository and put into a new project.
What could be the difference?
edit- Same for os, btw. It's just missing some path somewhere... | false | 3,057,382 | 0.379949 | 0 | 0 | 2 | eclipse stores project data in files like
.project
.pydevprojct
.classpath
with checkin / checkout via svn it is possible to lost some of these files
check your dot-files | 0 | 190 | 0 | 2 | 2010-06-16T21:38:00.000 | java,python,eclipse,jython,pydev | Jython project in Eclipse can't find the xml module, but works in an identical project | 1 | 1 | 1 | 3,119,610 | 0 |
0 | 0 | I am trying to put together a bash or python script to play with the facebook graph API. Using the API looks simple, but I'm having trouble setting up curl in my bash script to call authorize and access_token. Does anyone have a working example? | false | 3,058,723 | 1 | 0 | 0 | 6 | There IS a way to do it, I've found it, but it's a lot of work and will require you to spoof a browser 100% (and you'll likely be breaking their terms of service)
Sorry I can't provide all the details, but the gist of it:
assuming you have a username/password for a Facebook account, use curl to fetch the oauth/authenticate... page. Extract any cookies returned in the "Set-Cookie" header and then follow any "Location" headers (compiling cookies along the way).
scrape the login form, preserving all fields, and submit it (setting the referer and content-type headers, and inserting your email/pass) same cookie collection from (1) required
same as (2), but now you're going to need to POST the approval form acquired after (2) was submitted, setting the Referer header to the URL where the form was acquired.
follow the redirects until it sends you back to your site, and get the "code" parameter out of that URL
Exchange the code for an access_token at the oauth endpoint
The main gotchas are cookie management and redirects. Basically, you MUST mimic a browser 100%. I think it's hackery but there is a way, it's just really hard! | 0 | 72,749 | 0 | 41 | 2010-06-17T03:41:00.000 | python,bash,facebook | Programmatically getting an access token for using the Facebook Graph API | 1 | 1 | 8 | 3,381,527 | 0 |
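The cookie collection and redirect following that the answer calls the main gotchas can be sketched in Python with urllib2 and cookielib; the URLs and form fields below are placeholders, not the real Facebook endpoints or flow.

```python
# Sketch of the cookie/redirect plumbing only (Python 2). URLs and form fields
# are placeholders; the real flow has more steps, as described above.
import urllib
import urllib2
import cookielib

jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))   # follows redirects, keeps cookies
opener.addheaders = [("User-Agent", "Mozilla/5.0")]               # look like a browser

resp = opener.open("https://example.com/oauth/authorize")         # placeholder URL
form_data = urllib.urlencode({"email": "you@example.com", "pass": "secret"})
resp = opener.open("https://example.com/login", form_data)        # POST the login form
print(resp.geturl())   # final URL after redirects; parse the "code" parameter out of it
```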
0 | 0 | The situation I'm in is this - there's a process that's writing to a file, sometimes the file is rather large say 400 - 500MB. I need to know when it's done writing. How can I determine this? If I look in the directory I'll see it there but it might not be done being written. Plus this needs to be done remotely - as in on the same internal LAN but not on the same computer and typically the process that wants to know when the file writing is done is running on a Linux box with a the process that's writing the file and the file itself on a windows box. No samba isn't an option. xmlrpc communication to a service on that windows box is an option as well as using snmp to check if that's viable.
Ideally
Works on either Linux or Windows - meaning the solution is OS independent.
Works for any type of file.
Good enough:
Works just on windows but can be done through some library or whatever that can be accessed with Python.
Works only for PDF files.
Current best idea is to periodically open the file in question from some process on the windows box and look at the last bytes checking for the PDF end tag and accounting for the eol differences because the file may have been created on Linux or Windows. | true | 3,070,210 | 1.2 | 0 | 0 | 1 | I ended up resolving it for our situation. As it turns out the process that was writing the files out had them opened exclusively so all we had to do was try opening them for read access - when denied they were in use. | 0 | 8,480 | 1 | 8 | 2010-06-18T13:59:00.000 | python,windows,linux,pdf,file-io | Need a way to determine if a file is done being written to | 1 | 2 | 2 | 3,073,958 | 0 |
0 | 0 | The situation I'm in is this - there's a process that's writing to a file, sometimes the file is rather large say 400 - 500MB. I need to know when it's done writing. How can I determine this? If I look in the directory I'll see it there but it might not be done being written. Plus this needs to be done remotely - as in on the same internal LAN but not on the same computer and typically the process that wants to know when the file writing is done is running on a Linux box with a the process that's writing the file and the file itself on a windows box. No samba isn't an option. xmlrpc communication to a service on that windows box is an option as well as using snmp to check if that's viable.
Ideally
Works on either Linux or Windows - meaning the solution is OS independent.
Works for any type of file.
Good enough:
Works just on windows but can be done through some library or whatever that can be accessed with Python.
Works only for PDF files.
Current best idea is to periodically open the file in question from some process on the windows box and look at the last bytes checking for the PDF end tag and accounting for the eol differences because the file may have been created on Linux or Windows. | false | 3,070,210 | 1 | 0 | 0 | 8 | There are probably many approaches you can take. I would try to open the file with write access. If that succeeds then no-one else is writing to that file.
Build a web service around this concept if you don't have direct access to the file between machines. | 0 | 8,480 | 1 | 8 | 2010-06-18T13:59:00.000 | python,windows,linux,pdf,file-io | Need a way to determine if a file is done being written to | 1 | 2 | 2 | 3,070,749 | 0 |
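A minimal sketch of the try-to-open check both answers converged on; which access mode gets denied depends on how the writing process opened the file, so treat the mode below as an assumption to verify.

```python
import time

def wait_until_free(path, poll_seconds=5):
    """Block until `path` can be opened for writing, i.e. the writer has released it."""
    while True:
        try:
            f = open(path, "ab")   # fails on Windows while the writer holds the file exclusively
            f.close()
            return
        except IOError:
            time.sleep(poll_seconds)
```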
0 | 0 | I'm using PAMIE to control IE to automatically browse to a list of URLs. I want to find which URLs return IE's malware warning and which ones don't. I'm new to PAMIE, and PAMIE's documentation is non-existent or cryptic at best. How can I get a page's content from PAMIE so I can work with it in Python? | true | 3,073,151 | 1.2 | 1 | 0 | 0 | Browsing the CPamie.py file did the trick. Turns out, I didn't even need the page content - PAMIE's findText method lets you match any string on the page. Works great! | 0 | 256 | 0 | 0 | 2010-06-18T21:17:00.000 | python,pamie | How do I get the page content from PAMIE? | 1 | 1 | 1 | 3,098,190 | 0 |
0 | 0 | I am interested in making an HTTP banner grabber, but when I connect to a server on port 80 and I send something (e.g. "HEAD / HTTP/1.1"), recv doesn't return anything to me like it does when I do it in, let's say, netcat.
How would I go about this?
Thanks! | false | 3,076,263 | 0.197375 | 0 | 0 | 2 | Are you sending a "\r\n\r\n" to indicate the end of the request? If you're not, the server's still waiting for the rest of the request. | 0 | 2,102 | 0 | 1 | 2010-06-19T16:27:00.000 | python,netcat | HTTP Banner Grabbing with Python | 1 | 1 | 2 | 3,076,282 | 0 |
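A small sketch of the point made in that answer, with the terminating blank line included; the hostname is a placeholder.

```python
import socket

def grab_banner(host, port=80):
    s = socket.create_connection((host, port), timeout=10)
    # The blank line (\r\n\r\n) terminates the request headers -- without it
    # the server keeps waiting and recv() returns nothing.
    s.sendall(b"HEAD / HTTP/1.1\r\nHost: " + host.encode() + b"\r\nConnection: close\r\n\r\n")
    banner = s.recv(4096)
    s.close()
    return banner

print(grab_banner("example.com"))
```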
0 | 0 | As the question says, what would be the difference between:
x.getiterator() and x.iter(), where x is an ElementTree or an Element? Cause it seems to work for both, I have tried it.
If I am wrong somewhere, correct me please. | true | 3,077,010 | 1.2 | 0 | 0 | 0 | getiterator is the ElementTree standard spelling for this method; iter is an equivalent lxml-only method that will stop your code from working in ElementTree if you need it, and appears to have no redeeming qualities whatsoever except saving you from typing 7 more characters for the method name;-). | 1 | 1,970 | 0 | 5 | 2010-06-19T19:53:00.000 | python,lxml | What is the difference between getiterator() and iter() wrt to lxml | 1 | 1 | 2 | 3,077,047 | 0 |
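A tiny illustration that the two spellings walk the same elements in lxml (newer lxml versions may emit a deprecation warning for getiterator):

```python
# Both calls visit the same elements; the tag and XML content are made up.
from lxml import etree

root = etree.fromstring("<root><x>1</x><y><x>2</x></y></root>")
print([el.text for el in root.iter("x")])          # ['1', '2']
print([el.text for el in root.getiterator("x")])   # same result
```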
0 | 0 | Is there a free open-source solution taking raw e-mail message (as a piece of text) and returning each header field, each attachment and the message body as separate fields? | false | 3,078,189 | 0.132549 | 1 | 0 | 2 | Yes... For each language you pointed out, I've used the one in Python myself. Try perusing the library documentation for your chosen library.
(Note: You may be expecting a "nice", high-level library for this parsing... That's a tricky area, email has evolved and grown without much design, there are a lot of dark corners, and API's reflect that). | 0 | 873 | 0 | 2 | 2010-06-20T04:15:00.000 | java,php,python,email,parsing | Is there an open-source eMail message (headers, attachments, etc.) parser? | 1 | 1 | 3 | 3,078,197 | 0 |
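As a hedged sketch of the Python route mentioned above, the standard library's email package splits a raw message into headers, body text and attachments:

```python
# Sketch: split a raw RFC 2822 message into headers, body and attachments.
import email

raw = open("message.eml").read()          # any raw message source (placeholder file)
msg = email.message_from_string(raw)

headers = dict(msg.items())               # e.g. headers['Subject'], headers['From']
body_parts, attachments = [], []
for part in msg.walk():                   # walk() flattens the MIME tree
    if part.get_content_maintype() == "multipart":
        continue                          # containers have no payload of their own
    if part.get_filename():               # parts with a filename are attachments
        attachments.append((part.get_filename(), part.get_payload(decode=True)))
    elif part.get_content_type() == "text/plain":
        body_parts.append(part.get_payload(decode=True))
```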
1 | 0 | I'm looking to create an application in Django which will allow for each client to point their domain to my server. At this point, I would want their domain to be accessed via https protocol and have a valid SSL connection. With OpenSSL, more specifically M2Crypto, can I do this right out the gate? Or, do I still need to purchase an SSL cert? Also, if the former is true (can do without purchase), does this mean I need to have a Python-based web server listening on 443 or does this all somehow still work with NGINX, et al?
Any help is appreciated. | false | 3,078,487 | 0.197375 | 0 | 0 | 2 | You will need an SSL cert, and you should let the web server handle the HTTPS. | 0 | 703 | 0 | 1 | 2010-06-20T07:17:00.000 | python,ssl,openssl,m2crypto | OpenSSL for HTTPS without a certificate | 1 | 1 | 2 | 3,078,518 | 0 |
0 | 0 | How can I check if a specific ip address or proxy is alive or dead | false | 3,078,704 | 0 | 1 | 0 | 0 | An IP address corresponds to a device. You can't "connect" to a device in the general sense. You can connect to services on the device identified by ports. So, you find the ip address and port of the proxy server you're interested in and then try connecting to it using a simple socket.connect. If it connects fine, you can alteast be sure that something is running on that port of that ip address. Then you go ahead and use it and if things are not as you expect, you can make further decisions. | 0 | 4,148 | 0 | 2 | 2010-06-20T09:03:00.000 | python,sockets,proxy | how to check if an ip address or proxy is working or not | 1 | 2 | 3 | 3,078,730 | 0 |
0 | 0 | How can I check if a specific ip address or proxy is alive or dead | true | 3,078,704 | 1.2 | 1 | 0 | 3 | Because there may be any level of filtering or translation between you and the remote host, the only way to determine whether you can connect to a specific host is to actually try to connect. If the connection succeeds, then you can, else you can't.
Pinging isn't sufficient because ICMP ECHO requests may be blocked yet TCP connections might go through fine. | 0 | 4,148 | 0 | 2 | 2010-06-20T09:03:00.000 | python,sockets,proxy | how to check if an ip address or proxy is working or not | 1 | 2 | 3 | 3,078,719 | 0 |
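A minimal sketch of the try-to-connect check described in these answers; the proxy address is a placeholder.

```python
import socket

def is_port_open(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        return s.connect_ex((host, port)) == 0
    finally:
        s.close()

print(is_port_open("127.0.0.1", 8118))   # placeholder proxy address and port
```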
1 | 0 | I have to build a tag cloud out of a webpage/feed. Once you get the word frequency table of tags, it's easy to build the tagcloud. But my doubt is how do I retrieve the tags/keywords from the webpage/feed?
This is what I'm doing now:
Get the content -> strip HTML -> split them with \s\n\t(space,newline,tab) -> Keyword list
But this does not work great.
Is there a better way? | false | 3,083,784 | 0 | 0 | 0 | 0 | What you have is a rough first-order approximation. I think if you then go back through the data and search for the frequency of 2-word phrases, then 3-word phrases, up to the maximum number of words that can be considered a tag, you'll get a better representation of keyword frequency.
You can refine this rough search pattern by specifying certain words that can be contained as part of a phrase (pronouns, etc.). | 0 | 226 | 0 | 1 | 2010-06-21T10:17:00.000 | python,tags,visualization,keyword | How do I get tags/keywords from a webpage/feed? | 1 | 1 | 1 | 3,250,807 | 0 |
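A rough sketch of counting single words and two-word phrases, assuming the HTML has already been stripped; the stop-word list is a tiny placeholder.

```python
# Word and two-word-phrase frequencies from already-stripped text.
import re
from collections import Counter

STOP = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it"}   # placeholder list

def keyword_counts(text):
    words = [w for w in re.findall(r"[a-z0-9']+", text.lower()) if w not in STOP]
    unigrams = Counter(words)
    bigrams = Counter(zip(words, words[1:]))     # 2-word phrases, as suggested above
    return unigrams, bigrams
```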
0 | 0 | This follows my previous questions on using lxml and Python.
I have a question, as to when I have a choice between using the methods provided by the lxml.etree and where I can make use of XPath, what should I use?
For example, to get a list of all the X tags in a XML document, I could either iterate through it using the getiterator() of lxml.etree, or I could write the XPath expression: //x.
There may be many more examples, this is just one. Question is, which should when I have a choose and why? | true | 3,084,627 | 1.2 | 0 | 0 | 1 | XPath is usually preferable to an explicit iteration over elements. XPath is more succinct, and will likely be faster since it is implemented inside the XML engine.
You'd want to use an explicit iteration if there were complex criteria that couldn't be expressed easily (or at all) in XPath, or if you needed to visit all the nodes for some other processing anyway, or if you wanted to get rich debugging output. | 0 | 96 | 0 | 1 | 2010-06-21T12:34:00.000 | python,lxml | Confused about using XPath or not | 1 | 1 | 1 | 3,084,858 | 0 |
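A tiny illustration of the two equivalent spellings discussed above, using the made-up tag from the question:

```python
from lxml import etree

tree = etree.fromstring("<doc><x>a</x><group><x>b</x></group></doc>")
via_xpath = tree.xpath("//x")                  # evaluated inside the XML engine
via_iter = [el for el in tree.iter("x")]       # explicit Python-level iteration
assert [e.text for e in via_xpath] == [e.text for e in via_iter]
```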
1 | 0 | I'm working with selenium.
while trying to click a button it creates a pop up (alert) and doesn’t return a page object.
Because of that I can’t use “click” alone as this method expects a page object and eventually fails because of a timeout.
I can use the “chooseOkOnNextConfirmation()” but this will click the pop up and i also want to verify that the pop up actually appeared.
Is there any method that will click and verify this alert? | false | 3,084,850 | 0.066568 | 0 | 0 | 1 | as far as I know you have to use always in alerts
selenium.get_confirmation()
from python doc:
If an confirmation is generated but you do not consume it with getConfirmation, the next Selenium action will fail. | 0 | 3,169 | 0 | 2 | 2010-06-21T13:01:00.000 | python,selenium | How to click and verify the existence of a pop up (alert) | 1 | 1 | 3 | 3,103,295 | 0 |
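A hedged sketch with the Selenium RC Python client, assuming its usual snake_case spellings of the calls mentioned above; the host, browser string, URL and locator are placeholders.

```python
# Hedged sketch (Selenium RC Python client, assumed snake_case API).
from selenium import selenium

sel = selenium("localhost", 4444, "*firefox", "http://example.com/")
sel.start()
sel.open("/page-with-alert")
sel.choose_ok_on_next_confirmation()   # arm the auto-OK *before* the click
sel.click("id=delete_button")          # the click that pops the confirmation
message = sel.get_confirmation()       # consuming it also proves it appeared
assert "Are you sure" in message       # placeholder expected text
sel.stop()
```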
0 | 0 | In Python, if you are developing a system service that communicates with user applications through sockets, and you want to treat sockets connected by different users differently, how would you go about that?
If I know that all connecting sockets will be from localhost, is there a way to lookup through the OS (either on windows or linux) which user is making the connection request? | false | 3,105,705 | 0.066568 | 0 | 0 | 1 | Unfortunately, at this point in time the python libraries don't support the usual SCM_CREDENTIALS method of passing credentials along a Unix socket.
You'll need to use an "ugly" method as described in another answer to find it. | 0 | 195 | 1 | 2 | 2010-06-23T21:30:00.000 | python,sockets | Determine user connecting a local socket with Python | 1 | 1 | 3 | 3,107,066 | 0 |
0 | 0 | Is it possible to filter all outgoing connections through a HTTPS or SOCKS proxy? I have a script that users various apis & calls scripts that use mechanize/urllib. I'd like to filter every connection through a proxy, setting the proxy in my 'main' script (the one that calls all the apis). Is this possible? | false | 3,115,286 | 0 | 0 | 0 | 0 | to use tor with mechanize I use tor+polipo.
set Polipo to use a parent proxy with socksParentProxy=localhost:9050 in its config file.
then use
browser.set_proxies({"http": "localhost:8118"})
where 8118 is your polipo port.
So you are using the Polipo HTTP proxy, which uses SOCKS to reach Tor.
hope it helps :) | 0 | 798 | 0 | 3 | 2010-06-25T03:06:00.000 | python,proxies | Python proxy question | 1 | 1 | 3 | 8,904,814 | 0 |
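A hedged sketch of wiring both urllib2 and mechanize through such a local HTTP proxy; the proxy address assumes Polipo on port 8118 as described above.

```python
# Route urllib2 and mechanize traffic through a local HTTP proxy.
import urllib2
import mechanize   # third-party

# urllib2: install a global opener so all later urlopen() calls use the proxy.
proxy = urllib2.ProxyHandler({"http": "http://127.0.0.1:8118"})
urllib2.install_opener(urllib2.build_opener(proxy))
print(urllib2.urlopen("http://example.com/").getcode())

# mechanize: the Browser object takes the same mapping.
br = mechanize.Browser()
br.set_proxies({"http": "127.0.0.1:8118"})
br.open("http://example.com/")
```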
0 | 0 | Are there any equivalents in objective-c to the following python urllib2 functions?
Request, urlopen, HTTPError, HTTPCookieProcessor
Also, how would I able to to this and change the method from "get" to "post"? | true | 3,120,430 | 1.2 | 1 | 0 | 1 | NSMutableHTTPURLRequest, a category of NSMutableURLRequest, is how you set up an HTTP request. Using that class you will specify a method (GET or POST), headers and a url.
NSURLConnection is how you open the connection. You will pass in a request and delegate, and the delegate will receive data, errors and messages related to the connection as they become available.
NSHTTPCookieStorage is how you manage existing cookies. There are a number of related classes in the NSHTTPCookie family.
With urlopen, you open a connection and read from it. There is no direct equivalent to that unless you use something lower level like CFReadStreamCreateForHTTPRequest. In Objective-C everything is passive, where you are notified when events occur on the stream. | 0 | 632 | 0 | 0 | 2010-06-25T18:23:00.000 | python,objective-c | Is there an Objective-C equivalent to Python urllib and urllib2? | 1 | 1 | 2 | 3,120,602 | 0 |
0 | 0 | I'm using XMPP in Python, and I can send messages, but how can I receive? | false | 3,121,518 | 0 | 0 | 0 | 0 | Good post. I notice this code snippet is also in the logger example in xmpppy sourceforge website.
I wonder if it is possible to reply to incoming messages. The code above only receives, and the nickname resource ID does not indicate who the sender is (in terms of JID format, user@server) unless xmpppy can translate that appropriately. So how might one take the received message and "echo" it back to the sender? Or is that not easily possible with the xmpppy library, meaning I need to find a different XMPP library? | 0 | 3,190 | 0 | 1 | 2010-06-25T21:17:00.000 | python,xmpp,xmpppy | How can I get a response with XMPP client in Python | 1 | 1 | 2 | 3,553,285 | 0 |
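A rough, heavily hedged sketch of receiving and echoing messages with xmpppy, following the API commonly shown in its examples; the JID, password and server are placeholders.

```python
# Receive messages with xmpppy and echo them back to the sender.
import xmpp   # third-party xmpppy package

def on_message(conn, msg):
    body = msg.getBody()
    if body:                                   # ignore empty stanzas / typing notifications
        conn.send(xmpp.Message(to=msg.getFrom(), body="echo: " + body))

jid = xmpp.JID("bot@example.com")
client = xmpp.Client(jid.getDomain(), debug=[])
client.connect()
client.auth(jid.getNode(), "secret")
client.RegisterHandler("message", on_message)
client.sendInitPresence()
while True:
    client.Process(1)                          # dispatches handlers roughly once a second
```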
0 | 0 | I have a C program and a Python program on the same machine. The C program generates some data in nested structures. What form of IPC is the best way to get this data across to the python program?
Serializing in C (especially nested structures) is a real bear, from what I hear, due to lack of serialization libraries. I am not very familiar with shared memory, but I assume the formatting of the C structures may not be very palatable to the python program when it comes to memory alignment and following pointers. The ctype and struct library seems to be for non-nested structures only. So far, what I am thinking is:
Wrap all the data in the C program into some xml or json format, write it via socket to python program and then let python program interpret the xml/json formatted data. Looks very cumbersome with lots of overheads.
Any better ideas ? | false | 3,127,467 | 0.379949 | 0 | 0 | 2 | I think you answered your own question. JSON is certainly a good choice. It's also not terribly difficult to do your own serialization in C. | 0 | 766 | 0 | 1 | 2010-06-27T13:26:00.000 | python,c,sockets,serialization | Sending binary data over IPC from C to Python | 1 | 1 | 1 | 3,127,588 | 0 |
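A hedged sketch of the Python side only, assuming the C program sends newline-delimited JSON over a local TCP socket; the port and the framing convention are assumptions, not part of the question.

```python
# Read newline-delimited JSON objects sent by the C program over local TCP.
import json
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 9000))
server.listen(1)
conn, _ = server.accept()

buf = b""
while True:
    chunk = conn.recv(4096)
    if not chunk:
        break
    buf += chunk
    while b"\n" in buf:                          # one JSON document per line
        line, buf = buf.split(b"\n", 1)
        record = json.loads(line.decode("utf-8"))
        print(record)                            # nested dicts/lists mirror the C structs
```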
1 | 0 | If you are parsing html or xml (with python), and looking for certain tags, it can hurt performance to lower or uppercase an entire document so that your comparisons are accurate. What percentage (estimated) of xml and html docs use any upper case characters in their tags? | false | 3,127,984 | 0.321513 | 0 | 0 | 5 | XML (and XHTML) tags are case-sensitive ... so <this> and <tHis> would be different elements.
However a lot (rough estimate) of HTML (not XHTML) tags are random-case. | 0 | 98 | 0 | 1 | 2010-06-27T16:30:00.000 | python,html,xml | When matching html or xml tags, should one worry about casing? | 1 | 3 | 3 | 3,127,997 | 0 |
1 | 0 | If you are parsing html or xml (with python), and looking for certain tags, it can hurt performance to lower or uppercase an entire document so that your comparisons are accurate. What percentage (estimated) of xml and html docs use any upper case characters in their tags? | false | 3,127,984 | 0.132549 | 0 | 0 | 2 | Only if you're using XHTML as this is case sensitive, whereas HTML is not so you can ignore case differences. Test for the doctype before worrying about checking for case. | 0 | 98 | 0 | 1 | 2010-06-27T16:30:00.000 | python,html,xml | When matching html or xml tags, should one worry about casing? | 1 | 3 | 3 | 3,128,006 | 0 |
1 | 0 | If you are parsing html or xml (with python), and looking for certain tags, it can hurt performance to lower or uppercase an entire document so that your comparisons are accurate. What percentage (estimated) of xml and html docs use any upper case characters in their tags? | true | 3,127,984 | 1.2 | 0 | 0 | 1 | I think you're overly concerned about performance. If you're talking about arbitrary web pages, 90% of them will be HTML, not XHTML, so you should do case-insensitive comparisons. Lowercasing a string is extremely fast, and should be less than 1% of the total time of your parser. If you're not sure, carefully time your parser on a document that's already all lowercase, with and without the lowercase conversions.
Even a pure-Python implementation of lower() would be negligible compared to the rest of the parsing, but it's better than that - CPython implements lower() in C code, so it really is as fast as possible.
Remember, premature optimization is the root of all evil. Make your program correct first, then make it fast. | 0 | 98 | 0 | 1 | 2010-06-27T16:30:00.000 | python,html,xml | When matching html or xml tags, should one worry about casing? | 1 | 3 | 3 | 3,128,078 | 0 |
0 | 0 | My Python application makes a lot of HTTP requests using the urllib2 module. This application might be used over very unreliable networks where latencies could be low and dropped packets and network timeouts might be very common. Is is possible to override a part of the urllib2 module so that each request is retried an X number of times before raising any exceptions? Has anyone seen something like this?
Can I achieve this without modifying my whole application, just by creating a wrapper over the urllib2 module? Then any code making requests through this module would automatically get the retry functionality.
Thanks. | false | 3,130,923 | 0 | 0 | 0 | 0 | Modifying parts of a library is never a good idea.
You can write wrappers around the methods you use to fetch data that would provide the desired behavior. Which would be trivial.
You can for example define methods with the same names as in urllib2 in your own module called myurllib2. Then just change the imports everywhere you use urllib2 | 0 | 6,335 | 0 | 2 | 2010-06-28T08:17:00.000 | python,urllib2,urllib | Make urllib retry multiple times | 1 | 1 | 2 | 3,131,037 | 0 |
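A minimal sketch of such a wrapper module; the retry count and backoff are arbitrary choices.

```python
# "myurllib2" sketch: same call shape as urllib2.urlopen, but retries
# transient failures before giving up.
import time
import urllib2

def urlopen(url, data=None, retries=3, delay=2):
    last_error = None
    for attempt in range(retries):
        try:
            return urllib2.urlopen(url, data)
        except (urllib2.URLError, IOError) as e:
            last_error = e
            time.sleep(delay * (attempt + 1))    # simple linear backoff
    raise last_error
```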
1 | 0 | I am trying to develop my first python web project. It have multiple tabs (like apple.com have Store, iPhone, iPad etc tabs) and when user click on any tab, the page is served from server.
I want to make sure that the selected tab will have different background color when page is loaded.
Which is a best way to do it? JavaScript/CSS/Directly from server? and How?
Thanks. | false | 3,137,167 | 0.197375 | 0 | 0 | 1 | I think the best way would be through CSS. You can handle it by adding the pseudoclass :active to the CSS.
The other way is to serve the page with an extra class added to the selected tab, which will change its background color, but I would not recommend that. | 0 | 149 | 0 | 0 | 2010-06-29T00:56:00.000 | python | Highlight selected Tab - Python webpage | 1 | 1 | 1 | 3,137,299 | 0 |
1 | 0 | I am already aware of tag based HTML parsing in Python using BeautifulSoup, htmllib etc.
However, I want a powerful engine which can do complex tasks like read html tables, lists etc. and present these as simple to use objects within code. Does python have such powerful libraries? | false | 3,167,679 | 0.132549 | 0 | 0 | 2 | BeautifulSoup is a nice library and provides a good way to parse HTML with some handy ways to parse the data very easily.
What you are trying to do can easily be done using some simple regular expressions: you can write a regular expression to search for a particular pattern of data and extract the pieces you need. | 0 | 921 | 0 | 4 | 2010-07-02T17:00:00.000 | python,html-parsing | Complex HTML parsing with Python | 1 | 1 | 3 | 3,167,761 | 0 |
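A hedged sketch of the table-reading case from the question, using BeautifulSoup 4 (the maintained successor of the library the asker mentions); the file name is a placeholder.

```python
# Read an HTML table into a list of rows with BeautifulSoup 4.
from bs4 import BeautifulSoup

html = open("page.html").read()
soup = BeautifulSoup(html, "html.parser")

table = soup.find("table")                       # first table on the page
rows = [
    [cell.get_text(strip=True) for cell in tr.find_all(["td", "th"])]
    for tr in table.find_all("tr")
]
print(rows)   # list of lists: one entry per row, one string per cell
```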
0 | 0 | Where?
I've tried Google, but none of the proxies I've found have worked...
I'm trying urllib.open with them...
I don't know if urllib needs some special proxy type or something like that...
Thank you
PS: I need some proxies to ping a certain website without getting banned from my IP. | false | 3,169,425 | 0 | 0 | 0 | 0 | You probably don't even need to use a proxy. The urllib module knows how to contact web servers directly.
You may need to use a proxy if you're behind certain kinds of corporate firewalls, but in that case you can't just choose any proxy to use, you have to use the corporate proxy. In such a case, a list of open proxies on Google isn't going to help you. | 0 | 448 | 0 | 0 | 2010-07-02T22:19:00.000 | python,proxy | Where can I get some proxy list good for use it with Python? | 1 | 2 | 2 | 3,169,461 | 0 |
0 | 0 | Where?
I've tried Google, but none of the proxies I've found have worked...
I'm trying urllib.open with them...
I don't know if urllib needs some special proxy type or something like that...
Thank you
PS: I need some proxies to ping a certain website without getting banned from my IP. | true | 3,169,425 | 1.2 | 0 | 0 | 0 | Try setting up your own proxy and connecting to it... | 0 | 448 | 0 | 0 | 2010-07-02T22:19:00.000 | python,proxy | Where can I get some proxy list good for use it with Python? | 1 | 2 | 2 | 3,169,457 | 0 |
1 | 0 | I am new to Python and Scrapy.
I am running scrapy-ctl.py from another Python script using the
subprocess module. But I want to pass the 'start url' to the spider from
this script itself. Is it possible to pass start_urls (which are
determined in the script from which scrapy-ctl is run) to the spider?
I will be grateful for any suggestions or ideas regarding this... :)
Thanking in advance.... | false | 3,179,979 | 0.379949 | 0 | 0 | 2 | You can override the start_requests() method in your spider to get the starting requests (which, by default, are generated using the urls in the start_urls attribute). | 0 | 179 | 0 | 0 | 2010-07-05T13:49:00.000 | python,windows,web-crawler,scrapy | how to parse a string to spider from another script | 1 | 1 | 1 | 3,186,698 | 0 |
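A hedged sketch of that override using current Scrapy spellings rather than the 2010-era scrapy-ctl interface; the spider name and argument are made up.

```python
# A spider that takes its start URL as an argument instead of a hard-coded
# start_urls list, by overriding start_requests() as suggested above.
import scrapy

class SingleUrlSpider(scrapy.Spider):
    name = "single_url"

    def __init__(self, start_url=None, *args, **kwargs):
        super(SingleUrlSpider, self).__init__(*args, **kwargs)
        self.start_url = start_url

    def start_requests(self):
        yield scrapy.Request(self.start_url, callback=self.parse)

    def parse(self, response):
        self.logger.info("fetched %s", response.url)

# e.g. run with:  scrapy crawl single_url -a start_url=http://example.com/
```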
0 | 0 | I want my Python script to access a URL through an IP specified in the script instead of through the default DNS for the domain. Basically I want the equivalent of adding an entry to my /etc/hosts file, but I want the change to apply only to my script instead of globally on the whole server. Any ideas? | true | 3,183,617 | 1.2 | 1 | 0 | 2 | Whether this works or not will depend on whether the far end site is using HTTP/1.1 named-based virtual hosting or not.
If they're not, you can simply replace the hostname part of the URL with their IP address, per @Greg's answer.
If they are, however, you have to ensure that the correct Host: header is sent as part of the HTTP request. Without that, a virtual hosting web server won't know which site's content to give you. Refer to your HTTP client API (Curl?) to see if you can add or change default request headers. | 0 | 911 | 0 | 2 | 2010-07-06T05:08:00.000 | python,dns,urllib,hosts | Alternate host/IP for python script | 1 | 2 | 2 | 3,184,895 | 0 |
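A hedged sketch of that idea with urllib2; the IP and hostname are placeholders, and whether your HTTP stack preserves an explicitly set Host header is something to verify for your client library.

```python
# Connect to a specific server IP while still sending the intended Host header,
# so name-based virtual hosting serves the right site.
import urllib2

req = urllib2.Request("http://203.0.113.10/index.html")   # connect by IP (placeholder)
req.add_header("Host", "www.example.com")                  # but ask for this vhost
print(urllib2.urlopen(req).read()[:200])
```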
0 | 0 | I want my Python script to access a URL through an IP specified in the script instead of through the default DNS for the domain. Basically I want the equivalent of adding an entry to my /etc/hosts file, but I want the change to apply only to my script instead of globally on the whole server. Any ideas? | false | 3,183,617 | 0 | 1 | 0 | 0 | You can use an explicit IP number to connect to a specific machine by embedding that into the URL: http://127.0.0.1/index.html is equivalent to http://localhost/index.html
That said, it isn't a good idea to use IP numbers instead of DNS entries. IPs change a lot more often than DNS entries, meaning your script has a greater chance of breaking if you hard-code the address instead of letting it resolve normally. | 0 | 911 | 0 | 2 | 2010-07-06T05:08:00.000 | python,dns,urllib,hosts | Alternate host/IP for python script | 1 | 2 | 2 | 3,183,666 | 0 |
0 | 0 | Is it possible to write a firewall in python? Say it would block all traffic? | false | 3,189,138 | 0.099668 | 1 | 0 | 3 | I'm sure it's probably possible, but ill-advised. As mcandre mentions, most OSes couple the low level networking capabilities you need for a firewall tightly into the kernel and thus this task is usually done in C/C++ and integrates tightly with the kernel. The microkernel OSes (Mach et al) might be more amenable than linux. You may be able to mix some python and C, but I think the more interesting discussion here is going to be around "why should I"/"why shouldn't I" implement a firewall in python as opposed to just is it technically possible. | 0 | 31,230 | 0 | 13 | 2010-07-06T18:34:00.000 | python,firewall | Is it possible to write a firewall in python? | 1 | 4 | 6 | 3,189,187 | 0 |
0 | 0 | Is it possible to write a firewall in python? Say it would block all traffic? | false | 3,189,138 | 0.066568 | 1 | 0 | 2 | "Yes" - that's usually the answer to "is it possible...?" questions.
How difficult and specific implementations are something else entirely. I suppose technically in a don't do this sort of way, if you were hell-bent on making a quick firewall in Python, you could use the socket libraries and open connections to and from yourself on every port. I have no clue how effective that would be, though it seems like it wouldn't be. Of course, if you're simply interested in rolling your own, and doing this as a learning experience, then cool, you have a long road ahead of you and plenty of education.
OTOH, if you're actually worried about network security there are tons of other products out there that you can use, from iptables on *nix, to ZoneAlarm on windows. Plenty of them are both free and secure so there's really no reason to roll your own except on an "I want to learn" basis. | 0 | 31,230 | 0 | 13 | 2010-07-06T18:34:00.000 | python,firewall | Is it possible to write a firewall in python? | 1 | 4 | 6 | 3,189,232 | 0 |
0 | 0 | Is it possible to write a firewall in python? Say it would block all traffic? | false | 3,189,138 | 0.132549 | 1 | 0 | 4 | I'm sure in theory you could achieve what you want, but I believe in practice your idea is not doable (if you wonder why, it's because it's too hard to "interface" a high level language with the low level kernel).
What you could do instead is some Python tool that controls the firewall of the operating system so you could add rules, delete , etc. (in a similar way to what iptables does in Linux). | 0 | 31,230 | 0 | 13 | 2010-07-06T18:34:00.000 | python,firewall | Is it possible to write a firewall in python? | 1 | 4 | 6 | 3,189,540 | 0 |
0 | 0 | Is it possible to write a firewall in python? Say it would block all traffic? | false | 3,189,138 | 0.099668 | 1 | 0 | 3 | Interesting thread. I stumbled on it looking for Python NFQUEUE examples.
My take is you could create a great firewall in python and use the kernel.
E.g.
Add a Linux firewall rule through iptables that forwards SYN packets (the first packet of each connection) to NFQUEUE, for the Python firewall to decide what to do.
If you accept it, mark the TCP stream/flow with a firewall mark using NFQUEUE, and then have an iptables rule that simply allows all traffic streams carrying that mark.
This way you can have a powerful high-level python program deciding to allow or deny traffic, and the speed of the kernel to forward all other packets in the same flow. | 0 | 31,230 | 0 | 13 | 2010-07-06T18:34:00.000 | python,firewall | Is it possible to write a firewall in python? | 1 | 4 | 6 | 15,045,900 | 0 |
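A heavily hedged sketch of that NFQUEUE idea using the third-party python-netfilterqueue package; the iptables rule, queue number and policy function are all assumptions for illustration.

```python
# Pair this with something like:
#   iptables -A INPUT -p tcp --syn -j NFQUEUE --queue-num 1
# so only the first (SYN) packet of each connection reaches Python.
from netfilterqueue import NetfilterQueue   # third-party python-netfilterqueue

def looks_allowed(data):
    return True                        # placeholder policy: allow everything

def decide(packet):
    data = packet.get_payload()        # raw IP packet bytes; parse as needed
    if looks_allowed(data):
        packet.accept()
    else:
        packet.drop()

nfq = NetfilterQueue()
nfq.bind(1, decide)                    # queue number 1, callback per packet
try:
    nfq.run()                          # blocks, dispatching packets to decide()
finally:
    nfq.unbind()
```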
1 | 0 | I need some advice. What Python framework can I use to develop a SOAP web service? I know about SOAPpy and ZSI, but those libraries aren't under active development. Is there something better?
Thanks. | false | 3,195,437 | 0 | 0 | 0 | 0 | If a library is not under active development, there are two possibilities: it was abandoned, or it simply has no more errors to fix.
Why are you looking for something else? Did you test these two? | 0 | 1,428 | 0 | 0 | 2010-07-07T13:55:00.000 | python,soap | Python framework for SOAP web services | 1 | 1 | 3 | 3,195,467 | 0 |
0 | 0 | I am trying to open a page using urllib2 but i keep getting connection timed out errors.
The line which i am using is:
f = urllib2.urlopen(url)
exact error is:
URLError: <urlopen error [Errno 110] Connection timed out> | false | 3,197,299 | 0.099668 | 0 | 0 | 1 | As a general strategy, open wireshark and watch the traffic generated by urllib2.urlopen(url). You may be able to see where the error is coming from. | 0 | 17,707 | 0 | 3 | 2010-07-07T17:30:00.000 | python,urllib2 | urllib2 connection timed out error | 1 | 1 | 2 | 3,410,537 | 0 |
0 | 0 | I'd like to integrate a web site written in Python (using Pylons) with an existing SAML based authentication service. From reading about SAML, I believe that the IdP (which already exists in this scenario) will send an XML document (via browser post) to the Service Provider (which I am implementing). The Service Provider will need to parse this XML and verify the identity of the user.
Are there any existing Python libraries that implement this functionality?
Thank you, | false | 3,198,104 | -0.197375 | 0 | 0 | -1 | I know you are looking for a Python based solution but there are quite a few "server" based solutions that would potentially solve your problem as well and require few ongoing code maintenance issues.
For example, using the Apache or IIS Integration kits in conjunction with the PingFederate server from www.pingidentity.com would allow you to pretty quickly and easily support SAML 1.0, 1.1, 2.0, WS-Fed and OpenID for your SP Application.
Hope this helps | 0 | 3,557 | 0 | 7 | 2010-07-07T19:23:00.000 | python,authentication,saml,single-sign-on | Implementing a SAML client in Python | 1 | 1 | 1 | 3,255,786 | 0 |
0 | 0 | To give a little background, I'm writing (or am going to write) a daemon in Python for scheduling tasks to run at user-specified dates. The scheduler daemon also needs to have a JSON-based HTTP web service interface (buzzword mania, I know) for adding tasks to the queue and monitoring the scheduler's status. The interface needs to receive requests while the daemon is running, so they either need to run in a separate thread or cooperatively multitask somehow. Ideally the web service interface should run in the same process as the daemon, too.
I could think of a few ways to do it, but I'm wondering if there's some obvious module out there that's specifically tailored for this kind of thing. Any suggestions about what to use, or about the project in general are quite welcome. Thanks! :) | false | 3,201,446 | 0 | 0 | 0 | 0 | Don't re-invent the bicycle!
Run jobs via cron script, and create a separate web interface using, for example, Django or Tornado.
Connect them via a database. Even sqlite will do the job if you don't want to scale on more machines. | 0 | 400 | 1 | 0 | 2010-07-08T07:26:00.000 | python,web-services | what's a good module for writing an http web service interface for a daemon? | 1 | 2 | 4 | 3,201,519 | 0 |
0 | 0 | To give a little background, I'm writing (or am going to write) a daemon in Python for scheduling tasks to run at user-specified dates. The scheduler daemon also needs to have a JSON-based HTTP web service interface (buzzword mania, I know) for adding tasks to the queue and monitoring the scheduler's status. The interface needs to receive requests while the daemon is running, so they either need to run in a separate thread or cooperatively multitask somehow. Ideally the web service interface should run in the same process as the daemon, too.
I could think of a few ways to do it, but I'm wondering if there's some obvious module out there that's specifically tailored for this kind of thing. Any suggestions about what to use, or about the project in general, are quite welcome. Thanks! :) | false | 3,201,446 | 0 | 0 | 0 | 0 | I believe almost any Python web framework would be useful here.
You can pick one like CherryPy, which is small enough to integrate into your system. CherryPy also includes a pure-Python WSGI server suitable for production.
The performance may not be as good as Apache's, but it's already very stable. | 0 | 400 | 1 | 0 | 2010-07-08T07:26:00.000 | python,web-services | what's a good module for writing an http web service interface for a daemon? | 1 | 2 | 4 | 3,201,631 | 0 |
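A small hedged sketch of such a CherryPy interface; the endpoints and payloads are made up, and in a real daemon you would run the HTTP server in its own thread alongside the scheduler.

```python
# JSON-over-HTTP control interface with CherryPy.
import json
import cherrypy

class SchedulerApi(object):
    @cherrypy.expose
    def status(self):
        cherrypy.response.headers["Content-Type"] = "application/json"
        return json.dumps({"queued": 0, "running": True})   # placeholder data

    @cherrypy.expose
    def add(self, when=None, command=None):
        # ...push (when, command) onto the daemon's queue here...
        return json.dumps({"ok": True})

cherrypy.quickstart(SchedulerApi(), "/")   # serves on http://127.0.0.1:8080/ by default
```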
1 | 0 | I have been using BeautifulSoup but as I understand it that library is no longer being maintained. So what should I use ? I have heard about Xpath but what else is there ? | false | 3,244,335 | 0 | 0 | 0 | 0 | Well, if you're not duty-bound to python, you could always use a TagSoup parser. It's a Java library, but it gives very good results. You could also just use Tidy to clean your input before trying to parse it. | 0 | 653 | 0 | 3 | 2010-07-14T08:05:00.000 | python,parsing | No more BeautifulSoup | 1 | 1 | 4 | 3,244,345 | 0 |
1 | 0 | My application has a xml based configuration. It has also a xsd file. Before my application starts, xmllint will check the configuration against the xsd file.
With the growth of my application, the configuration structure has changed a bit. Now I have to face this problem: When I provide a new version of my application to customer, I have to upgrade the existing configuration.
How to make this done easy and clever?
My idea is to build a configuration object using python, and then read configuration v1 from file and save it as v2. But if later the structure is changed again, I have to build another configuration object model. | true | 3,247,516 | 1.2 | 0 | 0 | 1 | For all configuration settings that remain the same between configurations, have your installation script copy those over from the old config file if it exists. For the rest, just have some defaults that the user can change if necessary, as usual for a config file. Unless I've misunderstood the question, it sounds like you're making a bigger deal out of this than it needs to be.
By the way, you'd really only need one "updater" script, because you could parametrize the XML tagging such that it go through your new config file/config layout file, and then just check the tags in the old file against that and copy the data from the ones that are present in the new file. I haven't worked with XSD files before, so I don't know the specifics of working with them, but I don't think it should be that difficult. | 0 | 102 | 0 | 0 | 2010-07-14T15:06:00.000 | python,xml,configuration,xsd,upgrade | Approach to upgrade application configuration | 1 | 1 | 1 | 3,247,629 | 0 |
0 | 0 | Some web pages, given their URLs, have "Download" text links, which are hyperlinks.
How can I get the hyperlinks from those URLs/pages with Python or IronPython?
And can I download the files behind these hyperlinks with Python or IronPython?
How can I do that?
Are there any C# tools?
I am not a native English speaker, so sorry for my English. | false | 3,261,198 | 0.099668 | 0 | 0 | 1 | The easiest way would be to pass the HTML page into an XML/HTML parser, and then call getElementsByTagName("A") on the root node. Once you get that, iterate through the list and pull out the href attribute. | 0 | 210 | 0 | 0 | 2010-07-16T00:56:00.000 | c#,python,ironpython | How can I download files form web pages? | 1 | 1 | 2 | 3,261,217 | 0 |
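A hedged sketch of that parser approach, assuming the page is well-formed XHTML (real-world HTML often is not) and that the hrefs are absolute; the URL is a placeholder.

```python
# Find "Download" links and fetch the files they point at (Python 2, minidom).
import urllib
from xml.dom import minidom

page_url = "http://example.com/downloads.html"
doc = minidom.parseString(urllib.urlopen(page_url).read())

for a in doc.getElementsByTagName("a"):        # lowercase tag for XHTML
    text = "".join(t.data for t in a.childNodes if t.nodeType == t.TEXT_NODE)
    if "Download" in text:
        href = a.getAttribute("href")
        urllib.urlretrieve(href, href.split("/")[-1])   # save next to the script
```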
1 | 0 | How do I check if the page has pending AJAX or HTTP GET/POST requests? I use JavaScript and/or Python for this checking.
What I want to do is execute a script once a page has finished all its requests. onload doesn't work for me; if you have used Firebug's Net panel, you will know why: onload fires when the page is loaded, but there is a possibility that there are still pending requests hanging around somewhere.
thank you in advance. | false | 3,262,473 | 0 | 0 | 0 | 0 | You would need to keep track of each XMLHttpRequest and monitor whether it completes or the asynchronous callback is executed. | 0 | 25,834 | 0 | 17 | 2010-07-16T06:38:00.000 | javascript,python,html | Check Pending AJAX requests or HTTP GET/POST request | 1 | 2 | 8 | 3,262,533 | 0 |
1 | 0 | How do I check if the page has pending AJAX or HTTP GET/POST requests? I use JavaScript and/or Python for this checking.
What I want to do is execute a script once a page has finished all its requests. onload doesn't work for me; if you have used Firebug's Net panel, you will know why: onload fires when the page is loaded, but there is a possibility that there are still pending requests hanging around somewhere.
thank you in advance. | false | 3,262,473 | 0.099668 | 0 | 0 | 4 | I see you mention you are using Prototype.js. You can track active requests with Prototype by checking the Ajax.activeRequestCount value. You could check this using setTimeout or setInterval to make sure that any requests triggered on page load have completed (if that's what you're looking to do) | 0 | 25,834 | 0 | 17 | 2010-07-16T06:38:00.000 | javascript,python,html | Check Pending AJAX requests or HTTP GET/POST request | 1 | 2 | 8 | 3,263,704 | 0 |
0 | 0 | I'm running complex tests that create many cookies for different sections of my web site.
Occasionally I have to restart the browser in the middle of a long test, and since the Selenium server doesn't modify the base Firefox profile, the cookies evaporate.
Is there any way I can save all of the cookies to a Python variable before terminating the browser and restore them after starting a new browser instance? | false | 3,265,062 | 0 | 1 | 0 | 0 | Yes, sure. Look at getCookie, getCookieByName and createCookie methods. | 0 | 1,991 | 0 | 2 | 2010-07-16T13:02:00.000 | python,cookies,selenium,selenium-rc | How to save and restore all cookies with Selenium RC? | 1 | 1 | 2 | 3,314,427 | 0 |
0 | 0 | I would like to invoke my chrome or firefox browser when a file that I specify is modified. How could I "watch" that file to do something when it gets modified?
Programmatically it seems the steps are.. basically set a never ending interval every second or so and cache the initial modification date, then compare the date every second, when it changes invoke X. | false | 3,274,334 | 0.085505 | 0 | 0 | 3 | Install inotify-tools and write a simple shell script to watch a file. | 0 | 27,121 | 0 | 14 | 2010-07-18T04:39:00.000 | python,linux | How can I "watch" a file for modification / change? | 1 | 1 | 7 | 3,274,680 | 0 |
0 | 0 | I'm new to python programming, and want to try to edit scripts in IDLE instead of the OSX command line. However, when I try to start it, it gives me the error "Idle Subprocess didn't make a connection. Either Idle can't start a subprocess or personal firewall software is blocking the connection." I don't have a firewall configured, so what could the problem be? | false | 3,277,946 | 0.197375 | 0 | 0 | 2 | You can try running IDLE with the "-n" option. From the IDLE help:
Running without a subprocess:
If IDLE is started with the -n command line switch it will run in a
single process and will not create the subprocess which runs the RPC
Python execution server. This can be useful if Python cannot create
the subprocess or the RPC socket interface on your platform. However,
in this mode user code is not isolated from IDLE itself. Also, the
environment is not restarted when Run/Run Module (F5) is selected. If
your code has been modified, you must reload() the affected modules and
re-import any specific items (e.g. from foo import baz) if the changes
are to take effect. For these reasons, it is preferable to run IDLE
with the default subprocess if at all possible. | 0 | 13,342 | 1 | 3 | 2010-07-19T01:27:00.000 | python,macos,subprocess | No IDLE Subprocess connection | 1 | 1 | 2 | 3,277,996 | 0 |
0 | 0 | I want to make a python script that tests the bandwidth of a connection. I am thinking of downloading/uploading a file of a known size using urllib2, and measuring the time it takes to perform this task. I would also like to measure the delay to a given IP address, such as is given by pinging the IP. Is this possible using urllib2? | false | 3,280,391 | 0 | 1 | 0 | 0 | You could download an empty file to measure the delay. You would be measuring more than only the network delay, but the difference shouldn't be too big, I expect. | 0 | 1,534 | 0 | 4 | 2010-07-19T10:52:00.000 | python,urllib2,bandwidth | Bandwidth test, delay test using urllib2 | 1 | 1 | 2 | 3,280,448 | 0 |
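A minimal sketch of the timing idea from the question; the URL is a placeholder and should point at a file of known size.

```python
# Time a download with urllib2 to estimate bandwidth.
import time
import urllib2

url = "http://example.com/testfile.bin"
start = time.time()
data = urllib2.urlopen(url).read()
elapsed = time.time() - start

print("%d bytes in %.2f s = %.1f KiB/s"
      % (len(data), elapsed, len(data) / 1024.0 / elapsed))
```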
0 | 0 | On a linux box I've got a python script that's always started from predefined user. It may take a while for it to finish so I want to allow other users to stop it from the web.
Using kill fails with Operation not permitted.
Can I somehow modify my long-running Python script so that it'll receive a signal from another user? Obviously, that other user is the one that runs the web server.
May be there's entirely different way to approach this problem I can't think of right now. | false | 3,281,107 | 0.049958 | 1 | 0 | 1 | If you do not want to execute the kill command with the correct permissions, you can send any other signal to the other script. It is then the other scripts' responsibility to terminate. You cannot force it, unless you have the permissions to do so.
This can happen with a network connection, or a 'kill' file whose existence is checked by the other script, or anything else the script is able to listen to. | 0 | 332 | 0 | 3 | 2010-07-19T12:46:00.000 | python,linux,signals,kill | terminate script of another user | 1 | 3 | 4 | 3,281,123 | 0 |
0 | 0 | On a linux box I've got a python script that's always started from predefined user. It may take a while for it to finish so I want to allow other users to stop it from the web.
Using kill fails with Operation not permitted.
Can I somehow modify my long running python script so that it'll recive a signal from another user? Obviously, that another user is the one that starts a web server.
May be there's entirely different way to approach this problem I can't think of right now. | false | 3,281,107 | 0.049958 | 1 | 0 | 1 | Off the top of my head, one solution would be threading the script and waiting for a kill signal via some form or another. Or rather than threading, you could have a file that the script checks every N times through a loop - then you just write a kill signal to that file (which of course has write permissions by the web user).
I'm not terribly familiar with kill, other than killing my own scripts, so there may be a better solution. | 0 | 332 | 0 | 3 | 2010-07-19T12:46:00.000 | python,linux,signals,kill | terminate script of another user | 1 | 3 | 4 | 3,281,132 | 0 |
0 | 0 | On a linux box I've got a python script that's always started from predefined user. It may take a while for it to finish so I want to allow other users to stop it from the web.
Using kill fails with Operation not permitted.
Can I somehow modify my long running python script so that it'll recive a signal from another user? Obviously, that another user is the one that starts a web server.
May be there's entirely different way to approach this problem I can't think of right now. | false | 3,281,107 | 0 | 1 | 0 | 0 | You could use sudo to perform the kill command as root, but that is horrible practice.
How about having the long-running script check some condition every x seconds, for example the existence of a file like /tmp/stop-xyz.txt? If that file is found, the script terminates itself immediately.
(Or any other means of inter-process communication - it doesn't matter.) | 0 | 332 | 0 | 3 | 2010-07-19T12:46:00.000 | python,linux,signals,kill | terminate script of another user | 1 | 3 | 4 | 3,281,121 | 0 |
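A minimal sketch of the stop-file idea from these answers; the flag path and the unit of work are placeholders.

```python
# Long-running script that stops itself when a flag file appears.
import os
import sys
import time

STOP_FLAG = "/tmp/stop-xyz.txt"   # path writable by the web server user

def do_one_unit_of_work():
    time.sleep(0.1)               # placeholder for the real work

while True:
    do_one_unit_of_work()
    if os.path.exists(STOP_FLAG):  # checked once per loop iteration
        os.remove(STOP_FLAG)       # consume the request
        sys.exit("stopped on request")
```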
1 | 0 | I am working on an application, and my job is to develop a sample Python interface for it. The application can provide an XML-based document, which I can get via an HTTP GET request, but the problem is that the XML-based document is endless, which means there will be no end element. I know that the document should be handled with SAX, but how do I deal with the endless problem? Any ideas or sample code? | false | 3,284,289 | 0 | 0 | 0 | 0 | If the document is endless, why not add the end tag (of the main element) manually before opening it in the parser? I don't know Python, but why not append </endtag> to the string? | 0 | 1,933 | 0 | 5 | 2010-07-19T19:24:00.000 | python,xml | python handle endless XML | 1 | 1 | 7 | 3,284,880 | 0 |
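Since the question already points at SAX, here is a hedged sketch of feeding an unterminated stream to the standard library parser chunk by chunk; the URL and handler logic are placeholders.

```python
# Feed an endless XML stream to a SAX parser incrementally, reacting to each
# element as it arrives instead of waiting for a closing root tag.
import urllib2
import xml.sax

class ItemHandler(xml.sax.ContentHandler):
    def startElement(self, name, attrs):
        print("got element: %s" % name)        # react per element here

parser = xml.sax.make_parser()
parser.setContentHandler(ItemHandler())

stream = urllib2.urlopen("http://example.com/endless.xml")   # placeholder URL
while True:
    chunk = stream.read(4096)
    if not chunk:
        break
    parser.feed(chunk)     # incremental parsing; never needs the end tag
```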
1 | 0 | This is my first question here, so I hope it will be done correctly ;)
I've been assigned the task to give a web interface to some "home made" python script.
This script is used to check some web sites/applications availability, via curl commands. A very important aspect of this script is that it gives its results in real-time, writing line by line to the standard output.
By giving a web interface to this script, the main goal is that the script can be easily used from anywhere, for example via a smartphone. So the web interface must be quite basic, and work "plugin-free".
My problem is that most solutions I thought or found on the web (ajax, django, even a simple post) seem to be needing a full generation of the page before sending it to the browser, losing this important "real-time" aspect.
Any idea on how to do this properly ?
Thanks in advance. | false | 3,289,584 | 0 | 0 | 0 | 0 | Your task sounds interesting. :-) A scenario that just came to mind: you continuously scrape the resources with your home-brew scripts, and push the results into your persistent database and a caching system -- like Redis -- simultaneously. Your caching system/layer serves as the primary data source when serving client requests. Redis, for example, is a high-performance key-value store capable of handling 100k connections per second. Since only the n latest entries (say, 50k entries) matter, the caching system will only hold these entries and let you focus solely on developing the server-side API (handling connections, processing requests, reading from Redis) and the frontend. The communication between the frontend and the backend API could be driven by WebSocket connections -- a pretty new part of the HTML5 spec, though already supported by many browser versions released these days. Alternatively you could fall back on some asynchronous Flash socket solution. WebSockets basically allow for persistent connections between a client and a server; you can register event listeners that are called for every incoming data packet -- no endless polling or other stuff. | 0 | 3,095 | 0 | 8 | 2010-07-20T11:50:00.000 | python | Web-ifing a python command line script? | 1 | 1 | 3 | 3,289,731 | 0 |
0 | 0 | I need to use Python 2.4.4 to convert XML to and from a Python dictionary. All I need are the node names and values, I'm not worried about attributes because the XML I'm parsing doesn't have any. I can't use ElementTree because that isn't available for 2.4.4, and I can't use 3rd party libraries due to my work environment. What's the easiest way for me to do this? Are there any good snippets?
Also, if there isn't an easy way to do this, are there any alternative serialization formats that Python 2.4.4 has native support for? | false | 3,292,973 | 0.039979 | 0 | 0 | 1 | Grey's link includes some solutions that look pretty robust. If you want to roll your own though, you could walk xml.dom Node's childNodes member recursively, terminating when a node has no child nodes left. | 0 | 24,276 | 0 | 4 | 2010-07-20T18:11:00.000 | python,xml,serialization,xml-serialization,python-2.4 | XML to/from a Python dictionary | 1 | 1 | 5 | 3,294,357 | 0 |
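A rough sketch of that recursive walk with minidom (bundled with Python 2.4); repeated child tags would overwrite each other in this simple version.

```python
# Convert simple, attribute-free XML into nested dicts by walking childNodes.
from xml.dom import minidom

def element_to_dict(node):
    children = [c for c in node.childNodes if c.nodeType == c.ELEMENT_NODE]
    if not children:                       # leaf: return its text content
        return "".join(c.data for c in node.childNodes
                       if c.nodeType == c.TEXT_NODE).strip()
    result = {}
    for child in children:                 # note: repeated tags overwrite each other
        result[child.nodeName] = element_to_dict(child)
    return result

doc = minidom.parseString("<person><name>Ada</name><age>36</age></person>")
print(element_to_dict(doc.documentElement))   # {'name': 'Ada', 'age': '36'}
```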
0 | 0 | When I put my Mac to sleep while an interactive Python session with a selenium instance and corresponding browser is running, after waking up the browser (or selenium server?) are no longer responding to any commands from the Python shell.
This forces me to restart the browser, losing the state of my test.
Is there a way to overcome this? | false | 3,293,788 | 0 | 0 | 0 | 0 | You might be able to make this work by setting ridiculously large timeout values on your Selenium commands. But you may still run into a problem where MacOS X kills the network connection when it goes to sleep. Once the connection is severed, your only real option would be to grab the test session ID and try to reconnect to it, provided Selenium hasn't timed the commands out yet. | 0 | 429 | 0 | 2 | 2010-07-20T21:50:00.000 | python,macos,selenium,selenium-rc | How to resume a Selenium RC test after a computer sleep? | 1 | 1 | 1 | 4,579,133 | 0 |
1 | 0 | I am developing an online browser game, based on google maps, with Django backend, and I am getting close to the point where I need to make a decision on how to implement the (backend) timed events - i.e. NPC possession quantity raising (e.g. city population should grow based on some variables - city size, application speed).
The possible solutions I found are:
Putting the queued actions in a table and processing them along with every request.
Problems: huge overhead, harder to implement
Using cron or something similar
Problem: this is an external tool, and I want as little external tools as possible.
Any other solutions? | true | 3,294,682 | 1.2 | 0 | 0 | 5 | Running a scheduled task to perform updates in your game, at any interval, will give you a spike of heavy database use. If your game logic relies on all of those database values to be up to date at the same time (which is very likely, if you're running an interval based update), you'll have to have scheduled downtime for as long as that cronjob is running. When that time becomes longer, as your player base grows, this becomes extremely annoying.
If you're trying to reduce database overhead, you should store values with their last update time and growth rates, and only update those rows when the quantity or rate of growth changes.
For example, a stash of gold, that grows at 5 gold per minute, only updates when a player withdraws gold from it. When you need to know the current amount, it is calculated based on the last update time, the current time, the amount stored at the last update, and the rate of growth.
Data that changes over time, without requiring interaction, does not belong in the database. It belongs in the logic end of your game. When a player performs an activity you need to remember, or a calculation becomes too cumbersome to generate again, that's when you store it. | 0 | 770 | 0 | 3 | 2010-07-20T21:50:00.000 | python,django,cron | Browser-based MMO best-practice | 1 | 1 | 2 | 3,294,995 | 0 |
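To illustrate the lazy-accrual idea in the answer above, a minimal Python sketch; the 5-gold-per-minute rate and the class/field names are only illustrative, not part of any real game API:
import time

class GoldStash(object):
    def __init__(self, amount=0, rate_per_minute=5):
        self.stored_amount = amount        # value persisted at the last update
        self.rate_per_minute = rate_per_minute
        self.last_update = time.time()     # timestamp of the last update

    def current_amount(self):
        # Derive the present value from the stored value, the growth rate and
        # the elapsed time -- no cron job or periodic database write needed.
        elapsed_minutes = (time.time() - self.last_update) / 60.0
        return self.stored_amount + int(elapsed_minutes * self.rate_per_minute)

    def withdraw(self, amount):
        # Only a real interaction persists a new baseline; this is also the
        # moment you would write the row back to the database.
        current = self.current_amount()
        if amount > current:
            raise ValueError("not enough gold")
        self.stored_amount = current - amount
        self.last_update = time.time()

stash = GoldStash(amount=100)
print stash.current_amount()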
1 | 0 | I developed a system that consists of software and hardware interaction. Basically its a transaction system where the transaction details are encrypted on a PCI device then returned back to my web based system where it is stored in a DB then displayed using javascript/extjs in the browser. How I do this now is the following:
Transaction encoding process
1.The user selects a transaction from a grid and presses "encode" button,extjs/js then sends the string to PHP where it is formatted and inserted into requests[incoming_request]. At this stage I start a extjs taskmanager to do interval checks on the requests[response] column for a result, and I display a "please wait..." message.
2.I have created a python daemon service that monitors the requests table for any transactions to encode.The python daemon then picks up any requests[incoming_request] then encodes the request and stores the result in requests[response] table.
3.The extjs taskmanager then picks up the requests[response] for the transaction and displays it to the user and then removes the "please wait..." message and terminates the taskmanager.
Now my question is: Is there a better way of doing this encryption process by using 3rd party Messaging and Queuing middleware systems? If so please help.
Thank You! | true | 3,297,110 | 1.2 | 0 | 0 | 0 | I would change it this way:
make PHP block and wait until Python daemon finishes processing the transaction
increase the timeout in the Ext.data.Connection() so it would wait until PHP responds
remove the Ext.MessageBox and handle possible errors in the callback handler in Ext.data.Connection()
I.e. instead of waiting for the transaction to complete in JavaScript (which requires several calls to the webserver) you are now waiting in PHP.
This is assuming you are using Ext.data.Connection() to call the PHP handler - if any other Ext object is used the principle is the same but the timeout setting / completion handling would differ. | 0 | 362 | 0 | 1 | 2010-07-21T07:34:00.000 | php,javascript,python,ajax,extjs | I need help with messaging and queuing middleware systems for extjs | 1 | 1 | 1 | 3,309,648 | 0 |
0 | 0 | When would someone use httplib and when urllib?
What are the differences?
I think I read that urllib uses httplib. I am planning to make an app that will need to make HTTP requests, and so far I have only used httplib.HTTPConnection in Python for requests; reading about urllib I see I can use that for requests too, so what's the benefit of one or the other? | false | 3,305,250 | 1 | 0 | 0 | 10 | urllib/urllib2 is built on top of httplib. It offers more features than writing to httplib directly.
however, httplib gives you finer control over the underlying connections. | 0 | 42,140 | 0 | 56 | 2010-07-22T01:58:00.000 | python,http,urllib,httplib | Python urllib vs httplib? | 1 | 3 | 6 | 3,305,508 | 0 |
0 | 0 | When would someone use httplib and when urllib?
What are the differences?
I think I read that urllib uses httplib. I am planning to make an app that will need to make HTTP requests, and so far I have only used httplib.HTTPConnection in Python for requests; reading about urllib I see I can use that for requests too, so what's the benefit of one or the other? | false | 3,305,250 | 1 | 0 | 0 | 6 | If you're dealing solely with http/https and need access to HTTP specific stuff, use httplib.
For all other cases, use urllib2. | 0 | 42,140 | 0 | 56 | 2010-07-22T01:58:00.000 | python,http,urllib,httplib | Python urllib vs httplib? | 1 | 3 | 6 | 3,305,339 | 0 |
0 | 0 | When would someone use httplib and when urllib?
What are the differences?
I think I read that urllib uses httplib. I am planning to make an app that will need to make HTTP requests, and so far I have only used httplib.HTTPConnection in Python for requests; reading about urllib I see I can use that for requests too, so what's the benefit of one or the other? | true | 3,305,250 | 1.2 | 0 | 0 | 46 | urllib (particularly urllib2) handles many things by default or has appropriate libs to do so. For example, urllib2 will follow redirects automatically and you can use cookiejar to handle login scripts. These are all things you'd have to code yourself if you were using httplib.
1 | 0 | I am using Google App Engine for fetching the feed URL, but a few of the URLs are 301 redirects, and I want to get the final URL which returns me the result.
i am usign the universal feed reader for parsing the url is there any way or any function which can give me the final url. | false | 3,309,695 | 0.197375 | 0 | 0 | 3 | It is not possible to get the 'final' URL by parsing, in order to resolve it, you would need to at least perform an HTTP HEAD operation | 0 | 1,704 | 1 | 1 | 2010-07-22T14:05:00.000 | python,google-app-engine,feedparser | how to get final redirected url | 1 | 2 | 3 | 3,309,766 | 0 |
1 | 0 | I am using Google App Engine for fetching the feed URL, but a few of the URLs are 301 redirects, and I want to get the final URL which returns me the result.
i am usign the universal feed reader for parsing the url is there any way or any function which can give me the final url. | false | 3,309,695 | 0 | 0 | 0 | 0 | You can do this by handling redirects manually. When calling fetch, pass in follow_redirects=False. If your response object's HTTP status is a redirect code, either 301 or 302, grab the Location response header and fetch again until the HTTP status is something else. Add a sanity check (perhaps 5 redirects max) to avoid redirect loops. | 0 | 1,704 | 1 | 1 | 2010-07-22T14:05:00.000 | python,google-app-engine,feedparser | how to get final redirected url | 1 | 2 | 3 | 3,309,853 | 0 |
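A minimal sketch of the manual-redirect handling described in that answer, using the App Engine urlfetch API; the header-name fallback and the 5-hop cap are just the answer's sanity check:
from google.appengine.api import urlfetch

def fetch_following_redirects(url, max_redirects=5):
    # Follow 301/302 responses by hand so the final URL is known.
    for _ in range(max_redirects):
        result = urlfetch.fetch(url, follow_redirects=False)
        if result.status_code in (301, 302):
            url = result.headers.get('Location') or result.headers.get('location')
            continue
        return url, result
    raise Exception("too many redirects for %s" % url)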
1 | 0 | The big mission: I am trying to get a few lines of summary of a webpage. i.e. I want to have a function that takes a URL and returns the most informative paragraph from that page. (Which would usually be the first paragraph of actual content text, in contrast to "junk text", like the navigation bar.)
So I managed to reduce an HTML page to a bunch of text by cutting out the tags, throwing out the <HEAD> and all the scripts. But some of the text is still "junk text". I want to know where the actual paragraphs of text begin. (Ideally it should be human-language-agnostic, but if you have a solution only for English, that might help too.)
How can I figure out which of the text is "junk text" and which is actual content?
UPDATE: I see some people have pointed me to use an HTML parsing library. I am using Beautiful Soup. My problem isn't parsing HTML; I already got rid of all the HTML tags, I just have a bunch of text and I want to separate the context text from the junk text. | false | 3,325,817 | 0.099668 | 0 | 0 | 2 | A general solution to this problem is a non-trivial problem to solve.
To put this in context, a large part of Google's success with search has come from their ability to automatically discern some semantic meaning from arbitrary Web pages, namely figuring out where the "content" is.
One idea that springs to mind is if you can crawl many pages from the same site then you will be able to identify patterns. Menu markup will be largely the same between all pages. If you zero this out somehow (and it will need to fairly "fuzzy") what's left is the content.
The next step would be to identify the text and what constitutes a boundary. Ideally that would be some HTML paragraphs but you won't get that lucky most of the time.
A better approach might be to find the RSS feeds for the site and get the content that way because that will be stripped down as is. Ignore any AdSense (or similar) content and you should be able to get the text.
Oh and absolutely throw out your regex code for this. This requires an HTML parser absolutely without question. | 0 | 2,958 | 0 | 2 | 2010-07-24T16:15:00.000 | python,html,text,screen-scraping | Python: Detecting the actual text paragraphs in a string | 1 | 1 | 4 | 3,325,874 | 0 |
0 | 0 | I'm learning Python and would like to start a small project. It seems that making IRC bots is a popular project amongst beginners so I thought I would implement one. Obviously, there are core functionalities like being able to connect to a server and join a channel but what are some good functionalities that are usually included in the bots? Thanks for your ideas. | false | 3,328,315 | 0.039979 | 1 | 0 | 1 | Again, this is an utterly personal suggestion, but I would really like to see eggdrop rewritten in Python.
Such a project could use Twisted to provide the base IRC interaction, but would then need to support add-on scripts.
This would be great for allowing easy IRC bot functionality to be built upon using python, instead of TCL, scripts. | 0 | 3,546 | 0 | 4 | 2010-07-25T06:40:00.000 | python,irc | IRC bot functionalities | 1 | 4 | 5 | 3,329,252 | 0 |
0 | 0 | I'm learning Python and would like to start a small project. It seems that making IRC bots is a popular project amongst beginners so I thought I would implement one. Obviously, there are core functionalities like being able to connect to a server and join a channel but what are some good functionalities that are usually included in the bots? Thanks for your ideas. | false | 3,328,315 | 0 | 1 | 0 | 0 | That is very subjective and totally depends upon where the bot will be used. I'm sure others will have nice suggestions. But whatever you do, please do not query users arbitrarily. And do not spam the main chat periodically. | 0 | 3,546 | 0 | 4 | 2010-07-25T06:40:00.000 | python,irc | IRC bot functionalities | 1 | 4 | 5 | 3,328,343 | 0 |
0 | 0 | I'm learning Python and would like to start a small project. It seems that making IRC bots is a popular project amongst beginners so I thought I would implement one. Obviously, there are core functionalities like being able to connect to a server and join a channel but what are some good functionalities that are usually included in the bots? Thanks for your ideas. | false | 3,328,315 | 0.039979 | 1 | 0 | 1 | I'm also in the process of writing a bot in node.js. Here are some of my goals/functions:
map '@' command so the bot detects the last URI in message history and uses the w3 html validation service
setup a trivia game by invoking !ask, asks a question with 3 hints, have the ability to load custom questions based on category
get the weather with weather [zip/name]
hook up jseval command to evaluate javascript, same for python and perl and haskell
seen command that reports the last time the bot has "seen" a person online
translate command to translate X language string to Y language string
map dict to a dictionary service
map wik to wiki service | 0 | 3,546 | 0 | 4 | 2010-07-25T06:40:00.000 | python,irc | IRC bot functionalities | 1 | 4 | 5 | 3,328,322 | 0 |
0 | 0 | I'm learning Python and would like to start a small project. It seems that making IRC bots is a popular project amongst beginners so I thought I would implement one. Obviously, there are core functionalities like being able to connect to a server and join a channel but what are some good functionalities that are usually included in the bots? Thanks for your ideas. | false | 3,328,315 | 0 | 1 | 0 | 0 | Make a google search to get a library that implements IRC protocol for you. That way you only need to add the features, those are already something enough to bother you.
Common functions:
Conduct a search from a wiki or google
Notify people on project/issue updates
Leave a message
Toy for spamming the channel
Pick a topic
Categorize messages
Search from channel logs | 0 | 3,546 | 0 | 4 | 2010-07-25T06:40:00.000 | python,irc | IRC bot functionalities | 1 | 4 | 5 | 3,328,376 | 0 |
1 | 0 | Suppose I downloaded the HTML code, and I can parse it.
How do I get the "best" description of that website, if that website does not have meta-description tag? | false | 3,332,494 | 0.066568 | 0 | 0 | 1 | It's very hard to come up with a rule that works 100% of the time, obviously, but my suggestion as a starting point would be to look for the first <h1> tag (or <h2>, <h3>, etc - the highest one you can find) then the bit of text after that can be used as the description. As long as the site is semantically marked-up, that should give you a good description (I guess you could also take the contents of the <h1> itself, but that's more like the "title").
It's interesting to note that Google (for example) uses a keyword-specific extract of the page contents to display as the description, rather than a static description. Not sure if that'll work for your situation, though. | 0 | 519 | 0 | 6 | 2010-07-26T05:59:00.000 | python,html,string,templates,parsing | What's the best way to get a description of the website, in Python? | 1 | 1 | 3 | 3,332,528 | 0 |
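A rough sketch of the heuristic in the first answer above (take the highest heading you can find, then the text right after it), written against BeautifulSoup 3-era calls; treat it as illustrative only:
from BeautifulSoup import BeautifulSoup

def describe(html):
    soup = BeautifulSoup(html)
    heading = soup.find(['h1', 'h2', 'h3'])     # first heading in the page
    if heading is None:
        return None
    para = heading.findNext('p')                # the bit of text after it
    if para is None:
        return heading.string                   # fall back to the heading text
    return ''.join(para.findAll(text=True)).strip()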
1 | 0 | Hi, I need to parse an XML file which is more than 1 MB in size. I know GAE can handle requests and responses up to 10 MB, but as we need to use the SAX parser API and that GAE API has a limit of 1 MB, is there a way we can parse a file of more than 1 MB anyway? | false | 3,332,897 | 0.379949 | 0 | 0 | 2 | The 1MB limit doesn't apply to parsing; however, you can't fetch more than 1MB from URLfetch; you'll only get the first 1MB from the API.
It's probably not going to be possible to get the XML into your application using the URLfetch API. If the data is smaller than 10MB, you can arrange for an external process to POST it to your application and then process it. If it's between 10MB and 2GB, you'd need to use the Blobstore API to upload it, read it in to your application in 1MB chunks, and process the concatenation of those chunks. | 0 | 485 | 1 | 1 | 2010-07-26T07:14:00.000 | python,google-app-engine,parsing | Google app engine parsing xml more then 1 mb | 1 | 1 | 1 | 3,334,425 | 0 |
0 | 0 | I use python ftplib to connect to a ftp server which is running on active mode; That means the server will connect my client machine on a random port when data is sent between us.
Considering security issues, can I specify the client's data port (or a port range) and have the server connect to that specific port?
Many Thanks for your response. | false | 3,333,929 | -0.197375 | 0 | 0 | -2 | Since Python 3.3, ftplib functions that establish connections take a source_addr argument that allows you to do exactly this. | 0 | 1,268 | 0 | 0 | 2010-07-26T10:16:00.000 | python,networking,ftplib | How can I specify the client's data port for a ftp server in active mode? | 1 | 1 | 2 | 68,453,582 | 0 |
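For reference, the keyword in the standard library is actually source_address (a (host, port) pair, Python 3.3+). A hedged sketch; whether it also covers the listening socket used for active-mode data transfers depends on the Python version, so check the ftplib source of your interpreter:
import ftplib

ftp = ftplib.FTP()
ftp.connect('ftp.example.com', 21, timeout=30, source_address=('192.0.2.10', 0))
ftp.login('user', 'password')
ftp.set_pasv(False)   # active mode, as in the question
ftp.retrlines('LIST')
ftp.quit()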
0 | 0 | I'm using long polling for a chat with gevent. I'm using Event.wait() when waiting for new messages to be posted on the chat.
I would like to handle the occasion a client disconnects with some functionality:
e.g. Return "client has disconnected" as a message for other chat users
Is this possible? =) | true | 3,334,777 | 1.2 | 0 | 0 | 1 | This depends on which WSGI server you use. AFAIK gevent.wsgi will not notify your handler in any way when the client closes the connection, because libevent-http does not do that. However, with gevent.pywsgi it should be possible. You'll probably need to start an additional greenlet to monitor the socket condition and somehow notify the greenlet that runs the handler, e.g. by killing it. I could be missing an easier way to do this though. | 0 | 1,701 | 0 | 7 | 2010-07-26T12:31:00.000 | python,django,long-polling,gevent | Capturing event of client disconnecting! - Gevent/Python | 1 | 1 | 3 | 3,338,935 | 0 |
0 | 0 | So I have a simple socket server on an android emulator. When I'm only sending data to it, it works just fine. But then if I want to echo that data back to the python script, it doesn't work at all. Here's the code that works:
android:
try {
serverSocket = new ServerSocket(port);
} catch (IOException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
}
while (checkingForClients) {
try {
clientSocket = serverSocket.accept();
out = new PrintWriter(clientSocket.getOutputStream(), true);
in = new BufferedReader(new InputStreamReader(
clientSocket.getInputStream()));
line = null;
while ((line = in.readLine()) != null) {
Log.d("ServerActivity", line);
/* THIS IS THE LINE THAT DOESN'T WORK*/
//out.println(line);
handler.post(new Runnable() {
@Override
public void run() {
if(incomingData == null){
Log.e("Socket Thingey", "Null Error");
}
//out.println(line);
incomingData.setText("Testing");
incomingData.setText(line);
}
});
}
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
python:
import socket
host = 'localhost'
port = 5000
size = 1024
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((host,port))
s.send('Hello, World!')
# data = s.recv(size) THIS LINE CAUSES PROBLEMS
s.close()
print 'Received:' , data
So there are 2 commented lines. Without those lines, it works perfectly. But if I add in s.recv(size) in python it just freezes and I assume waits for the received data. But the problem is that the android code never gets the sent data. So I have no idea what to do.
Keep in mind I'm new to python and to sockets. | false | 3,339,971 | 0 | 0 | 0 | 0 | The Android code is reading lines, so you need probably to send a \n or possibly \r\n at the end of your Python send string. | 0 | 2,058 | 1 | 0 | 2010-07-27T00:39:00.000 | java,python,android,sockets | python receiving from a socket | 1 | 1 | 4 | 8,259,497 | 0 |
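Applying that answer to the Python client from the question: terminate the message with a newline so BufferedReader.readLine() on the Android side can return, and only then wait for the echo (the Android code would also need its commented-out out.println(line) re-enabled):
import socket

host = 'localhost'
port = 5000
size = 1024

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((host, port))
s.send('Hello, World!\n')   # the newline lets readLine() complete
data = s.recv(size)         # now the echo can actually arrive
s.close()
print 'Received:', data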
0 | 0 | I want to download an image file from potentially 5 sites.
Meaning that if the image wasn't found in site#1, try site#2, etc.
How can I test if the file was downloaded? | false | 3,340,152 | 0.53705 | 0 | 0 | 3 | You can call getcode() on the object you get back from urlopen().
getcode() gives you the HTTP status response from the server, so you can test to see if you got an HTTP 200 response, which would mean the download was successful. | 0 | 153 | 0 | 1 | 2010-07-27T01:28:00.000 | python,urllib | When download an image, does urllib have a return code if it's successful or not? | 1 | 1 | 1 | 3,340,185 | 0 |
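A minimal sketch of the fallback-over-several-sites idea using getcode(), as the answer suggests; the URLs are placeholders:
import urllib2

def download_first_available(urls):
    for url in urls:
        try:
            response = urllib2.urlopen(url)
        except urllib2.URLError:
            continue                      # connection problem or HTTP error
        if response.getcode() == 200:     # 200 means the download succeeded
            return response.read()
    return None

image = download_first_available(['http://example.com/img.png',
                                  'http://example.org/img.png'])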
0 | 0 | Can I ask for few question in one post to XML-RPC server?
If yes, how can I do it in python and xmlrpclib?
I'm using XML-RPC server on slow connection, so I would like to call few functions at once, because each call costs me 700ms. | false | 3,343,082 | 0 | 0 | 0 | 0 | Whether or not possible support of multicall makes any difference to you depends on where the 700ms is going.
How did you measure your 700ms?
Run a packet capture of a query and analyse the results. It should be possible to infer roughly round-trip-time, bandwidth constraints, whether it's the application layer of the server or even the name resolution of your client machine. | 0 | 231 | 0 | 0 | 2010-07-27T11:25:00.000 | python,soap,xml-rpc,xmlrpclib | Does XML-RPC in general allows to call few functions at once? | 1 | 1 | 2 | 3,343,497 | 0 |
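The answer above focuses on measuring where the 700 ms goes, but for completeness: if the server implements the common system.multicall extension, xmlrpclib can batch several calls into a single round trip. A sketch with made-up method names:
import xmlrpclib

proxy = xmlrpclib.ServerProxy("http://server.example.com/RPC2")

multicall = xmlrpclib.MultiCall(proxy)    # only works if the server supports system.multicall
multicall.get_price("itemA")              # hypothetical remote methods
multicall.get_price("itemB")
multicall.get_stock("itemA")

price_a, price_b, stock_a = multicall()   # one HTTP round trip for all three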
0 | 0 | how and with which python library is it possible to make an httprequest (https) with a user:password or a token?
basically the equivalent to curl -u user:pwd https://www.mysite.com/
thank you | true | 3,355,822 | 1.2 | 0 | 0 | 0 | class urllib2.HTTPSHandler
A class to handle opening of HTTPS URLs.
21.6.7. HTTPPasswordMgr Objects
These methods are available on HTTPPasswordMgr and HTTPPasswordMgrWithDefaultRealm objects.
HTTPPasswordMgr.add_password(realm, uri, user, passwd)
uri can be either a single URI, or a sequence of URIs. realm, user and passwd must be strings. This causes (user, passwd) to be used as authentication tokens when authentication for realm and a super-URI of any of the given URIs is given.
HTTPPasswordMgr.find_user_password(realm, authuri)
Get user/password for given realm and URI, if any. This method will return (None, None) if there is no matching user/password.
For HTTPPasswordMgrWithDefaultRealm objects, the realm None will be searched if the given realm has no matching user/password. | 0 | 2,459 | 0 | 7 | 2010-07-28T17:51:00.000 | python,authentication,httprequest,token | python http request with token | 1 | 1 | 4 | 3,355,925 | 0 |
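A short sketch of the curl -u equivalent with the password manager quoted above (user, password and URL are placeholders); a token would instead be sent as an explicit Authorization header:
import urllib2

url = 'https://www.mysite.com/'

password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, url, 'user', 'pwd')    # None realm = any realm
opener = urllib2.build_opener(urllib2.HTTPBasicAuthHandler(password_mgr))
print opener.open(url).read()

# Token-based variant: set the header yourself.
request = urllib2.Request(url, headers={'Authorization': 'Bearer <token>'})
print urllib2.urlopen(request).read()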
1 | 0 | I'm trying to scrape and submit information to websites that heavily rely on Javascript to do most of its actions. The website won't even work when i disable Javascript in my browser.
I've searched for some solutions on Google and SO and there was someone who suggested i should reverse engineer the Javascript, but i have no idea how to do that.
So far i've been using Mechanize and it works on websites that don't require Javascript.
Is there any way to access websites that use Javascript by using urllib2 or something similar?
I'm also willing to learn Javascript, if that's what it takes. | false | 3,362,859 | 1 | 0 | 0 | 6 | I would actually suggest using Selenium. It's mainly designed for testing web applications from a "user perspective"; however, it is basically a "Firefox" driver. I've actually used it for this purpose ... although I was scraping a dynamic AJAX webpage. As long as the Javascript form has a recognizable "Anchor Text" that Selenium can "click", everything should sort itself out.
Hope that helps | 0 | 23,351 | 0 | 17 | 2010-07-29T13:18:00.000 | javascript,python,screen-scraping | Scraping websites with Javascript enabled? | 1 | 1 | 6 | 3,364,608 | 0 |
0 | 0 | Is it possible to control a web browser like Firefox using Python?
I would want to do things like
launch the browser
force clicks on URLs
take screenshots
etc. | false | 3,369,073 | 0 | 0 | 0 | 0 | Depends what do you actually want to achieve. If you need to do some automatic stuff w/out user interference, you can just use underlying engine of the browser, like Gecko or WebKit, w/out loading browser itself. There are ready Python bindings to these engines available.
Browsers themself do not provide this kind of API to outside processes. For Firefox, you would need to inject some browser-side code into chrome, either as extension or plugin. | 0 | 50,784 | 0 | 27 | 2010-07-30T06:16:00.000 | python,browser,webbrowser-control | Controlling Browser using Python? | 1 | 2 | 6 | 3,370,550 | 0 |
0 | 0 | Is it possible to control a web browser like Firefox using Python?
I would want to do things like
launch the browser
force clicks on URLs
take screenshots
etc. | false | 3,369,073 | 0.033321 | 0 | 0 | 1 | Ag great way to control a browser in Python is to use PyQt4.QtWebKit. | 0 | 50,784 | 0 | 27 | 2010-07-30T06:16:00.000 | python,browser,webbrowser-control | Controlling Browser using Python? | 1 | 2 | 6 | 3,370,579 | 0 |
0 | 0 | Got a situation where I'm going to be parsing websites. Each site has to have its own "parser" and possibly its own way of dealing with cookies/etc.
I'm trying to get in my head which would be a better choice.
Choice I:
I can create a multiprocessing function, where the (masterspawn) app gets an input URL, and in turn it spawns a process/function within the masterspawn app that then handles all the setup/fetching/parsing of the page/URL.
This approach would have one master app running, and it in turn creates multiple instances of the internal function.. Should be fast, yes/no?
Choice II:
I could create a "Twisted" kind of server, that would essentially do the same thing as Choice I. The difference being that using "Twisted" would also impose some overhead. I'm trying to evaluate Twisted, with regards to it being a "Server" but i don't need it to perform the fetching of the url.
Choice III:
I could use scrapy. I'm inclined not to go this route as I don't want/need to use the overhead that scrapy appears to have. As i stated, each of the targeted URLs needs its own parse function, as well as dealing with the cookies...
My goal is to basically have the "architected" solution spread across multiple boxes, where each client box interfaces with a master server that allocates the urls to be parsed.
thanks for any comments on this..
-tom | false | 3,374,943 | 0.197375 | 0 | 0 | 2 | There are two dimensions to this question: concurrency and distribution.
Concurrency: either Twisted or multiprocessing will do the job of concurrently handling fetching/parsing jobs. I'm not sure though where your premise of the "Twisted overhead" comes from. On the contrary, the multiprocessing path would incur much more overhead, since a (relatively heavy-weight) OS-process would have to be spawned. Twisteds' way of handling concurrency is much more light-weight.
Distribution: multiprocessing won't distribute your fetch/parse jobs to different boxes. Twisted can do this, eg. using the AMP protocol building facilities.
I cannot comment on scrapy, never having used it. | 1 | 815 | 0 | 1 | 2010-07-30T20:02:00.000 | python,twisted,multiprocessing | question comparing multiprocessing vs twisted | 1 | 1 | 2 | 3,379,621 | 0 |
0 | 0 | i developed an aplication built on twitter api , but i get erorrs like a mesage that i've parsed and deleted to be parsed again at the next execution , could that be because i left the twitter connection opened or is just a fault of the twitter API. I also tried to delete all direct messages because it seemed too full for me but instead the Api has just reset the count of my messages , the messages haven't been deleted:(( | true | 3,385,990 | 1.2 | 1 | 0 | 2 | Twitter's API is over HTTP, which is a stateless protocol. you don't really need to close the connection, since connections made and closed for each request | 0 | 105 | 0 | 1 | 2010-08-02T07:59:00.000 | python,twitter | twitter connection needs to be closed? | 1 | 1 | 1 | 3,388,478 | 0 |
0 | 0 | Do you know how I could get one of the IPv6 addresses of one of my interfaces in Python 2.6? I tried something with the socket module which led me nowhere.
Thanks. | false | 3,388,911 | 0 | 1 | 0 | 0 | You could just simply run 'ifconfig' with a subprocess.* call and parse the output. | 0 | 3,133 | 0 | 6 | 2010-08-02T14:53:00.000 | python,linux,ipv6 | How to get the IPv6 address of an interface under linux | 1 | 1 | 3 | 3,388,966 | 0 |
1 | 0 | I'm trying to verify that all my page links are valid, and also, similarly, that all the pages have a specified link like "contact". I use Python unit testing and Selenium IDE to record actions that need to be tested.
So my question is can i verify the links in a loop or i need to try every link on my own?
I tried to do this with __iter__ but it didn't get anywhere close; there may be a reason that I'm poor at OOP, but I still think that there must be another way of testing links than clicking them and recording them one by one. | false | 3,397,850 | 0 | 0 | 0 | 0 | What exactly is "Testing links"?
If it means they lead to non-4xx URIs, I'm afraid You must visit them.
As for existence of given links (like "Contact"), You may look for them using xpath. | 0 | 412 | 0 | 2 | 2010-08-03T15:05:00.000 | python,testing,black-box | how can i verify all links on a page as a black-box tester | 1 | 2 | 4 | 3,397,887 | 0 |
1 | 0 | I'm trying to verify that all my page links are valid, and also, similarly, that all the pages have a specified link like "contact". I use Python unit testing and Selenium IDE to record actions that need to be tested.
So my question is can i verify the links in a loop or i need to try every link on my own?
I tried to do this with __iter__ but it didn't get anywhere close; there may be a reason that I'm poor at OOP, but I still think that there must be another way of testing links than clicking them and recording them one by one. | false | 3,397,850 | 0 | 0 | 0 | 0 | You could (as yet another alternative), use BeautifulSoup to parse the links on your page and try to retrieve them via urllib2.
0 | 0 | For the past 10 hours I've been trying to accomplish this:
Translation of my blocking httpclient using standard lib...
Into a twisted nonblocking/async version of it.
10 hours later... scouring through their APIs -- it appears no one has EVER needed to be able to do that. Nice framework, but it seems ... a bit overwhelming to just set a socket to a different interface.
Can any python gurus shed some light on this and/or send me in the right direction? or any docs that I could have missed? THANKS! | false | 3,399,185 | 0 | 0 | 0 | 0 | Well, it doesn't look like you've missed anything. client.getPage doesn't directly support setting the bind address. I'm just guessing here but I would suspect it's one of those cases where it just never occured to the original developer that someone would want to specify the bind address.
Even though there isn't built-in support for doing this, it should be pretty easy to do. The way you specify binding addresses for outgoing connections in twisted is by passing the bind address to the reactor.connectXXX() functions. Fortunately, the code for getPage() is really simple. I'd suggest three things:
Copy the code for getPage() and its associated helper function into your project
Modify them to pass through the bind address
Create a patch to fix this oversight and send it to the Twisted folks :) | 0 | 314 | 0 | 0 | 2010-08-03T17:48:00.000 | python,twisted.web | Overloading twisted.client.getPage to set the client socket's bindaddress ! | 1 | 1 | 1 | 3,400,337 | 0 |
0 | 0 | Which library/module is the best to use for downloading large 500mb+ files in terms of speed, memory, cpu? I was also contemplating using pycurl. | false | 3,402,271 | 0 | 0 | 0 | 0 | At sizes of 500MB+ one has to worry about data integrity, and HTTP is not designed with data integrity in mind.
I'd rather use python bindings for rsync (if they exist) or even bittorrent, which was initially implemented in python. Both rsync and bittorrent address the data integrity issue. | 1 | 2,537 | 0 | 1 | 2010-08-04T02:56:00.000 | python,curl,urllib2 | best way to download large files with python | 1 | 1 | 1 | 3,402,359 | 0 |
1 | 0 | I'm scraping a html page, then using xml.dom.minidom.parseString() to create a dom object.
However, the HTML page has a '&'. I can use cgi.escape to convert this into &amp;, but it also converts all my HTML <> tags into &lt;&gt;, which makes parseString() unhappy.
how do i go about this? i would rather not just hack it and straight replace the "&"s
thanks | false | 3,403,168 | 0 | 0 | 0 | 0 | You shouldn't use an XML parser to parse data that isn't XML. Find an HTML parser instead, you'll be happier in the long run. The standard library has a few (HTMLParser and htmllib), and BeautifulSoup is a well-loved third-party package. | 0 | 576 | 0 | 1 | 2010-08-04T06:40:00.000 | python,escaping,html-entities | need to selectively escape html entities (&) | 1 | 1 | 4 | 3,405,525 | 0 |
0 | 0 | I am trying to create a python script that opens a single page at a time, however python + mozilla make it so everytime I do this, it opens up a new tab. I want it to keep just a single window open so that it can loop forever without crashing due to too many windows or tabs. It will be going to about 6-7 websites and the current code imports time and webbrowser.
webbrowser.open('url')
time.sleep(100)
webbrowser.open('next url')
//but here it will open a new tab, when I just want it to change the page.
Any information would be appreciated,
Thank you. | true | 3,408,891 | 1.2 | 0 | 0 | 1 | In firefox, if you go to about:config and set browser.link.open_newwindow to "1", that will cause a clicked link that would open in a new window or tab to stay in the current tab. I'm not sure if this applies to calls from 3rd-party apps, but it might be worth a try.
Of course, this will now apply to everything you do in firefox (though ctrl + click will still open links in a new tab) | 0 | 2,001 | 0 | 0 | 2010-08-04T19:01:00.000 | python,browser | How do I edit the url in python and open a new page without having a new window or tab opened? | 1 | 1 | 1 | 3,408,987 | 0 |
0 | 0 | I would like to open a new tab in my web browser using python's webbrowser. However, now my browser is brought to the top and I am directly moved to the opened tab. I haven't found any information about this in documentation, but maybe there is some hidden api. Can I open this tab in the possible most unobtrusive way, which means:
not bringing the browser to the top if it's minimized,
not moving me to the opened tab (especially if I am at the moment working in another tab - my process is working in the background and it would be very annoying to suddenly have my work interrupted by a new tab)? | true | 3,417,756 | 1.2 | 0 | 0 | 0 | On WinXP, at least, it appears that this is not possible (from my tests with IE).
From what I can see, webbrowser is a fairly simple convenience module that creates (probably ) a subprocess-style call to the browser executable.
If you want that sort of granularity you'll have to see if your browser accepts command line arguments to that effect, or exposes that control in some other way. | 0 | 392 | 0 | 3 | 2010-08-05T18:09:00.000 | python,tabs,python-webbrowser | python: open unfocused tab with webbrowser | 1 | 1 | 1 | 3,418,619 | 0 |
0 | 0 | I am now using python base64 module to decode a base64 coded XML file, what I did was to find each of the data (there are thousands of them as for exmaple in "ABC....", the "ABC..." was the base64 encoded data) and add it to a string, lets say s, then I use base64.b64decode(s) to get the result, I am not sure of the result of the decoding, was it a string, or bytes? In addition, how should convert such decoded data from the so-called "network byte order" to a "host byte order"? Thanks! | false | 3,422,457 | 0.132549 | 0 | 0 | 2 | Each base64 encoded string should be decoded separately - you can't concatenate encoded strings (and get a correct decoding).
The result of the decode is a string, or byte-buffer - in Python, they're equivalent.
Regarding the network/host order - sequences of bytes, have no such 'order' (or endianity) - it only matters when interpreting these bytes as words / ints of larger width (i.e. more than 8 bits). | 0 | 7,855 | 0 | 0 | 2010-08-06T09:20:00.000 | python,base64,byte | Python base64 data decode and byte order convert | 1 | 1 | 3 | 3,422,530 | 0 |
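Tying the two parts of that answer together: decode each base64 element on its own, and only worry about byte order when interpreting the bytes as multi-byte integers -- struct's '!' format does the network-to-host step. The 4-byte unsigned-int layout is an assumption for illustration:
import base64
import struct

encoded_elements = ['AAAAAQ==', 'AAAAAg==']   # two 32-bit big-endian integers

for encoded in encoded_elements:
    raw = base64.b64decode(encoded)           # a plain byte string
    (value,) = struct.unpack('!I', raw)       # '!' = network (big-endian) order
    print repr(raw), '->', value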
0 | 0 | I have a requirement to create a Python application that accepts dial up connections over ISDN from client software and relays messages from this connection to a website application running on a LAMP webserver.
Do we have some modules or support for this kind of implementation in python?
Please suggest.
Thanks in advance. | false | 3,464,996 | 0.197375 | 0 | 0 | 1 | You should have system hardware and software that handles establishing ISDN links, that's not something you should be trying to reimplement yourself.
You need to consult the documentation for that hardware and software, and the documentation for the client software, to determine how that connection can be made available to your application, and what communications protocol the client will be using over the ISDN link.
(If you're really lucky, the client actually uses PPP to establish a TCP/IP connection.) | 0 | 974 | 0 | 0 | 2010-08-12T05:36:00.000 | python,dial-up,isdn | ISDN dial up connection with python | 1 | 1 | 1 | 3,465,106 | 0 |
1 | 0 | I have used 3 languages for Web Scraping - Ruby, PHP and Python and honestly none of them seems to perfect for the task.
Ruby has an excellent mechanize and XML parsing library but the spreadsheet support is very poor.
PHP has excellent spreadsheet and HTML parsing library but it does not have an equivalent of WWW:Mechanize.
Python has a very poor Mechanize library. I had many problems with it and am still unable to solve them. Its spreadsheet library is also only more or less decent, since it is unable to create XLSX files.
Is there anything which is just perfect for web scraping?
PS: I am working on windows platform. | false | 3,468,028 | 0.049958 | 0 | 0 | 1 | Short answer is no.
The problem is that HTML is a large family of formats - and only the more recent variants are consistent (and XML based). If you're going to use PHP then I would recommend using the DOM parser as this can handle a lot of html which does not qualify as well-formed XML.
Reading between the lines of your post - you seem to be:
1) capturing content from the web with a requirement for complex interaction management
2) parsing the data into a consistent machine readable format
3) writing the data to a spreadsheet
Which is certainly 3 seperate problems - if no one language meets all 3 requirements then why not use the best tool for the job and just worry about an suitable interim format/medium for the data?
C. | 0 | 2,443 | 0 | 7 | 2010-08-12T13:18:00.000 | php,python,ruby,web-scraping | Is there any language which is just "perfect" for web scraping? | 1 | 1 | 4 | 3,469,962 | 0 |
0 | 0 | Hey all, I have a site that looks up info for the end user, is written in Python, and requires several urlopen commands. As a result it takes a bit for a page to load. I was wondering if there was a way to make it faster? Is there an easy Python way to cache or a way to make the urlopen scripts fun last?
The urlopens access the Amazon API to get prices, so the site needs to be somewhat up to date. The only option I can think of is to make a script that builds a MySQL db and run it every now and then, but that would be a nuisance.
Thanks! | false | 3,468,248 | 0 | 1 | 0 | 0 | How often do the price(s) change? If they're pretty constant (say once a day, or every hour or so), just go ahead and write a cron script (or equivalent) that retrieves the values and stores it in a database or text file or whatever it is you need.
I don't know if you can check the timestamp data from the Amazon API - if they report that sort of thing. | 0 | 805 | 0 | 3 | 2010-08-12T13:40:00.000 | python,sql,caching,urlopen | Caching options in Python or speeding up urlopen | 1 | 1 | 5 | 3,468,315 | 0 |
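A minimal in-process cache along the lines discussed above -- urlopen results are kept for a fixed time-to-live so repeated page loads don't refetch the Amazon data; the 300-second TTL is arbitrary:
import time
import urllib2

_cache = {}        # url -> (timestamp, body)
CACHE_TTL = 300    # seconds; tune to how fresh the prices need to be

def cached_urlopen(url):
    now = time.time()
    if url in _cache:
        fetched_at, body = _cache[url]
        if now - fetched_at < CACHE_TTL:
            return body                   # still fresh enough
    body = urllib2.urlopen(url).read()
    _cache[url] = (now, body)
    return body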
0 | 0 | As the title suggests, I'm working on a site written in python and it makes several calls to the urllib2 module to read websites. I then parse them with BeautifulSoup.
As I have to read 5-10 sites, the page takes a while to load.
I'm just wondering if there's a way to read the sites all at once? Or any tricks to make it faster, like should I close the urllib2.urlopen after each read, or keep it open?
Added: also, if I were to just switch over to PHP, would that be faster for fetching and parsing HTML and XML files from other sites? I just want it to load faster, as opposed to the ~20 seconds it currently takes | false | 3,472,515 | 0 | 0 | 0 | 0 | How about using pycurl?
You can apt-get it by
$ sudo apt-get install python-pycurl | 0 | 32,925 | 0 | 15 | 2010-08-12T22:26:00.000 | python,http,concurrency,urllib2 | Python urllib2.urlopen() is slow, need a better way to read several urls | 1 | 1 | 9 | 3,472,568 | 0
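The answer stops at naming pycurl; a rough sketch of its CurlMulti interface, which is the part that actually lets several pages download concurrently (URLs are placeholders):
import pycurl
from StringIO import StringIO

urls = ['http://example.com/a', 'http://example.com/b']

multi = pycurl.CurlMulti()
handles = []
for url in urls:
    buf = StringIO()
    handle = pycurl.Curl()
    handle.setopt(pycurl.URL, url)
    handle.setopt(pycurl.WRITEFUNCTION, buf.write)
    multi.add_handle(handle)
    handles.append((url, handle, buf))

# Drive every transfer until none are still active.
remaining = len(handles)
while remaining:
    while True:
        status, remaining = multi.perform()
        if status != pycurl.E_CALL_MULTI_PERFORM:
            break
    multi.select(1.0)   # block until at least one socket is ready

for url, handle, buf in handles:
    print url, len(buf.getvalue()), 'bytes'
    multi.remove_handle(handle)
    handle.close()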
1 | 0 | I'm working on a site, colorurl.com, and I need users to be able to type in colorurl.com/00ff00 (or some variation of that), and see the correct page. However, with the naked domain issue, users who type in colorurl.com/somepath will instead be redirected to www.colorurl.com/.
Is there a way to detect this in python, and then redirect the user to where they meant to go (With the www. added?)
EDIT:
Clarification: In my webhost's configuration I have colorurl.com forward to www.colorurl.com. They do not support keeping the path (1and1). I have to detect the previous path and redirect users to it.
User goes to colorurl.com/path
User is redirected to www.colorurl.com
App needs to detect what the path was.
App sends user to www.colorurl.com/path | false | 3,482,152 | 0 | 0 | 0 | 0 | You need to use a third-party site to do the redirection to www.*; many registrars offer this service. Godaddy's service (which is even free with domain registration) forwards foo.com/bar to www.foo.com/bar; I can't speak to the capabilities of the others but it seems to me that any one that doesn't behave this way is broken. | 0 | 830 | 1 | 4 | 2010-08-14T05:27:00.000 | python,google-app-engine,redirect | Google App Engine - Naked Domain Path Redirect in Python | 1 | 1 | 2 | 3,483,631 | 0 |
0 | 0 | Is there a way to monitor server ports using SNMP (I'm using net-snmp-python to check this with python).
So far I've checked pretty simple with "nc" command, however I want to see if I can do this with SNMP.
Thank you for your answers and patience. | false | 3,485,203 | 0 | 0 | 0 | 0 | You might try running nmap against the ports you want to check, but that won't necessarily give you an indication that the server process on the other side of an open port is alive. | 0 | 9,078 | 0 | 3 | 2010-08-14T21:34:00.000 | python,networking,snmp | Check ports with SNMP (net-snmp) | 1 | 2 | 3 | 3,486,005 | 0 |
0 | 0 | Is there a way to monitor server ports using SNMP (I'm using net-snmp-python to check this with python).
So far I've checked pretty simple with "nc" command, however I want to see if I can do this with SNMP.
Thank you for your answers and patience. | false | 3,485,203 | 0 | 0 | 0 | 0 | It's hard to see where SNMP might fit in.
The best way to monitor would be to use a protocol specific client (i.e., run a simple query v.s. MySQL, retrieve a test file using FTP, etc.)
If that doesn't work, you can open a TCP or UDP socket to the ports and see if anyone is listening. | 0 | 9,078 | 0 | 3 | 2010-08-14T21:34:00.000 | python,networking,snmp | Check ports with SNMP (net-snmp) | 1 | 2 | 3 | 3,485,524 | 0 |
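A sketch of that last suggestion -- simply try to open a TCP socket to the port (host and port are placeholders); note that SNMP itself plays no part here, which is really the point of the answer:
import socket

def port_is_open(host, port, timeout=3.0):
    try:
        conn = socket.create_connection((host, port), timeout)
    except (socket.error, socket.timeout):
        return False
    conn.close()
    return True

print port_is_open('192.0.2.1', 3306)   # e.g. is MySQL answering?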